 Front cover

HACMP System
Administration I: Planning and
Implementation
(Course Code AU54)

Student Notebook
ERC 6.0

IBM Certified Course Material



Trademarks
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AIX, AIX 5L, DB2, DB2 Universal Database, eServer, Enterprise Storage Server,
HACMP, iSeries, NetView, Notes, POWER4+, POWER5, pSeries, Redbooks, RS/6000,
SP, Tivoli, TotalStorage, WebSphere
Think is a trademark or registered trademark of Lenovo in the United States, other
countries, or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product and service names may be trademarks or service marks of others.

October 2005 Edition

The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis
without any warranty, either express or implied. The use of this information or the implementation of any of these techniques is a
customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational
environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the
same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so
at their own risk.

© Copyright International Business Machines Corporation 1998, 2005. All rights reserved.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

Course Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Course Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-1


AU54 - HACMP System Administration I: Planning and Implementation . . . . . .Intro-2
Course Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-3
Course Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-4
Course Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-5
Course Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-6
Course Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-7
Course Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-8
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-9
Student Guide Font Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-10
Course Overview Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Intro-11

Unit 1. Introduction to HACMP for AIX 5L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.1 High Availability Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Topic 1 Objectives: High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
So, What Is High Availability? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
So Why Is Planned Downtime Important? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6
Continuous Availability Is the Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Eliminating Single Points of Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Availability - from Simple to Complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
The Stand-alone System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Stand-alone Enhanced Storage System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
High Availability Clusters (HACMP base) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
Fault-Tolerant Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Availability Solutions Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
So, What About Site Failure? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Why Might I Need High Availability? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
Benefits of High-Availability Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
Other Considerations for High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
Let's Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24
1.2 HACMP Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-25
Topic 2 Objectives: HACMP for AIX 5L Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
IBM's HA Solution for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-27
A Highly Available Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-28
HACMP's Topology Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-32
Supported Networking Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-35

Copyright IBM Corp. 1998, 2005 Contents iii


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.

HACMP's Resource Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-37


Solution Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-39
AIX's Contribution to High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-40
Supported Shared Storage Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-41
Some Clusters Do Not Have Shared Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-42
So What Is HACMP Really? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-43
Additional Features of HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-45
Some Assembly Required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-47
Let's Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-48
1.3 What Does HACMP Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-49
Topic 3 Objectives: What Does HACMP Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-50
Just What Does HACMP Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-51
What Happens When Something Fails? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-52
What Happens When a Problem Is Fixed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-53
Standby: With Fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-54
Standby: Without Fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-56
Takeover: 2-Sided (Mutual) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-57
Concurrent: Multiple Active Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-59
Fundamental HACMP Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-61
Points to Ponder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-62
HACMP 5.3 Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-63
Things HACMP Does Not Do . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-65
When Is HACMP Not the Correct Solution? . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-66
What Do We Plan to Achieve This Week? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-68
Overview of the Installation Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-69
Hints to Get Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-70
Sources of HACMP Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-71
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-72
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-73

Unit 2. Shared Storage Considerations for High Availability . . . . . . . . . . . . . . . 2-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-2
2.1 Fundamental Shared Storage Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Topic 1 Objectives: Fundamental Shared Storage Concepts . . . . . . . . . . . . . . . . . .2-4
What Is Shared Storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-5
What Is Private Storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-7
Access to Shared Data Must be Controlled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-8
Who Owns the Storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-9
Reserve/Release-based Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-11
Reserve/Release Voluntary Disk Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-13
Reserve/Release Involuntary Disk Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-14
Reserve/Release Ghost Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-16
RSCT-based Shared Storage Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-18
RSCT-based Voluntary Fast Disk Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-20
RSCT-based Involuntary Fast Disk Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-21
Enabling RSCT-based Fast Disk Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-22
Fast Disk Takeover Additional Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-23


Let's Review Topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24


2.2 Shared Disk Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-25
Topic 2 Objectives: Shared Disk Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-26
SCSI Technology and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
SCSI Continued . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
SSA Technology and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Configuring SSA for Maximum Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-34
SSA Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-36
ESS Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
ESS Continued . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-39
Fibre Channel Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-40
Fibre Channel Continued . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41
Physical Volume IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-42
hdisk Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43
Removing hdisk Inconsistencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45
Support for OEM Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-47
Let's Review Topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51
2.3 Shared Storage from the AIX Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
Topic 3 Objectives: Shared Storage from the AIX Perspective . . . . . . . . . . . . . . . 2-54
Logical Volume Manager (LVM) Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-55
LVM Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
LVM Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-58
Steps to Creating a Mirrored File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-60
Adding a Shared Volume Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-62
Quorum Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-65
Disabling Quorum Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
Eliminating Quorum Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-69
The Quorum Buster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-71
HACMP Forced Varyon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-73
Recommendations for Forced Varyon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-75
Enhanced Concurrent Volume Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-76
Active Varyon versus Passive Varyon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-78
lsvg vg_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-80
LVM and HACMP Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-81
Support for OEM Volume Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-83
Support for OEM File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-85
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-87
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-88

Unit 3. Networking Considerations for High Availability . . . . . . . . . . . . . . . . . . . 3-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.1 How HACMP Uses Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Topic 1 Objectives: How HACMP Uses Networks . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
How Does HACMP Use Networks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Providing HA Client Access to the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
What HACMP Detects and Diagnoses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Heartbeat Packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11


Failure Detection versus Failure Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-13


Failure Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-14
What If All Heartbeat Packets Stop? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-15
All Clusters REQUIRE a non-IP Network! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-17
An Important Implementation Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-19
Failure Recovery and Reintegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-20
IP Address Takeover (IPAT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-21
IPAT After a Node Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-23
IPAT After a NIC Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-24
Let's Review Topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-25
3.2 HACMP Concepts and Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Topic 2 Objectives: HACMP Concepts and Configuration Rules . . . . . . . . . . . . . .3-28
HACMP Networking Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-29
Network Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-30
HACMP Topology Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-32
On the Naming of Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-34
HACMP 5.X Network Terms (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-36
HACMP 5.X Network Terms (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-38
IP Network Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-40
IP Network Configuration Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-43
Non-IP Network Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-44
Persistent Node IP Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-47
Let's Review Topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-49
3.3 Implementing IP Address Takeover (IPAT) . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-51
Topic 3 Objectives: Implementing IP Address Takeover (IPAT) . . . . . . . . . . . . . . .3-52
Two Ways to Implement IPAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-53
IPAT via IP Aliasing Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-55
IPAT via IP Aliasing in Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-58
IPAT via IP Aliasing After an Interface Fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-59
IPAT via IP Aliasing After a Node Fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-60
IPAT via IP Aliasing: Distribution preference for service IP label aliases . . . . . . . .3-61
IPAT via IP Aliasing Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-63
IPAT via IP Replacement Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-65
Changes to AIX Start Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-67
Changes to /etc/inittab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-68
Talk to Your Network Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-70
Adopt IP Address Numbering Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-71
Adopt Labeling/Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-72
An IPAT via IP Aliasing Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-73
An IPAT via IP Replacement Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-74
The /etc/hosts file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-75
Service IP Address Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-77
Common TCP/IP Configuration Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-78
Single IP Adapter Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-79
Let's Review Topic 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-80
3.4 The Impact of IPAT on Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-81
Topic 4 Objectives: The Impact of IPAT on Clients . . . . . . . . . . . . . . . . . . . . . . . .3-82


How Are Users Affected? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-83


What About the Users' Computers? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-85
Local or Remote Client? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-86
Gratuitous ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-87
Gratuitous ARP Support Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-88
What if Gratuitous ARP is Not Supported? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-89
Option 1: clinfo on the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-90
Option 2: clinfo From Within the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-91
clinfo.rc script (extract) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-92
Option 3: Hardware Address Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-94
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-96
Unit Summary (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-97
Unit Summary (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-98

Unit 4. Planning for Applications and Resource Groups. . . . . . . . . . . . . . . . . . . 4-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
How to Define an Application to HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Application Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Writing Start and Stop Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Where Should Data Go? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Shared Storage Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
Resource Group Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Startup Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Online On All Available Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Fallover Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Fallback Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Valid Combinations of Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Dependent Applications/Resource Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26

Unit 5. HACMP Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.1 Installing the HACMP 5.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Topic 1 Objectives: Installing the HACMP Software . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Steps for Successful Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Where Are We in the Implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Before All Else Fails... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
What Is On the CD? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Install the HACMP Filesets (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Install the HACMP Filesets (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Don't Forget the Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Verify That You Have the Required APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Some Final Things to Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Install HACMP Client Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Let's Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
5.2 What Was Installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21


Topic 2 Objectives: What Was Installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-22


The Layered Look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-23
HACMP Components and Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-25
Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-27
Cluster Secure Communication Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-28
Cluster Communication Daemon (clcomd) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-30
clcomd Standard Connection Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-32
RSCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-34
HACMP from an RSCT Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-35
Heartbeat Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-37
HACMP's SNMP Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-38
Cluster Information Daemon (clinfo) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-39
Highly Available NFS Server Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-41
Shared External Disk Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-42
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-43
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-44

Unit 6. Initial Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-2
What Are We Going to Achieve? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-3
Where Are We in the Implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-4
The Topology Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-5
Configuration Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-6
Plan Two-Node Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-7
Starting at the Very Beginning... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-8
Almost There... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-9
The Top-Level HACMP smit Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-10
The Standard Configuration Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-11
Two-Node Cluster Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-13
Let's See What We've Done . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-14
The Standard Configuration Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-16
Add Nodes to an HACMP Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-17
What Did We Get? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-18
What Is Left To Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-19
Where Are We in the Implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-21
Starting Cluster Services (1 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-22
Starting Cluster Services (2 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-23
Starting Cluster Services (3 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-24
Starting Cluster Services (4 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-25
Stopping Cluster Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-26
Configuring Mutual Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-27
Prepare for Adventure RG Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-28
Configuring the Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-29
Adding the Second Service IP Label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-30
Adding Adventure Service Label (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-31
Adding Adventure Service Label (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-32
Adding Adventure Service Label (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-33
Adding Adventure Application Server (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-34

viii HACMP Implementation Copyright IBM Corp. 1998, 2005


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V3.1.0.1
Student Notebook

Adding Adventure Application Server (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Adding the Second Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Setting Name, Nodes, and Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38
Adding Resources to the Adventure RG (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
Adding Resources to the Adventure RG (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 6-40
Synchronize and Test the Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-41
Extended Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Extended Topology Configuration Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Communication Interfaces and Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-45
Defining a Non-IP Network (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-46
Defining a Non-IP Network (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-47
Defining a Non-IP Network (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-48
Defining Persistent Node IP Labels (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-50
Defining Persistent Node IP Labels (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-51
Defining Persistent Node IP Labels (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-52
Synchronize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-53
Save Configuration: snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-55
Save Configuration: xml file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-56
Removing a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-57
We're There! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-58
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-59
Break Time! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-60
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-61

Unit 7. Basic HACMP Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
7.1 Topology and Resource Group Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Topic 1 Objectives: Topology and Resource Group Management . . . . . . . . . . . . . 7-4
Yet Another Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
Adding the Third Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Adding a Third Service IP Label (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
Adding a Third Service IP Label (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Adding a Third Application Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Adding Resources to the Third RG (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-11
Adding Resources to the Third RG (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Synchronize Your Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Expanding the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Adding a New Cluster Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
Add Node -- Standard Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Add Node -- Standard Path (In Progress) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
Add Node -- Extended Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Define the Non-IP rs232 Networks (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Define the Non-IP rs232 Networks (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Synchronize Your Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Start HACMP on the New Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Add the Node to a Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Synchronize Your Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26

Shrinking the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-27
Removing a Cluster Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-28
Removing an Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-29
Removing a Resource Group (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-30
Removing a Resource Group (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-32
Removing a Resource Group (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-33
Synchronize Your Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-34
Implementing Target Mode SSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-35
Setting the SSA Node Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-36
Configuring the tmssa Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-38
Rediscover the HACMP Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-40
Defining a Non-IP tmssa Network (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-41
Defining a Non-IP tmssa Network (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-42
Defining a Non-IP tmssa Network (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-43
Synchronize Your Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-44
Removing a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-45
Let's Review: Topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-46
7.2 Cluster Single Point of Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-47
Topic 2 Objectives: Cluster Single Point of Control . . . . . . . . . . . . . . . . . . . . . . . .7-48
Administering a High Availability Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-49
Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-50
Change Control or Change Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-51
Change Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-53
Masking or Eliminating Planned Downtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-54
Cluster Single Point of Control (C-SPOC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-56
The Top-Level C-SPOC Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-58
Adding a User to the Cluster (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-59
Adding a User to the Cluster (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-61
Removing a User from the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-62
Passwords in an HACMP Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-63
Adding a Physical Disk to a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-66
Managing Shared LVM Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-67
Creating a Shared Volume Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-69
Discover, Add VG to Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-70
Creating a Shared File System (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-71
Creating a Shared File System (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-73
LVM Change Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-74
LVM Changes, Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-76
LVM Changes, Lazy Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-77
LVM Changes, C-SPOC Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-78
The Best Method: C-SPOC LVM Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-79
RSCT (for ECM Volume Groups) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-80
LVM Changes, Select Your File system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-81
Update the Size of a File system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-82
Stopping Cluster Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-83
HACMP Resource Group Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-84
Priority Override Location (POL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-85

Moving a Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-87
Bring a Resource Group Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-89
Taking a Resource Group Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-90
Bring a Resource Group Back Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-91
Log Files Generated by HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-93
Let's Review: Topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-94
7.3 Dynamic Automatic Reconfiguration Event Facility. . . . . . . . . . . . . . . . . . . . . . 7-95
Topic 3 Objectives: Dynamic Automatic Reconfiguration Event Facility . . . . . . . . 7-96
Dynamic Reconfiguration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-97
What Can DARE Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-98
What Limitations Does DARE Have? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-99
So How Does DARE Work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-100
Verifying and Synchronizing (Standard) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-102
Verifying and Synchronizing (Extended) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-103
Discarding Unwanted Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-106
Rolling Back from a DARE Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-108
What If DARE Fails? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-110
Dynamic Reconfiguration Lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-112
Let's Review: Topic 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-113
7.4 WebSMIT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-115
Topic 4 Objectives: Implementing WebSMIT . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-116
Web-enabled SMIT (WebSMIT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-117
WebSMIT Index Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-118
WebSMIT Cluster Configuration and Management . . . . . . . . . . . . . . . . . . . . . . . 7-119
WebSMIT Tree View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-120
WebSMIT Online Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-121
WebSMIT Cluster Status Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-122
WebSMIT FastPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-123
WebSMIT Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-124
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-129
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-130

Unit 8. Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
8.1 HACMP Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Topic 1 Objectives: HACMP Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
What is an HACMP Event? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
HACMP Basic Event Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Recovery Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Recovery Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Event Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
First Node Starts HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Another Node Joins the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Node Leaves the Cluster (Stopped) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-12
Let's Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
8.2 Cluster Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Topic 2 Objectives: Event Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16

Event Processing Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-17
Adding/Changing Cluster Events (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-19
Adding/Changing Cluster Events (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-20
Adding/Changing Cluster Events (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-22
Recovery Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-23
Adding/Changing Recovery Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-24
HACMP Events One More Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-25
Points to Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-26
Editing an HACMP Event Script (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-27
Editing an HACMP Event Script (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-28
RG_Move Event and Selective Fallover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-29
Customizing Event Flow for Other Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-31
Error Notification within smit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-32
Configuring Automatic Error Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-33
Listing Automatic Error Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-34
Adding Error Notification Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-36
Emulating Errors (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-38
Emulating Errors (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-40
What Will This Cause? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-41
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-42
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-43

Unit 9. Integrating NFS into HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-2
So, What Is NFS? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-3
NFS Background Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-4
Combining NFS with HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-5
NFS Fallover with HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-6
Configuring NFS for High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-7
Cross-mounting NFS Filesystems (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-9
Cross-mounting NFS Filesystems (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-11
Cross-mounting NFS Filesystems (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-12
Choosing the Network for Cross-mounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-13
Configuring HACMP for Cross-mounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-14
Syntax for Specifying Cross-mounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-15
Ensuring the VG Major Number Is Unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-16
NFS with HACMP Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-17
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-18
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9-19

Unit 10. Problem Determination and Recovery . . . . . . . . . . . . . . . . . . . . . . . . . 10-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-2
Why Do Good Clusters Turn Bad? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-3
Test Your Cluster before Going Live! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-5
Tools to Help You Diagnose a Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-7
Tools Available From smit Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-8
Automatic Cluster Configuration Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-9
Automatic Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-10

HACMP Cluster Test Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
Event Emulation Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-15
Listing the Events That Can Be Emulated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-16
Emulating a Node Down Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-17
Emulating a Network Down Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-18
Checking Cluster Processes (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
Checking Cluster Processes (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-21
Testing Your Network Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-22
Dead Man's Switch (DMS) Time-out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-24
Avoiding Deadman Switch Time-outs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-25
Setting Performance Tuning Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-27
Enabling I/O Pacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-28
Changing the Frequency of syncd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-29
SRC Halts a Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-30
Partitioned Clusters and Node Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-31
Avoiding Partitioned Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Please Check Event Status Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-34
Changing the Time-outs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-36
Recovering From an Event Script Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-37
Recovering from an Event Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-38
A Troubleshooting Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-39
Contacting IBM for Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-42
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-43
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-44

Unit 11. HACMP Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Okay, So You Want to Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3
Upgrade - Cold or Warm? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-4
What Are We Going to Achieve? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-6
Upgrade Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-7
Risks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-8
Step 1/Roll: Backup and Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Step 2/Roll: Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-10
Step 3/Roll: Migrate Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-13
Step 4/Roll: Stop HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Step 5/Roll: Upgrade AIX and RSCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
Step 6/Roll: Upgrade HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Step 7/Roll: Convert ODM Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-17
Step 8/Roll: Verify Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
Step 9/Roll: Restart HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-19
Step 10/Roll: Migrate Resource Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
What About Failures? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
Steps 11 to 19/Roll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
Step 20/Roll: Backup and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-23
What About the Other Upgrade Methods? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-24
Snapshot/Cold Convert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-25
Steps for a Snapshot Convert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-26

Step 1/Snap: Backup and Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-27
Step 2/Snap: Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-28
Step 3/Snap: Stop HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-29
Step 4/Snap: Uninstall HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-30
Step 5/Snap: Install HACMP 5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-31
Step 6/Snap: Apply Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-32
Step 7/Snap: Reinstall Event Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-33
Step 8/Snap: Synchronize and Restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-34
Step 9/Snap: Verify Cluster Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-35
But I Can't Get the Down Time... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-36
Node by Node Migration Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-37
Step 1/Node: Backup and Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-39
Step 2/Node: Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-40
Step 3/Node: Stop HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-41
Step 4/Node: Install HACMP 5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-42
Step 5/Node: Restart HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-43
Step 6/Node: Stop HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-44
Step 7/Node: Install HACMP 5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-45
Step 8/Node: Restart HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-46
Step 9/Node: Backup and Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-47
What's New in HACMP 5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-48
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-51
Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-52
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11-53

Appendix A. Checkpoint Solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1

Appendix B. Release Notes for HACMP 5.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1

Appendix C. IPAT via IP Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1


TMK Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AIX AIX 5L DB2
DB2 Universal Database eServer Enterprise Storage Server
HACMP iSeries NetView
Notes POWER4+ POWER5
pSeries Redbooks RS/6000
SP Tivoli TotalStorage
WebSphere
Think is a trademark or registered trademark of Lenovo in the United States, other
countries, or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product and service names may be trademarks or service marks of others.

Copyright IBM Corp. 1998, 2005 Trademarks xv


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

xvi HACMP Implementation Copyright IBM Corp. 1998, 2005


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V3.1.0.1
Student Notebook

Course Description


HACMP System Administration I: Planning and Implementation

Duration: 5 days

Purpose
This course is designed to prepare students to install and configure a
highly available cluster using HACMP for AIX 5L.

Audience
The audience for this course is experienced AIX
system administrators with TCP/IP networking and AIX LVM
experience who are responsible for the planning and installation of an
HACMP 5.3 cluster on an IBM eServer pSeries server running AIX 5L
V5.2 or later.

Prerequisites
Students should ideally be qualified as IBM Certified Specialists -
pSeries AIX System Support or pSeries AIX System Administration
and in addition have TCP/IP, LVM storage, and disk hardware
implementation skills. These skills are addressed in the following
courses (or can be obtained through equivalent education and
experience):
AU16: AIX 5L System Administration II: Problem Determination
AP05: AIX 5L Essentials: AIX 5L TCP/IP Communications (CBT)
AU20: AIX Ver. 4 System Administration IV: Storage Management
AU07: AIX V4 Configuring TCP/IP and Accessing the Internet
Objectives
After completing this course, you should be able to:
Explain what high availability is.
Outline the capabilities of HACMP for AIX 5L.
Design and plan a highly available cluster.
Install and configure HACMP for AIX in the following modes of
operation:
- Single resource group on a primary node with standby node
- Two resource groups in a mutual takeover configuration
Configure resource group startup, fallover, and fallback policies
Perform basic system administration tasks for HACMP.

Perform basic customization for HACMP.


Perform basic problem determination and recovery.

Curriculum relationship
This course should be taken before AU57:
AU57: HACMP System Administration II: Administration and
Problem Determination
AU20: AIX Ver. 4 System Administration IV: Storage Management


Agenda
Day 1
Welcome
Unit 1 - Introduction to HACMP for AIX 5L
Exercise 1
Unit 2 - Shared Storage Considerations for High-Availability
Exercise 2

Day 2
Unit 3 - Networking Considerations for High-Availability
Unit 4 - Planning for Applications and Resource Groups
Unit 5 - HACMP Installation
Exercise 3
Exercise 4
Exercise 5

Day 3
Unit 6 - Initial Cluster Configuration
Exercise 6
Unit 7 - Basic HACMP Administration

Day 4
Exercise 7
Unit 8 - Events
Exercise 8
Unit 9 - Integrating NFS Into HACMP
Exercise 9

Day 5
Unit 10 - Problem Determination and Recovery
Exercise 10
Unit 11 - HACMP Migration
Exercise 11


Text highlighting
The following text highlighting conventions are used throughout this book:
Bold Identifies file names, file paths, directories, user names and
principals.
Italics Identifies links to Web sites, publication titles, is used where the
word or phrase is meant to stand out from the surrounding text,
and identifies parameters whose actual names or values are to
be supplied by the user.
Monospace Identifies attributes, variables, file listings, SMIT menus, code
examples of text similar to what you might see displayed,
examples of portions of program code similar to what you might
write as a programmer, and messages from the system.
Monospace bold Identifies commands, daemons, menu paths, and what the user
would enter in examples of commands and SMIT menus.


Course Introduction

What This Unit Is About


This unit describes the content of this course.

What You Should Be Able to Do


After completing this unit, you should understand the aim of this
course.


Welcome to:
AU54 - HACMP System Administration I:
Planning and Implementation

Copyright IBM Corporation 2005



Figure Intro-1. AU54 - HACMP System Administration I: Planning and Implementation AU546.0

Notes:

Course Objectives

After completing this unit, you should be able to:


Explain what high availability is
Outline the capabilities of HACMP for AIX
Design and plan a highly available cluster
Install and configure HACMP in the following modes of
operation:
Single resource group on a primary node with a standby
node
Two resource groups in a mutual takeover configuration
Perform basic system administration tasks for HACMP
Perform basic problem determination and recovery


Figure Intro-2. Course Objectives AU546.0

Notes:


Course Agenda
Day 1
Welcome
Unit 1 - Introduction to HACMP for AIX 5L
Exercise 1
Unit 2 - Shared Storage Considerations for High-
Availability
Exercise 2


Figure Intro-3. Course Agenda AU546.0

Notes:

Course Agenda
Day 2
Unit 3 - Networking Considerations for High-
Availability
Unit 4 - Planning for Applications and Resource
Groups
Unit 5 - HACMP Installation
Exercise 3
Exercise 4
Exercise 5


Figure Intro-4. Course Agenda AU546.0

Notes:


Course Agenda
Day 3
Unit 6 - Initial Cluster Configuration
Exercise 6
Unit 7 - Basic HACMP Administration


Figure Intro-5. Course Agenda AU546.0

Notes:

Course Agenda
Day 4
Exercise 7
Unit 8 - Events
Exercise 8
Unit 9 - Integrating NFS Into HACMP
Exercise 9


Figure Intro-6. Course Agenda AU546.0

Notes:


Course Agenda
Day 5
Unit 10 - Problem Determination and Recovery
Exercise 10
Unit 11 - HACMP Migration
Exercise 11


Figure Intro-7. Course Agenda AU546.0

Notes:

Lab Exercises
Points to note:
Work as a team and split the workload.
Manuals are available online.
HACMP software has been loaded and may have
already been installed.
TCP/IP and LVM have not been configured.
Each lab must be completed successfully before
continuing on to the next lab, as each lab is a
prerequisite for the next one.
Any questions, ask your instructor.


Figure Intro-8. Lab Exercises AU546.0

Notes:


Student Guide Font Conventions

The following text highlighting conventions are used throughout this book:
Bold Identifies file names, file paths, directories, user names,
principals, menu paths and menu selections. Also
identifies graphical objects such as buttons, labels and
icons that the user selects.
Italics Identifies links to web sites, publication titles, is used
where the word or phrase is meant to stand out from the
surrounding text, and identifies parameters whose
actual names or values are to be supplied by the user.
Monospace Identifies attributes, variables, file listings, SMIT menus,
code examples and command output that you would
see displayed on a terminal, and messages from the
system.
Monospace bold Identifies commands, subroutines, daemons, and text
the user would type.


Figure Intro-9. Student Guide Font Conventions AU546.0

Notes:

Course Overview Summary

Having completed this unit, you should understand that:


There is ample time for the lab exercises.
Thorough design, planning, and teamwork are essential.
Prior AIX, LVM, Storage Management, and TCP/IP experience is
assumed and required.


Figure Intro-10. Course Overview Summary AU546.0

Notes:

Unit 1. Introduction to HACMP for AIX 5L

What This Unit Is About


This unit introduces the concepts of high availability and HACMP for
AIX 5L.

What You Should Be Able to Do


After completing this unit, you should be able to:
Explain what high availability is and why it is needed
Outline the various options for implementing high availability
List the key considerations when designing and implementing a
high availability cluster.
Outline the features and benefits of HACMP for AIX 5L
Describe the components of an HACMP for AIX 5L cluster
Explain how HACMP for AIX 5L operates in typical cases

How You Will Check Your Progress


Accountability:
Checkpoint

References
SC23-4864-06 HACMP for AIX 5L, Version 5.3: Concepts and
Facilities Guide
www.ibm.com/servers/eserver/pseries/library/hacmp_docs.html
HACMP for AIX 5L Manuals


Unit Objectives
After completing this unit, you should be able to:
Explain what high availability is and why it is needed.
Outline the various options for implementing high
availability.
List the key considerations when designing and
implementing a high availability cluster.
Outline the features and benefits of HACMP for AIX 5L.
Describe the components of an HACMP for AIX 5L cluster.
Explain how HACMP for AIX 5L operates in typical cases.


Figure 1-1. Unit Objectives AU546.0

Notes:

Objectives
In this unit we introduce the concept of high availability, examine why you might want to
implement a high availability solution, and compare high availability with some
alternative availability technologies.

HACMP terminology
In this course we will use the following terminology:
- HACMP will mean any version and release of the HACMP product.
- HACMP x will mean version x and any release of that version
- HACMP x.y will mean a specific version and release
- HACMP/ES will refer to HACMP 4 with enhanced scalability


1.1 High Availability Concepts


Topic 1 Objectives: High Availability


After completing this topic, you should be able to:
Describe what High Availability is
List increasing levels of implementing high availability
Compare costs of different high availability solutions
State the benefits of a high availability solution


Figure 1-2. Topic 1 Objectives: High Availability AU546.0

Notes:

So, What Is High Availability?
High Availability characteristics...
The masking or elimination of both planned and unplanned
downtime
The elimination of single points of failure (SPOFs)
Fault resilience, but NOT fault tolerance

(Diagram: a client connects over a WAN; on failure, the workload
falls over from the production node to the standby node.)



Figure 1-3. So, What Is High Availability? AU546.0

Notes:

High Availability characteristics...


A high availability solution ensures that the failure of any component of the solution, be
it hardware, software, or system management, does not cause the application and its
data to be inaccessible to the user community. This is achieved through the elimination
or masking of both planned and unplanned downtime. High availability solutions should
eliminate single points of failure (SPOFs) through appropriate design, planning, selection
of hardware, configuration of software, and a carefully controlled change management
discipline. High availability does not mean that the application is never interrupted;
this is why we speak of fault resilience rather than fault tolerance.
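Availability levels such as the ones discussed in this unit are often expressed as "nines" of uptime. As a generic illustration (the helper function below is not part of the course material or of HACMP), the annual downtime budget implied by an availability percentage can be computed like this:

```python
# Annual downtime budget implied by an availability percentage.
# Generic illustration only; not an HACMP tool or a formula from the course.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability
    level, for example 0.999 for 'three nines'."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for level in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{level:.3%} availability -> "
          f"{downtime_minutes_per_year(level):10.1f} minutes/year")
```

Even "four nines" (99.99%) still permits roughly 53 minutes of downtime per year, which is why continuous availability requires attacking planned as well as unplanned downtime.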


So Why Is Planned Downtime Important?


Downtime
Planned downtime: hardware upgrades, repairs, software updates,
backups, testing, development.
Unplanned downtime: administrator error, application failure,
hardware faults, environmental disasters.
(Chart: planned downtime 85%, other unplanned downtime 14%,
hardware failure 1%)

High availability solutions should reduce both
planned and unplanned downtime.

Figure 1-4. So Why Is Planned Downtime Important? AU546.0

Notes:

Downtime
You may be surprised to know that hardware component failure represents an
extremely small proportion of overall system downtime. By far, the largest single
contributor to system downtime is planned downtime.

Planned downtime
When you shut down a computer for the weekend, that's planned downtime. When you
stop the application to take a level 0 backup, that's planned downtime. When you
schedule a period of maintenance over the weekend, that's planned downtime.


Unplanned downtime


Unplanned downtime is downtime you did not expect: it includes application software
failure, operating system faults, and hardware failure. The single greatest contributor
to unplanned downtime is the user "fiddle factor."


Continuous Availability Is the Goal


Elimination of Downtime

Continuous Availability
= Continuous Operations (masking or elimination of planned downtime)
+ High Availability (masking or elimination of unplanned downtime)


Figure 1-5. Continuous Availability Is the Goal AU546.0

Notes:

Continuous availability goal


The goal for our high availability solution should be to achieve continuous availability;
that is, no downtime. In order to achieve this, we must not only implement a high
availability solution, but also reduce our planned downtime through disciplined and
documented change management.
As you will learn in this course, the real work in a high availability solution comes after
installation and configuration, when you have to administer the solution in a carefully
controlled manner.

Eliminating Single Points of Failure
Cluster Object Eliminated as a single point of failure by . . .

Node Using multiple nodes

Power Source Using multiple circuits or uninterruptible power supplies

Network adapter Using redundant network adapters


Network Using multiple networks to connect nodes
TCP/IP Subsystem Using non-IP networks to connect adjoining nodes and
clients
Disk adapter Using redundant disk adapter or multipath hardware
Disk Using multiple disks with mirroring or RAID
Application Add node for takeover; configure application monitor

Administrator Add backup or very detailed operations guide

Site Add additional site

A fundamental goal of (successful) cluster design is
the elimination of single points of failure (SPOFs).

Figure 1-6. Eliminating Single Points of Failure AU546.0

Notes:

Eliminating Single Points of Failure


Each of the items in the left-hand column is a physical or logical component which, if it
fails, renders the HA cluster's application unavailable.
Keep in mind that there are generally SPOFs which are not eliminated. For example,
most clusters are not designed to deal with the server room being flooded with water, or
with the entire city being without electrical power for two weeks. Site recovery would be
a possible solution here using HACMP/XD.
Focus on the art of the possible. In other words, spend your efforts dealing with what
can be reasonably dealt with.
Document the SPOFs which you have decided not to deal with; this allows them to be
reviewed from time to time in order to consider whether some of them now need to be
dealt with (for example, site failures if the cluster becomes very important).
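The elimination rule in the table boils down to having at least two independent instances of everything the application depends on. A toy sketch of auditing a cluster inventory for remaining SPOFs (the component names and counts below are hypothetical examples, not the output of any HACMP command):

```python
# Toy SPOF audit: any component class with fewer than two independent
# instances is a single point of failure. The inventory is a made-up
# example, not data produced by HACMP.

inventory = {
    "node": 2,
    "power_circuit": 2,
    "network": 2,
    "network_adapter": 2,
    "disk_adapter": 1,   # only one path to storage
    "site": 1,           # single machine room
}

def find_spofs(inv: dict) -> set:
    """Return the component classes that remain single points of failure."""
    return {name for name, count in inv.items() if count < 2}

print(sorted(find_spofs(inventory)))  # ['disk_adapter', 'site']
```

As in the table, each SPOF found this way is addressed by adding a redundant instance (a second disk adapter, a second site) or by documenting the decision to accept the risk.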


Availability - from Simple to Complex

Range of availability options

(Diagram: increasing availability from simple to complex:
stand-alone; stand-alone enhanced storage; high availability
cluster; fault tolerant)


Figure 1-7. Availability - from Simple to Complex AU546.0

Notes:

Range of availability options


A range of possible solutions could be implemented to increase system availability.
These start with the simple stand-alone server (many single points of failure) and
progress through to a fault-tolerant system, designed and manufactured to survive
component failure.

Stand-alone System
The stand-alone system
limited availability benefits:
✓ Journaled Filesystem
✓ Dynamic CPU Deallocation
✓ Service Processor
✓ Redundant Power
✓ Redundant Cooling
✓ ECC Memory
✓ Hot Swap Adapters
✓ Dynamic Kernel
✓ Disk Mirroring
✓ Dynamic LPAR

Drawback: single points of failure:
Disk Adapter / Data Paths
No Hot Swap Storage
Power for Storage Arrays
Cooling for Storage Arrays
Hot Spare Storage
Node / Operating System
Network
Network Adapter
Application
Site Failures

Figure 1-8. The Stand-alone System AU546.0

Notes:

The stand-alone system


The stand-alone system may offer certain availability benefits. We emphasize may,
because much depends upon the operating system and server in question. For
example, an advertisement placed in the Wall Street Journal showed that IBM's
eServer pSeries servers offer distinct availability benefits over comparable HP and Sun
UNIX servers. Certainly the majority of IBM pSeries servers offer all the availability
benefits listed above (and more).

Drawback
Ultimately, the stand-alone server suffers from one enormous availability drawback:
it is a stand-alone system with no means of surviving the failure of a critical system
component, for example, the operating system.


Stand-alone Enhanced Storage System


The stand-alone enhanced storage system
Increased availability benefits:
✓ Journaled Filesystem
✓ Dynamic CPU Deallocation
✓ Service Processor
✓ Redundant Power
✓ Redundant Cooling
✓ ECC Memory
✓ Hot Swap Adapters
✓ Dynamic Kernel
✓ Disk Mirroring
✓ Redundant Disk Adapters / Multiple Paths
✓ Hot Swap Storage
✓ Redundant Power for Storage Arrays
✓ Redundant Cooling for Storage Arrays
✓ Hot Spare Storage

Drawback: single points of failure:

Node/Operating System
Network Adapter
Network
Application
Site Failures

Figure 1-9. Stand-alone Enhanced Storage System AU546.0

Notes:

The stand-alone enhanced storage system


The enhanced system may implement mirroring of data, using volume management
software (such as LVM, SVM, or Sun's Online DiskSuite) or RAID technology.
Depending on the disk subsystem concerned, the enhanced system may also offer
multiple paths to disk.
As with the stand-alone system, the enhanced system suffers from one major
drawback; that is, there is typically only one of each component in the solution (with the
exception of data copies on disk).

High-Availability Clusters (HACMP Base)
High-Availability Clustering

✓ Journaled Filesystem
✓ Dynamic CPU Deallocation
✓ Service Processor
✓ Redundant Power
✓ Redundant Cooling
✓ ECC Memory
✓ Hot Swap Adapters
✓ Dynamic Kernel
✓ Redundant Data Paths
✓ Data Mirroring
✓ Hot Swap Storage
✓ Redundant Power for Storage Arrays
✓ Redundant Cooling for Storage Arrays
✓ Hot Spare Storage
✓ Dual Disk Adapters
✓ Redundant Nodes (operating system)
✓ Redundant Network Adapters
✓ Redundant Networks
✓ Application Monitoring
✓ Site Failure (limited)

Drawback single point of failure:

Site Failure (extended)



Figure 1-10. High Availability Clusters (HACMP base) AU546.0

Notes:

High Availability Clustering


The high availability solution addresses the fundamental weakness of both the
stand-alone and stand-alone enhanced storage systems; that is, it has two of
everything. If any component of the solution should fail, a redundant backup component
is waiting to take over the workload.
Do feel free to examine the high-availability solutions offered by our competitors. IBM's
HACMP product has been ranked (and continues to be ranked) the leading
high-availability solution for UNIX servers by D.H. Brown Associates
(www.dhbrown.com) for many years. We are confident that by the end of this course,
you'll also agree that HACMP 5.3 is a mature, robust, and feature-rich product which
delivers significantly improved availability on the IBM eServer POWER5 platform.


Drawback
The base product (HACMP 5.2 and later) only partially eliminates the site SPOF: it
covers the case where data does not have to be replicated, which can be achieved
with LVM mirroring using SAN technology.

Fault-Tolerant Computing
Fault-tolerant solutions should not fail:

✓ Lock Step CPUs
✓ Hardened Operating System
✓ Hot Swap Storage
✓ Continuous Restart

Drawback: single points of failure:

Site Failure (limited, extended)


Application software errors


Figure 1-11. Fault-Tolerant Computing AU546.0

Notes:

Fault-Tolerant
A fault-tolerant solution is specifically designed not to fail. Fault-tolerant solutions are
extremely expensive when compared with high availability solutions, and the few
manufacturers that produce fault-tolerant systems offer a limited range of solutions
(which means poor scalability). Price/performance is an issue with fault tolerance, as is
the proprietary nature of the hardware and associated operating system. Application
availability is also a consideration.
That said, fault-tolerant systems are designed to NOT fail. If you cannot suffer any
downtime, no matter how small, then a fault-tolerant solution may be for you.

Drawback
Even this solution does not account for site failures or software failures.


Availability Solutions Summary


From simple to complex:

Stand-alone:
- Availability benefits: Journaled Filesystem, Dynamic CPU
Deallocation, Service Processor, Redundant Power, Redundant
Cooling, ECC Memory, Hot Swap Adapters, Dynamic Kernel
- Downtime: couple of days
- Data availability: good as your last full backup
- Relative cost*: 1

Enhanced Stand-alone:
- Availability benefits: Redundant Data Paths, Data Mirroring, Hot
Swap Storage, Redundant Power for Storage Arrays, Redundant
Cooling for Storage Arrays, Hot Spare Storage
- Downtime: couple of hours
- Data availability: last transaction
- Relative cost*: 1.5

High Availability Clusters:
- Availability benefits: Redundant Servers, Redundant Networks,
Redundant Network Adapters, Heartbeat Monitoring, Failure
Detection, Failure Diagnosis, Automated Fallover, Automated
Reintegration
- Downtime: depends, but typically 3 minutes
- Data availability: last transaction
- Relative cost*: 2-3

Fault-tolerant Computers:
- Availability benefits: Lock Step CPUs, Hardened Operating
System, Redundant Memory, Continuous Restart
- Downtime: in theory, none!
- Data availability: no loss of data
- Relative cost*: 10+


Figure 1-12. Availability Solutions Summary AU546.0

Notes:

Availability Solutions Summary


This table summarizes the availability benefits of all the systems we have seen so far. It
should be noted that the cost multiples are indicative only, and that your mileage may
vary.
The three-minute downtime figure for a high-availability solution should be taken as a
rough ballpark figure and cannot be quoted as representative of any particular scenario;
again, your mileage may vary.
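The cluster column above lists heartbeat monitoring, failure detection, and automated fallover. A minimal sketch of the idea behind missed-heartbeat detection (a generic illustration with made-up intervals and node names; HACMP's actual topology services are far more sophisticated):

```python
# Minimal missed-heartbeat detector: a node is suspected failed when no
# heartbeat has been seen within the failure-detection window.
# Generic illustration; the interval, threshold, and node names are
# invented for the example and are not HACMP defaults.

HEARTBEAT_INTERVAL = 1.0      # seconds between heartbeats
MISSED_BEFORE_FAILED = 10     # heartbeats missed before declaring failure

def failed_nodes(last_seen: dict, now: float) -> list:
    """Return nodes whose last heartbeat is older than the window."""
    window = HEARTBEAT_INTERVAL * MISSED_BEFORE_FAILED
    return sorted(n for n, t in last_seen.items() if now - t > window)

last_seen = {"node_a": 99.0, "node_b": 88.5}   # seconds on a fake clock
print(failed_nodes(last_seen, now=100.0))      # ['node_b']
```

In a real cluster, a node declared failed would trigger the automated fallover and, later, automated reintegration listed in the table.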

So, What About Site Failure?
Limited distance (LVM mirroring and/or SAN): HACMP for AIX 5L

Extended distance: Geographic Clustering Solution (that is, HACMP/XD)


Distance unlimited
Application, disk and network independent
Automated site failover and reintegration
A single cluster across two sites

ESS PPRC
GLVM
HAGEO

(Diagram: data replication between two sites, Toronto and Brussels)

Figure 1-13. So, What About Site Failure? AU546.0

Notes:

Limited distance
The base product HACMP 5.3 (and V5.2) allows you to create sites as long as you can
use LVM mirroring for redundancy. Using SAN technology you can get limited distance
support for site failures.

Extended distance
The HACMP/XD (Extended Distance) priced feature provides three distinct software
solutions for disaster recovery. These solutions enable an HACMP cluster to operate
over extended distances at two sites.
a. HACMP/XD for ESS PPRC increases data availability for IBM TotalStorage
Enterprise Storage Server (ESS) volumes that use Peer-to-Peer Remote Copy
(PPRC) to copy data to a remote site for disaster recovery purposes. HACMP/XD for
ESS PPRC takes advantage of the PPRC fallover/fallback functions and HACMP


cluster management to reduce downtime and recovery time during disaster
recovery. When PPRC is used for data mirroring between sites, the physical
distance between sites is limited to the capabilities of the ESS hardware.
b. HACMP/XD for Geographic Logical Volume Manager (GLVM) increases data
availability for IBM volumes that use GLVM to copy data to a remote site for disaster
recovery purposes. HACMP/XD for GLVM takes advantage of the following
components to reduce downtime and recovery time during disaster recovery:
AIX 5L GLVM data mirroring and synchronization
TCP/IP-based unlimited distance network support
HACMP for AIX 5L cluster management.
c. HACMP/XD for HAGEO Technology uses the TCP/IP network to enable unlimited
distance for data mirroring between sites. (Note that although the distance is
unlimited, practical restrictions exist on the bandwidth and throughput capabilities of
the network). This technology is based on the IBM High Availability Geographic
Cluster for AIX (HAGEO) v 2.4 product. HACMP/XD for HAGEO Technology
extends an HACMP for AIX 5L cluster to encompass two physically separate data
centers. Data entered at one site is sent across a point-to-point IP network and
mirrored at a second, geographically distant location.
HACMP/XD is independent of the application, disk technology, data, and distances
between the sites. HACMP/XD can work across any network that supports TCP/IP and
offers automated fallover of applications and data from one site to another (maximum
two sites) in the event of a site disaster.
Customers considering implementing a solution based upon HACMP/XD for HAGEO
Technology are strongly recommended to attend the course AU52 - HACMP/ XD for
AIX Implementation Workshop. You can be assured, it is money very well spent.
Until very recently, HACMP/XD was called HAGEO. You should probably try both
names if you're searching for information on HACMP/XD.

Why Might I Need High Availability?

24x7 operations
- 60% of all large companies now operate round the clock (24x7)

Downtime losses (loss of revenue, $M):
- Per hour: $50K (46%), $1M (8%), $330K (industry average)
- Peak losses: $130,000 US per minute (telephone network)
- Loss of customer loyalty
- Loss of customer confidence

No disaster recovery:
- 50% of affected companies will never reopen
- 90% of affected companies are out of business in less than two years

Note: High availability is NOT a disaster recovery solution.

Copyright IBM Corporation 2005

Figure 1-14. Why Might I Need High Availability? AU546.0

Notes:

24X7 Operations
More and more companies need applications to be available all the time.

Downtime losses
These figures were extracted from a report produced by Tandem Corporation in 1995, so
they are a little out of date. However, the currency of the figures is not important, since the
principle remains true today and in the future: downtime costs money.
Many companies invest in a high-availability solution not only to protect against the
immediate revenue impact of failure, but also to avoid the embarrassment of bad press and
the loss of customer confidence and loyalty. In today's Internet-based economy, your nearest
competitor is only a URL away.


No Disaster recovery
Figures for company failures following site disasters are taken from IBM Business
Recovery Services. Where a company relies on access to its data in order to do business,
and loses that data as a result of a site disaster (and has no means of recovery), the
chances of business failure are exceptionally high. It should be noted, however, that high
availability is NOT the same as disaster recovery. Disaster recovery deals with site failure,
and the partial or complete loss of computing resources in one location.

Benefits of High-Availability Solutions

High-availability solution benefits:
- Standard components (no specialized hardware)
- Can be built from existing hardware (no need to invest in new kit)
- Work with just about any application
- Work with a wide range of disk and network types
- No specialized operating system or microcode
- Excellent availability at low cost

(Diagram: standard components combine to form a high-availability solution.)

Figure 1-15. Benefits of High-Availability Solutions AU546.0

Notes:

High-availability solution benefits

IBM's high-availability solution for the eServer pSeries offers some distinct benefits
over our competitors; these are detailed above. It's important to note that existing
systems can be used to build a highly available cluster of pSeries servers, and that
HACMP is largely independent of the disk type, network, and application chosen for
your environment.


Other Considerations for High Availability

- Design, planning, testing
- Focus on service and availability
- Apply appropriate risk analysis
- Disciplined system administration practices
- Documented operational procedures

(Diagram: high availability draws on people, systems management, data, networking,
hardware, software, and environment; high availability plus continuous operation
yields continuous availability.)

Figure 1-16. Other Considerations for High Availability AU546.0

Notes:

Design, Planning, Testing


Design, planning, and testing are all critical steps that cannot be skipped when
implementing a high-availability solution. As you learn this week, there should be no
shortage of time spent designing, planning, and documenting your proposed cluster
solution. Time well spent in these areas of the project reduces the amount of unneeded
administration time required to manage your cluster solution.
Unfortunately, it's too often the case that there is never enough time to do it right the
first time, but always time enough to do it over when things go wrong.
It is important to keep in mind that the reason why we worry about node failures and
disk failures and such is not because we are particularly concerned with their actual
failure, but rather because we are concerned with the impact that their failure might have.

Focus on service and availability

Focus on making the service highly available, and view the hardware and software as
the tools that you use in accomplishing this goal. Users are not interested in highly
available hardware or software per se; they are interested in the availability of services,
so hardware and software should be used to make the services highly available. Cluster
design decisions should be based on whether they contribute to availability (that is,
eliminate a SPOF) or detract from availability (gratuitous complexity).

Apply appropriate risk analysis

Since it is probably not possible to fix all SPOFs, the risk analysis process can be used
to decide whether a defensive measure is warranted. The process can be applied to
identify those SPOFs which simply must be dealt with, as well as those which can be
tolerated.
Risk analysis involves:
a. Identify relevant policies. What existing risk tolerance policies are available?
b. Study the current environment. An example would be that the server room is on a
properly sized UPS, but there is no disk mirroring today.
c. Perform requirements analysis. How much availability is required, and what is the
acceptable likelihood of a long outage?
d. Hypothesize vulnerabilities. What could go wrong?
e. Identify and quantify risks. Estimate the cost of a failure versus the probability that it
occurs.
f. Evaluate countermeasures. What does it take to reduce the risk or consequence to
an acceptable level?
g. Make decisions, create a budget, and plan the cluster.
h. Don't be fooled by the apparent determinism (that is, a formula that always seems to
come up with an answer) of risk analysis:
- It simply isn't possible to predict all the possible, or even likely, vulnerabilities.
- Estimating the likelihood of a vulnerability occurring can be extremely difficult.
- Some vulnerabilities don't lend themselves to any sort of quantifiable analysis.
For example, if there is a genuine risk that someone dies, then the cost of this
sort of failure may be irrelevant in any meaningful sense.
Finally, do not get trapped into a mode of thinking in which all conceivable risk of
outages must be eliminated. Such a goal is, in general, simply impossible to attain with
any technology.
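The identify-and-quantify step above reduces to simple expected-value arithmetic. The following is a minimal sketch in shell; every figure in it is invented for illustration, and the caveats in step h are a reminder that real inputs are rarely this tidy:

```shell
#!/bin/sh
# Hypothetical figures for illustration only.
years=3                 # planning life of the cluster
failures_per_year=2     # estimated occurrences of the vulnerability
cost_per_failure=15000  # estimated cost ($) of each occurrence
countermeasure=50000    # cost ($) of eliminating the vulnerability

# Expected loss over the life of the cluster if we do nothing
expected_loss=$((years * failures_per_year * cost_per_failure))

if [ "$expected_loss" -gt "$countermeasure" ]; then
    decision="eliminate the vulnerability"
else
    decision="tolerate the vulnerability"
fi
echo "Expected loss: \$$expected_loss; countermeasure: \$$countermeasure -> $decision"
```

With these invented numbers the expected loss ($90,000) exceeds the countermeasure cost ($50,000), so the defensive measure would be warranted; change the estimates and the answer changes, which is exactly why the estimates deserve scrutiny.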

Disciplined system administration practices

In a cluster environment, it is very easy to run commands that interfere with the
availability software, to forget to propagate changes, or to hand over to a person who
does not understand the cluster environment. So, discipline and documentation are
required.


Let's Review
1. Which of the following is a characteristic of high availability?
a. High availability always requires specially designed hardware
components.
b. High availability solutions always require manual intervention to
ensure recovery following failover.
c. High availability solutions never require customization.
d. High availability solutions offer excellent price performance when
compared with Fault Tolerant solutions.
2. True or False?
High availability solutions never fail.
3. True or False?
A thorough design and detailed planning is required for all high availability
solutions.
4. True or False?
Hardware failures are the most common cause of downtime.
5. A proposed cluster with a two year life (for planning purposes) has a
vulnerability which is likely to occur twice per year at a cost of
$10,000 per occurrence. It costs $25,000 in additional hardware
costs to eliminate the vulnerability. Should the vulnerability be
eliminated?
a. Yes
b. No


Figure 1-17. Let's Review AU546.0

Notes:

1.2 HACMP Basics


Topic 2 Objectives: HACMP Basics


After completing this topic, you should be able to:
Outline the features and benefits of HACMP for AIX
Describe the HACMP concepts of topology and resources
Give examples of topology components and resources
Provide a brief description of the software and hardware
components of a typical HACMP cluster


Figure 1-18. Topic 2 Objectives: HACMP for AIX 5L Basics AU546.0

Notes:
In this topic, we take a look at the basic concepts and features of HACMP.

IBM's HA Solution for AIX
HACMP for AIX 5L Characteristics
High Availability Cluster Multiprocessing
Based on cluster technology
Provides two environments:
Serial (High Availability): the process of ensuring an application is
available for use through the use of serially accessible shared data
and duplicated resources
Parallel (Cluster Multiprocessing): concurrent access to shared
data


Figure 1-19. IBM's HA Solution for AIX AU546.0

Notes:

HACMP characteristics
IBM's HACMP product is a mature and robust technology for building a high-availability
solution. A high-availability solution based upon HACMP provides automated failure
detection, diagnosis, recovery, and reintegration. With an appropriate application, HACMP
can also work in a concurrent access or parallel processing environment, thus offering
excellent horizontal scalability.


A Highly Available Cluster

Fundamental Concepts
(Diagram: two nodes, each running clstrmgr, attached to shared storage; on fallover,
a resource group moves from one node to the other.)
A cluster is comprised of physical components (topology) and logical
components (resource groups and resources).

Figure 1-20. A Highly Available Cluster AU546.0

Notes:

Fundamental concepts
HACMP is based on the fundamental concepts of cluster, resource group, and cluster
manager (clstrmgr).

Cluster
A cluster is basically comprised of nodes, networks, and network adapters. These objects
are referred to as topology objects.

Resource group
A resource group is typically comprised of an application, network address, and volume
group using shared disks. These objects are referred to as Resource objects.

clstrmgr
The cluster managers are the software components that communicate with each other to
control which node a resource group is activated on, and where the resource group is
moved on a fallover, based on parameters set up by the administrator. The clstrmgr runs on
all the nodes of the cluster.
The visual shows a simple diagram of a two-node cluster, using shared disk, and providing
fallover for a single application.


HACMP's Topology Components

(Diagram labels: Cluster, Node, IP Network, Non-IP Network, Communication Interface,
Communication Device.)

The Topology components consist of a cluster, nodes and the
technology which connects them together.

Figure 1-21. HACMP's Topology Components AU546.0

Notes:

Topology Components
A cluster's topology is the cluster, nodes (pSeries servers), networks (connections
between the nodes), the communication interfaces (for example, Ethernet or token-ring
network adapters), and the communication devices (for example, /dev/tty for RS232).

Nodes
In the context of HACMP, the term node means any IBM EServer pSeries system
which is a member of a high-availability cluster running HACMP.

Networks
Networks consist of IP and non-IP networks. The non-IP networks ensure cluster
monitoring can continue if there is a total loss of IP communication. Non-IP networks are
strongly recommended in an HACMP cluster.

Networks can also be logical or physical. Logical networks have been used with IBM
SP environments, where different frames were in different subnets but needed to be
treated as if they were in the same network for HACMP purposes.


Nodes

(Diagram: the IBM eServer pSeries range, from entry deskside and rack models
(pSeries 610, 620, 630, 640, B80) through midrange models (pSeries 650, 655, 660)
to high-end systems (pSeries 670, 680, 690) and POWER5 models (pSeries 520, 550,
570, 575, 590, 595).)

All pSeries systems work with HACMP in any combination of nodes
within a cluster. However, a minimum of four free adapter slots is
recommended.

Figure 1-22. Nodes AU546.0

Notes:

Supported nodes
As you can see, the range of systems that supports HACMP is, well, everything. The
only requirement is that the system should have at least four spare adapter slots (two
for network adapters and two for disk adapters). Any other adapters (for example,
graphics adapters) occupy additional slots. The internal Ethernet adapter fitted to most
entry-level pSeries 600 servers cannot be included in the calculations. It should be
noted that even with four adapter slots free, there is still a single point of failure if the
cluster is only able to accommodate a single TCP/IP local area network between the
nodes.
HACMP 5 works with pSeries and iSeries servers in a no-single-point-of-failure server
configuration. HACMP for AIX 5L supports the pSeries, Cluster 1600, RS/6000, and
iSeries models which are designed for server applications and which meet the minimum
requirements for internal memory, internal disk, and I/O slots. As of July 2005, the

following pSeries, iSeries, and RS/6000 models and their corresponding upgrades are
supported in HACMP 5.2 and later:
- PCI desktop systems: Models 140, 150, 170, 240, 260, and 270
- PCI deskside systems: Models E20, E30, F30, F40, F50, F80, 6F0, and 6F1
(pSeries 620). Note: Heartbeat network is only supported with serial port 3 or 4 on
the Model 6Fx.
- Entry systems: Models 25S, 250, and 25T
- Compact server systems: Models C10 and C20
- Desktop systems: Models 370, 380, 390, 397, and 39H
- Deskside systems: Models 570, 57F, 580, 58F, 58H, 590, 59H, 591, 595,
7028-6E1 (pSeries 610), 7029-6E3 (pSeries 615), 7025-6F1 (pSeries 620), and
7028-6E4 (pSeries 630)
- Rack systems: Models 98B, 98E, 98F, 990, 99E, 99F, 99J, 99K, B80, R10, R20,
R21, R24, R50, R5U, S70, S7A, H10, H50, H70, H80, M80, 7028-6C1 (pSeries 610),
7029-6C3 (pSeries 615), 7028-6C4 (pSeries 630), 6H0, 6H1, 6M1 (pSeries 660),
7039-651 (pSeries 655), and 7038-6M2 (pSeries 650), including models with PCI-X
Expansion Drawer (7311-D10 and 7311-D20)
- High-end servers: Models 7040-681 (pSeries 690) and 7040-671 (pSeries 670),
including models with POWER4+ processors and the PCI-X Planar 10-slot I/O drawer
(7040-61D feature 6571)
- POWER5 servers: pSeries models 9110-510, 9111-520, 9113-550, 9117-570,
9119-590, and 9119-595; iSeries models running AIX: 9406-520, 9406-550, 9406-570,
9406-590, and 9406-595, with required APARs (see the Sales Manual at
www.ibm.com/common/ssi)

Unsupported nodes and adapters

With the introduction of AIX 5L V5.2, the Micro Channel range of systems is excluded,
as AIX 5L V5.2 does not support Micro Channel systems. However, you can still run AIX
5L V5.1 with HACMP 5.2 (and earlier) on Micro Channel systems.
On both IBM eServer p5 and IBM eServer i5, the 520, 550, and 570 integrated serial
ports are not enabled when the HMC ports are connected to a Hardware Management
Console. Either the HMC ports or the integrated serial ports can be used, but not both.
Moreover, the integrated serial ports are supported only for modem and async terminal
connections. Any other applications using serial ports, including HACMP, require a
separate serial port adapter to be installed in a PCI slot.


LPAR support
There is also support for dynamically adding LPAR resources in AIX 5L V5.2 or later
LPAR environments to take advantage of Capacity Upgrade of Demand (CUoD).
HACMP 5.2 (and later) supports Virtual SCSI (VSCSI) and Virtual LAN (VLAN) on
POWER5 (IBM EServer p5 and IBM EServer i5). See
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390 for
more details.

Supported Networking Environments

IP networks: Ethernet (including EtherChannel), Token Ring, FDDI.
Non-IP networks: RS232/422, TMSSA, heartbeat over disk.

Figure 1-23. Supported Networking Environments AU546.0

Notes:

Supported networks
HACMP 5.3 (and V5.2) supports client users on a LAN using TCP/IP. HACMP monitors
and performs IP address switching for the following TCP/IP-based communications
adapters on cluster nodes:
- Ethernet
- EtherChannel
- Token ring
- FDDI
- SP Switches
- ATM
- ATM LAN Emulation
HACMP also supports non-IP networks such as RS232/422, Target Mode SCSI
(TMSCSI), Target Mode SSA (TMSSA), and Enhanced Concurrent Mode (heartbeat over disk).

Copyright IBM Corp. 1998, 2005 Unit 1. Introduction to HACMP for AIX 5L 1-35
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

It is highly recommended to have both IP and non-IP networks defined to HACMP. For a
list of specific adapters, you can consult the Sales Manual.

Unsupported networks

The following networks are not supported:
- Serial Optical Channel Converter (SOCC)
- SLIP
- Fibre Channel Switch (FCS)
- 802_ether
- Virtual IP Address (VIPA) facility of AIX 5L
The pseudo IP address provided by VIPA cannot be reliably monitored by RSCT
or HACMP. The failure of the underlying devices that are used to service the
pseudo device cannot be coordinated with HACMP recovery processing. VIPA
can be configured and used outside of HACMP, but when using these facilities
on an HACMP cluster node ensure that they are configured on the subnets that
are completely different from the subnets used by HACMP. If any VIPA
addresses are configured on the same subnet that is used for an HACMP
network, HACMP may not be able to properly detect failures and manage
recovery.
- Aggregate IP Interface with the SP Switch2
With the SP Switch2 you have css0 and css1, PSSP allows you to configure an
Aggregate IP switch. This is an ml0 interface with its own IP address. This ml0
interface is not supported by HACMP.
- IPv6

Adapters versus devices


HACMP distinguishes between communication adapters and communication devices
for network support. For IP networks, the term communication adapter is used. For non-IP
networks, the term communication device is used. This is discussed further in the
networking unit of this course.

HACMP's Resource Components

Resources: Application Server, Volume Group, File System, Service IP Address

Resource Group:
- Nodes
- Run Policies
- Resources

Figure 1-24. HACMP's Resource Components AU546.0

Notes:

Resource Group
A resource group is a collection of resources treated as a unit along with what nodes
they can potentially be activated on and what policies the cluster manager should use to
decide which node to choose during startup, fallover, and fallback. A cluster may have
more than one resource group (usually one for each application), thus allowing for very
flexible configurations. Resource groups will be covered in more detail in Unit 4.

Resources
Resources are logical components that can be put into a resource group. Because they
are logical components, they can be moved without human intervention.
The resources shown in the visual are a typical set of resources used in resource
groups such as:

Copyright IBM Corp. 1998, 2005 Unit 1. Introduction to HACMP for AIX 5L 1-37
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

- Service IP Address: users need to be able to connect to the application. Typically,
the users are given an IP address or label to connect to. This IP address/label
becomes a resource in the resource group, as it must be associated with the same
node that is running the application.
- Volume Group: if the application requires shared disk storage, this storage is
contained within volume groups.
- Filesystem: an application often requires that certain filesystems be mounted.
- Application Server: the application itself must be part of the resource group
(strictly speaking, the application server actually consists of scripts which start and
stop the application as required by HACMP).
In addition to the resources listed in the figure, you can also associate the following
with a resource group:
- NFS mounts: an application might require that an NFS filesystem be mounted by
the node running the application.
- NFS exports: a resource group might be configured to provide NFS server
services by NFS-exporting some of its filesystems.
Finally, there are attributes that can be assigned, such as forced varyon of volume
groups. These are covered later in this course, in Unit 6.
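Since an application server is really just a pair of start and stop scripts, the general shape of such a pair can be sketched as below. Everything here is hypothetical: the PID file path is invented, and a background sleep stands in for the real application daemon (production scripts are typically Korn shell and add logging and error checking). Broadly, a non-zero exit status from these scripts is treated by HACMP as a failure, so the stop path succeeds even if the application is already gone.

```shell
#!/bin/sh
# Minimal application server start/stop sketch (illustrative only).
# The PID file path is invented, and a background 'sleep' stands in
# for the real application daemon.
PIDFILE=/tmp/myapp.pid

start_app() {
    # Do nothing if the application is already running
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return 0
    fi
    sleep 300 &                 # start the "application" in the background
    echo $! > "$PIDFILE"        # record its PID so the stop script can find it
}

stop_app() {
    # Stop the application and clean up; succeed even if it is already gone
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
    fi
    return 0
}

start_app
```

In a real cluster, the two functions would live in separate scripts whose paths are configured as the application server's start and stop commands.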

Solution Components
Not Just HACMP

(Diagram: RS/6000 servers connected through a Fibre Channel SAN to FAStT and
ESS storage.)

Figure 1-25. Solution Components AU546.0

Notes:

Not just HACMP

The final high-availability solution is more than just HACMP. A high-availability solution
comprises a reliable operating system (AIX), applications that are tested to work in a
high-availability cluster, storage devices, appropriate selection of hardware, trained
administrators, and thorough design and planning.


AIX's Contribution to High Availability

- Object Data Manager (ODM)
- System Resource Controller (SRC)
- Logical Volume Manager (LVM)
- Journaled File System (JFS)
- Online JFS Backup (splitlvcopy)
- Work Load Manager (WLM)
- Quality of Service (QoS)
- External Boot
- Software Installation Management (installp, smit, websmit)
- Reliable Scalable Cluster Technology (RSCT, RMC)

Figure 1-26. AIX's Contribution to High Availability AU546.0

Notes:

AIX contributions
AIX offers many availability benefits, for example, Logical Volume Manager mirroring of all
data types, the journaled filesystem, and AIX's capability to execute an online JFS backup.
AIX has a dynamic, multithreaded kernel which allows configuration changes to the
operating system while the computer is running.
How often have you had to reboot an AIX system in order to configure a new hardware
device or extend a filesystem?

Supported Shared Storage Environments

(Diagram: SSA loops (maximum 25m between devices), multihost parallel SCSI buses,
and Fibre Channel SAN attachment to FAStT and ESS storage.)

Most HACMP clusters require shared storage. Disk technologies
which support multihost connections include: SCSI, SSA and FC
(with or without RAID).

Figure 1-27. Supported Shared Storage Environments AU546.0

Notes:

Shared storage environments

HACMP is largely unconcerned about the disk storage that you select. Supported
technologies include Serial Storage Architecture (SSA), SCSI, and Fibre Channel.
Although there is no formal agreement, third-party storage such as EMC and Hitachi
can be used; some custom modifications may be required.
For a complete list of supported devices, see the Sales Manual pages:
http://www.ibm.com/common/ssi.


Some Clusters Do Not Have Shared Disks

(Diagram: a two-node cluster attached to a local network, with no shared disk.)

Clusters providing firewall services do not usually have shared disks.
Can you think of any other examples?

Figure 1-28. Some Clusters Do Not Have Shared Disks AU546.0

Notes:

Shared disk not required

Think about the solutions that do not require shared disk. It's not always the case that a
highly available cluster requires shared disk.
Clusters providing firewall services do not usually have shared disks, as this ensures that if
an attacker is able to crash one of the firewall boxes, then they are presented with a fresh
box on the backup node. While it is true that if they can crash the first node then they can
probably crash the second node, there is little point in making it easier by allowing them to
start with a node that possibly retains some of the cracks that they've managed to expand
in the first node.

So What is HACMP Really?
An application which:
- Controls where resource groups run
- Monitors and reacts to events
- Does fallover
- Does reintegration
- Provides tools for cluster-wide configuration and synchronization

(Diagram: the cluster manager subsystem comprises the topology manager, resource
manager, event manager, and SNMP manager; it communicates through clcomdES and
relies on RSCT/RMC and snmpd; clinfoES and clstat provide status monitoring.)

Figure 1-29. So What Is HACMP Really? AU546.0

Notes:

HACMP core components

HACMP comprises a number of software components:
- The cluster manager (clstrmgr) is the core process which monitors cluster
membership. The cluster manager includes a topology manager to manage the
topology components, a resource manager to manage resource groups, and an event
manager with event scripts that works through the RMC facility and RSCT to react to
failures.
- In HACMP 5.3, the cluster manager also contains an SNMP manager, which allows
SNMP-based monitoring to be done using an SNMP manager such as NetView.
- The clinfo process provides an API for communicating between the cluster manager
and your application. Clinfo also provides remote monitoring capabilities and can
run a script in response to a status change in the cluster. Clinfo is an optional
process which can run on both servers and clients (the source code is provided).


The clstat command uses clinfo to display status via ASCII, X Window, or Web
browser interfaces.
- In HACMP 5, clcomdES allows the cluster managers to communicate in a secure
manner without using rsh and .rhosts files.

Additional Features of HACMP

- OnLine Planning Worksheets (OLPW)
- smit via Web
- Configuration assistant
- ClstrmgrES verification
- C-SPOC
- Auto tests
- DARE
- SNMP
- Application monitoring
- Tivoli, NetView

HACMP is shipped with utilities to simplify configuration, monitoring,
customization and cluster administration.

Figure 1-30. Additional Features of HACMP AU546.0

Notes:

Additional features
HACMP also has additional software to provide facilities for administration, testing,
remote monitoring, and verification:
- Application monitoring can be used to monitor the cluster's applications and restart
them should they fail. Multiple monitors can be defined for an application, including
monitoring the startup (only available in HACMP/ES and HACMP 5).
- Configuration changes can be made to the cluster while the cluster is running. This
facility is known as Dynamic Automatic Reconfiguration Event (DARE for short).
Such dynamic changes in cluster configuration can also be emulated to test the
outcome.
- C-SPOC is a series of SMIT menus that allow AIX-related cluster tasks to be
propagated across all nodes in the cluster. It includes an RG_Move facility, which
allows a resource group to be placed offline or on another node without stopping the
cluster manager (RG_Move is only available in HACMP 4.5 and later).


- Administration is made easier by the use of Online Planning Worksheets (OLPW)
and a Web-based SMIT interface (only available in HACMP 4.5 and later).
- A two-node configuration assist facility allows you to configure an HACMP cluster
with very little input. It will be used in this course (only available in HACMP V5).
- Verification is provided at HACMP startup time, as part of synchronization, as a
manual process, and as a daily automatic cluster verification check.
- There is an automatic correction facility which will be covered in more detail in the
HACMP Administration II: Administration and Problem Determination course (only
available in HACMP 5.2 and later).
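The application monitoring mentioned at the start of these notes runs a site-written monitor method whose exit status tells HACMP whether the application is healthy. A minimal sketch follows; the monitored process name and the bare process-existence check are illustrative assumptions, not product defaults:

```shell
#!/bin/sh
# Sketch of a custom application monitor method.  Exit status 0 means
# healthy; a non-zero status would trigger the configured restart or
# fallover action.  The daemon name here is a hypothetical example.

APP_PROCESS=${APP_PROCESS:-mydaemon}   # assumed application process name

check_app() {
    # Minimal health test: is at least one instance of the process running?
    if ps -e | grep -w "$APP_PROCESS" | grep -v grep >/dev/null 2>&1; then
        return 0        # healthy
    fi
    return 1            # not running
}

check_app && echo "application healthy" || echo "application not running"
```

A real monitor would usually probe the application itself (for example, a test query) rather than merely checking that a process exists.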

Some Assembly Required

(Slide diagram: customized pre-event and post-event scripts surround the HACMP
core events; the application supplies its own start and stop scripts.)

HACMP is not an out-of-the-box solution.

HACMP's flexibility allows for complex customization in order to meet
availability goals.


Figure 1-31. Some Assembly Required AU546.0

Notes:

Customization required
HACMP is shipped with event scripts (Korn shell scripts) which handle most failure
scenarios. If you have a requirement to customize some special fallover behavior, this
is achieved through pre- and post-event scripts.
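As a sketch of what such a customization can look like, a post-event script often just logs the event and performs a site-specific action. The log location and example action below are illustrative assumptions; check your HACMP release for the exact arguments passed (commonly the event name followed by that event's own arguments):

```shell
#!/bin/sh
# Sketch of a post-event script.  It should exit 0 so that HACMP event
# processing continues normally.  All names here are illustrative.

LOGFILE=${LOGFILE:-/tmp/post_event.log}

log_event() {
    # $1 = event name, remaining arguments = event arguments
    event=${1:-unknown_event}
    [ $# -gt 0 ] && shift
    echo "$(date '+%Y-%m-%d %H:%M:%S') post-event ${event} args: $*" >> "$LOGFILE"
    # Site-specific actions would go here, for example notifying an
    # operator after a node_down_complete event.
}

log_event "$@"
```

Keeping such scripts short and fast matters: HACMP waits for them before completing the event.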

Let's Review
1. Which of the following are examples of topology components in
HACMP (select all that apply)?
a. Node
b. Network
c. Service IP label
d. Hard disk drive
2. True or False?
All clusters require shared disk for storage of HACMP log files.
3. True or False?
All nodes in an HACMP cluster must have roughly equivalent performance
characteristics.
4. True or False?
HACMP requires .rhosts files.


Figure 1-32. Lets Review AU546.0

Notes:

1.3 What Does HACMP Do?

Topic 3 Objectives: What Does HACMP Do?


After completing this topic, you should be able to:
Describe the failures which HACMP detects directly
Provide an overview of the standby and takeover cluster
configuration options in HACMP
Describe some of the considerations and limits of an
HACMP cluster


Figure 1-33. Topic 3 Objectives: What Does HACMP Do? AU546.0

Notes:
In this topic, we take a closer look at what HACMP actually does.

Just What Does HACMP Do?

HACMP basic functions:
- Monitors communication adapters/devices for three kinds of failures
- Moves resource groups

HACMP can also monitor applications, processor load, and available
disk capacity.


Figure 1-34. Just What Does HACMP Do? AU546.0

Notes:

HACMP basic functions


HACMP detects three kinds of network-related failures:
i. A communications adapter or device failure
ii. A node failure (all communication adapters/devices on a given node)
iii. A network failure (all communication adapters/devices on a given network)
HACMP (V4.5 and later) also interfaces with the AIX 5L error log to respond to the loss of
quorum for a volume group when the loss is detected by the LVM. Most other failures
are handled outside of HACMP by AIX or the LVM, although responses to them can be
added in HACMP via customization.

What Happens When Something Fails?

How the cluster responds to a failure depends on what has failed, what
the resource group's fallover policy is, and whether there are any resource
group dependencies
The cluster's configuration is determined by the application's
requirements
Typically another equivalent component takes over duties of failed
component (for example, another node takes over from a failed node)


Figure 1-35. What Happens When Something Fails? AU546.0

Notes:

How HACMP responds to a failure


HACMP generally responds to a failure by using an equivalent but still available
component to take over the duties of the failed component. For example, if a node fails,
then HACMP initiates a fallover, an action which consists of moving the resource
groups which were previously on the failed node to a surviving node. If a Network
Interface Card (NIC) fails, HACMP usually moves any IP addresses being used by
clients to another available NIC. If there are no remaining available NICs, HACMP
initiates a fallover. If only one resource group is affected, then only that resource
group is moved to another node.

What Happens When a Problem Is Fixed?

How the cluster responds to the recovery of a failed component depends
on what has recovered, what the resource group's fallback policy is, and
what resource group dependencies there are
The cluster's configuration is determined by the application's
requirements
Cluster administrator may need to indicate/confirm that the fixed
component is approved for use


Figure 1-36. What Happens When a Problem Is Fixed? AU546.0

Notes:

How HACMP responds to a recovery


When a previously failed component recovers, it must be reintegrated back into the
cluster (reintegration is the process of HACMP recognizing that the component is
available for use again). Some components, like NICs, are automatically reintegrated
when they recover. Other components, like nodes, are not reintegrated until the cluster
administrator explicitly requests the reintegration (by starting the HACMP daemons on
the recovered node).

Standby: With Fallback

(Slide diagram: Halifax is the primary node for application A and Vancouver is
the standby; one node is primary. If Halifax fails, A moves to Vancouver, and
when Halifax returns, A falls back to Halifax. If Vancouver fails there is no
change, and when Vancouver returns it resumes its standby role.)


Figure 1-37. Standby: With Fallback AU546.0

Notes:

Standby
Standby configurations are configurations where one (or more) nodes have no
workload.

Standby node with one node primary


In a two-node cluster, there is a single application (that is, resource group) which must
run as much as possible on a primary or home node and the node with no workload is
the secondary, standby, or backup node. To accomplish this, there would be a startup
policy to indicate which node is primary (or home), a fallover policy to allow fallover if
the primary node fails, and a fallback policy set so that the resource group
automatically falls back to the primary node when the primary node recovers.

Drawbacks
- One node is not used (this is ideal for availability but not from a utilization
perspective).
- A second outage occurs on fallback

Extending this concept to more nodes


This concept can be extended to multiple nodes in two ways:
i. All nodes except one have applications and the one node is a standby node.
This could lead to performance problems if more than one application must be
moved to the standby node.
ii. The resource group could be configured to have multiple layers of backup nodes.
The resource group would usually be configured to run on the highest priority
(most preferred) available node.
Multiple layers of backup nodes are possible; the fallover policy determines which
node. For example: primary -> secondary -> tertiary -> quaternary -> quinary ->
senary -> septenary -> octonary -> nonary -> denary ...
A tidbit for the wordsmiths in the audience: the sequence which starts primary,
secondary, tertiary continues with quaternary, quinary, senary, septenary,
octonary, nonary, and denary. There is no generally accepted word for eleventh
order although duodenary means twelfth order. The word for twentieth order is
vigenary.

Standby: Without Fallback

(Slide diagram: when Halifax fails, application A moves to Vancouver. To
minimize downtime there is no fallback: when Halifax returns, A stays on
Vancouver. If Vancouver later fails, A moves back to Halifax, and when
Vancouver returns it remains the standby.)


Figure 1-38. Standby: Without Fallback AU546.0

Notes:

Minimize downtime
A resource group can be configured to not fall back to the primary node (or any other
higher priority node) when it recovers. This avoids the second outage which results
when the fallback occurs.
The cluster administrator can request that HACMP move the resource group back to
the higher priority node at an appropriate time or it can simply be left on its current node
indefinitely (an approach which calls into question the terms primary and secondary, but
which is actually quite a reasonable approach in many situations).

Extending to more nodes


This can result in multiple applications ending up on the node that stays up the longest.

Takeover: Two Sided (Mutual)

(Slide diagram: Halifax runs application A and Vancouver runs application B, a
very common configuration. If Halifax fails, Vancouver runs both A and B; if
Vancouver fails, Halifax runs both. With fallback, each application returns to
its home node when that node returns.)


Figure 1-39. Takeover: 2 sided (mutual) AU546.0

Notes:

Takeover
Takeover configurations imply that there is workload on all nodes, which may or may not
be under the control of HACMP, but that a node can take over the work of another node
in the cluster.

Mutual takeover
An extension of the primary node with a secondary node configuration is to have two
resource groups, one falling over from right to left and the other from left to right. This
is referred to as mutual takeover.
Mutual takeover configurations are very popular configurations for HACMP since they
support two highly available applications at a cost which is not that much more than
would be required to run the two applications in separate stand-alone configurations.

Additional costs
Note that there are at least a few additional costs:
- Each cluster node probably needs to be somewhat larger than the stand-alone
nodes as they must each be capable of running both applications, possibly in a
slightly degraded mode, should one of the nodes fail.
- There may be additional software licenses required for the applications when they
run on their respective backup nodes (this is a potentially significant cost item which
is often forgotten in the early cluster planning stages).
- HACMP for AIX license fees.
- This is not intended to be an all-inclusive list of additional costs.

Concurrent: Multiple Active Nodes
Halifax, Regina, and Vancouver are all running application A, each using a
separate service IP address.

If nodes fail, the application remains continuously available as long as there
are surviving nodes to run on.
Fixed nodes resume running their copy of the application.

The application must be designed to run simultaneously on multiple nodes.
This has the potential for essentially zero downtime.

Figure 1-40. Concurrent: Multiple Active Nodes AU546.0

Notes:

Concurrent mode
HACMP also supports resource groups in which the application is active on multiple
nodes simultaneously. In such a resource group, all nodes run a copy of the application
and share simultaneous access to the disk. This style of cluster is often referred to as a
concurrent access cluster or concurrent access environment.

Service labels
Since the application is active on multiple nodes, each node has its own service IP
label. The client systems must be configured to randomly (or otherwise) select which
service IP address to communicate with, and be prepared to switch to another service
IP address should the one that they're dealing with stop functioning (presumably,
because the node with the service IP address has failed). It is also possible to configure
an IP multiplexer between the clients and the cluster which redistributes the client

sessions to the cluster nodes, although care must be taken to ensure that the IP
multiplexer does not itself become a single point of failure.

How to choose
Whether this mode of operation can be used for your application is a function of the
application, not of HACMP. At present, Oracle RAC (Real Application Clusters) is the only
commercial application tested to work in a concurrent access environment.

Fundamental HACMP Concepts
Topology: networking components
Resources: the entities which are being made highly available
Resource group: a collection of resources which HACMP
controls as a single unit
A given resource may appear in at most one resource
group
Resource group policies:
Startup policy: which node the resource group is activated on
Fallover policy: determines target when there is a failure
Fallback policy: determines target when fallback occurs
Customization is the process of augmenting HACMP, typically
via implementing scripts which HACMP invokes at appropriate
times


Figure 1-41. Fundamental HACMP Concepts AU546.0

Notes:

Terminology
A clear understanding of the above concepts and terms is important, as they appear
over and over again both in the remainder of the course and throughout the HACMP
documentation, log files, and SMIT screens.

Points to Ponder
Resource groups:
Must be serviced by at least two nodes
Can have different policies
Can be migrated (manually or automatically) to rebalance loads
Clusters:
Must have at least one IP network and one non-IP network
Need not have any shared storage
Can have any combination of supported nodes *
Cluster may be split across two sites
May or may not require replicating data (HACMP/XD).
Applications:
Can be restarted via monitoring
Must be restartable via scripts

* Application performance requirements and other operational issues
almost certainly impose practical constraints on the size and
complexity of a given cluster.


Figure 1-42. Points to Ponder AU546.0

Notes:

Importance of planning
Planning, designing, configuring, testing, and operating a successful HACMP cluster
requires considerable attention to detail. In fact, a careful, methodical approach to all the
phases of the cluster's lifecycle is probably the most important factor in determining the
ultimate success of the cluster.

Methodical approach
A careful, methodical approach takes into account the relevant points above and the
many other issues which are discussed this week or in the HACMP
documentation.

HACMP 5.3 Limits
Cluster limits:
32 nodes in a cluster
64 resource groups per cluster
256 IP addresses known to HACMP (for example, service and
boot IP labels)
RSCT limit:
48 heartbeat rings


Figure 1-43. HACMP 5.3 Limits AU546.0

Notes:

Cluster Limits
HACMP 5.3 supports clusters with up to:
- 64 resource groups
- 256 interfaces
- 32 AIX 5L/HACMP images (pSeries servers, SP nodes, RS/6000 systems, or
LPARs).

RSCT limit
HACMP uses the Topology Services component of RSCT for monitoring networks and
network interfaces. Topology Services organizes all the interfaces in the topology into
different heartbeat rings. The current version of RSCT Topology services has a limit of
48 heartbeat rings, which is usually sufficient to monitor networks and network

interfaces. Roughly speaking, the number of heartbeat rings is (usually) very close to
the number of network adapters on the node with the most adapters.
These limits do not tend to be a major concern in most clusters. Refer to the HACMP
documentation for additional information if you are planning a cluster which might
approach some of these limits.

Things HACMP Does Not Do

- Backup and restoration (for example, TSM)
- Time synchronization
- Application-specific configuration
- System administration tasks unique to each node

Figure 1-44. Things HACMP Does Not Do AU546.0

Notes:

Things HACMP does not do


HACMP does not automate your backups, nor does it keep time in sync between
the cluster nodes. These tasks require further configuration and software: for
example, Tivoli Storage Manager for backup and a time protocol such as xntp for time
synchronization.
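As an illustration of the time synchronization point, a minimal NTP client configuration maintained outside HACMP might look like the sketch below; the server address is a placeholder assumption, not a recommendation:

```
# Minimal /etc/ntp.conf sketch for cluster nodes.  The server address
# is a placeholder for a site time source.
server 192.168.1.1
driftfile /etc/ntp.drift
```

Keeping cluster node clocks in sync makes it far easier to correlate the HACMP log files across nodes during problem determination.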


When Is HACMP Not the Correct Solution?

Zero downtime required
- Maybe a fault-tolerant system is the correct choice.
- 7x24x365: HACMP occasionally needs to be shut down for maintenance.
- Life-critical environments
Security issues
- Too little security: lots of people with the ability to change the environment.
- Too much security: C2 and B1 environments may not allow HACMP to function as
designed.
Unstable environments
- HACMP cannot make an unstable and poorly managed environment stable.
- HACMP tends to reduce the availability of poorly managed systems.


Figure 1-45. When Is HACMP Not the Correct Solution AU546.0

Notes:

Zero downtime
An example of a zero-downtime requirement is an intensive care room. Also, HACMP is not
designed to handle many failures at once.

Security issues
One security issue that is now addressed is the need to eliminate .rhosts files. Also,
better encryption is now possible for inter-node communications, but this may not be
enough for some security environments.

Unstable environments
The prime cause of problems with HACMP is poor design, planning, implementation,
and administration. If you have an unstable environment, with poorly trained

administrators, easy access to the root password, and a lack of change control,
HACMP is not the solution for you.
With HACMP, the only thing more expensive than employing a professional to plan,
design, install, configure, customize, and administer the cluster is employing an
amateur.
Other characteristics of poorly managed systems are:
- Lack of change control
- Failure to treat cluster as single entity
- Too many cooks
- Lack of documented operational procedures

What Do We Plan to Achieve This Week?


Your mission this week is to build a two-node highly available cluster
using two previously separate pSeries systems, each of which has
an application which needs to be made highly available.



Figure 1-46. What Do We Plan to Achieve This Week? AU546.0

Notes:

Goals
During this week you will design, plan, configure, customize, and administer a two-node
high-availability cluster running HACMP 5.3 on an IBM eServer pSeries server.
You will learn how to build a standby environment for one application as well as a mutual
takeover environment for two applications. In the mutual takeover environment each
system will eventually be running its own highly available application, and providing fallover
backup for the other system.

Overview of the Implementation Process
Plan and configure AIX
Eliminate single points of failure
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP networks and
devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks
Resources and Resource groups:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP


Figure 1-47. Overview of the Installation Process AU546.0

Notes:

Implementation process
Look at the AIX environment:
- For storage, plan for the adapters and LVM components required by the application
- For networks, plan for communication interfaces, devices, name resolution via
/etc/hosts, and a service address for the application
- For the application, build start and stop scripts and test them outside the
control of HACMP
Install the HACMP for AIX 5L software and reboot
Configure the topology and resource groups (and resources)
Synchronize, start and test
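The start and stop scripts are ordinary shell scripts. A minimal sketch follows, using an assumed daemon name, an assumed PID file location, and a background sleep standing in for a real application:

```shell
#!/bin/sh
# Sketch of an application start/stop script pair of the kind an HACMP
# application server definition points at.  All names are illustrative.

APP=mydaemon                       # hypothetical application name
PIDFILE=/tmp/${APP}.pid

start_app() {
    # Start the "application" in the background and record its PID so
    # the stop function can terminate it cleanly later.
    sleep 300 &                    # stand-in for the real daemon
    echo $! > "$PIDFILE"
}

stop_app() {
    # Stop cleanly and promptly; HACMP waits for the stop script to
    # finish before moving the resource group elsewhere.
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
    fi
}

case "$1" in
    start) start_app ;;
    stop)  stop_app ;;
esac
```

Testing the pair by hand (start, verify the application, stop) before placing it under HACMP control, as the list above recommends, avoids debugging two layers at once.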

Hints to Get Started


(Slide: a sample cluster diagram for the ABC company user community. Two nodes,
nodea and nodeb, sit on a public network; each has service, boot, and standby IP
labels and addresses planned in advance. Resource group dbrg runs the database
application (cascading, priority nodea then nodeb) and resource group httprg runs
the http application (cascading, priority nodeb then nodea), both with CWOF = yes.
Non-IP tmssa and serial (tty) networks connect the nodes. Each node has a mirrored
rootvg; shared volume group httpvg (RAID1, httplv, mount point /http) and shared
volume group dbvg (RAID5, dblv1 and dblv2, mount points /db and /dbdata) are
mirrored across power supplies and buses, with major numbers, JFS logs, and hdisks
recorded on the planning sheets.)


Figure 1-48. Hints to get Started AU546.0

Notes:

Hints
Create a cluster diagram: a picture is worth 10 thousand words (due to inflation, a
thousand is not enough!).
Use the Online Planning Worksheets. They can be used without installing HACMP and
can be used to generate AND save HACMP configurations.
Try to reduce Single Points of Failure (SPOFs).
Always include a non-IP network.
Mirror across power and buses.
Document a test plan. HACMP also provides test scripts, called auto tests.
Be methodical.
Execute the test plan prior to placing the cluster into production!

Sources of HACMP Information
HACMP manuals come with the product: READ THEM!
Sales Manual: www.ibm.com/common/ssi
/usr/es/sbin/cluster/release_notes
IBM courses:
HACMP Administration I: Planning and Implementation (AU54)
HACMP Administration II: Administration and Problem Determination
(AU57)**
HACMP Administration III: Problem Determination and Recovery (AU59)*
HACMP Administration IV: Master Class (AU56)*
Implementing High Availability on eServer Cluster 1600 (AU58)
IBM Geographic High Availability HAGEO (AU52)
IBM Web Site:
http://www-03.ibm.com/servers/eserver/pseries/ha/
Non-IBM sources (not endorsed by IBM but probably worth a look):
http://www.matilda.com/hacmp/
http://groups.yahoo.com/group/hacmp/

*Not offered after December 2005


** Replaces HACMP Administration II: Maintenance and Migration


Figure 1-49. Sources of HACMP Information AU546.0

Notes:

Manuals on CD
The HACMP 5.3 manuals are:
SC23-4867-05 HACMP for AIX, Version 5.3: Master Glossary
SC23-4864-06 HACMP for AIX, Version 5.3: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX, Version 5.3: Planning and Installation Guide
SC23-4862-06 HACMP for AIX, Version 5.3: Administration Guide
SC23-5177-00 HACMP for AIX, Version 5.3: Troubleshooting Guide
SC23-4865-06 HACMP for AIX, Version 5.3: Programming Client Applications

Additional Web sites for storage


http://www.storage.ibm.com
http://www-1.ibm.com/servers/storage/support/software/sdd.html
ftp://ftp.software.ibm.com/storage/fastt/fastt500/HACMP_config_info.pdf

Checkpoint
1. True or False?
Resource Groups may be moved from node to node
2. True or False?
HACMP XD is a complete solution for building
geographically distributed clusters.
3. Which of the following capabilities does HACMP not
provide (select all that apply)?
a. Time synchronization.
b. Automatic recovery from node and network adapter failure.
c. System Administration tasks unique to each node.
d. Backup and restoration.
e. Fallover of just a single resource group.
4. True or False?
Resource Groups may be moved from node to node.
5. True or False?
All nodes in a resource group must have equivalent
performance characteristics.


Figure 1-50. Checkpoint AU546.0

Notes:

Unit Summary

After completing this unit, you should be able to:


Explain what high availability is and why it is needed.
Outline the various options for implementing high
availability.
List the key considerations when designing and
implementing a high availability cluster.
Outline the features and benefits of HACMP for AIX 5L.
Describe the components of an HACMP for AIX 5L cluster.
Explain how HACMP for AIX 5L operates in typical cases.


Figure 1-51. Unit Summary AU546.0

Notes:

Unit 2. Shared Storage Considerations for High Availability

What This Unit Is About


This unit discusses the issue of shared storage in a high-availability
environment, with a particular emphasis, of course, on shared storage
in an HACMP context.

What You Should Be Able to Do


After completing this unit, you should be able to:
Discuss the shared storage concepts that apply within an HACMP
cluster
Describe the capabilities of various disk technologies as they
relate to HACMP clusters
Describe the shared storage related facilities of AIX and how to use
them in an HACMP cluster

How You Will Check Your Progress


Checkpoint questions
Pencil and paper planning exercises
Machine exercises

References
SC23-4867-05 HACMP for AIX: HACMP Master Glossary
SC23-4864-06 HACMP for AIX: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX: Planning and Installation Guide
SC23-4862-06 HACMP for AIX: Administration Guide
SC23-5177-00 HACMP for AIX: Troubleshooting Guide
http://www-03.ibm.com/servers/storage
http://www.redbooks.ibm.com


Unit Objectives
After completing this unit, you should be able to:
Discuss the shared storage concepts that apply within an
HACMP cluster
Describe the capabilities of various disk technologies as
they relate to HACMP clusters
Describe the shared storage related facilities of AIX and
how to use them in an HACMP cluster


Figure 2-1. Unit Objectives AU546.0

Notes:

2.1 Fundamental Shared Storage Concepts


Topic 1 Objectives:
Fundamental Shared Storage Concepts
After completing this topic, you should be able to:
Explain the distinction between shared storage and private
storage
Describe how shared storage is used within an HACMP
cluster
Discuss the importance of controlled access to an HACMP
cluster's shared storage
Describe how access to shared storage is controlled in an
HACMP cluster

Copyright IBM Corporation 2005

Figure 2-2. Topic 1 Objectives: Fundamental Shared Storage Concepts AU546.0

Notes:


What Is Shared Storage?

(Diagram: Node 1 and Node 2, each with its own private rootvg, are both attached to shared SCSI disks, SSA disks, and ESS storage.)

Copyright IBM Corporation 2005

Figure 2-3. What Is Shared Storage? AU546.0

Notes:

Application storage requirements


A computer application always requires at least some disk storage space.
For example, even the most minimal application requires disk space to store the
application's binaries. Most applications also require storage space for configuration
files and for whatever application data the application is responsible for.
When such an application is placed into a high-availability cluster, any of the
application's data that changes must be stored in a location which is accessible to
whichever node the application is currently running on. Storage which is accessible
to multiple nodes is called shared storage.


Non-concurrent access
In a non-concurrent access environment, the disks are owned by only one node at a
time. If the owner node fails, the cluster node with the next highest priority in the
resource group node list acquires ownership of the shared disks as part of fallover
processing. This ensures that the data stored on the disks remains accessible to client
applications.
In a non-concurrent access environment, a highly available application runs on only one
node for possibly quite extended periods of time. Only one disk connection is active at a
time and the shared storage is not shared in any real-time sense. Rather, it is storage
which can be associated automatically (without human intervention) with whichever
node the application is currently running on. Non-concurrent access mode is also
sometimes called serial access mode, since only one node has access to the shared
storage at a time.
We will focus on non-concurrent shared storage in this unit.

Concurrent access
In concurrent access environments, the shared disks are actively connected to more
than one node simultaneously. Therefore, when a node fails, disk takeover is not
required. In this case, access to the shared storage must be controlled by some locking
mechanism in the application.

Shared storage physical connection


In order to associate the storage with whichever node is running the application, the
storage technology must support multi-host attachment, and the actual configuration
must physically connect the storage to the relevant nodes. This ability to connect a
storage subsystem to more than one computer at a time is commonly called twin tailing.
It is supported by a variety of storage technologies, including SCSI, SSA, and Fibre
Channel, as we'll see shortly.

Shared resource example


P.S. The graphic in the lower right-hand corner is a shared telephone.


What Is Private Storage?

(Diagram: the same two-node configuration; the rootvg disks inside each node are the private storage, while the SCSI, SSA, and ESS disks are shared.)

Copyright IBM Corporation 2005

Figure 2-4. What Is Private Storage? AU546.0

Notes:

Private storage
Private storage is, of course, accessible to only a single cluster node. It might be
physically located within each system's enclosure, or externally in a rack, or even in an
ESS system. The key point is that private storage is not physically accessible from more
than one cluster node.

Private resource example


P.S. The graphic in the lower right-hand corner is a private telephone.


Access to Shared Data Must Be Controlled


Consider:
Data is placed in shared storage to facilitate
access to the data from whichever node the
application is running on
The application is typically running on only one
node at a time*
Updating the shared data from another node (that
is not the node that the application is running on)
could result in data corruption
Viewing the shared data from another node could
yield an inconsistent view of the data
Therefore, only the node actually running the
application should be able to access the data.

Copyright IBM Corporation 2005

Figure 2-5. Access to Shared Data Must be Controlled AU546.0

Notes:

Why
The shared storage is physically connected to each node that the application might run
on. In a non-concurrent access environment, the application actually runs on only one
node at a time and modification or even access to the data from any other node during
this time could be catastrophic (the data could be corrupted in ways which take days or
even weeks to notice).

Issues for concurrent access


Some clusters have instances of the application active on more than one node at a time
(for example, parallel databases). Such clusters require simultaneous access to the
shared disks and must carefully control, or at least coordinate, their access to the
shared data. For concurrent access applications, this mechanism must be provided by
the application itself.


Who Owns the Storage?

(Diagram: Node 1 and Node 2, each with its own ODM copy, are both attached to shared disks A, B, C, and D.)

Normally varyonvg/varyoffvg is used to control ownership


varyonvg/varyoffvg uses either:
Reserve/release-based shared storage protection
Used with normal volume groups or classic concurrent
volume groups
RSCT-based shared storage protection
Used with enhanced concurrent volume groups

Copyright IBM Corporation 2005

Figure 2-6. Who Owns the Storage? AU546.0

Notes:

Introduction
There are two mechanisms to control ownership of shared storage. Although these two
mechanisms do not seem to have formal names, in this unit, we refer to them as the:
- Reserve/release-based shared storage protection mechanism and the
- RSCT-based shared storage protection mechanism
We use the term protection rather than access control both because it is a bit shorter
and because it reminds us that the purpose of the mechanism is to protect the shared
storage.

Reserve/release-based shared storage protection


Prior to HACMP 5.1, the AIX logical volume manager invoked disk-based
reserve/release as the shared storage protection mechanism, which was appropriate
for shared storage which was assigned to a single node for extended periods of time.


RSCT-based shared storage protection


AIX 5L V5.1 introduced a new mechanism to be used with enhanced concurrent volume
groups. This mechanism uses an AIX component called Reliable Scalable Cluster
Technology (RSCT). We will be discussing RSCT in greater detail later in the week.
HACMP 5.x uses this mechanism when enhanced concurrent volume groups are in use
(more on enhanced concurrent volume groups later in this unit).

Normal, concurrent and enhanced concurrent volume groups


History
Concurrent mode volume groups were created to allow multiple nodes to access the
same logical volumes concurrently.
The original concurrent mode volume groups are only supported on Serial DASD and
SSA disks in conjunction with the 32-bit kernel.
Beginning with AIX 5L Version 5.1, the enhanced concurrent mode volume group was
introduced to extend the concurrent mode support to all other disk types and to the
64-bit kernel. Enhanced concurrent volume groups can also be used in a
non-concurrent environment to provide RSCT-based shared storage protection.
Concurrent access environment
If you need concurrent access, you must use concurrent volume groups. You should
convert classical concurrent volume groups to enhanced concurrent mode whenever
possible to make use of its flexibility. Also, support for classical concurrent volume
groups is being withdrawn (see below).
Non-concurrent access environment
While both normal and classical concurrent volume groups are supported for
reserve/release-based shared storage protection, usually you would use normal volume
groups.
However, as we shall see, there are a number of advantages to using the RSCT-based
shared storage protection, which requires the use of enhanced concurrent volume
groups.
Support for the classical concurrent volume groups is being removed
- AIX 5L V5.1 introduced enhanced concurrent volume groups, but still allowed you to
create and use the classical concurrent volume groups. When concurrent volume
groups are created on AIX 5L V5.1 and later, they are created as enhanced
concurrent mode volume groups by default.
- AIX 5L V5.2 does not allow you to create classical concurrent volume groups, but
you can still use them in AIX 5L V5.2.
- AIX 5L V5.3 removes the support for classical concurrent volume groups entirely;
only enhanced concurrent volume groups are supported.


Reserve/Release-based Protection

(Diagram: Node 2 has varied on, and therefore reserved, disks A and B; Node 1 has varied on and reserved disks C and D.)

Reserve/release-based shared storage protection relies on
hardware support for disk reservation
Disks are physically reserved to a node when varied on
Disks are released when varied off
LVM is unable to varyon a volume group whose disks are reserved
to another node
Not all shared storage systems support disk reservation
Copyright IBM Corporation 2005

Figure 2-7. Reserve/Release-based Protection AU546.0

Notes:

Disk reservation
Reserve/release-based shared storage protection relies on the disk technology
supporting a mechanism called disk reservation. Disks which support this mechanism
can be, in effect, told to refuse to accept almost all commands from any node other than
the one which issued the reservation. AIX's LVM automatically issues a reservation
request for each disk in a volume group when the volume group is varied online by the
varyonvg command. The varyonvg command fails for any disk which is currently
reserved by another node. If it fails for enough disks (which it almost certainly does,
since if one disk is reserved by another node, the others presumably are as well), then
the varyon of the volume group fails.

LVM change management: keeping the ODM and VGDA in sync


When multiple nodes are sharing a volume group using reserve/release-based storage
protection, the volume group is imported, but not varied on for the inactive nodes. There


must be some mechanism to ensure that any meta-data changes made to the volume
group on the active node are propagated to the ODM on the inactive nodes in the
cluster. For example, if you change the size of a logical volume on the active node, the
other nodes' ODMs still list the logical volume at its original size. If an inactive node
then became active and varied on the volume group without updating its ODM, the
information in that node's ODM and in the VGDA on the disks would disagree, causing
problems.
When using reserve/release-based shared storage protection, HACMP provides a
mechanism called lazy update to update the ODM on the takeover node at the time of
fallover.

Lazy update
Lazy update works by using the volume group timestamp in the ODM. When HACMP
needs to varyon a volume group, it compares the ODM timestamp to the timestamp in
the VGDA. If the timestamps disagree, lazy update does an exportvg/importvg to
recreate the ODM on the node. If the timestamps agree, no extra steps are required.
It is, of course, possible to update the ODM on inactive nodes when the change to the
meta-data is made. In this way, extra time at fallover is avoided. The ODM can be
updated manually or you can use Cluster Single Point of Control (C-SPOC) which can
automate this task. Lazy update and the various options for updating ODM information
on inactive nodes are discussed in detail in a later unit in this course.
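The manual alternative to lazy update can be sketched as follows. This echo-only sketch (so it runs anywhere) shows the exportvg/importvg sequence you might run on an inactive node after changing volume group meta-data on the active node; the volume group and hdisk names are examples only.

```shell
# Echo-only sketch of refreshing an inactive node's ODM by hand, which
# is what lazy update would otherwise do at fallover time.
refresh_inactive_odm() {
    vg=$1; pv=$2
    echo "exportvg $vg"         # discard the stale ODM definition
    echo "importvg -y $vg $pv"  # rebuild the ODM from the on-disk VGDA
    echo "varyoffvg $vg"        # importvg leaves the VG online; vary it back off
}
refresh_inactive_odm sharedvg hdisk3
```

In practice, C-SPOC performs this propagation for you and is the preferred route.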


Reserve/Release
Voluntary Disk Takeover
(Diagram, three stages:
1. Initially, Node 2 has httpvg varied on and reserved, and Node 1 has dbvg varied on and reserved.
2. Node 2 runs varyoffvg httpvg, releasing its reservations on httpvg's disks.
3. Node 1 runs varyonvg httpvg and now holds both volume groups.)

Copyright IBM Corporation 2005

Figure 2-8. Reserve/Release Voluntary Disk Takeover AU546.0

Notes:

Voluntary takeover
With reserve/release-based shared storage protection, HACMP passes volume groups
between nodes by issuing a varyoffvg command on one node and a varyonvg
command on the other node. The coordination of these commands (ensuring that the
varyoffvg is performed before the varyonvg) is the responsibility of HACMP.
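The ordering HACMP enforces can be sketched as a dry run. This echo-only sketch prints the two commands in the order they must complete; the volume group and node names are examples only.

```shell
# Echo-only sketch of a voluntary reserve/release takeover: the
# varyoffvg on the releasing node must finish before the varyonvg
# on the acquiring node begins.
move_vg() {
    vg=$1; from=$2; to=$3
    echo "$from: varyoffvg $vg   # releases the disk reservations"
    echo "$to: varyonvg $vg      # re-reserves the disks to the new owner"
}
move_vg httpvg node2 node1
```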


Reserve/Release
Involuntary Disk Takeover

(Diagram, two stages:
1. Node 2 has the shared volume group varied on, with its disks reserved, when it fails.
2. Node 1 breaks the reservations and varies the volume group on, reserving the disks to itself.)

Copyright IBM Corporation 2005

Figure 2-9. Reserve/Release Involuntary Disk Takeover AU546.0

Notes:

Involuntary disk takeover


The right node has failed with the shared disks still reserved to it. When HACMP
encounters a reserved disk in this context, it uses a special utility program to break the
disk reservation. It then varies on the volume group, which causes the disks to be
reserved to the takeover node.

Implications
Note that if the right node had not really failed, it would lose access to the shared
disks (rather abruptly) when the left node varied them on. Although this may seem
rather rude and generally unpleasant, it is far preferable to the alternative: both
nodes accessing and updating the data on the disks, each believing that it is the only
node doing so. An involuntary takeover isn't possible unless all paths used by HACMP
to communicate between the two nodes have been severed.


How do we know the other node has failed?


Involuntary disk takeover only takes place when a node believes that the active node
has failed. HACMP uses communication between the nodes to determine whether each
node is still active. Consequently, it is important to ensure that there is sufficient
redundancy in these communication paths that loss of all communication with another
node really does imply that the other node has failed.


Reserve/Release Ghost Disks

(Diagram: Node 2 has the shared disks varied on and reserved; on Node 1, the configuration manager cannot read the reserved disks and has configured hdisk0 through hdisk9, some of which are ghost disks.)

Copyright IBM Corporation 2005

Figure 2-10. Reserve/Release Ghost Disks AU546.0

Notes:

What is a ghost disk


During the AIX boot sequence, the configuration manager (cfgmgr) accesses all the
shared disks (and all other disks and devices). Each time it accesses a physical volume
at a particular hardware address, it tries to determine if the physical volume is the same
actual physical volume that was last seen at the particular hardware address. With
SCSI disks, it does this by attempting to read the physical volume's ID (PVID) from the
disk. This operation fails if the disk is currently reserved to another node. Consequently,
the configuration manager is not sure if the physical volume is the one it expects or is a
different physical volume. In order to be safe, it assumes that it is a different physical
volume and assigns it a temporary hdisk name. This temporary hdisk name is called a
ghost disk.
When the volume group is eventually varied online, the question of whether each
physical volume is the expected physical volume is resolved. If it is, the ghost disk is
deleted; if it isn't, the ghost disk remains. Whether or not the online of the volume


group ultimately succeeds depends on whether or not the LVM is able to find enough of
the volume group's physical volumes (and other factors, like whether or not quorum
checking is enabled on the volume group).

Ghost disk issues


Time
Dealing with ghost disks takes time with the result that a volume group with ghost disks
takes longer to varyon than one without. For example, in one customer cluster
(configured a few years ago using SCSI disks on HACMP 4.4.1 and AIX 4.3.3) the
ghost disks were found to add about twenty seconds per ghost disk to the time required
to varyon the volume group. Since the volume group had about twenty physical
volumes, this resulted in a significant delay during fallovers.
Don't delete ghost disks
It is very important that if ghost disks occur, they be left in the AIX device configuration,
as their presence is necessary for the correct operation of the LVM when the volume
group is ultimately varied online.

Disk technology differences


Note that not all disk technologies result in ghost disks. For example, the SSA disk
technology provides a unique physical serial number for each SSA disk module which
can be read regardless of whether the disk is reserved to another node. Also, recent
versions of AIX have improved mechanisms for identifying disks at AIX boot time.
Consequently, ghost disks are less likely to occur in any case.
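As a first-pass check, disks for which AIX holds no PVID show "none" in the PVID column of lspv output. The sketch below runs lspv only if it exists, so the script is safe on non-AIX systems; note that a brand-new disk that has simply never been assigned a PVID also shows "none", so this filter over-reports and only identifies candidates.

```shell
# List hdisks whose PVID reads as "none" -- ghost disk candidates.
# Guarded so the script still runs (and just prints a note) off AIX.
if command -v lspv >/dev/null 2>&1; then
    lspv | awk '$2 == "none" { print $1 }'
else
    echo "lspv not found (not AIX); nothing to check"
fi
```

Remember: candidates found this way are for information only; do not delete ghost disks while the cluster is operating.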


RSCT-based Shared Storage Protection

(Diagram: Node 2 holds the active varyon and Node 1 the passive varyon of the volume group on disks A and B; the roles are reversed for the volume group on disks C and D.)

Requires an enhanced concurrent volume group
Only used by HACMP
Uses gsclvmd
Independent of disk type
Copyright IBM Corporation 2005

Figure 2-11. RSCT-based Shared Storage Protection AU546.0

Notes:

Introduction
HACMP 5.x supports the new style of shared storage protection, which relies on AIX's
RSCT component to coordinate the ownership of shared storage when using
enhanced concurrent volume groups in non-concurrent mode.

How it works
HACMP takes advantage of new parameters on the varyonvg and varyoffvg
commands related to a pair of new concepts called active and passive volume group
varyon states. A volume group being managed by RSCT-based shared storage
protection is varied online in the passive state on all cluster nodes which might need
access to the volume group's data. The volume group is varied online in the active state
by the particular cluster node which needs access to the volume group's data now (in
other words, the node which is running the application has the volume group varied on


in the active state). The LVM on each node prohibits access or updates to the volume
group's data unless the node has the volume group varied on in the active state.
It is the responsibility of the RSCT component to ensure that each volume group is
varied online in the active state on no more than one node. Since this mechanism does
not rely on any disk reservation mechanism, it is compatible with all disk technologies
supported by HACMP.
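One way to see which varyon state a given node holds is lsvg. On AIX 5L-era systems, lsvg against a passively varied-on enhanced concurrent volume group reports a restricted "VG PERMISSION" (passive-only), while the node with the active varyon reports read/write; the exact field wording should be verified on your AIX level. The sketch is echo-only, so it runs anywhere, and httpvg is an example name.

```shell
# Echo-only sketch: commands you might use to inspect the varyon state
# of an enhanced concurrent VG on this node. Field names are from
# AIX 5L-era lsvg output; verify them locally.
inspect_varyon_state() {
    vg=$1
    echo "lsvg $vg | grep -i 'VG PERMISSION'   # passive-only vs. read/write"
    echo "lsvg $vg | grep -i 'Concurrent'      # Enhanced-Capable expected"
}
inspect_varyon_state httpvg
```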

Disk reservation not used


Even disks which support a disk reservation mechanism are not reserved when
RSCT-based shared storage protection is in effect.

Fast disk takeover


Taking over a volume group using RSCT-based shared storage protection is
considerably faster than using reserve/release-based shared storage protection.
Consequently, this style of disk takeover is called fast disk takeover.
During fast disk takeover, HACMP skips the extra processing needed to break the disk
reserves, or update and synchronize the LVM information by running lazy update. As a
result, the disk takeover mechanism used for enhanced concurrent volume groups is
faster than disk takeover used for standard volume groups.

LVM change management: keeping the ODM and VGDA in sync


In HACMP 5.1, lazy update is still required if volume group changes have occurred on
the active node.
Beginning in HACMP 5.2, when using enhanced concurrent volume groups
(RSCT-based shared storage protection), the ODMs on the inactive nodes are updated
immediately with any VG changes and the new timestamp. At fallover time, since the
timestamp in the ODM on the takeover node agrees with the timestamp in the VGDA,
lazy update does not run. This further improves the speed of fast disk takeover.
Note: All nodes in the cluster must be available before making any LVM changes. This
ensures that all nodes have an accurate view of the state of the volume group. This is
an issue if you are using the forced varyon feature, which will be discussed later in this
unit.


RSCT-based Voluntary Fast Disk Takeover

(Diagram, three stages:
1. A decision is made to move httpvg from the right node to the left. Node 2 holds the active varyon of httpvg; Node 1 holds a passive varyon of it and the active varyon of dbvg.
2. Node 2 releases its active varyon of httpvg (varyoffvg), leaving both nodes with passive varyons of httpvg.
3. Node 1 obtains the active varyon of httpvg (varyonvg); its active varyon of dbvg is unaffected throughout.)

Copyright IBM Corporation 2005

Figure 2-12. RSCT-based Voluntary Fast Disk Takeover AU546.0

Notes:

Voluntary fast disk takeover


The fast disk takeover mechanism handles a voluntary VG takeover by first releasing
the active varyon state of the volume group on the node which is giving up the volume
group. It then sets the active varyon state on the node which is taking over the volume
group. The coordination of these operations is managed by HACMP 5.x and AIX 5L
RSCT.


RSCT-based
Involuntary Fast Disk Takeover
(Diagram, three stages:
1. Node 2, holding the active varyon of httpvg, fails.
2. Node 1 realizes that Node 2 has failed; active and passive varyon states are concepts which don't apply to failed nodes.
3. Node 1 obtains the active-mode varyon of httpvg; its active varyon of dbvg is unaffected.)

Copyright IBM Corporation 2005

Figure 2-13. RSCT-based Involuntary Fast Disk Takeover AU546.0

Notes:

Involuntary fast disk takeover


A node has failed. Once the remaining node (or nodes) realize that the node has failed,
the takeover node sets the volume group's varyon state to active.
There is no need to break disk reservations, as no disk reservations are in place. The
only action required is that the takeover node ask its local LVM to mark the volume
group's varyon state as active.
If Topology Services fails (that is, there is no communication between the nodes), then
Group Services fails and it is not possible to activate the volume group. This makes the
mechanism very safe to use. It is recommended, however, to attach the disks in an
enhanced concurrent volume group only to systems running HACMP 5.x.


Enabling RSCT-based Fast Disk Takeover


Fast disk takeover is enabled automatically for a volume
group if all of the following are true:
The cluster is running AIX 5L on all nodes
HACMP 5.x is installed on all nodes
The volume group is an enhanced concurrent mode
volume group

Copyright IBM Corporation 2005

Figure 2-14. Enabling RSCT-based Fast Disk Takeover AU546.0

Notes:

Requirements
Fast disk takeover is only used if all of the requirements listed above have been met.
Since RSCT is independent of disk technology, all disks supported by HACMP can be
used in an enhanced concurrent mode volume group.
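Each requirement can be checked per node from the command line. The sketch below is echo-only so it runs anywhere; the fileset and field names shown are typical of HACMP 5.x / AIX 5L and should be verified for your release, and sharedvg is an example name.

```shell
# Echo-only sketch of per-node checks for the three fast disk takeover
# prerequisites: AIX 5L, HACMP 5.x, and an enhanced concurrent VG.
check_fast_takeover_prereqs() {
    vg=$1
    echo "oslevel                        # AIX 5L level on every node"
    echo "lslpp -L 'cluster.es*'        # HACMP 5.x filesets on every node"
    echo "lsvg $vg | grep -i concurrent # enhanced concurrent-capable VG"
}
check_fast_takeover_prereqs sharedvg
```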


Fast Disk Takeover Additional Details
Fast disk takeover is faster than reserve/release-based
disk takeover
Ghost disks do not occur when fast disk takeover is
enabled
Since fast disk takeover is implemented by RSCT, it is
independent of the disk technology supported by
HACMP
The gsclvmd subsystem which uses group services
provides the protection
The distinction between active varyon and passive
varyon is private to each node (that is, it isn't
recorded anywhere on the shared disks)

Copyright IBM Corporation 2005

Figure 2-15. Fast Disk Takeover Additional Details AU546.0

Notes:

Introduction
As with any technology, the implications of using fast disk takeover must be properly
understood if the full benefits are to be experienced.
Note: If RSCT is not running, it is possible (although it takes some work) to manually
vary on an enhanced concurrent volume group in active mode while it is varied on in
active mode on another node. While this is possible, it is an unlikely occurrence, and
the small risk can easily be avoided by never varying on your shared volume groups
manually.


Let's Review Topic 1


1. Which of the following statements is true (select all that apply)?
a. Static application data should always reside on private storage.
b. Dynamic application data should always reside on shared
storage.
c. Shared storage must always be simultaneously accessible to all
cluster nodes.
d. Regardless of the size of the cluster, all shared storage must
always be accessible, subject to access control, by all cluster
nodes.
2. True or False?
Using RSCT-based shared disk protection results in slower
fallovers.
3. True or False?
Ghost disks must be checked for and eliminated immediately after
every cluster fallover or fallback.
4. True or False?
Lazy Update is no longer needed with enhanced concurrent
volume groups.
Copyright IBM Corporation 2005

Figure 2-16. Let's Review Topic 1 AU546.0

Notes:


2.2 Shared Disk Technology


Topic 2 Objectives:
Shared Disk Technology
After completing this topic, you should be able to:
Discuss the capabilities of various disk technologies in an
HACMP environment
Discuss the installation considerations of a selected disk
technology when combined with HACMP
Explain the issue of PVID consistency within an HACMP
cluster

Copyright IBM Corporation 2005

Figure 2-17. Topic 2 Objectives: Shared Disk Technology AU546.0

Notes:


SCSI Technology and HACMP
HACMP-related issues with SCSI disk architecture:
SCSI buses require termination at each end
In HACMP environments the terminators have to be external to
ensure that the bus is still terminated properly after a failed system
unit has been removed
SCSI buses are ID-based. All devices must have a unique ID
number
The default for all SCSI adapters at initial power-on is ID 7
SCSI adapters on shared SCSI buses must be configured to
not use ID 7 in order to ensure that there isn't an ID conflict
when some other SCSI adapter powers on
(Diagram: two host systems with SCSI controllers at IDs 5 and 6 share an externally terminated SCSI bus, maximum 25 m, carrying four disk drive modules at SCSI IDs 1 through 4.)

Copyright IBM Corporation 2005

Figure 2-18. SCSI Technology and HACMP AU546.0

Notes:

SCSI termination
In HACMP environments, SCSI terminators must be external so that the bus is still
terminated after a failed system unit has been removed.

Avoid using SCSI ID 7


In an HACMP environment, it is a very good practice to avoid using SCSI ID 7, because
a node booted in service or diagnostic mode has, by default, its SCSI controllers set to
ID 7. If you are troubleshooting and boot the failed node into service or diagnostic mode
while the surviving node is using SCSI ID 7, you will have a SCSI ID conflict which
could be detrimental to your data (in other words, bad things happen).
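Moving a shared-bus adapter off ID 7 is done with chdev. The sketch below is echo-only so it runs anywhere; the attribute name ("id" here) varies by adapter type, so check the lsattr output for your adapter first, and scsi0 / ID 5 are example values.

```shell
# Echo-only sketch of changing a SCSI adapter's bus ID away from 7.
# Attribute names vary by adapter; verify with lsattr before use.
change_scsi_id() {
    adapter=$1; newid=$2
    echo "lsattr -El $adapter                # locate the adapter's SCSI ID attribute"
    echo "chdev -l $adapter -a id=$newid -P  # -P defers the change to the next boot"
    echo "shutdown -Fr                       # change takes effect after reboot"
}
change_scsi_id scsi0 5
```

The bus must be quiesced for the change, which is why the deferred (-P) form followed by a reboot is the usual approach.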

Devices on a shared bus


Do not connect other SCSI devices, such as CD-ROMs, to a shared SCSI bus.


Power supply redundancy


If you mirror your logical volumes across two or more physical disks, the disks should
not be connected to the same power supply; otherwise, loss of a single power supply
can prevent access to all copies. As a result, plan on using multiple disk subsystem
drawers or desk-side units to avoid dependence on a single power supply.


Uempty
SCSI Continued
Different SCSI bus types have different maximum cable lengths for
the buses (maximum is 25 meters for differential SCSI)
Four node limit
Certain SCSI subsystems support hot swappable drives.
SCSI cables are not hot pluggable (power must be turned off on all
devices attached to the SCSI bus before a SCSI cable connection
is made or severed).
Clusters using shared SCSI disks often experience ghost disks.
For additional information see:
IBM course AU20,
AIX 5L System Administration IV: Storage Management.

http://www.ibm.com/redbooks


Figure 2-19. SCSI Continued AU546.0

Notes:

Cable length and number of drives


You can connect up to sixteen devices to a SCSI bus. Each SCSI adapter, and each
disk, is considered a separate device with its own SCSI ID. The maximum bus length
for most SCSI devices provides enough length for most cluster configurations to
accommodate the full sixteen-device connections allowed by the SCSI standard.

Hot swapping SCSI devices


The hot swappability of SCSI devices is generally poorly understood. The rules are
actually quite simple:
- If the documentation which comes with the SCSI device does not explicitly state that
a device is hot swappable, then assume that it is not hot swappable.
- In general, the only hot swappable SCSI devices are certain SCSI disk drive
modules.


- SCSI cable connection points are never hot swappable. Disconnecting or
connecting a SCSI cable while any device on the bus is powered on is a dangerous
activity.

Disconnecting or connecting SCSI cables


Many people have disconnected or connected SCSI cables without causing any
problems. This is, at best, proof that the person has been lucky. The possible
consequences of disconnecting or connecting a SCSI cable when any device on the
bus is powered on can include:
- I/O errors which are seen by the operating system and potentially reflected back to
the application
- I/O errors which result in the operating system crashing or refusing to continue to
use the SCSI bus in question (typically, until the operating system is rebooted)
- Data transfer errors which are not seen by the operating system but which result in
data corruption on the disk drive
- Total failure of devices and controllers on the bus (this failure is usually temporary
and can be fixed by replacing a fuse but permanent damage is a real possibility).
There are devices which can be inserted into the middle of SCSI buses which claim to
allow the bus to be severed at the point of insertion. Unless you can get IBM to
specifically state that they support such a device, then you should not use it.


SSA Technology and HACMP
SSA uses a loop technology which offers multiple data paths to disk
Number and type of adapter restrictions on each loop. For example:
SSA loops can support eight adapters per loop
(Maximum of eight HACMP nodes sharing SSA disks)
Adapters used in RAID mode are limited to two per loop
Shared SSA disks never appear as ghost disks
For additional information see:
IBM course AU20,
AIX 5L System Administration IV: Storage Management
Redbook, Understanding SSA Subsystems in Your Environment,
SG24-5750-00.
http://www.storage.ibm.com/hardsoft/products/ssa/docs/index.html


Figure 2-20. SSA Technology and HACMP AU546.0

Notes:

Introduction
Serial Storage Architecture (SSA) enables you to minimize single points of failure and
achieve high availability in an HACMP environment.
You can use IBM 7133 and 7131-405 SSA disk subsystems as shared external disk
storage devices to provide concurrent access in an HACMP cluster configuration.
SSA is hot pluggable. Consequently, if you include SSA disks in a volume group using
LVM mirroring, you can replace a failed disk drive without powering off the entire
system.
SSA disk subsystems are less prone to power supply problems because they have
redundant power supplies.


SSA information
Manuals
The following SSA manuals are available:
SA33-3285 Advanced Serial RAID Plus adapter; Users Guide and
Maintenance Information
SA33-3287 Advanced Serial RAID Plus adapter; Installation Guide
SA33-3286 Advanced Serial RAID Plus adapter; Technical
Reference
SA33-3278 7133 Models D40 and T40; Serial Disk Systems;
Operator Guide
SA33-3281 7133 Models D40 and T40; Serial Disk Systems;
Hardware Technical Information
GY33-0192 7133 Models D40 and T40; Serial Disk Systems;
Service Guide
GA33-3279 7133 Models D40; Installation Guide
GA33-3280 7133 Models T40; Installation Guide
AIX SSA subsystem
There is online information available on the AIX device drivers and subsystem interface
within AIX. Refer to the AIX 5L information pages and search for SSA. This brings up
sections describing the SSA device drivers, their IOCTL interfaces and how they are
used within an AIX system.
The IBM Hursley SSA Online Customer Support page is at:
http://www.storage.ibm.com/hardsoft/products/ssa
This contains adapter and disk drive microcode download packages as well as
technical data and product descriptions for IBM SSA products developed at Hursley.
The top-level IBM disk storage systems page is at:
http://www.storage.ibm.com/disk/index.html
This contains links to the complete range of IBM disk storage systems.
IBM training
AIX System Administration IV: Storage Management - AU20 / Q1320
For details on this class, or other IBM training classes, go to:
http://www.ibm.com
select Services & Solutions
select Training


IBM Redbooks


Another useful source of information is the IBM Redbooks:
http://www.redbooks.ibm.com


Configuring SSA for Maximum Availability


[Figure: a left node and a right node, each with two SSA adapters; adapter ports A1/A2 and B1/B2 are cabled into loops SSA-1 and SSA-2 through the sixteen disk positions of a 7133 drawer.]

Figure 2-21. Configuring SSA for Maximum Availability AU546.0

Notes:

Example
There are many scenarios for cabling SSA disks. The example shown above is just one
possibility, and it is actually only half of the solution. You would need a second 7133
drawer and you would use the B-loop for it (this avoids the 7133 itself being an unlikely
but real single point of failure). Once the 7133s are configured, you would then mirror
the data across the 7133s.

SSA configuration guidelines


There are several rules which should be considered when planning the SSA
environment.
- Never allow the same loop to use both the A and B ports of the same SSA adapter.
- Always connect the bypass pairs to the same node. This allows the bypass
functionality to work should a node be powered down.


- Consider the impact of blank disk slots


It is not supported to have more than three blanks in a row for a single SSA loop.
You need to bear this in mind when considering the potential impact of node failures.
For example, if disk slots 11, 12, 13 and 14 are disk blanks, then the loop above is
intact as it consists of two sequences of two disk blanks separated by the SSA
adapter. Should the left node fail however, the 12-13 pair of SSA ports would enter
bypass mode, and the four blanks in slots 11, 12, 13 and 14 would now constitute
four blanks in a row and would cause a break in the loop.

More information
Refer to the SSA documentation or attend the AU20 course for more information.


SSA Adapters
The capabilities of SSA adapters have improved over time:
Only 6215, 6219, 6225 and 6230 adapters support Target Mode SSA and RAID5
Only the 6230 adapter with 6235 Fast Write Cache Option feature code supports
enabling the write cache with HACMP
Compatible adapters:
6214 + 6216 or 6217 + 6218 or 6219 + 6215 + 6225 + 6230
For more information and microcode updates:
http://www.storage.ibm.com/hardsoft/products/ssa/
Features and functionality of otherwise identical adapters and drives can vary
depending upon the level of microcode installed on the devices so be careful!
FC          Adapter Name                               Adapters/loop  RAID5       TMSSA
                                                       (RAID/JBOD)    Cache
6214        SSA 4-port adapter (MCA)                   -/2            N           N
6216        Enhanced SSA 4-port adapter (MCA)          -/8            N           N
6217        SSA 4-port RAID adapter (MCA)              1/1            N           N
6218        SSA 4-port adapter (PCI)                   1/1            N           N
6219        SSA Multi-Initiator/RAID EL Adapter (MCA)  2/8            Not for HA  Y
6215        SSA Multi-Initiator/RAID EL Adapter (PCI)  2/8            Not for HA  Y
6225        Advanced SerialRAID Adapter (PCI)          2/8            Not for HA  Y
6230        Advanced SerialRAID Adapter Plus (PCI)     2/8            Not for HA  Y
6230 + 6235 Fast Write Cache Option                    2/8            Yes for HA  Y
Note: AIX 5L V5.2 does not support the MCA 6214, 6216, 6217 and 6219 SSA adapters.

Figure 2-22. SSA Adapters AU546.0

Notes:

SSA adapter issues


The PCI adapter feature code 6230 supports RAID5 write cache failover, although a
RAID5 configuration can contain only two adapters per loop. Keep in mind that this still
presents a single point of failure: should an adapter fail, its node loses access to the
RAID5 LUN. Also, keep in mind mixed adapter compatibility.


ESS Technology
The ESS is an example of a smart storage device. ESS provides
highly available storage centrally managed by the storage manager.
The inner workings of the storage device are masked from AIX.
Basic implementations are transparent to HACMP.
The optional HACMP XD add-on can be used to coordinate the
fallover of ESS PPRC based remote data mirrors.
[Figure: ESS internal architecture - up to 32 host connection ports with online upgrades; 8, 16, or 32 GB cache with nonvolatile backup of write data; two dedicated 4-way SMP clusters; 64 internal disk paths so all disks can communicate at the same time; hot-swap disks with redundant spares and online upgrades; SSA loops running RAID5 for performance and availability; physical disks partitioned into logical volumes. Full duplication - no single point of failure.]


Figure 2-23. ESS Technology AU546.0

Notes:

Enterprise Storage System (ESS) overview


The ESS (or "Shark") was designed for storage consolidation projects. It provides
concurrent multihost attachment. The ESS provides unprecedented scalability (420 GB
to 22 TB) and true data sharing (Parallel Access Volumes and Multiple Allegiance).
The ESS provides extensive functionality such as flash copy and peer-to-peer remote
copy (PPRC), Web-based management, and ultra high performance which can exceed
41000 I/O ops / sec.
The ESS is a proven technology (SSA), with intelligent cache and priority I/O queuing.
It is highly available storage, supported by a dual active storage processing complex,
using RAID5 redundancy with dynamic sparing.
It supports redundant power, cooling, adapters, buses, and processor complexes.


ESS models
The ESS is available in two series:
- IBM TotalStorage DS6000 series:
This is a rack mountable enclosure which holds 16 drives. Using expansion
enclosures, the DS6000 can provide up to 38.4 TBytes of storage.
- IBM TotalStorage DS8000 series:
This is a cabinet sized enclosure. Using expansion frames, the DS8100 can provide
up to 115 TBytes of storage and the DS8300 can provide up to 192 TBytes of
storage.
For more information:
http://www.ibm.com/servers/storage/disk/index.html


ESS Continued
Advanced features of the Storage unit may be supported by HACMP.
Subsystem Device Driver (SDD) is supported by HACMP
(with appropriate PTFs)
For additional information refer to:
IBM course AU20,
AIX 5L System Administration IV: Storage Management
Implementing the Enterprise Storage Server in Your Environment,
SG24-5420-01
IBM TotalStorage Enterprise Storage Server Model 800, SG24-6424-00.

http://www.ibm.com/redbooks

Figure 2-24. ESS Continued AU546.0

Notes:

Additional details: multi-pathing


HACMP supports making LVM changes with storage systems which use the vpath and
the SDD device driver from within the system management SMIT menus.
HACMP also supports AIX 5L V5.2 Multi-Path I/O (MPIO).
HACMP does not support EMC's PowerPath from within the system management
menus.

Fibre channel adapters


See the HACMP for AIX: Planning and Installation Guide for details of supported
adapters and firmware levels.


Fibre Channel Technology


Fibre channel is supported by AIX and HACMP:
The gigabit fibre channel adapters (FC6228 and FC6239) are
supported by HACMP.
The IBM fibre channel raid storage server is supported for HACMP
configurations.
The FAStT disk technology is supported with restrictions in AIX and
HACMP.
Up to 32 HACMP nodes
For more information refer to the following Redbooks:
Planning and Implementing an IBM SAN, SG24-6116-00
Designing an IBM Storage Area Network, SG24-5758-00
Implementing Fibre Channel Attachment on the ESS, SG24-6113-00

http://www.ibm.com/redbooks


Figure 2-25. Fibre Channel Technology AU546.0

Notes:

Fibre channel adapters


See the HACMP for AIX: Planning and Installation Guide for details of supported
adapters and firmware levels.


Fibre Channel Continued
An example of a redundant fabric fibre channel implementation:

[Figure: HACMP nodes 1 and 2, each connected to two FC switches, which in turn attach to redundant fibre channel RAID storage servers.]

Figure 2-26. Fibre Channel Continued AU546.0

Notes:
Planning is important. Think about availability issues when designing storage
configurations. In this figure, the fibre channel switches have been set up so that the
loss of a single switch does not cause either cluster node to lose access to the data.


Physical Volume IDs


# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk1 00020624ef3fafcc None
hdisk2 00206983880a1580 None
hdisk3 00206983880a1ed7 None
hdisk4 00206983880a31a7 None

[Figure: Node 1's ODM maps the hdisk names above to the shared physical disks A, B, C, and D.]

Figure 2-27. Physical Volume IDs AU546.0

Notes:
AIX assigns a unique physical volume ID (PVID) to each disk that it sees. The PVID is
stored in the ODM and linked to a logical construct called an hdisk. hdisks are numbered
sequentially as they are discovered by the configuration manager (cfgmgr).
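As a small illustration of working with PVIDs (a sketch only, with the lspv output hardcoded rather than taken from a live node), the following turns lspv output into a PVID-to-hdisk map. Because the PVID travels with the disk itself, this map is what remains stable across nodes even when hdisk numbers differ:

```shell
# Sketch: build a PVID-to-hdisk map from lspv output.
# The sample output is hardcoded; on a live node you would pipe lspv directly:
#   lspv | awk '{print $2, $1}' | sort
lspv_out="hdisk0 000206238a9e74d7 rootvg
hdisk1 00020624ef3fafcc None"

# swap the columns so the PVID comes first, then sort by PVID
echo "$lspv_out" | awk '{print $2, $1}' | sort
```

Saving such a map per node makes it easy to compare the views of a shared disk subsystem later.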


hdisk Inconsistency
Node 1:                                  Node 2:
# lspv                                   # lspv
hdisk0 000206238a9e74d7 rootvg           hdisk0 000206238a9e74d7 rootvg
hdisk1 00020624ef3fafcc None  (A)        hdisk1 000206238beef264 rootvg
hdisk2 00206983880a1580 None  (B)        hdisk2 00206983880a1ed7 None  (C)
hdisk3 00206983880a1ed7 None  (C)        hdisk3 00206983880a31a7 None  (D)
hdisk4 00206983880a31a7 None  (D)        hdisk4 00020624ef3fafcc None  (A)
                                         hdisk5 00206983880a1580 None  (B)

[Figure: both nodes' ODMs reference the same shared physical disks A, B, C, and D, but under different hdisk names.]

Neither HACMP nor AIX is affected by having a physical disk known
by different hdisk numbers on different systems.

Humans are, unfortunately, more easily confused.

Figure 2-28. hdisk Inconsistency AU546.0

Notes:

Disk name inconsistency


There is no requirement in AIX or HACMP that the hdisk name for a shared disk be the
same on all nodes. However, if the names are different, this is likely to be an ongoing
source of confusion for humans and a possible source of errors which could lead to
down time.

Example
In this example, the node on the right uses the names hdisk1, hdisk2, hdisk3 and
hdisk4 for the physical disks A, B, C and D respectively.
The shared storage subsystem has not been cabled consistently (consistent cabling
means putting the disk controllers in the same slots and cabling each controller to the
same disks as the corresponding controller in the other node(s)). In addition, the
right-hand node has an extra disk which was configured before the shared disks.


The result is that the right hand node uses the names hdisk2, hdisk3, hdisk4 and hdisk5
for physical disks C, D, A and B respectively.
This is likely to be a source of constant confusion for the humans who have to
administer this cluster. On the other hand, assuming that the humans do not get
confused, both HACMP and AIX deal with this configuration properly.

Error example
An example of where human confusion could lead to apparent HACMP and AIX
confusion is if disks A and C are intended to represent datavg and disks B and D are
intended to represent stuffvg. If stuffvg is imported by designating hdisk2 on both
nodes, then the left node will use disks B and D for stuffvg and the right node will
use disks A and C for stuffvg!
Note that neither HACMP nor AIX is actually confused. Rather, confusion on the part
of the human has resulted in apparent confusion on the part of HACMP and/or AIX. Had
the human done the right thing (used hdisk2 or hdisk4 when importing stuffvg on the left
node and used hdisk3 or hdisk5 when importing stuffvg on the right node), then neither
HACMP nor AIX would have appeared to be confused.
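A quick cross-node audit can catch this kind of mismatch before a human acts on it. The following is a sketch only: the two lspv outputs are hardcoded samples modeled on the figure, and on live nodes you would gather them remotely (for example with ssh or rsh) before joining them on the PVID:

```shell
# Sketch: report shared disks known under different hdisk names on two nodes.
# node1/node2 hold hypothetical lspv output; live: ssh <node> lspv
node1="hdisk1 00020624ef3fafcc None
hdisk2 00206983880a1580 None"
node2="hdisk4 00020624ef3fafcc None
hdisk5 00206983880a1580 None"

echo "$node1" | awk '{print $2, $1}' | sort > /tmp/n1.map
echo "$node2" | awk '{print $2, $1}' | sort > /tmp/n2.map

# join the two maps on the PVID (field 1) and print any name mismatches
join -j 1 /tmp/n1.map /tmp/n2.map |
    awk '$2 != $3 {print "PVID " $1 ": " $2 " on node1, " $3 " on node2"}'
```

An empty report means the hdisk numbering is already consistent for the disks both nodes can see.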


Removing hdisk Inconsistencies
Node 1:
# rmdev -d -l hdisk1 ; rmdev -d -l hdisk2
# rmdev -d -l hdisk3 ; rmdev -d -l hdisk4
# mkdev -c disk -t 160mb -s scsi -p scsi0 -w 6,1 -d
# cfgmgr
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk2 00020624ef3fafcc None  (A)
hdisk3 00206983880a1580 None  (B)
hdisk4 00206983880a1ed7 None  (C)
hdisk5 00206983880a31a7 None  (D)

The "fake" hdisk1 will exist in a defined state and will not
appear in lspv output (use lscfg to see hdisk1).

Node 2:
# rmdev -d -l hdisk2 ; rmdev -d -l hdisk3
# rmdev -d -l hdisk4 ; rmdev -d -l hdisk5
# cfgmgr
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk1 000206238beef264 rootvg
hdisk2 00020624ef3fafcc None  (A)
hdisk3 00206983880a1580 None  (B)
hdisk4 00206983880a1ed7 None  (C)
hdisk5 00206983880a31a7 None  (D)

The two systems will now have consistent hdisk PVIDs.

Figure 2-29. Removing hdisk Inconsistencies AU546.0

Notes:

Ensuring hdisk name consistency


Eliminating hdisk inconsistencies requires the use of two techniques:
- Ensuring that the shared disk subsystem cabling is organized so that the
configuration manager on each side discovers new shared disks in the same order
(this primarily involves ensuring that disk adapter slot numbers are taken into
account to avoid crossing cables).
- Defining fake hdisks to occupy hdisk numbers on nodes with fewer real disks than
other nodes.
These two techniques are combined with judicious use of rmdev -d -l and cfgmgr to
get the hdisk numbers to be consistent.


Example
In the example, the problem was corrected by
- Recabling the disks
- Removing the disks from the ODM on both nodes
- Creating a fake disk on Node1 (mkdev command) so the disk numbering matches
Node2
- Running cfgmgr on both nodes

Time well spent


This can take a bit of time although time spent planning what you are going to do
coupled with an understanding of how the configuration manager discovers disks and
assigns hdisk numbers can considerably reduce the total amount of time spent dealing
with this issue.
Unless your cluster does not have humans involved in its ongoing support and
management, it is probably a good idea to eliminate hdisk inconsistencies.


Support for OEM Disks
HACMP lets you use either IBM disks or OEM disks
Treat an unknown disk type the same way as a known type
/etc/cluster/disktype.lst
/etc/cluster/lunreset.lst
/etc/cluster/conraid.dat
Use custom disk processing methods
Identifying ghost disks
Determining whether a disk reserve is being held by another node
in the cluster
Breaking a disk reserve
Making a disk available for use by another node
Enhanced concurrent VGs
Additional considerations


Figure 2-30. Support for OEM Disks AU546.0

Notes:

Introduction
HACMP lets you use either physical storage disks manufactured by IBM or by an
Original Equipment Manufacturer (OEM) as part of a highly available infrastructure.
Depending on the type of OEM disk, custom methods allow you (or an OEM disk
vendor) either
- To tell HACMP that an unknown disk should be treated the same way as a known
and supported disk type, or
- To specify the custom methods that provide the low-level disk processing functions
supported by HACMP for that particular disk type

Treat an unknown disk the same way as a known type


HACMP provides mechanisms that will allow you, while configuring a cluster, to direct
HACMP to treat an unknown disk exactly the same way as another disk it supports. The


following three files can be edited to perform this configuration. (There is no SMIT menu
to edit these files.)
/etc/cluster/disktype.lst
This file is referenced by HACMP during disk takeover.
You can use this file to tell HACMP that it can process a particular type of disk the same
way it processes a disk type that it supports. The file contains a series of lines of the
following form:
<PdDvLn field of the hdisk><tab><supported disk type>
To determine the value of the PdDvLn field for a particular hdisk, enter the following
command:
# lsdev -Cc disk -l <hdisk name> -F PdDvLn
The known and supported disk types are:

Disk Name in HACMP   Disk Type
SCSIDISK             SCSI-2 Disk
SSA                  IBM Serial Storage Architecture
FCPARRAY             Fibre Attached Disk Array
ARRAY                SCSI Disk Array
FSCSI                Fibre Attached SCSI Disk
For example, to have a disk whose PdDvLn field was disk/fcal/HAL9000 be treated
the same as IBM fibre SCSI disks, a line would be added that read:
disk/fcal/HAL9000 FSCSI
A sample disktype.lst file, which contains comments, is provided.
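The steps above can be sketched as a small script. This is a sketch, not HACMP's own tooling: the disk/fcal/HAL9000 value is the hypothetical one from the text, and on a live node it would come from the lsdev command just shown.

```shell
# Sketch: compose a /etc/cluster/disktype.lst entry (tab-separated).
# Live:  pddvln=$(lsdev -Cc disk -l hdisk2 -F PdDvLn)
pddvln="disk/fcal/HAL9000"   # hypothetical PdDvLn value from the text
supported="FSCSI"            # treat it like a fibre attached SCSI disk

printf '%s\t%s\n' "$pddvln" "$supported"
# the real change would be appended, on every cluster node, with:
#   printf '%s\t%s\n' "$pddvln" "$supported" >> /etc/cluster/disktype.lst
```

Keeping the append step scripted makes it easier to apply the identical line on all nodes, which the file-propagation note later in this section requires.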
/etc/cluster/lunreset.lst
This file is referenced by HACMP during disk takeover.
HACMP will use either a target ID reset or a LUN reset for parallel SCSI devices based
on whether a SCSI inquiry of the device returns a 2 or a 3. Normally, only SCSI-3
devices support LUN reset. However, some SCSI-2 devices will support a LUN reset.
So, HACMP will check the Vendor Identification returned by a SCSI inquiry against
the lines of this file. If the device is listed in this file, then a LUN reset is used. This file is
intended to be customer modifiable.
For example, if the "HAL 9000" disk subsystem returned an ANSI level of '2' to inquiry,
but supported LUN reset, and its Vendor ID was HAL and its Product ID was 9000,
then this file should be modified to add a line which was either:
HAL
or
HAL 9000


depending on whether a vendor or a vendor-plus-product match was desired. Note the
use of padding of Vendor ID to 8 characters.
A sample /etc/cluster/lunreset.lst file, which contains comments, is provided.
/etc/cluster/conraid.dat
This file is referenced by HACMP during varyon of a concurrent volume group.
You can use this file to tell HACMP that a particular disk is a RAID disk that can be used
in classical concurrent mode. The file contains a list of disk types, one disk type per line.
The value of the Disk Type field for a particular hdisk is returned by the following
command:
# lsdev -Cc disk -l <hdisk name> -F type
Note: This file only applies to classical concurrent volume groups. Thus this file has no
effect in AIX 5L V5.3, which does not support classical concurrent VGs.
HACMP does not include a sample conraid.dat file. The file is referenced by the
/usr/sbin/cluster/events/utils/cl_raid_vg script, which does include some
comments.
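A similar sketch applies to conraid.dat. The disk type string below is hypothetical, and /tmp stands in for /etc/cluster so the sketch does not touch a real configuration file; on a live node the type would come from the lsdev command just shown.

```shell
# Sketch: check whether a disk's type is already listed in conraid.dat.
# Live:  dtype=$(lsdev -Cc disk -l hdisk3 -F type)
dtype="hal9000raid"            # hypothetical disk type
conraid="/tmp/conraid.dat"     # stand-in for /etc/cluster/conraid.dat
printf '%s\n' "otherraid" > "$conraid"

if grep -qx "$dtype" "$conraid"; then
    echo "already listed"
else
    echo "not listed; add one line for this disk type on every node"
fi
```

The exact-line match (grep -x) reflects the one-entry-per-disk-type rule described under "Additional considerations" below: a type is listed once, no matter how many disks of that type are attached.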

Additional considerations
The previously described files in /etc/cluster are not modified by HACMP after they
have been configured and are not removed if the product is uninstalled. This ensures
that customized modifications are unaffected by the changes in HACMP. By default, the
files initially contain comments explaining their format and usage.
Keep in mind that the entries in these files are classified by disk type, not by the number
of disks of the same type. If there are several disks of the same type attached to a
cluster, there should be only one file entry for that disk type.
Finally, unlike other configuration information, HACMP does not automatically
propagate these files across nodes in a cluster. It is your responsibility to ensure that
these files contain the appropriate content on all cluster nodes. You can use the
HACMP File Collections facility to propagate this information to all cluster nodes.

Use custom disk processing methods


Some disks may behave sufficiently differently from those supported by HACMP so that
it is not possible to achieve proper results by telling HACMP to process these disks
exactly the same way as supported disk types. For these cases, HACMP provides finer
control.
While doing cluster configuration, you can either
- select one of the specific methods to be used for the steps in disk processing
- specify a custom method


HACMP supports the following disk processing steps:


- Identifying ghost disks
- Determining whether a disk reserve is being held by another node in the cluster
- Breaking a disk reserve
- Making a disk available for use by another node
HACMP allows you to specify any of its own methods for each step in disk processing,
or to use a customized method, which you define.
Using SMIT, you can perform the following functions for OEM disks:
- Add Custom Disk Methods
- Change/Show Custom Disk Methods
- Remove Custom Disk Methods

Additional considerations for custom methods


The custom disk processing method that you add, change or delete for a particular
OEM disk is added only to the local node. This information is not propagated to other
nodes; you must copy this custom disk processing method to each node manually or
use the HACMP File Collections facility.

OEM disks and enhanced concurrent volume groups


OEM disks can be used in enhanced concurrent volume groups, either for concurrent
access mode or, in non-concurrent access mode, for fast disk takeover. In this case,
you would need to edit the /etc/cluster/disktype.lst file and associate the OEM disk
with a supported disk type.

More information
For detailed information about configuring OEM disks for use with HACMP, see:
SC23-4861-06 HACMP for AIX, Version 5.3: Planning and Installation Guide
Appendix D: OEM Disk, Volume Group, and Filesystems
Accommodation


Let's Review Topic 2
1. Which of the following disk technologies are
supported by HACMP?
a. SCSI.
b. SSA.
c. FC.
d. All of the above.
2. True or False?
SSA disk subsystems can support RAID5 (cache-enabled) with HACMP.
3. True or False?
Compatibility must be checked when using different SSA adapters in the
same loop.
4. True or False?
No special considerations are required when using SAN-based storage
units (DS8000, ESS, EMC, HDS, and so forth).
5. True or False?
hdisk numbers must map to the same PVIDs across an entire HACMP
cluster.


Figure 2-31. Let's Review Topic 2 AU546.0

Notes:



2.3 Shared Storage from the AIX Perspective


Topic 3 Objectives:
Shared Storage from the AIX Perspective
After completing this topic, you should be able to:
Discuss how LVM aids cluster availability
Describe the quorum issues associated with HACMP
Set up LVM for maximum availability
Configure a new shared volume group, filesystem, and
jfslog


Figure 2-32. Topic 3 Objectives: Shared Storage from the AIX Perspective AU546.0

Notes:
This topic discusses shared storage from the AIX perspective.

Logical Volume Manager (LVM) Review
LVM is one of the major enhancements that AIX brings to traditional UNIX disk
management. LVM's capabilities are exploited by HACMP
Physical disk volumes are:
Organized into volume groups
Identified by a unique physical volume ID (PVID)
Divided into physical partitions which are mapped to logical partitions in
logical volumes
Applications (such as file systems and databases) use logical volumes

[Figure: physical volumes hdisk0 and hdisk1, each with a PVID, grouped into a volume group; their physical partitions map to the logical partitions of a logical volume]


Figure 2-33. Logical Volume Manager (LVM) Review AU546.0

Notes:

LVM review
The set of operating system commands, library subroutines and other tools that allow
the user to establish and control logical volume storage is called the logical volume
manager.
LVM controls disk resources by mapping data between a simple and flexible logical
view of storage space and the actual physical disks. The logical volume manager does
this by using a layer of device driver code that runs above the traditional physical device
drivers.

Logical volumes
This logical view of the disk storage, which is called a logical volume (LV), is provided to
applications and is independent of the underlying physical disk structure. The LV is
made up of logical partitions.


Physical volumes
Each individual disk drive is called a physical volume (PV). It has a physical volume ID
(PVID) associated with it and an AIX name, usually /dev/hdiskx (where x is a
unique integer on the system). Every physical volume in use belongs to a volume group
(VG) unless it is being used as a raw storage device or a readily available spare (often
called a hot spare). Each physical volume is divided into physical partitions (PPs) of a
fixed size for that physical volume. A logical partition is mapped to one or more physical
partitions.

Volume groups
Physical volumes and their associated logical volumes are grouped into volume groups.
Operating system files are stored in the rootvg volume group. Application data are
usually stored in one or more additional volume groups.

LVM Relationships
LVM manages the components of the disk subsystem. Applications talk to the
disks through LVM.

This example shows an application writing to a filesystem which has its LVs
mirrored in a volume group physically residing on separate hdisks.

[Figure: an application writes to /filesystem; the LVM maps the write through a mirrored logical volume in the volume group, whose logical partitions map to physical partitions on separate hdisks]


Figure 2-34. LVM Relationships AU546.0

Notes:

LVM relationships
An application writes to a file system. A file system provides the directory structure and
is used to map the application data to logical partitions of a logical volume. Because
there is an LVM layer, the application is isolated from the physical disks. The LVM can be
configured to map a logical partition to up to three physical partitions and have each
physical partition (copy) reside on a different disk.


LVM Mirroring
LVM mirroring has some key advantages over other types of mirroring:

Up to three-way mirroring of all logical volume types, including concurrent logical
volumes, sysdumpdev, paging space, and raw logical volumes
Disk type and disk bus independence
Optional parameters for maximizing speed or reliability
Changes to most LVM parameters can be done while the affected components are in
use
The splitlvcopy command can be used to perform online backups

[Figure: same diagram as Figure 2-34 — an application writing to /filesystem through the LVM to a mirrored logical volume in the volume group]


Figure 2-35. LVM Mirroring AU546.0

Notes:

Introduction
Reliable storage is essential for a highly available cluster. LVM mirroring is one option to
achieve this. Other options are a hardware RAID disk array configured in RAID-5 mode
or some other solution which provides sufficient redundancy such as an external
storage subsystem like the ESS (DS6000/DS8000), EMC, and so forth.

LVM mirroring
Some of the features of LVM mirroring are:
- Data can be mirrored on three disks rather than having just two copies of data. This
provides higher availability in the case of multiple failures, but does require more
disks for the three copies.
- The physical volumes in the volume group can be of mixed attachment types.

- Instead of entire disks, individual logical volumes are mirrored. This provides
somewhat more flexibility in how the mirrors are organized. It also allows for an odd
number of disks to be used and provides protection for disk failures when more than
one disk is used.
- The disks can be configured so that mirrored pairs are in separate sites or in
different power domains. In this case, after a total power failure on one site,
operations can continue using the disks on the other site that still has power. No
information is displayed on the physical location of each disk when mirrored logical
volumes are being created, unlike when creating RAID 1 or RAID 0+1 arrays, so
allocating disks on different sites requires considerable care and attention.
- Mirrored pairs can be on different adapters.
- Read performance is good for short length operations as data can be read from
either of two disks, so the one with the shortest queue of commands can be used.
Write performance requires a write to two disks.
- Extra mirrored copies can be created and then split off for backup purposes.
- Data can be striped across several mirrored disks, an approach which avoids hot
spots caused by excessive activity on a few disks by distributing the I/O operations
across all the member disks.
- There are parameters such as Mirror Write Consistency, Scheduling Policy, and
Enable Write Verify which can help maximize performance and reliability.


Steps to Creating a Mirrored Filesystem


These are the steps to creating a properly mirrored filesystem for HA
environments:

Step  Description                      Options
 1    Create shared volume group       Name the VG something meaningful, like shared_vg1
 2    Change the auto varyon flag      chvg -an shared_vg1
 3    Create a jfslog LV               type=jfslog, size=1 PP, separate physical
      "sharedlvlog"                    volumes=yes, scheduling=sequential, copies=2
 4    Initialize the jfslog            logform /dev/sharedlvlog
 5    Create a data LV "sharedlv"      type=jfs, size=??, separate physical volumes=yes,
                                       copies=2, scheduling=sequential, write verify=??
 6    Create a filesystem on a         pick the LV (sharedlv) to create the file system
      previously created LV            on, automount=no, assign desired mount point
 7    Verify the log file is in use    mount filesystem; lsvg -l shared_vg1 should show
                                       1 LV of type jfslog with 1 LP, 2 PPs


Figure 2-36. Steps to Creating a Mirrored File system AU546.0

Notes:

Introduction
This visual describes a procedure for creating a shared and mirrored file system. There
is an easier-to-use method provided by an HACMP facility called C-SPOC, which is
discussed later in the course. The C-SPOC method cannot be used until the HACMP
cluster's topology and at least one resource group have been configured.
The procedure described in the visual permits the creation of shared file systems before
performing any HACMP related configuration (an approach favored by some cluster
configurators).

Detailed procedure
Here are the steps in somewhat more detail:
a. Use the smit mkvg fastpath to create the volume group.

b. Make sure that the volume group is created with the Activate volume group
AUTOMATICALLY at system restart parameter set to no (or use smit chvg to
set it to no). This gives HACMP control over when the volume group is brought
online. It is also necessary to prevent, for example, a backup node from attempting
to online the volume group at a point in time when it is already online on a primary
node.
c. Use the smit mklv fastpath to create a logical volume for the jfslog with the
parameters indicated in the figure above (make sure that you specify a type of jfslog
or AIX ignores the logical volume and creates a new one (which is not mirrored)
when you create the file system below).
d. Use the logform command to initialize a logical volume for use as a JFS log device.
Note: The only intended use for the logform command is to initialize a JFS log
logical volume as a JFS log device. The SMIT interface for creating a JFS and the
crfs command allow only one JFS log device per volume group.
e. Use the smit mklv fastpath again to create a logical volume for the file system with
the parameters indicated in the figure above.
f. Use the smit crjfslv fastpath (not crjfs) to create a JFS file system in the now
existing logical volume.
g. Verify by mounting the file system and using the lsvg command. Notice that if
copies were set to 2, then the number for PPs should be twice the number for LPs
and that if you specified separate physical volumes then the values for PVs should
be 2 (the number of copies).
The procedure for creating a JFS2 file system is quite similar although there are a few
differences:
- The type of the JFS2 log logical volume should be jfs2log.
- The logform command requires an additional parameter to cause it to create a
JFS2 log:
# logform -V jfs2log <lvname>
- The type of the JFS2 file system logical volume should be jfs2.
- The fastpath for creating a JFS2 file system in an existing logical volume is
smit crjfs2lvstd.
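The SMIT-based steps above can be sketched as plain commands. This is a hedged outline, not the course's official procedure: the names (shared_vg1, sharedlvlog, sharedlv, /sharedfs), the major number, the disks, and the LV sizes are example values, and the flags should be verified against your AIX level.

```shell
# Example only: VG, LV, disk, and mount-point names are assumptions.
mkvg -y shared_vg1 -V 101 hdisk2 hdisk3     # -V: pick a unique major number
chvg -an shared_vg1                         # no automatic varyon at boot

# jfslog LV: 2 copies, sequential scheduling, strict (separate PVs)
mklv -y sharedlvlog -t jfslog -c 2 -d s -s y shared_vg1 1
logform /dev/sharedlvlog                    # initialize the JFS log device

# data LV: size in PPs (64 here) depends on your filesystem requirements
mklv -y sharedlv -t jfs -c 2 -d s -s y shared_vg1 64

# filesystem on the previously created LV; do not mount at system restart
crfs -v jfs -d sharedlv -m /sharedfs -A no

mount /sharedfs
lsvg -l shared_vg1        # verify: the jfslog LV should show 1 LP, 2 PPs
```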


Adding a Shared Volume Group

[Figure: Node1 and Node2 attached to a shared disk array (VGDA on the disks, ODM on each node). Step #1 on Node1: mkvg, chvg, mklv (log), logform, mklv (data), crfs. Step #2 on Node1: unmount, varyoffvg. Step #3 on Node2: cfgmgr, importvg, chvg. Step #4 on Node2: varyoffvg. Step #5: start HACMP]


Figure 2-37. Adding a Shared Volume Group AU546.0

Notes:

Introduction
The steps to add a shared volume group are:
0) Ensure consistent hdisks names
1) Create a new VG and its contents
2) Varyoff VG on Node1
3) Import VG on Node2 and set VG characteristics correctly
4) Varyoff VG on Node2
5) Start HACMP
Please note that the slide presents only a high-level view of the commands required to
perform these steps. More details are provided below.

0. Ensure consistent hdisk names


As discussed earlier, HACMP has no requirement that hdisk names on all the nodes are
consistent. But to avoid human confusion, you should make sure that the hdisk names
are consistent.
a. Ensure disks are cabled so that cfgmgr will discover the disks in the same order on
both nodes.
b. If needed, define fake hdisks to occupy hdisk numbers on nodes with fewer local
disks than other nodes.
c. Add the shared disk(s) to AIX on the primary node (Node1 in the example):
cfgmgr
d. Assign a PVID to the disk(s)
chdev -a pv=yes -l disk_name
where disk_name is hdisk#, hdiskpower# or vpath#.
e. Add the disks to AIX on the secondary node (Node2)
cfgmgr
f. Using the PVIDs, verify that disk names agree. If not, correct.
lspv

1. Create a new VG on Node1


a. Create the shared volume group
Use smit mkvg or C-SPOC, remember to pick a unique Major number for the VG.
b. Change the auto vary on flag using:
chvg -an <vgname>
(C-SPOC does this automatically. Also, this step is unnecessary if you are using an
enhanced concurrent VG)
c. Create and Initialize the jfslog using:
mklv or smit mklv
logform <jfslogname>
(C-SPOC handles this automatically)
d. Create the logical volume
use smit mklv or C-SPOC
e. Create the file system using one of the following options:
crfs or smit jfs or C-SPOC
using SMIT, select
Add a Journaled File System on a previously defined logical volume

2. Varyoff VG from Node1


a. Unmount (umount <file_system>) any file systems that are part of the VG which was
just created.
b. Vary off the new volume group created in step 1: varyoffvg <vgname>


3. Import VG on Node2 and set VG characteristics correctly


a. On the second cluster node perform the following commands:
importvg -V <major#> -y <vgname> <hdisk#>
chvg -an <vgname>

4. Varyoff the VG on Node2


a. varyoffvg <vgname>

5. Start HACMP
a. Restart HACMP, which varies on the VG and mounts the file systems; processing can
then resume.
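Steps 1 through 5 above can be sketched as the commands run on each node. This is a hedged example: shared_vg1, hdisk2, /sharedfs, and the major number 101 are illustrative values only.

```shell
# --- Node1 (primary) ---
cfgmgr                                  # discover the shared disks
chdev -a pv=yes -l hdisk2               # assign a PVID if one does not exist
mkvg -y shared_vg1 -V 101 hdisk2        # step 1: create the VG (unique major number)
chvg -an shared_vg1                     #         disable auto varyon
# ...create the jfslog LV, logform it, create the data LV and filesystem
#    as shown in the earlier procedure...
umount /sharedfs                        # step 2: unmount and vary off
varyoffvg shared_vg1

# --- Node2 (secondary) ---
cfgmgr                                  # discover the same disks; use lspv to
                                        # confirm PVIDs/hdisk names agree
importvg -V 101 -y shared_vg1 hdisk2    # step 3: import with the same major number
chvg -an shared_vg1                     #         set VG characteristics correctly
varyoffvg shared_vg1                    # step 4: vary off again

# --- Step 5 ---
# Start HACMP, which varies the VG on and mounts the filesystems.
```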

C-SPOC
Fortunately, there is an easier way.
These steps will be done automatically if the cluster is active and C-SPOC is used.
Otherwise, you can use the commands listed here in the notes.
Unfortunately, we are not looking at the easier way until we get to the C-SPOC unit.

Quorum Checking
AIX performs quorum checking on volume groups in order to ensure that the
volume group remains consistent
The quorum rules are intended to ensure that structural changes to the volume
group (for example, adding or deleting a logical volume) are consistent across an
arbitrary number of varyon-varyoff cycles
Overriding the quorum rules can be VERY DANGEROUS!

VG status   Quorum checking ENabled     Quorum checking DISabled
            for volume group            for volume group

Running     >50% of VGDAs               At least 1 VGDA

varyonvg    >50% of VGDAs               100% of VGDAs, or >50% if
                                        MISSINGPV_VARYON=TRUE


Figure 2-38. Quorum Checking AU546.0

Notes:

Quorum
Quorum is the check used by the LVM at the volume group level to resolve possible
data conflicts and to prevent data corruption. Quorum is a method by which >50% of
VGDAs must be available in a volume group before any LVM actions can continue.
Note: For a VG with 3 or more disks, there is one copy of the VGDA on each disk. For a
one disk VG, there are two copies of the VGDA. For a two disk VG, the first disk has two
copies and the second has one copy of the VGDA. The VGDA is identical for all disks in
the VG.
Quorum is especially important in a HA cluster. If LVM can varyon a volume group with
half or less of the disks, it might be possible for two nodes to varyon the same VG at the
same time, using different subsets of the disks in the VG. This is a very bad situation
which we will discuss in the next visual.


Normally LVM verifies quorum when the VG is varied on and continuously while the VG
is varied on.
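The VGDA distribution and the >50% rule above can be expressed as a small illustrative script. The function names are invented for this sketch; they are not AIX commands.

```shell
# Illustrative helpers encoding the VGDA rules described in the note:
# 1 disk -> 2 VGDA copies; 2 disks -> 3 copies (2 + 1); 3+ disks -> 1 per disk.
vgda_count() {
    case "$1" in
        1) echo 2 ;;
        2) echo 3 ;;
        *) echo "$1" ;;
    esac
}

# Quorum requires strictly more than half of the VGDAs.
quorum_needed() {
    total=$(vgda_count "$1")
    echo $(( total / 2 + 1 ))
}

quorum_needed 2   # -> 2: losing the one-VGDA disk is survivable; the two-VGDA disk is not
quorum_needed 4   # -> 3: a four-disk VG survives the loss of any one disk
```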

Disabling quorum - nonquorum volume groups


Quorum checking can be disabled on a per-volume group basis. If quorum checking is
disabled, LVM will not varyoff a volume group if quorum is lost while the VG is running.
However, in this case, 100% of the VGDAs must be available when the volume group is
varied on.

AIX errlog entry for quorum loss


If quorum is lost, the following is an example of an AIX errlog entry:
Id Label Type CL Description
91F9700D LVM_SA_QUORCLOSE UNKN H QUORUM LOST, VOLUME GROUP CLOSING.

How HACMP reacts to quorum loss


HACMP 4.5 and up automatically reacts to a loss of quorum (LVM_SA_QUORCLOSE)
error associated with a volume group going offline on a cluster node. In response to this
error, a non-concurrent resource group goes offline on the node where the error
occurred. If the AIX 5L Logical Volume Manager takes a volume group in the resource
group offline due to a loss of quorum for the volume group on the node, HACMP
selectively moves the resource group to another node.
You can change this default behavior by customizing resource recovery to use a notify
method instead of fallover. For more information, see Chapter 3: Configuring HACMP
Cluster Topology and Resources (Extended) in the Administration Guide.
Note: HACMP launches selective fallover and moves the affected resource group only
in the case of the LVM_SA_QUORCLOSE error. This error can occur if you use mirrored
volume groups with quorum enabled. However, other types of volume group failure
errors could occur. HACMP does not react to any other type of volume group errors
automatically. In these cases, you still need to configure customized error notification
methods, or use AIX 5L Automatic Error Notification methods to react to volume group
failures.

Disabling Quorum Checking
Cluster designers or implementers are often tempted to disable quorum
checking. Although often necessary/desirable, there are risks if quorum
is disabled or if a volume group varyon is forced:

It may be possible for each side of a two-node cluster to have different parts
of the same volume group varied online

It is possible that volume group structural changes (for example, adding or
deleting a logical volume) made during the last varyon are unknown during
the current varyon

It is possible that volume group structural changes are made to one part of
the volume group which are inconsistent with a different set of structural
changes which are made to another part of the volume group


Figure 2-39. Disabling Quorum Checking AU546.0

Notes:

Why disable quorum checking?


Disabling quorum checking may seem like a good idea from an availability point of view.
For example, consider a volume group mirrored across two disk cabinets. If access to
one disk cabinet is lost, only half of the VGDAs are available. With quorum checking
enabled, quorum is lost and the VG is varied off. This would seem to defeat the purpose
of mirroring. However, there are real risks associated with disabling quorum. We will
discuss ways to handle the quorum problem in the next few visuals.

Risks of disabling quorum checking


Disabling quorum checking is an option, however, considerable care must be taken to
ensure that a consistent set of VGDAs is used on an ongoing basis. In addition,
exceptional care must be taken to ensure that one half of the cluster isn't running with
one half of all the mirrored logical volumes while the other half is running with the other
half of all the mirrored logical volumes as this leads to a phenomenon known as data
divergence.
Trust me... you do not want to experience the results of data divergence!
Sometimes it may be necessary to disable quorum in a cluster. In this case, take care that
you do not end up with data divergence. The primary strategy for avoiding data
divergence is to avoid partitioned clusters, although careful design of the cluster's
shared storage is also important.

Eliminating Quorum Problems

The following points help minimize the quorum problems:

Avoid volume groups with fewer than three disks
  Generally not an issue with HA clusters
Distribute hard disks across more than one bus
  Use three adapters per node in SCSI
  Use two adapters per node, per loop in SSA
  Use VPATHs and two adapters with Fibre Channel
Use different power sources
  Connect each power supply in a drawer to a different power source
Use RAID arrays or Enterprise Storage Solutions
Consider a quorum-buster disk


Figure 2-40. Eliminating Quorum Problems AU546.0

Notes:

Eliminating quorum problems


In order to enhance the availability of a volume group you should think about the
following:
- If there are only two disks in the volume group, then you lose access to the volume
group if the disk with two VGDAs is lost.
- Using more than one disk adapter prevents the loss of all the disks if a single
adapter fails. This can be used with mirroring so that different copies of the data are
accessed through different adapters.
- Certain SSA models such as the T40 allow you the choice to use a single external
power source or two external power sources. For higher availability you should use
two external power sources.
- RAID allows you to get redundancy with fewer disks than mirroring.


- If you are mirrored across two disk cabinets, consider a quorum-buster disk to
prevent loss of quorum if you lose access to one cabinet. This is discussed in the
next visual.

The Quorum Buster
In some conditions, loss of quorum may lead to an unplanned
system downtime. The quorum buster can help eliminate this
possibility.

[Figure: a sharedvg volume group mirrored across shared disks, with an additional quorum-buster disk]


Figure 2-41. The Quorum Buster AU546.0

Notes:

Introduction
In an environment where you have a shared volume group with mirrored logical
volume(s), you could have a situation where you lose half the disks. This would cause
quorum to fail even though all the data is still available through the mirrored copy.

Quorum buster disk


The solution is to add a disk to the volume group without putting data on it; this is called
a quorum buster disk. The extra disk need not contain any data, but as a member of the
shared VG it holds a copy of the VGDA and hence is counted in the quorum check.
Note: In order to be effective, the quorum buster disk must not rely on any component
that either of the two halves of the rest of the volume group relies on. In other words, the
quorum buster must have its own disk controller (in each node), its own source of power
and its own cabling and cooling. If, for example, the quorum buster shares a disk

controller or a power supply with one of the two halves then the loss of that disk
controller or power supply results in the loss of the half and of the quorum buster which,
in turn, results in the loss of quorum and the volume group goes offline.

HACMP Forced Varyon
HACMP 5.x provides a per resource group forced varyon:
Each resource group has a flag which can be set to cause HACMP to
perform a careful forced varyon of the resource group's VGs
If normal varyonvg fails and this flag is set:
HACMP verifies that at least one complete copy of each logical volume is
available
If verification succeeds, HACMP forces the volume group online
This is not a complete and perfect solution to quorum issues:
If the cluster is partitioned then the rest of the volume group might still
be online on a node in the other partition
HACMP 4.5 introduced forced varyon for all shared VGs:
Still available in HACMP 5.x
If the HACMP_MIRROR_VARYON environment variable is set to TRUE,
forced varyon is enabled for all shared VGs in the cluster
If set, HACMP_MIRROR_VARYON overrides the per resource group
forced varyon flag

Figure 2-42. HACMP Forced Varyon AU546.0

Notes:

varyonvg -f
AIX 5L provides the ability to varyon a volume group if a quorum of disks is not
available. This is called forced varyon. The varyonvg -f command allows a volume
group to be made active that does not currently have a quorum of available disks. All
disks that cannot be brought to an active state will be put in a removed state. At least
one disk must be available for use in the volume group.

Per resource group forced varyon


HACMP 5.x provides a flag in each resource group which allows you to enable forced
varyon of the VGs in that resource group, as described in the visual.


Forced varyon of all shared volume groups


The HACMP_MIRROR_VARYON environment variable, introduced in HACMP 4.5, when set
to TRUE, enables the forced varyon mechanism for all shared volume groups in the
cluster.
In contrast, the HACMP 5.x forced varyon mechanism applies to specific resource
groups' volume groups.
The HACMP_MIRROR_VARYON variable is still supported by HACMP 5.x and, if set to TRUE,
overrides any per-resource group settings for the forced varyon feature.
If the HACMP_MIRROR_VARYON variable is used, it should probably be defined by inserting
the following line into /etc/environment on each cluster node:
HACMP_MIRROR_VARYON=TRUE

MISSINGPV_VARYON environment variable


An approach commonly used in the past to deal with quorum-related issues involves
the use of the MISSINGPV_VARYON environment variable. This AIX provided environment
variable, if set to TRUE in /etc/environment, enables the forced varyon of any VGs
which are missing disks.
Clusters which use the MISSINGPV_VARYON variable should probably be updated to use
either the HACMP_MIRROR_VARYON variable or HACMP 5.x's forced varyon feature.
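A minimal, hedged sketch of defining the variable on a node (run as root, and repeat on every cluster node; the grep guard is simply there to avoid duplicate entries):

```shell
# Append HACMP_MIRROR_VARYON=TRUE to /etc/environment if not already defined.
grep -q '^HACMP_MIRROR_VARYON=' /etc/environment ||
    echo 'HACMP_MIRROR_VARYON=TRUE' >> /etc/environment
```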

Recommendations for Forced Varyon
Before enabling HACMP's forced varyon feature for a volume
group or the HACMP_MIRROR_VARYON variable for the entire
cluster, ensure that:
The affected volume groups are mirrored across disk enclosures
The affected volume groups are set to super-strict allocation
There are redundant heartbeat networks between all nodes
Administrative policies are in effect to prevent volume group structural
changes when the cluster is running degraded (that is, failed over or
with disks missing)


Figure 2-43. Recommendations for Forced Varyon AU546.0

Notes:

Be careful when using forced varyon


Failure to follow each and every one of these recommendations could result in either
data divergence or inconsistent VGDAs. Either problem can be very difficult if not
impossible to resolve in any sort of satisfactory way, so be careful!

More information
Refer to the HACMP for AIX Administration Guide Version 5.3 (chapter 14) and the
HACMP for AIX Planning and Installation Guide Version 5.3 (chapter 5) for more
information about forced varyon and quorum issues.


Enhanced Concurrent Volume Groups


Introduced in AIX 5L V5.1
Supported for all HACMP-supported disk technologies
Supports JFS and JFS2 filesystems
File systems may only be mounted by one node at a time
Replaces old style classic concurrent volume groups
Classic concurrent VGs only supported on SSA or serial DASD and
only on 32 bit kernel
AIX 5L V5.1: Enhanced concurrent is default when using C-SPOC to
create concurrent VGs
AIX 5L V5.2: Cannot create classic concurrent VGs, but existing
classic concurrent VGs may be used
AIX 5L V5.3: classic concurrent VGs cannot be used at all
C-SPOC can be used to convert classic concurrent VGs to enhanced
concurrent VGs
Enhanced concurrent VGs are required in order to use:
Heartbeat over disk for a non-IP network
(Covered in the network unit)
Fast disk takeover


Figure 2-44. Enhanced Concurrent Volume Groups AU546.0

Notes:

Introduction
Defining an enhanced concurrent volume group allows the LVM to use RSCT to
manage varyonvg and varyoffvg processing.

Concurrent access
In a concurrent access environment, all the nodes will varyon the volume group.

Fast disk takeover (enhanced concurrent VGs in a non-concurrent environment)
As was described earlier, using enhanced concurrent volume groups can result in
significantly shorter fallover and fallback times (depending on the number of physical
volumes and volume groups involved). In this case, one node will varyon the volume
group in active mode, while all the other nodes will varyon the VG in passive mode.

Heartbeat over disk

Using enhanced concurrent volume groups also provides the capability to do heartbeats
over disk, creating a non-IP heartbeat network for HACMP (discussed in the next unit).
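A hedged command-line sketch of creating and checking an enhanced concurrent capable VG. ecm_vg1 and hdisk4 are example names; the -C flag and the exact lsvg wording should be verified on your AIX level, and C-SPOC is the preferred method once the cluster exists.

```shell
# Example only: assumes bos.clvm.enh is installed on the node.
mkvg -y ecm_vg1 -C hdisk4           # -C: enhanced concurrent capable VG
lsvg ecm_vg1 | grep -i concurrent   # look for "Concurrent: Enhanced-Capable"
```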


Active Varyon versus Passive Varyon


Active Varyon (lsvg -o)
- Behaves like normal varyon (listed with lsvg -o)
- Allows all of the usual operations
- RSCT responsible for ensuring that only one node has VG actively varied on
Passive Varyon (lsvg <vg_name>)
- Volume group is available in a very limited read-only mode
- Only certain operations allowed:
  - Reading volume group configuration information (for example, lsvg)
  - Reading logical volume configuration information (for example, lslv)
- Most operations are prohibited:
  - Any operations on filesystems and logical volumes (for example, mounts, open, create, modify, delete, and so forth)
  - Modifying, synchronizing the volume group's configuration
  - Any operation which changes the contents or hardware state of the disks
HACMP uses the appropriate varyonvg commands with enhanced concurrent volume groups
Protecting VG integrity when using fast disk takeover
- Use multiple IP networks and disk heartbeating (discussed in next unit)
- Do not make structural changes to VG unless all nodes are online
Copyright IBM Corporation 2005

Figure 2-45. Active Varyon versus Passive Varyon AU546.0

Notes:

Active varyon
If using enhanced concurrent volume groups in a non-concurrent access
environment, only one node will varyon the VG in active mode, allowing full access.

Passive varyon
Other nodes will varyon the VG in passive mode. In passive mode, only very limited
operations are allowed on the volume group.

Fast disk takeover


Switching a volume group from active to passive state (or the reverse) is a very fast
operation as it only updates the LVM's internal state of the volume group in an AIX
kernel data structure and does not require any actual disk access operations. This is
what makes fast disk takeover faster than traditional disk-reservation-based volume
group takeover.

Protecting volume group integrity using fast disk takeover


When fast disk takeover is used, the SCSI disk reservation function is not used. If the
cluster becomes partitioned, nodes in each partition could accidentally varyon the
volume group in active state. Because active state varyon of the volume group allows
mounting of filesystems and changing physical volumes, this situation can result in
divergent copies of the same volume group and, ultimately, data corruption.
To avoid this situation:
- Make sure that there are multiple heartbeat paths to prevent a loss of network
communication from triggering a fallover when the active node is still running. This
protects against a partitioned cluster.
- Avoid making structural changes to the VG (such as adding or removing a logical
volume, changing the size of a logical volume, and so forth) unless all nodes are
online. This ensures that all nodes will have a common view of the volume group
structure.


lsvg vg_name
ON ACTIVE NODE:
halifax # lsvg ecmvg
VOLUME GROUP:       ecmvg            VG IDENTIFIER:    0009314700004c00000000fe2eaa2d6d
VG STATE:           active           PP SIZE:          8 MB
VG PERMISSION:      read/write       TOTAL PPs:        537 (4296 MB)
... ... ...
Concurrent:         Enhanced-Capable Auto-Concurrent:  Disabled

ON PASSIVE NODE:
toronto # lsvg ecmvg
VOLUME GROUP:       ecmvg            VG IDENTIFIER:    0009314700004c00000000fe2eaa2d6d
VG STATE:           active           PP SIZE:          8 MB
VG PERMISSION:      passive-only     TOTAL PPs:        537 (4296 MB)
... ... ...
Concurrent:         Enhanced-Capable Auto-Concurrent:  Disabled


Figure 2-46. lsvg vg_name AU546.0

Notes:

Introduction
The VG PERMISSION field in the output of lsvg shows if a volume group is varied on in
active or passive mode.
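As a quick illustration, this check can be scripted. The lsvg output below is a
hypothetical sample embedded in a variable; on a real cluster node you would simply
run lsvg against the volume group and inspect the result:

```shell
# Hypothetical lsvg output captured from a passive node; on a live cluster
# you would run "lsvg ecmvg" on the node itself instead.
lsvg_output='VOLUME GROUP:       ecmvg          VG IDENTIFIER:  0009314700004c00000000fe2eaa2d6d
VG STATE:           active         PP SIZE:        8 MB
VG PERMISSION:      passive-only   TOTAL PPs:      537 (4296 MB)'

# Pull out the VG PERMISSION value: "read/write" on the active node,
# "passive-only" on the passive nodes in a fast disk takeover configuration.
mode=$(printf '%s\n' "$lsvg_output" | awk '/^VG PERMISSION/ {print $3}')

if [ "$mode" = "read/write" ]; then
    echo "active varyon"
elif [ "$mode" = "passive-only" ]; then
    echo "passive varyon"
fi
```

Remember that only the active node also appears in lsvg -o output, since the passive
nodes hold the volume group in a restricted state.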


LVM and HACMP Considerations
Following these simple guidelines helps keep the configuration easier
to administer:
All LVM constructs must have unique names in the cluster.
For example, httplv, httploglv, httpfs and httpvg
Mirror or otherwise provide redundancy for critical logical volumes.
Don't forget the jfslog
If it isn't worth mirroring then consider deleting it now rather than
having to wait to lose the data when the wrong disk fails someday
Even data which is truly temporary is worth mirroring as it avoids
an application crash when the wrong disk fails
RAID-5 and ESS-based storage are alternative ways to provide
redundancy
The VG major device numbers should be the same
Mandatory for clusters exporting NFS filesystems, but it is a good
habit for any cluster
Shared data on internal disks is a bad idea
Focus on the elimination of single points of failure

Figure 2-47. LVM and HACMP Considerations AU546.0

Notes:

Unique names
Since your LVM definitions are used on multiple nodes in the cluster, you must make
sure that the names created on one node are not in use on another node. The safest
way to do this is generally to explicitly create and name each entity (do not forget to
explicitly create, name and format (using logform) the jfslog logical volumes).
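As a sketch of how one might check for name clashes before importing a shared
volume group, the following compares two per-node logical volume name lists. The
names here are invented for illustration; on real nodes you could gather the lists with
lsvg -l against each volume group:

```shell
# Made-up logical volume inventories for two cluster nodes.
node_a_lvs='httplv
httploglv
rootlv'
node_b_lvs='db2lv
httplv
loglv00'

# A name present on both nodes would collide when the shared VG is
# imported on the peer, so flag any duplicates.
collisions=$(printf '%s\n%s\n' "$node_a_lvs" "$node_b_lvs" | sort | uniq -d)

if [ -n "$collisions" ]; then
    echo "Name conflicts found: $collisions"
fi
```

In this sample, httplv exists on both nodes and would have to be renamed on one of
them before the volume group definitions could safely coexist.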

Mirror or otherwise provide redundancy


For availability, you should mirror (or use hardware RAID) for all your shared logical
volumes including the jfslog logical volume.
- If it is worth keeping then it is worth mirroring. If it is not worth mirroring then it is not
worth keeping and should be deleted.
- It is important to even mirror totally scratch space (in other words, space whose
contents are worthless after a restart of the application). Failure to mirror scratch
space could cause an outage if the wrong disk fails. In order to avoid the outage,
mirror the scratch space!
The mirrorvg command provides an easy way to mirror all the logical volumes on a
given volume group. This same functionality may also be accomplished manually if you
execute the mklvcopy command for each individual logical volume in a volume group.

Volume group major numbers


If you are using NFS, you must be sure to use the same major number on all nodes.
Even if not using NFS, this is good practice, and makes it easy to begin using NFS with
this volume group in the future.
Use the lvlstmajor command on each node to determine a free major number
common to all nodes.
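The idea can be sketched as follows. The free-major lists here are invented stand-ins
for what lvlstmajor reports on each node (its real output uses ranges, which you would
expand to individual numbers first):

```shell
# Invented, sorted lists of free major numbers (one per line), standing in
# for expanded lvlstmajor output from two nodes.
free_on_node_a='39
40
43
44'
free_on_node_b='40
41
43'

# Each list has unique entries, so after merging and sorting, any number
# that appears twice is free on both nodes; take the lowest such number
# for use when creating or importing the shared volume group.
major=$(printf '%s\n%s\n' "$free_on_node_a" "$free_on_node_b" |
        sort -n | uniq -d | head -n 1)
echo "Common free major number: $major"
```

With these sample lists, 40 and 43 are free on both nodes, and 40 would be chosen as
the common major number.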

Use external disks for shared data


External disks should be used for shared volume groups. If internal disks were
configured for shared volume groups and the owning node needed to be powered down
for any reason, it would render the shared volume groups unavailable - clearly a bad
idea.

Eliminate single points of failure


The focus of cluster design must always be eliminating single points of failure.


Support for OEM Volume Groups
- OEM volume groups can be used with HACMP
- HACMP 5.3 automatically detects and provides the methods for Veritas volume groups (VxVM)
- Configuring custom volume group processing methods using SMIT:
  - List volume groups of a specified type
  - List physical and logical disks in a volume group
  - Bring a volume group online and offline
  - Determine a volume group's status
  - Verify volume group configuration
  - Provide a location of log files and other debugging information (view using the AIX 5L snap -e command)
- Limitations and more information


Figure 2-48. Support for OEM Volume Groups AU546.0

Notes:

Introduction
You can configure OEM volume groups in AIX 5L and use HACMP as an IBM High
Availability solution to manage such volume groups.
Note: Different OEMs may use different terminology to refer to similar constructs. For
example, the Veritas Volume Manager (VxVM) term Disk Group is analogous to the AIX
LVM term Volume Group. We will use the term volume groups to refer to OEM and
Veritas volume groups.

Veritas Volume Manager


Among other OEM volume groups and filesystems, HACMP 5.3 supports volume
groups and filesystems created with VxVM in Veritas Foundation Suite v.4.0. To make it
easier for you to accommodate Veritas volume groups in the HACMP cluster, the
methods for Veritas volume group support are predefined in HACMP and are used
automatically. After you add Veritas volume groups to HACMP resource groups, you
can select the methods for the volume groups from the pick lists in HACMP SMIT
menus for OEM volume groups support.
Note: Veritas Foundation Suite is also referred to as Veritas Storage Foundation (VSF).

Configuring custom volume group processing methods using SMIT


When HACMP identifies OEM volume groups of a particular type, it can be configured
to provide the volume group processing functions shown in the visual.
You can add, change, and remove custom volume group processing methods for a
specific OEM volume group using SMIT. You can select existing custom volume group
methods that are supported by HACMP, or you can use your own custom methods.
Using SMIT, you can perform the following functions for OEM volume groups:
- Add Custom Volume Group Methods
- Change/Show Custom Volume Group Methods
- Remove Custom Volume Group Methods

Additional considerations
The custom volume group processing methods that you specify for a particular OEM
volume group are added to the local node only. This information is not propagated to
other nodes; you must copy this custom volume group processing method to each node
manually. Alternatively, you can use the HACMP File Collections facility to make the
disk, volume, and file system methods available on all nodes.

Limitations and more information


There are some limitations to using OEM volume groups with HACMP. For example,
HACMP supports a number of extended functions for LVM volume groups that are not
available for OEM volume groups, such as enhanced concurrent mode, active and
passive varyon process, heartbeating over disk, selective fallover upon volume group
loss and others. In addition, there are a number of other limitations.
For complete details on using OEM volume groups with HACMP, see Appendix D in the
HACMP for AIX 5L Planning and Installation Guide.


Support for OEM File Systems
- OEM file systems can be used with HACMP
- HACMP 5.3 automatically detects and provides the methods for Veritas file systems (VxFS)
- Configuring custom file system processing methods using SMIT:
  - List file systems of a specified type
  - List volume groups hosting a specified file system type
  - Bring a file system online and offline
  - Determine a file system's status
  - Verify file system configuration
  - Provide a location of log files and other debugging information (view using the AIX 5L snap -e command)
- Limitations and more information


Figure 2-49. Support for OEM File Systems AU546.0

Notes:

Introduction
You can configure OEM file systems in AIX 5L and use HACMP as an IBM High
Availability solution to manage such file systems.

Veritas file systems


Among other OEM volume groups and filesystems, HACMP 5.3 supports volume
groups and filesystems created with VxVM in Veritas Foundation Suite v.4.0. To make it
easier for you to accommodate Veritas filesystems in the HACMP cluster, the methods
for Veritas filesystems support are predefined in HACMP. After you add Veritas
filesystems to HACMP resource groups, you can select the methods for the filesystems
from the pick lists in HACMP SMIT menus for OEM filesystems support.
Note: Veritas Foundation Suite is also referred to as Veritas Storage Foundation (VSF).


Configuring custom file system processing methods using SMIT


When HACMP identifies OEM file systems of a particular type, it can be configured to
provide the file system processing functions shown in the visual.
You can add, change, and remove custom file system processing methods for a
specific OEM file system using SMIT. You can select existing custom file system
methods that are supported by HACMP, or you can use your own custom methods.
Using SMIT, you can perform the following functions for OEM file systems:
- Add Custom Filesystem Methods
- Change/Show Custom Filesystem Methods
- Remove Custom Filesystem Methods

Additional considerations
The custom file system processing methods that you specify for a particular OEM file
system are added to the local node only. This information is not propagated to other
nodes; you must copy this custom file system processing method to each node
manually. Alternatively, you can use the HACMP File Collections facility to make the
disk, volume, and filesystem methods available on all nodes.

Limitations and more information


There are some limitations to using OEM file systems with HACMP.
For complete details on using OEM file systems with HACMP, see Appendix D in the
HACMP for AIX 5L Planning and Installation Guide.


Checkpoint
1. True or False?
Lazy update keeps VGDA constructs in sync between cluster nodes
(reserve/release-based shared storage protection)
2. Which of the following commands will bring a volume group
online?
a. getvtg <vgname>
b. mountvg <vgname>
c. attachvg <vgname>
d. varyonvg <vgname>
3. True or False?
Quorum should always be disabled on shared volume groups.
4. True or False?
filesystem and logical volume attributes cannot be changed while the
cluster is operational.
5. True or False?
An enhanced concurrent volume group is required for the heartbeat over
disk feature.


Figure 2-50. Checkpoint AU546.0

Notes:


Unit Summary
- Access to shared storage must be controlled
  - Nonconcurrent (serial) access
    - Reserve/release-based protection: slower and may result in ghost disks
    - RSCT-based protection (fast disk takeover): faster, no ghost disks, and some risk of partitioned cluster in the event of communication failure
    - Careful planning is needed for both methods of shared storage protection to prevent fallover due to communication failures
  - Concurrent access
    - Access must be managed by the parallel application
- HACMP supports several disk technologies
  - Must be well understood to eliminate single points of failure
- Shared storage should be protected with redundancy
  - LVM mirroring
    - LVM configuration options must be understood to ensure availability
    - LVM quorum checking and forced varyon must be understood to ensure availability
  - Hardware RAID

Figure 2-51. Unit Summary AU546.0

Notes:


Unit 3. Networking Considerations for High Availability

What This Unit Is About


This unit describes the HACMP functions related to networks. You
learn which networks are supported in an HACMP cluster and what
you have to take into consideration when planning them.

What You Should Be Able to Do


After completing this unit, you should be able to:
Discuss how HACMP uses networks
Describe the HACMP networking terminology
Explain and configure IP Address Takeover (IPAT)
Configure an IP network for HACMP
Configure a non-IP network
Explain how client systems are likely to be affected by HACMP
Minimize the impact of failure recovery on client systems

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4867-05 HACMP for AIX: HACMP Master Glossary
SC23-4864-06 HACMP for AIX: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX: Planning and Installation Guide
SC23-4862-06 HACMP for AIX: Administration Guide
SC23-5177-00 HACMP for AIX: Troubleshooting Guide

Copyright IBM Corp. 1998, 2005 Unit 3. Networking Considerations for High Availability 3-1
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

Unit Objectives
After completing this unit, you should be able to:
Discuss how HACMP uses networks
Describe the HACMP networking terminology
Explain and set up IP Address Takeover (IPAT)
Configure an IP network for HACMP
Configure a non-IP network
Explain how client systems are likely to be affected by
failure recovery
Minimize the impact of failure recovery on client systems


Figure 3-1. Unit Objectives AU546.0

Notes:

Unit objectives
This unit discusses networking in the context of HACMP.


3.1 How HACMP Uses Networks


Topic 1 Objectives:
How HACMP Uses Networks
After completing this topic, you should be able to:
Explain how HACMP uses networks to:
Provide client access to the cluster
Detect failures
Diagnose failures
Communicate with other nodes in the cluster
Explain why a non-IP network is an essential part of any
HACMP cluster
Describe what a persistent node IP label is and what it is
typically used for
Provide an overview of IP Address Takeover


Figure 3-2. Topic 1 Objectives: How HACMP Uses Networks AU546.0

Notes:

Topic 1 objectives
This topic explores how HACMP uses networks. It includes an introduction to the
HACMP concept of IP Address Takeover (IPAT). A more detailed look at IPAT appears
in a later section.


How Does HACMP Use Networks?
HACMP uses networks to:
1. Provide clients with highly available access to the cluster's
applications
2. Detect and diagnose node, network and network interface
card (NIC) failures
3. Communicate with other HACMP daemons on other nodes in
the cluster
1

en0 en1 en0 en1

2
RSCT RSCT
3
clcomd clcomd


Figure 3-3. How Does HACMP Use Networks? AU546.0

Notes:

Network design for availability


In order to design a network that supports high availability using HACMP, we must
understand how HACMP uses networks.

Client access to applications


From the user's perspective, the only reason that the cluster is on a network is that the
network provides them with access to the cluster's highly available applications. As we
will see, satisfying this requirement for client access to the cluster involves a bit more
than just plugging in a network cable.

Detection and diagnosis of failures


In contrast, the fact that HACMP uses the networks to detect and diagnose various
failures is likely to be of considerably more interest to the cluster designers and
administrators. Just being able to detect node, network and network interface card
(NIC) failures imposes a number of requirements on how the networks are designed.
Being able to distinguish between certain failures, for example the failure of a network
and the failure of a node, imposes yet more requirements on the network design.
Reliable Scalable Cluster Technology (RSCT) provides facilities for monitoring node
membership; network interface and communication interface health; and event
notification, synchronization and coordination via reliable messaging.

HACMP internode communications


The final way in which HACMP uses networks, to communicate with HACMP daemons
running on other nodes in the cluster, is rather mundane. Assuming that the
requirements imposed by the first two uses are properly satisfied, this last use does not
impose any additional requirements on the network design.
All communication between nodes is sent through the Cluster Communications
daemon, clcomd, which runs on each node. The clcomd daemon manages the
connection authentication between nodes and any message authentication or
encryption configured.


Providing HA Client Access to the Cluster
Providing clients with highly available access to the cluster's
applications requires:
Multiple NICs per network per node
(Possibly) multiple networks per node
Careful network design and implementation all the way out to
the client's systems


Figure 3-4. Providing HA Client Access to the Cluster AU546.0

Notes:

Network interface card (NIC) and single point of failure (SPOF)


In order to avoid the NIC being a single point of failure, each cluster node requires at
least two NICs per network. The alternative is that the loss of a single NIC would cause
a significant outage while the application (that is, the resource group) is moved to
another node.

Network as SPOF
The network itself is, of course, a single point of failure since the failure of the network
will disrupt the users ability to communicate with the cluster. The probability of this
SPOF being an issue can be reduced by careful network design, an approach which is
often considered sufficient.


Eliminating the network as a SPOF


If the network as a SPOF must be eliminated then the cluster requires at least two
networks. Unfortunately, this only eliminates the network directly connected to the
cluster as a SPOF. It is not unusual for the users to be located some number of hops
away from the cluster. Each of these hops involves routers, switches and cabling - each
of which typically represents yet another SPOF. Truly eliminating the network as a
SPOF can become a massive undertaking. Most organizations which are concerned
about the network as a SPOF usually compromise by designing the network to ensure
that no single failure deprives all key users of their access to the cluster.

Importance of careful network design


In the end, there is simply no replacement for careful network design and
implementation all the way out to the users. Failure to perform this design and
implementation activity properly could easily become a crippling issue once the cluster
is put into production.


What HACMP Detects and Diagnoses
Remember, HACMP only handles the following
failures directly:
Network Interface Card (NIC) failure
Node failure
Network failure

IP network
en0 en1 en0 en1

non-IP network

bondar hudson


Figure 3-5. What HACMP Detects and Diagnoses AU546.0

Notes:

Failures that HACMP handles directly


HACMP uses RSCT to detect failures. Actually, the only thing that RSCT can detect is
the loss of heartbeat packets. RSCT sends heartbeats over IP and non-IP networks. By
gathering heartbeat information from multiple NICs and non-IP devices on multiple
nodes, HACMP makes a determination of what type of failure this is and takes
appropriate action. Using the information from RSCT, HACMP only handles three
different types of failures:
- Network interface card (NIC) failures
- Node failures
- Network failures


Other failures
HACMP uses AIX features to respond to other failures (for example, the loss of a
volume group can trigger a fallover), but HACMP is not directly involved in detecting
these other types of failures. HACMP really is only involved in NIC, node, and network
failures.


Heartbeat Packets
HACMP sends heartbeat packets across networks
Heartbeat packets are sent and received by every NIC
This is sufficient to detect all NIC, node and network failures
Heartbeat packets are not acknowledged

en0 en1 en0 en1

bondar hudson


Figure 3-6. Heartbeat Packets AU546.0

Notes:

Heartbeat packets
HACMP's primary monitoring mechanism is to send heartbeat packets. The cluster
sends heartbeat packets from every NIC, to every NIC, and to and from non-IP
devices.

Heartbeating pattern
In a typical two-node cluster with two NICs on the network, the heartbeat packets are
sent in the pair-wise fashion shown above. The pattern gets more complicated when
the cluster gets larger as HACMP uses a pattern which is intended to satisfy three
requirements:
- That each NIC be used to send heartbeat packets (to verify that the NIC is capable
of sending packets).


- That heartbeat packets be sent to each NIC (to verify that the NIC is capable of
receiving heartbeat packets).
- That no more heartbeat packets are sent than are necessary to achieve the first two
requirements (to minimize the load on the network).
The details of how HACMP satisfies the third requirement are discussed in a later unit.

Detecting failures
Heartbeat packets are not acknowledged. Instead, each node knows what the
heartbeat pattern is and simply expects to receive appropriate heartbeat packets on
appropriate network interfaces. Noticing that the expected heartbeat packets have
stopped arriving is sufficient to detect failures.
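The detection idea can be illustrated with a toy calculation; this is a conceptual model
only, not RSCT's actual implementation, and the timing numbers below are invented
rather than HACMP's real tuning values:

```shell
# Toy model of missed-heartbeat detection: a peer is suspected failed once
# no heartbeat has arrived within the failure window. Numbers are invented.
last_heartbeat=100   # when the last heartbeat packet arrived (seconds)
now=112              # current time (seconds)
failure_window=10    # silence longer than this is treated as a failure

if [ $((now - last_heartbeat)) -gt "$failure_window" ]; then
    peer_status="suspected failed"
else
    peer_status="alive"
fi
echo "peer status: $peer_status"
```

In the real cluster, such a suspicion is only the detection step; diagnosis (described
next) then determines what actually failed.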


Failure Detection versus Failure Diagnosis
Failure Detection is realizing that something is wrong
For example, realizing that packets have stopped flowing
between bondar's en1 and hudson's en1
Failure Diagnosis is figuring out what is wrong
For example, figuring out that bondar's en1 NIC has failed
HACMP uses RSCT to do both detection and diagnosis

en0 en1 en0 en1

bondar hudson


Figure 3-7. Failure Detection versus Failure Diagnosis AU546.0

Notes:

Diagnosis
The heartbeat patterns just discussed are sufficient to detect a failure in the sense of
realizing that something is wrong. They are not sufficient to diagnose a failure in the
sense of figuring out exactly what is broken.
For example, if the en1 interface on the bondar node fails as in the visual above, bondar
stops receiving heartbeat packets via its en1 interface, and hudson stops receiving
heartbeat packets via its en1 interface. Bondar and hudson both realize that something
has failed, but neither of them has enough information to determine what has failed.


Failure Diagnosis
When a failure is detected, HACMP (RSCT topology services)
uses specially crafted packet transmission patterns to
determine (that is, diagnose) the actual failure by ruling out
other alternatives
Example:
1. RSCT on bondar notices that heartbeat packets are no longer
arriving via en1 and notifies hudson (which has also noticed
that heartbeat packets are no longer arriving via its en1)
2. RSCT on both nodes sends diagnostic packets between
various combinations of NICs (including out via one NIC and
back in via another NIC on the same node)
3. The nodes soon realize that all packets involving bondar's
en1 are vanishing but packets involving hudson's en1 are
being received
4. DIAGNOSIS: bondar's en1 has failed.


Figure 3-8. Failure Diagnosis AU546.0

Notes:

Diagnostic heartbeat patterns


Once one or more cluster nodes detect a failure, they share information and plan a
diagnostic packet pattern or series of patterns which will diagnose the failure.
These diagnostic packet patterns can be considerably more network-intensive than the
normal heartbeat traffic, although they usually only take a few seconds to complete the
diagnosis of the problem.


What If All Heartbeat Packets Stop?
A node might notice that heartbeat packets are no longer arriving
on any NIC.
In the configuration below, it's impossible for either node to
distinguish between failure of the network and failure of the other
node.
Each node concludes that the other node is down!

en0 en1 en0 en1

bondar hudson


Figure 3-9. What If All Heartbeat Packets Stop? AU546.0

Notes:

Total loss of heartbeat traffic


If a node in a two-node cluster realizes that it is no longer receiving any heartbeat
packets from the other node, then it starts to suspect that the other node has gone
down. Once it determines that it is totally unable to communicate with the other node, it
concludes that the other node has failed.

Both nodes try to take control


In the above configuration, if the network fails, then each node soon concludes that the
other node has failed. Each node then proceeds to take over any resource groups
configured to be able to run on both nodes, but currently resident on the other node.


Partitioned cluster
Since each node is, in fact, still very alive, the result is that the applications are now
running simultaneously on both nodes. If the shared disks are also online to both nodes,
then the result could be a quite massive data corruption problem. This situation is called
a partitioned cluster. It is, clearly, a situation which must be avoided.
Note that essentially equivalent situations can occur in larger clusters. For example, a
five-node cluster might become split into a group of two nodes and a group of three
nodes. Each group concludes that the other group has failed entirely and takes what it
believes to be appropriate action. The result is almost certainly very unpleasant.


Uempty
All Clusters REQUIRE a Non-IP Network!
There must be more than one network to distinguish between:
Failure of the other node
Failure of a network
There must be a non-IP network to distinguish between:
Failure of the other node's IP subsystem
Total failure of the other node
Therefore, ALL CLUSTERS SHOULD HAVE A NON-IP
NETWORK!!!

en0 en1 en0 en1

non-IP network

bondar hudson

Figure 3-10. All Clusters REQUIRE a non-IP Network! AU546.0

Notes:

Why we need more than one network


Distinguishing between the failure of the network and the failure of the other node
requires that there be a path between the two nodes which does not involve the network
in question. Consequently, if a partitioned cluster is to be avoided then every cluster
must be configured with at least two ways for nodes to communicate with each other.

Why we need a non-IP network


Although rather unlikely, it is also possible for the entire IP (Internet Protocol)
subsystem to fail on a node without the node crashing. In order to distinguish between
the failure of the IP subsystem on a node and the failure of the node itself, every cluster
must be configured with a way to communicate between nodes which does not require
IP to be operational.


Both IP and non-IP networks are needed


These pathways which do not require IP are called non-IP networks. Every cluster must
be configured with enough non-IP networks to ensure that any node can communicate
with every other node (possibly by asking an intermediate node to pass along
messages) without requiring any node's IP subsystem to be operational.

Both IP and non-IP networks are used


Many untrained people seem to assume that the non-IP network is only for heartbeating,
with the implication possibly being that the IP networks are not used for heartbeating or
that the non-IP networks are used only for heartbeating. Neither implication is true. HACMP
sends heartbeat packets across all configured networks, IP and non-IP. HACMP uses
whichever network does the job to communicate with other cluster nodes.

Terminology: serial networks versus non-IP networks


HACMP 4.x documentation generally refers to these non-IP networks as serial
networks. HACMP 5.x has de-emphasized the use of the term serial network as the
term is often misunderstood to imply only RS232 serial protocol networks. Although
possibly somewhat bulkier, the term non-IP network better captures the essence of
what these networks are - a non-IP communication path between nodes.

An Important Implementation Detail
HACMP must ensure that heartbeats are sent out via all NICs and
know which NIC is used.
If a node has multiple NICs on the same logical subnet then AIX
can rotate which NIC is used to send packets to the network.
Therefore, each NIC on each physical IP network on any given
node must have an IP address on a different logical subnet.

en0 en1 en0 en1


192.168.1.1 192.168.2.1 192.168.1.2 192.168.2.2

non-IP network

bondar hudson


Figure 3-11. An Important Implementation Detail AU546.0

Notes:

Requirements for HACMP to monitor every NIC


If a node has two NICs on the same logical IP subnet and a network packet is sent to an
IP address on the same logical subnet, then the AIX kernel is allowed to use either NIC
on the sending node to send the packet.
As this is incompatible with HACMP's requirement that HACMP be able to dictate which
NIC is to be used to send heartbeat packets, HACMP requires that each NIC on each
node be on a different logical IP subnet.
We'll give some examples of valid and invalid configurations later in this unit, after we
have covered the other subnetting rules.
Note: There is an exception to the requirement that each NIC be on a different logical IP
subnet. We will discuss that shortly.
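As a sketch of this rule, the following compares the network portions of two NIC addresses, assuming a 255.255.255.0 netmask (the helper name is invented; addresses come from the visual):

```shell
# Are two IP addresses on the same /24 logical subnet?
same_subnet24() {
    [ "$(echo "$1" | cut -d. -f1-3)" = "$(echo "$2" | cut -d. -f1-3)" ]
}

# bondar's NICs from the visual: en0=192.168.1.1, en1=192.168.2.1
if same_subnet24 192.168.1.1 192.168.2.1; then
    echo "INVALID: AIX may rotate NICs, so HACMP cannot monitor each one"
else
    echo "OK: each NIC is on its own logical subnet"
fi
```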


Failure Recovery and Reintegration


HACMP continues to monitor failed components in order to detect
their recovery
Recovered components are reintegrated back into the cluster
Reintegration might trigger significant actions
For example, recovery of primary node will optionally trigger
fallback of resource group to primary node

en0 en1 en0 en1 en0 en1 en0 en1

bondar hudson bondar hudson


Figure 3-12. Failure Recovery and Reintegration AU546.0

Notes:

NIC and network recovery


NIC cards and networks are automatically reintegrated into the cluster when they
recover.

Node recovery
In contrast, a node is not considered to have recovered until the HACMP daemons have
been started up on the node. This allows the node to be rebooted and otherwise
exercised as part of the repair process without HACMP declaring failures and/or
performing reintegration while the repair action is going on.
The reintegration of a component might trigger quite significant actions. For example, if
a node is reintegrated which has a high priority within a resource group then, depending
on how the resource group is configured, the resource group might fallback.

IP Address Takeover (IPAT)
Each highly available application is likely to require its own IP
address (called a service IP address)
This service IP address would usually be placed in the
application's resource group
HACMP would then be responsible for ensuring that the service
IP address was available on the node currently responsible for
the resource group

[Figure: a resource group containing a service IP label, file systems, volume groups, NFS exports, NFS mounts, and an application server]

Figure 3-13. IP Address Takeover (IPAT) AU546.0

Notes:

Service IP address
Most highly available applications work best, from the user's perspective, if the
application's IP address never changes. This capability is provided by HACMP using a
feature called IP Address Takeover. An IP address is selected which is associated with
the application. This IP address is called a service IP address because it is used to
deliver a service to the user. It is placed in the application's resource group. HACMP
then ensures that the service IP address is kept available on whichever node the
resource group is currently on. The process of moving an IP address to another NIC or
to a NIC on another node is called IP address takeover (IPAT).

Applications that don't need IPAT


Although very common, IPAT is an optional behavior which must be configured into the
cluster. An example of an application which might not require IPAT is a database server


for which the client software can be configured to check multiple IP addresses when it is
looking for the server.
Also, IPAT is not supported for resource groups configured for concurrent access as the
application in such a resource group is active on all the nodes which are currently up.
Consequently, clients of a concurrent access resource group must be capable of finding
their server by checking multiple IP addresses.
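A client of such an application might locate its server by walking a list of candidate addresses. A minimal sketch (helper names and addresses are invented; a real client would attempt a TCP connection rather than use a stub):

```shell
# Stub standing in for a real connection attempt to one address.
reachable() {
    [ "$1" = "192.168.25.12" ]   # pretend only this node answers
}

# Try each candidate server address in turn; print the first that answers.
find_server() {
    for addr in "$@"; do
        if reachable "$addr"; then
            echo "$addr"
            return 0
        fi
    done
    return 1
}

find_server 192.168.25.11 192.168.25.12
```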

IPAT After a Node Failure
If the application's current node fails, HACMP moves the
application's resource group to the other node.
If IPAT is configured for the resource group then the application's
service IP address is associated with the application's new node.

192.168.25.12

192.168.25.12


Figure 3-14. IPAT After a Node Failure AU546.0

Notes:

Node failure with IPAT


When a node that is running an IPAT-enabled resource group fails, HACMP moves the
resource group to an alternative node. Since the service IP address is in the resource
group, it moves with the rest of the resources to the new node.

Node failure from the user's perspective


The users experience a short outage and then, from their perspective, the same server
is back up and running.
You probably shouldn't correct the user when they mention that the server was down for
a few minutes earlier when you happen to know that it is still down and undergoing
repair! Strictly speaking, the service was down for a few minutes and is now back up
again.


IPAT After a NIC Failure


If the NIC associated with the application's service IP address fails,
HACMP moves the service IP address to another NIC.
From HACMP's perspective, NIC failures include anything which
prevents the NIC from sending and receiving packets (for example,
a damaged or disconnected cable, a failed switch port, and so
forth).

192.168.25.12

192.168.25.12


Figure 3-15. IPAT After a NIC Failure AU546.0

Notes:

NIC failure with IPAT


The failure of a NIC is generally handled locally on the node which experienced the
failure by moving the IP address to a still available NIC. The outage in this case is
considerably shorter than the one which occurs when a node fails.

User's perspective
Since existing TCP/IP sessions generally recover cleanly from this sort of
failure/move-IP-address operation, users might not even notice the outage if they don't
happen to be interacting with the application at the time of the failure.

Failure of all NICs on a node


If the last remaining NIC on a node fails, then HACMP triggers a fallover for any
resource groups with service IP addresses on the failed NIC's network.

Let's Review Topic 1
1. How does HACMP use networks (select all which apply)?
a. Provide client systems with highly available access to the
cluster's applications
b. Detect failures
c. Diagnose failures
d. Communicate between cluster nodes
e. Monitor network performance
2. Using information from RSCT, HACMP only directly handles
three types of failures: ______________, ______________,
______________.
3. True or False?
Heartbeat packets must be acknowledged or a failure is assumed to have
occurred.
4. True or False?
Clusters should include a non-IP network.
5. True or False?
Each NIC on each physical IP network on each node is required to have an
IP address on a different logical subnet.

Figure 3-16. Let's Review Topic 1 AU546.0

Notes:

3.2 HACMP Concepts and Configuration Rules


Topic 2 Objectives:
HACMP Concepts and Configuration Rules
After completing this topic, you should be able to:
List the networking technologies supported by HACMP
Describe the purpose of public and private HACMP
networks
Describe the topology components and their naming rules
Define key networking related HACMP terms
Describe the basic HACMP network configuration rules


Figure 3-17. Topic 2 Objectives: HACMP Concepts and Configuration Rules AU546.0

Notes:

Topic 2 objectives
This section will explore HACMP networking concepts, terms and configuration rules in
more detail.

HACMP Networking Support
Supported IP networking technologies:
Ethernet
All speeds
Not the IEEE 802.3 frame type which uses et0, et1 ...
FDDI
Token-Ring
ATM and ATM LAN Emulation
Etherchannel
SP Switch 1 and SP Switch 2
Supported non-IP network technologies:
Heartbeat over Disks (diskhb)
Requires Enhanced Concurrent Volume Group and
HACMP 5.x
RS232/RS422 (rs232)
Target Mode SSA (tmssa)
Target Mode SCSI (tmscsi)


Figure 3-18. HACMP Networking Support AU546.0

Notes:

Supported IP networks
HACMP supports all of the popular IP networking technologies (and a few which are
possibly not quite as popular). Note that the IEEE 802.3 Ethernet frame type is not
supported.

Supported non-IP networks


HACMP supports four non-IP networking technologies. Note that Target Mode SCSI
technology requires adapters that support target mode, such as PCI-X Dual Channel
Ultra320 SCSI Blind Swap Adapter (Feature code 5710).


Network Types
HACMP categorizes all networks:
IP networks:

Network type: ether, token, fddi, atm,


hps (SP Switch or High Performance Switch)
Network attribute: public or private

Non-IP networks (also called serial networks):

Network type: rs232, tmssa, tmscsi, diskhb



Figure 3-19. Network Types AU546.0

Notes:

IP networks
As mentioned before, IP networks are used by HACMP for:
- HACMP heartbeat (failure detection and diagnosis)
- Communications between HACMP daemons on different nodes
- Client network traffic

IP network attribute
The default for this attribute is public. Oracle uses the private network attribute
setting to select networks for Oracle inter-node communications. This attribute is not
used by HACMP itself. See the HACMP for AIX: Planning and Installation Guide for
more information.

HACMP and virtual Ethernet


HACMP 5.3 does support virtual Ethernet in POWER5-based systems; however, there
are some considerations. We summarize some of them below. For complete details on
using virtual I/O with HACMP, see:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390.
- IP Address Takeover (IPAT) via Aliasing must be used. IPAT via Replacement and
Hardware Address Takeover (HWAT) are not supported. In general, IPAT via
Aliasing is recommended for all HACMP networks that can support it.
Note: We will discuss IPAT and HWAT in detail in the next topic in this unit.
- HACMP's PCI Hot Plug facility cannot be used. PCI Hot Plug operations are
available through the VIO Server. Note that when an HACMP node is using Virtual
I/O, HACMP's PCI Hot Plug facility is not meaningful because the I/O adapters are
virtual rather than physical.
- All Virtual Ethernet interfaces defined to HACMP should be treated as
single-adapter networks as described in the Planning and Installation Guide. In
particular, the netmon.cf file must be used to monitor and detect failure of the
network interfaces. netmon.cf should include a list of clients to ping. Due to the nature
of Virtual Ethernet, other mechanisms to detect the failure of network interfaces are
not effective.
- If the VIO server has multiple physical interfaces on the same network or if there are
two or more HACMP nodes using VIO servers in the same frame, HACMP will not
be informed of (and hence will not react to) single physical interface failures. This
does not limit the availability of the entire cluster because VIOS itself routes traffic
around the failure. The VIOS support is analogous to EtherChannel in this regard.
Other methods (not based on the VIO server) must be used for providing notification of
individual adapter failures.

If the VIO server has only a single physical interface on a network then a failure of
that physical interface will be detected by HACMP. However, that failure will isolate
the node from the network.

Non-IP networks
HACMP uses non-IP networks for:
- Alternative non-IP path for HACMP heartbeat and messaging
- Differentiates between node/network failure
- Eliminates IP as a single point of failure


HACMP Topology Components


HACMP uses some unique terminology to describe the type and
function of topology (as in, network) components under its control.
[Figure: nodes bondar and hudson connected by a TCP/IP network (Internalnet) through network interface cards (communication interfaces, each with an IP label such as vancouver-service 192.168.5.2), and by non-IP networks (rs232 via serial ports, tmssa, diskhb) through communication devices]

Figure 3-20. HACMP Topology Components AU546.0

Notes:

Terminology
HACMP has quite a few special terms that are used repeatedly throughout the
documentation and the HACMP smit screens. Over the next few visuals we will discuss
some of the network related terminology in detail.
- node
An IBM eServer pSeries system operating within an HACMP cluster.
- node name
The name of a node from HACMP's perspective.
- IP label
For TCP/IP networks, the name specified in the /etc/hosts file or by the Domain
Name Service for a specific IP address.

HACMP nodes will have multiple NICs, and thus multiple IP labels, but only one
hostname. We'll look at the relationship between hostname, node name and IP
labels in the next visual.
In HACMP, IP labels are either service IP labels or non-service IP labels. We'll
discuss this distinction in the next few visuals.
- IP network
A network which uses the TCP/IP family of protocols.
- non-IP network or serial network
A point-to-point network which does not rely on the TCP/IP family of protocols.
- communication interface
A network connection onto an IP network (slightly better definition coming shortly).
- communication device
A port or device connecting a node to a non-IP network (slightly better definition
coming shortly).


On the Naming of Nodes


There are several names a node can be known by, including the
AIX hostname, the HACMP node name and one of the IP labels.
These concepts should not be confused.
AIX hostname HACMP node name
# hostname # /usr/es/sbin/cluster/utlities/get_local_nodename
gastown vancouver
# uname -n
gastown

IP labels
# netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lo0 16896 link#1 5338 0 5345 0 0
lo0 16896 127 localhost 5338 0 5345 0 0
lo0 16896 ::1 5338 0 5345 0 0
tr0 1500 link#2 0.4.ac.49.35.58 76884 0 61951 0 0
tr0 1500 192.168.1 vancouver-if1 76884 0 61951 0 0
tr1 1492 link#3 0.4.ac.48.22.f4 476 0 451 13 0
tr1 1492 192.168.2 vancouver-if2 476 0 451 13 0
tr2 1492 link#4 0.4.ac.4d.37.4e 5667 0 4500 0 0
tr2 1492 195.16.20 db-app-svc 5667 0 4500 0 0


Figure 3-21. On the Naming of Nodes AU546.0

Notes:

Hostname
Each node within an HACMP cluster is an IBM eServer pSeries system. It almost
certainly has a hostname associated with it that was assigned when the machine was
first installed onto the network. For example, a hypothetical machine might have been
given the name gastown.

HACMP node name


Each node within an HACMP cluster also has a node name. The node name for a
machine is almost always the same as the hostname, as the alternative would result in
unnecessary confusion. It is important to keep in mind that node names are not required
to be the same as hostnames. For example, our hypothetical machine with a hostname
of gastown might have a node name of vancouver.
Side note: The Canadian city of Vancouver was once called Gastown.

IP labels
Each IP address used by an HACMP cluster almost certainly has an IP label associated
with it. In non-HACMP systems, it is not unusual for the system's only IP label to be the
same as the system's hostname. This is rarely a good naming convention within an
HACMP cluster as there are just so many IP labels to deal with, and having to pick
which one gets a name that is the same as a node's hostname is a pointless exercise.

IP label naming conventions: non-service IP labels


It is generally preferable to assign IP labels to IP addresses which describe, in some
sense, the purpose of the IP address.
For IP addresses which are not associated with an application (non-service), it's usually
useful to include which node the IP address is associated with. In the example in the visual,
there are two NICs which have a vancouver- prefix on their IP labels since these
particular IP labels will never be associated with any other node.

IP label naming conventions: service IP labels


Service IP labels/addresses, which are used in IPAT, can move from node to node.
Service IP labels should NOT contain the name of any node since they are not always
associated with any particular node. Experience shows that including a node name or a
hostname as any part of an IPAT service IP label is almost always the source of
significant confusion (significant in the sense that it leads to a cluster outage or other
painful experience).
In the example in the visual, there is one service IP label: db-app-svc.
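Putting these conventions together, an /etc/hosts excerpt for this node might look like the following (a hypothetical fragment; the labels echo the netstat output above, and the service address is taken from the example later in this unit):

```shell
# Hypothetical /etc/hosts entries illustrating the naming conventions.
hosts_excerpt='192.168.1.1    vancouver-if1   # non-service: node-prefixed, stays on this node
192.168.2.1    vancouver-if2   # non-service
195.16.20.10   db-app-svc      # service: named for the application, moves with IPAT'
echo "$hosts_excerpt"
```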


HACMP 5.x Network Terms (1 of 2)


Communication Interface:
A communication interface refers to IP-based networks and NICs.
An HACMP communication interface is a combination of:
A network interface
for example: en0
An IP label / address
for example: db-app-svc, 195.16.20.10

Communication Device:
A communication device refers to one end of a point-to-point
non-IP network connection, such as /dev/tty1, /dev/hdisk1 or
/dev/tmssa1.
Communication Adapter:
A communication adapter is an X.25 adapter used to support a
Highly Available Communication Link.


Figure 3-22. HACMP 5.X Network Terms (1 of 2) AU546.0

Notes:

HACMP network terminology


When using HACMP SMIT, it is important to understand the difference between
communication interfaces, devices and adapters:
- Communication interfaces:
Interfaces for IP-based networks
Note: The term communication interface in HACMP refers to more than just the
physical NIC. From HACMP's point of view, a communication interface is an object
defined to HACMP, which includes:
The logical interface (the name for the physical NIC), such as en0
The IP label / address
- Communication devices:
Devices for non-IP networks

- Communication adapters:


X.25 adapters

HACMP 4.x versus HACMP 5.x terminology


There were some serious problems with some of the HACMP 4.x terminology. HACMP
5.x resolves these problems by introducing some new terms.
This course will use the HACMP 5.x terms consistently, except when pointing out the
differences compared to HACMP 4.x.


HACMP 5.x Network Terms (2 of 2)


Service IP label / address: Address configured by HACMP to
support client traffic. It is kept highly available by HACMP.
Configured on an interface either by replacement or by alias.
Non-service IP label / address: An IP label / address defined to
HACMP for communication interfaces and is not used by HACMP
for client traffic. Two types:
interface (stored in AIX ODM)
persistent (see below).
Service interface: A communications interface configured with a
service IP label / address (either by alias or replacement).
Non-service interface: A communications interface not configured
with a service IP label / address. Used as a backup for a service IP
label / address.
Persistent IP label / address: An IP label / address, defined as an
alias to an interface IP label / address which stays on a single node
and is kept available on that node by HACMP.

Figure 3-23. HACMP 5.X Network Terms (2 of 2) AU546.0

Notes:

More HACMP 5.x terminology


Another set of terms that are important are service, non-service and persistent:

Service IP label / address


An IP label or address intended to be used by client systems to access services running
within the cluster. Used with IP Address Takeover (IPAT).

Non-service IP label / address


An IP address which is configured onto a NIC using AIXs TCP/IP smit screens and
stored in the AIX ODM. In other words, it is the IP address that a NIC has immediately
after AIX finishes booting. HACMP might replace a non-service IP address with a
service IP address depending on factors which are explained shortly.

Note: In earlier versions of HACMP, the terms boot IP label and boot IP address were
used to refer to what is now being called non-service IP label / address. The older terms
still appear in a few places in the HACMP 5.x documentation.

Applications should use the service IP label / address


Non-service IP labels and non-service IP addresses should not be used by client
systems to contact the clusters applications. This is particularly important if IPAT is
configured, because a client system which gets into the habit of connecting to its
application using a non-service IP label / address is not able to find its application after
a fallover to a different node.

Service Interface
A communications interface configured with a service IP label / address (either by alias
or by replacement).

Non-Service Interface
A communications interface not configured with a service IP label / address. Used as a
backup for a service IP label / address.

Persistent IP label / address


An IP address monitored by HACMP that stays on the node on which it is configured. It
is implemented as an alias and HACMP will attempt to keep this IP label / address
highly available on the same node. Persistent IP labels / addresses are discussed later
in this unit.


IP Network Configuration Rules


General
Each node must have at least one direct connection with every other
node
Do not place network equipment that filters packets between nodes
Heartbeating over IP interfaces
Each IP address on a node must be on a different logical subnet
With multiple NICs on the same subnet, HACMP cannot reliably monitor each NIC
Each logical subnet should use the same subnet mask
There must be at least one subnet in common with all nodes
Heartbeating over IP alias
With this heartbeating method, there are no subnet restrictions on all
service and non-service IP addresses
You specify a base address for the heartbeat paths
HACMP configures a set of IP addresses and subnets for heartbeating
Heartbeating addresses are applied to NICs as aliases, allowing all NICs
to be monitored


Figure 3-24. IP Network Configuration Rules AU546.0

Notes:

Network configuration rules for heartbeating


The visual shows some of the rules for configuring HACMP IP-based networks. These
are not quite the complete set of rules as we have not had a close enough look at IPAT
yet and there are a few other issues still to be discussed.

General rules
The primary purpose of these rules is to ensure that cluster heartbeating can reliably
monitor NICs, networks and nodes.
There are two basic approaches:
- Heartbeating over IP interfaces
- Heartbeating over IP aliases

In either case,


- HACMP requires that each node in the cluster have at least one direct, non-routed
network connection with every other node.
- Between cluster nodes, do not place intelligent switches, routers, or other network
equipment that do not transparently pass through UDP broadcasts and other
packets to all cluster nodes. Bridges, hubs, and other passive devices that do not
modify the packet flow may be safely placed between cluster nodes.

Heartbeating over IP interfaces


In this case, the configured service addresses and non-service addresses are used for
heartbeating. Because of this, there are requirements on how the addresses are
configured to ensure that heartbeating can occur reliably:
- Each interface on a node must be on a different logical subnet
If there are multiple interfaces on the same logical subnet, AIX can use any one of
them for outgoing messages. In this case, HACMP can not select which interface
will be used for heartbeating, so it cannot reliably monitor all interfaces.
- Each logical subnet should use the same subnet mask
- There must be at least one subnet in common with all nodes
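The cross-node part of these rules — at least one subnet in common with all nodes — can be sketched for /24 subnets as follows (the helper is invented, and the addresses are taken from the two-node figures earlier in this unit):

```shell
# The /24 subnets of a node's interface addresses, space separated.
subnets24() {
    for ip in "$@"; do
        echo "$ip" | cut -d. -f1-3
    done | sort -u | tr '\n' ' '
}

bondar=$(subnets24 192.168.1.1 192.168.2.1)
hudson=$(subnets24 192.168.1.2 192.168.2.2)

# Rule: at least one subnet must be common to all nodes.
common=""
for s in $bondar; do
    case " $hudson " in
        *" $s "*) common="$common$s " ;;
    esac
done

if [ -n "$common" ]; then
    echo "subnet(s) common to both nodes: $common"
else
    echo "RULE VIOLATED: no subnet is shared by every node"
fi
```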

IP configuration rules too restrictive?


If it is difficult to conform to the IP address configuration rules for heartbeating over IP
interfaces, there are two choices:
- Heartbeating over IP aliases
- Using netmon.

Heartbeating over IP aliases


With this heartbeating method, the service and non-service addresses are not used for
heartbeating. Instead, you specify an IP address offset to be used for heartbeating.
HACMP then configures a set of IP addresses and subnets for heartbeating which are
totally separate from those used as service and non-service addresses. The
heartbeating addresses are added to the NICs using IP aliases.
Since HACMP automatically generates the proper addresses required for heartbeating,
all other addresses are free of any constraints. Of course, you must reserve a unique
address and subnet range that is used specifically for heartbeating.
For more details, see the HACMP Installation and Planning Guide.
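As a rough sketch of the net effect (the addresses and the per-interface increment shown here are hypothetical; HACMP computes the real values from the offset you supply, as described in the Planning Guide), the heartbeat aliases end up looking like extra addresses on each NIC:

```shell
# Illustration only -- HACMP adds these aliases itself; do not run by hand.
# Assuming a heartbeat offset of 192.168.100.1, each NIC might carry an
# alias in its own reserved heartbeat subnet:
ifconfig en0 alias 192.168.100.1 netmask 255.255.255.0
ifconfig en1 alias 192.168.101.1 netmask 255.255.255.0
```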

Copyright IBM Corp. 1998, 2005 Unit 3. Networking Considerations for High Availability 3-41

Subnet considerations for heartbeating over IP alias


Heartbeating over IP Aliases provides the greatest flexibility for configuring non-service
and service IP addresses.
HACMP installations typically require many subnets. If you only have a limited number
of subnets available, you may consider using heartbeating over IP alias and putting
multiple service IP addresses on the same subnet or putting a service address on the
same subnet as non-service addresses. While this is perfectly acceptable in terms of
HACMP heartbeating, it needs to be well thought out because of the way AIX 5L
handles multiple routes to the same destination.
AIX 5L supports multiple routes to the same destination and, by default, will round robin
between the available routes. This could create a problem for your application.
Consider the following scenario:
The non-service addresses on en1 and en2 on node1 are in the same subnet as an
application's service address. The service address starts on en1. en1 fails and
HACMP moves the service address to en2. Since AIX does not know that en1 has
failed, AIX continues to round-robin packets between en1 and en2 (since they
have the same subnet destination). Packets sent to en1 are lost due to the
failure.
AIX 5L's active Dead Gateway Detection provides a way for AIX to detect routes that
are down and adjust the routing table; however, this does involve some additional
network traffic. For more information about AIX 5L's support for multipath routing and
active Dead Gateway Detection, see the man page for the no command and the AIX 5L
Version 5.3 System Management Guide: Communications and Networks.
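For reference, the relevant controls look roughly like this (tunable and flag names are taken from the AIX 5L no and route documentation; the route and gateway addresses are hypothetical, so verify everything on your own system before use):

```shell
# Passive Dead Gateway Detection: no extra traffic, reacts to observed failures
no -o passive_dgd=1
no -a | grep dgd        # list the current dgd_* tunables and their values
# Active DGD is enabled per route (generates periodic pings of the gateway)
route add -net 192.168.20.0 -netmask 255.255.255.0 192.168.10.254 -active_dgd
```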

netmon
netmon, the network monitor portion of RSCT Topology Services, allows you to create a
configuration file that specifies additional network addresses to which ICMP ECHO
requests can be sent as an additional way to monitor interfaces. netmon is outside the
scope of this class. Please see the HACMP for AIX: Planning and Installation Guide for
information on using netmon.
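For orientation only (netmon details are out of scope here), the netmon configuration file is simply a list of additional addresses to ping, one per line. The path and addresses below are typical examples; confirm the exact location for your HACMP level in the Planning and Installation Guide:

```
# /usr/es/sbin/cluster/netmon.cf -- one IP address or resolvable name per line.
# Pick stable targets outside the cluster, such as the default router.
192.168.10.254
router2.example.com
```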

Unmonitorable NICs
One final point: if no other mechanism has been configured into the cluster, HACMP
attempts to monitor an otherwise unmonitorable NIC by checking whether packets are
arriving and being sent via the interface. This approach is not sufficiently robust
to be relied upon; use heartbeating over IP aliases or netmon instead.

IP Network Configuration Examples
IP Addresses     IP Addresses     Valid configuration?
node1            node2            (Assume subnet mask = 255.255.255.0)

192.168.5.1      192.168.5.2      Both node2 interfaces are on the same subnet; they
192.168.6.1      192.168.5.3      cannot be monitored unless you use heartbeating
                                  over IP alias or netmon.

192.168.5.1      192.168.5.2      OK, but the NICs are a single point of failure.
                                  Normally you should have at least two NICs per
                                  network on each node.

192.168.5.1      192.168.5.2      OK
192.168.6.1      192.168.6.2

192.168.5.1      192.168.5.2      OK, but the 2nd NIC on each node does not have a
192.168.6.1      192.168.7.1      common subnet with another node; they cannot be
                                  monitored unless you use heartbeating over IP
                                  alias or netmon.

192.168.5.1      192.168.5.2      OK, but the 3rd and 4th interfaces on node1 do not
192.168.6.1      192.168.6.2      have a common subnet with another node; they
192.168.7.1                       cannot be monitored unless you use heartbeating
192.168.8.1                       over IP alias or netmon.

Copyright IBM Corporation 2005

Figure 3-25. IP Network Configuration Examples AU546.0

Notes:

Examples
The visual shows some IP network examples.


Non-IP Network Configuration Rules


Non-IP networks are strongly recommended in order to provide an
alternate communication path between cluster nodes in the event
of an IP network failure or IP subsystem failure
With more than two nodes you can configure the non-IP network
topology using one of the following layouts:
Mesh: Each node is connected to all other nodes.
This is the most robust, but requires the most hardware.
Ring (or Loop): Each node is connected to its two adjacent
neighbors. Each node has two non-IP connections for heartbeating.
Star: One node is connected to all other nodes.
This is the least robust; the center node becomes a single point of
failure for all the associated networks.

(Diagram: node1, node2, node3, and node4 connected in a ring by the non-IP
networks net_rs232_01 through net_rs232_04.)


Figure 3-26. Non-IP Network Configuration Rules AU546.0

Notes:

Non-IP networks
Non-IP networks are point-to-point. That is, each connection between two nodes is
considered a network, and a separate non-IP network label is created for it in HACMP.
For example, the visual shows four RS232 networks, in a ring configuration, connecting
four nodes to provide full cluster non-IP connectivity.

Types of non-IP networks


You can configure heartbeat paths over the following types of networks:
- Serial (RS232)
- Disk heartbeat (over an enhanced concurrent mode disk)
- Target Mode SSA
- Target Mode SCSI


Rules
The rules for non-IP networks are considerably simpler than the rules for IP networks
although they are just as important.
The basic rule is that you MUST configure enough non-IP networks to provide a non-IP
communication path, possibly via intermediate nodes, between every pair of nodes in
the cluster. In other words, every node must have a non-IP network connection to at
least one other node. Additional communication paths, such as the ring or mesh
topologies discussed in the visual, provide more robustness.
In addition, there are some considerations based on the type of non-IP network you are
using.

Planning Disk Heartbeat Networks


Any shared disk in an enhanced concurrent mode volume group can support a
point-to-point heartbeat connection. Each disk can support one connection between two
nodes. The connection uses the shared disk hardware as the communication path.
A disk heartbeat network in a cluster contains:
- Two nodes
A node may be a member of any number of disk heartbeat networks. A cluster
can include up to 256 communications devices.
- An enhanced concurrent mode disk that participates in only one heartbeat network.
Keep in mind the following points when selecting a disk to use for disk heartbeating:
- A disk used for disk heartbeating must be a member of an enhanced concurrent
mode volume group. However, the volume groups associated with the disks used for
disk heartbeating do not have to be defined as resources within an HACMP
resource group. In other words, an enhanced concurrent volume group associated
with the disk that enables heartbeating does not have to belong to any resource
group in HACMP.
You can convert an existing volume group to enhanced concurrent mode. For
information about converting a volume group, see Chapter 11: Managing Shared
LVM Components in a Concurrent Access Environment in the Administration Guide.
- The disk should have fewer than 60 seeks per second at peak load. (Disk
heartbeats rely on being written and read within certain intervals.)
Use the AIX 5L filemon command to determine the seek activity, as well as the I/O
load for a physical disk.
Typically, most disk drives that do not have write caches can perform about 100
seeks per second. Disk heartbeating uses 24 seeks.
Disks that are RAID arrays, or subsets of RAID arrays, may have lower limits. Check
with the disk or disk subsystem manufacturer to determine the number of seeks per


second that a disk or disk subsystem can support. However, if you choose to use a
disk that has significant I/O load, increase the value for the timeout parameter for the
disk heartbeat network.
- When SDD is installed and the enhanced concurrent volume group is associated
with an active vpath device, ensure that the disk heartbeating communication device
is defined to use the /dev/vpath device (rather than the associated /dev/hdisk
device).
- If a shared volume group is mirrored, at least one disk in each mirror should be used
for disk heartbeating.
This is particularly important if you plan to set the forced varyon option for a
resource group.
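The seek-rate guideline above can be checked with filemon, roughly as follows (the disk name hdisk3 and the 60-second sampling window are hypothetical; run this during peak load):

```shell
# Collect physical-volume statistics for a while, then stop the trace
filemon -o /tmp/fmon.out -O pv
sleep 60
trcstop
# Inspect the report for the candidate heartbeat disk's seek activity
grep -p hdisk3 /tmp/fmon.out
```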

Planning Serial Point-to-Point Networks


When planning a serial (RS232) network, keep in mind the following:
- If there are no native serial ports available, and your planned HACMP configuration
for that node uses an RS232 network, the configuration requires a serial NIC card.
- All RS232 networks defined to HACMP are brought up by RSCT with a default of
38400 bps. The tty ports should be defined to AIX 5L as running at 38400 bps.
RSCT supports baud rates of 38400, 19200, and 9600.
Any serial port that meets the following requirements can be used for heartbeats:
- The hardware supports use of that serial port for modem attachment.
- The serial port is free for HACMP exclusive use.
Examples of processors with native serial ports that do not meet these conditions are
S70, S7A, S80, and serial ports 1 and 2 in the F50, H50, and H70.
Certain RS/6000 systems do not support the use of native serial ports.
Note: HACMP supports serial ports that the hardware and the system software make
available for application use. It is your responsibility to manage any modems or
extenders between the ports. Refer to the hardware documentation and HACMP
support announcements to determine whether your serial ports meet the requirements.
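As an example of the tty setup (the port name tty0 is hypothetical), the AIX 5L side of the 38400 bps requirement can be handled with chdev:

```shell
# Set and verify the port speed expected by RSCT for RS232 heartbeats
chdev -l tty0 -a speed=38400
lsattr -El tty0 -a speed
```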

Planning Target Mode Networks


Target mode SCSI and target mode SSA are also supported for point-to-point heartbeat
communications. Each of these types of networks includes two nodes, a shared disk,
and SCSI or SSA communications (as appropriate to the disk type).

Persistent Node IP Labels
An IP label associated with a particular node
Useful for administrative purposes:
Provides highly available IP address associated with a
particular node
Allows external monitoring tools (for example, Tivoli) and
administrative scripts to reach a particular node
Assigned, via IP aliasing, after node synchronization, to a
communications interface on the node
HACMP will strive to keep the persistent node IP label available on
that node -- never moved to another node.
Maximum of one persistent node IP label per network per node
Persistent node IP labels must adhere to subnet rules:
Persistent node IP labels must not be in any non-service
interface subnet


Figure 3-27. Persistent Node IP Labels AU546.0

Notes:

Rationale
In earlier releases of HACMP, the only way to guarantee that a known IP address would
always be available on each node for administrative purposes was to configure a
separate network which was never used for IPAT. Such a configuration limits the
usefulness of the administrative network.

Persistent IP labels
Starting in the HACMP 4.5 (classic and ES) release, users may now configure
persistent node IP labels. These are IP aliases that are configured on a node and kept
available as long as at least one communication interface remains active on the
associated network.
Persistent IP labels can be used with IP address takeover (IPAT).


If HACMP is not up
If HACMP is not up on the node, the persistent node IP label is still aliased to a
communication interface, although the failure of the underlying communication
interface will, of course, cause the persistent node IP label to become unavailable.

More on persistent IP labels


A persistent node IP label is an IP alias that has been assigned to a specific node in the
cluster, and always stays on the same node. The persistent node IP label coexists on
an interface with the non-service or service label that is already there. Persistent node
IP labels do not require installation of additional physical adapters, and they are not
included in any resource groups (the clients of a concurrent access resource group
might be configured to use the persistent node IP label). A persistent node IP label is
intended primarily to provide administrative access to the node, but also plays a role in
HATivoli clusters.
Persistent node IP labels are supported on the following types of IP-based networks
only:
- Ethernet
- Token Ring
- FDDI
- ATM LANE

Let's Review Topic 2
1. True or False?
Clusters must always be configured with a private IP network for
HACMP communication.
2. Which of the following are true statements about
communication interfaces (select all that apply)?
a. Has an IP address assigned to it using the AIX TCP/IP smit screens
b. Might have more than one IP address associated with it
c. Sometimes but not always used to communicate with clients
d. Always used to communicate with clients
3. True or False?
Persistent node IP labels are not supported for IPAT via IP replacement.
4. True or False?
There are no exceptions to the rule that, on each node, each NIC on the
same LAN must have an IP address in a different subnet.


Figure 3-28. Let's Review Topic 2 AU546.0

Notes:



3.3 Implementing IP Address Takeover (IPAT)


Topic 3 Objectives:
Implementing IP Address Takeover (IPAT)
After completing this topic, you should be able to:
Describe IPAT via IP aliasing and IPAT via IP replacement:
How to configure a network to support them
What happens when
There are no failed components
A communication interface fails
A communication interface recovers
A node fails
A node recovers
Know how to select which style of IPAT is appropriate in a
given context
Describe how the AIX boot sequence changes when IPAT
is configured in a cluster
Describe the importance of consistent IP addressing and
labeling conventions

Figure 3-29. Topic 3 Objectives: Implementing IP Address Takeover (IPAT) AU546.0

Notes:

Topic 3 objectives
This section explains how to configure both variants of IP Address Takeover (IPAT).

Two Ways to Implement IPAT
IPAT via IP Aliasing:
HACMP adds the service IP address to an (AIX) interface IP
address using AIX's IP aliasing feature:
ifconfig en0 alias 192.168.1.2
IPAT via IP Replacement:
HACMP replaces an (AIX) interface IP address with the
service IP address:
ifconfig en0 192.168.1.2


Figure 3-30. Two Ways to Implement IPAT AU546.0

Notes:

IPAT via IP aliasing


IPAT via IP aliasing takes advantage of AIX's ability to have multiple IP addresses
associated with a single NIC. This ability, called IP aliasing, allows HACMP to move
service IP addresses between NICs (or between nodes) without having to either change
existing IP addresses on NICs or worry about whether or not there is already a service
IP label on the NIC.

IPAT via IP replacement


IPAT via IP replacement involves replacing the IP address currently on a NIC with a
service IP address. This approach supports a rather interesting facility called hardware
address takeover which we discuss shortly. It has the limitation of supporting only one
service IP label per adapter which restricts the number of resource groups which can
use IPAT and, in practical terms, the number of service IP labels in a resource group.


Which is better?
We will examine the advantages and disadvantages of each method in the next few
pages. Keep in mind that the question is not which is better but rather which is better
suited to a particular context.

IPAT via IP Aliasing Configuration
Define IP address for each network interface in the AIX ODM
Each interface IP address must be in a different logical IP subnet*
Define these addresses in the /etc/hosts file and configure them in HACMP
topology as communication interfaces

Define service addresses in /etc/hosts and in HACMP resources


They must not be in the same logical IP subnet as any of the interface IP
addresses
HACMP will configure them to AIX when needed

(Diagram: before the application resource group starts, node1's NICs carry the
ODM-defined addresses 192.168.10.1 and 192.168.11.1, and node2's NICs carry
192.168.10.2 and 192.168.11.2.)

* Refer to earlier discussion of heartbeating and failure diagnosis for explanation of why

Figure 3-31. IPAT via IP Aliasing Configuration AU546.0

Notes:

Requirements
Before configuring an HACMP network to use IPAT via IP aliasing, the cluster
configurator should ensure that all of the following are true:
- The network is of a type that supports IPAT via IP aliasing:
Ethernet
token-ring
FDDI
SP switch 1 / SP switch 2
- No service IP labels on the network require hardware address takeover (HWAT)
- The non-service IP addresses on each node are all on separate IP subnets
- The service IP addresses are on separate IP subnets from all non-service IP
addresses


IPAT via aliasing subnet rules example


The interfaces must all be on different subnets, and the service IP labels may not be in
any of the non-service subnets.
For example, in a cluster with one network using IPAT via aliasing, where each node
has two communication interfaces and there are two service IP labels, the network will
require at least four subnets: one for each set of non-service IP labels and one for each
service label:
Node name          NIC   IP Label   IP Address
node1              en0   n1-if1     192.168.10.1
node1              en1   n1-if2     192.168.11.1
node2              en0   n2-if1     192.168.10.2
node2              en1   n2-if2     192.168.11.2
(service address)   -    appA-svc   9.47.87.22
(service address)   -    appB-svc   9.47.88.22

Subnet          IP labels
192.168.10/24   n1-if1, n2-if1
192.168.11/24   n1-if2, n2-if2
9.47.87/24      appA-svc
9.47.88/24      appB-svc

Hardware address takeover (HWAT)


HWAT is not supported on networks which use IPAT via IP aliasing (we'll be discussing
HWAT in detail shortly). The reason is that the service IP label is configured as an alias
on top of the existing interface. Since the underlying interface's IP address is not
changed, its hardware address is also expected to remain the same.

Planning considerations
A node on a network that uses IPAT via aliasing may be the primary node for multiple
resource groups on the same network, regardless of the number of actual boot
interfaces on the node. Still, users should plan their networks carefully in order to
balance the RG load across the cluster.

Additional background information


HACMP 5.x systems try to keep the number of service IP labels on each NIC roughly
equal although it has no way to predict which service IP labels will be most popular.
Consequently, any load balancing is the responsibility of the cluster administrator (and
will require customization which is beyond the scope of this course).


IPAT via IP aliasing is supported by HACMP/ES 4.5 and HACMP 5.x systems. It is not
supported by earlier versions of HACMP including HACMP 4.5 classic.
Unfortunately, HACMP/ES 4.5 does a rather poor job of distributing service IP labels
across NICs.


IPAT via IP Aliasing in Operation


When the resource group comes up on a node, HACMP aliases the
service IP label onto one of the node's available (that is, currently
functional) interfaces (ODM).

(Diagram: after the application resource group starts, service address 9.47.87.22
is aliased onto one of node1's NICs, alongside the ODM-defined addresses
192.168.10.1 and 192.168.11.1; node2 still has only 192.168.10.2 and 192.168.11.2.)


Figure 3-32. IPAT via IP Aliasing in Operation AU546.0

Notes:

Operation
HACMP uses AIX's IP aliasing capability to alias service IP labels included in resource
groups onto interfaces (NICs) on the node which runs the resource group. With aliasing,
the non-service IP address (stored in the ODM) is still present.
Note that one incidental advantage of IPAT via IP aliasing is that the non-service IP
addresses do not need to be routable from the client/user systems.

Applications should use the service address


Users should be strongly discouraged from using anything other than approved service
IP addresses when contacting the cluster, as the NICs associated with these
non-service IP addresses might fail, or the application might move to a different node
while the non-service IP labels remain behind on the original node.

IPAT via IP Aliasing After an Interface Fails
If the communication interface being used for the service IP label
fails, HACMP aliases the service IP label onto one of the node's
remaining available (currently functional) non-service (ODM)
interfaces
The eventual recovery of the failed boot adapter makes it available
again for future use

(Diagram: after the NIC hosting the service address fails, the 9.47.87.22 alias
moves to node1's remaining functional NIC.)


Figure 3-33. IPAT via IP Aliasing After an Interface Fails AU546.0

Notes:

Interface failure
If a communication interface fails, HACMP moves the service IP addresses to another
communication interface, which is still available, on the same network. If there are no
remaining available NICs on the node for the network, then HACMP initiates a fallover
for that resource group.


IPAT via IP Aliasing After a Node Fails


If the resource group's node fails, HACMP moves the resource
group to a new node and aliases the service IP label onto one of
the new node's available (currently functional) non-service (ODM)
communication interfaces

(Diagram: after node1 fails, the 9.47.87.22 alias is placed on one of node2's
NICs, which retain their ODM-defined addresses 192.168.10.2 and 192.168.11.2.)


Figure 3-34. IPAT via IP Aliasing After a Node Fails AU546.0

Notes:

Node failure
When a node which has an IPAT-enabled resource group fails, the resource group is
acquired by a surviving node which is listed in the resource groups configuration. The
service IP address is aliased onto an available (currently functional) communication
interface on the takeover node.

IPAT via IP Aliasing: Distribution Preference for Service IP Label Aliases
Network level attribute which controls the placement of service IP
labels onto physical NICs
Load balancing
VPN requirements
If there are insufficient interfaces available to satisfy the
preference, HACMP allocates service IP label aliases and
persistent IP labels to an existing active network interface card
Four choices
Anti-Collocation
Collocation
Collocation with Persistent Label
Anti-Collocation with Persistent Label


Figure 3-35. IPAT via IP Aliasing: Distribution preference for service IP label aliases AU546.0

Notes:

Distribution preference for service IP label aliases


You can configure a distribution preference for the placement of service IP labels that
are configured in HACMP. Starting with HACMP 5.1, HACMP lets you specify the
distribution preference for the service IP label aliases.
A distribution preference for service IP label aliases is a network-wide attribute used to
control the placement of the service IP label aliases on the physical network interface
cards on the nodes in the cluster. Configuring a distribution preference for service IP
label aliases does the following:
- Load balancing:
Lets you customize the load balancing for service IP labels in the cluster, taking into
account the persistent IP labels previously assigned on the nodes.


- VPN requirements:
Allows you to configure the type of the distribution preference suitable for the VPN
firewall external connectivity requirements.
- HACMP will try to meet preferences, but will always keep service labels active:
The distribution preference is exercised as long as there are acceptable network
interfaces available. However, HACMP always keeps service IP labels active, even
if the preference cannot be satisfied.

Four possible values for this attribute


You can specify in SMIT the following distribution preferences for the placement of
service IP label aliases:
- Anti-Collocation:
This is the default. HACMP distributes all service IP label aliases across all
non-service IP labels using a least loaded selection process.
- Collocation:
HACMP allocates all service IP label aliases on the same network interface card
(NIC).
- Collocation with Persistent Label:
All service IP label aliases are allocated on the same NIC that is hosting the
persistent IP label. This option may be useful in VPN firewall configurations where
only one interface is granted external connectivity and all IP labels (persistent and
service) must be allocated on the same interface card.
- Anti-Collocation with Persistent Label:
HACMP distributes all service IP label aliases across all active physical interfaces
that are NOT hosting the persistent IP label. HACMP will place the service IP label
alias on the interface that is hosting the persistent label only if no other network
interface is available.

IPAT via IP Aliasing Summary
Configure each node's communication interfaces with
non-service IP addresses (each on a different subnet)
Assign service IP labels to resource groups as appropriate
Must be on separate subnet from non-service IP addresses.
There is a total limit of 256 IP addresses known to HACMP
and 64 resource groups. Within those overall limits:
There is no limit on the number of service IP addresses in a
resource group
There is no limit on the number of resource groups with
service IP labels
HACMP assigns service IP labels to communication interfaces
using IP aliases based on resource group rules and available
hardware
IPAT via IP aliasing requires that hardware address takeover
is not configured
IPAT via IP aliasing requires gratuitous ARP support


Figure 3-36. IPAT via IP Aliasing Summary AU546.0

Notes:

Summary
The visual summarizes IPAT via IP aliasing. Some additional considerations are
discussed below.

Advantages
Probably the most significant advantage to IPAT via IP aliasing is that it supports
multiple service IP labels per network per resource group on the same communication
interface and allows a node to easily support quite a few resource groups. In other
words, IPAT allows you to share several service labels on one interface. Thus it can
require fewer physical interfaces than IPAT via replacement.


Disadvantages
Probably the most significant disadvantage is that IPAT via IP aliasing does not support
hardware address takeover (HWAT).
In addition, since you must have a subnet for each interface and a subnet for each
service IP label, IPAT via IP aliasing can require a lot of subnets.

IPAT via IP Replacement Overview
AIX 5L boots with a non-service (ODM) IP address on each NIC
When HACMP is started, it replaces non-service IP labels with
service IP labels for the resource groups it brings online
Only one service IP label on a NIC at a time
If the NIC hosting a service IP label fails, HACMP will attempt to
replace the non-service IP label of another NIC with the service
IP label, in order to maintain the service IP label
Configuration rules:
Each service IP label must be in the same subnet as a
non-service label
There must be at least as many NICs on each node as there
are service IP labels
All service IP labels must be in the same subnet
Advantages
Supports hardware address takeover (HWAT)
Requires fewer subnets
Disadvantages
Requires more NICs to support multiple service IP labels
Less flexible


Figure 3-37. IPAT via IP Replacement Overview AU546.0

Notes:

History
In the beginning, IPAT via IP replacement was the only form of IPAT available. IPAT via
IP aliasing became available when AIX became able to associate multiple IP addresses
with a single NIC. Because IPAT via IP aliasing is more flexible and usually requires
fewer network interface cards, IPAT via IP replacement is not often used.
This visual gives a brief overview of IPAT via IP replacement. A detailed discussion can
be found in Appendix C.

Configuration rules
The visual summarizes the configuration rules. Notice that they are almost the opposite
to the rules for IPAT via IP aliasing.


Advantages
Probably the most significant advantage of IPAT via IP replacement is that it supports
hardware address takeover (HWAT). HWAT may be needed if your local clients or
routers do not support gratuitous ARP. This will be discussed in a few pages.
Another advantage is that it requires fewer subnets. If you are limited in the number of
subnets available for your cluster, this may be important.
Note: If reducing the number of subnets needed is important, another alternative may
be to use heartbeating via aliasing, see Heartbeating over IP aliases on page 41.

Disadvantages
Probably the most significant disadvantage is that IPAT via IP replacement limits each
communications interface to one service IP label per subnet per resource group, which
makes it rather expensive (and complex) to support many resource groups in a small
cluster. In other words, you need more network adapters to support more applications.

Changes to AIX Start Sequence
The startup sequence of AIX networking is changed when IPAT is
enabled.

Without IPAT:
/etc/inittab
  /sbin/rc.boot
    cfgmgr
    /etc/rc.net
      cfgif
  /etc/rc
    mount all
  /etc/rc.tcpip
    daemons start
  /etc/rc.nfs
    daemons start
    exportfs

With IPAT:
/etc/inittab
  /sbin/rc.boot
    cfgmgr
    /etc/rc.net (modified for IPAT)
      exit 0
  /etc/rc
    mount all
  /usr/sbin/cluster/etc/harc.net
    /etc/rc.net -boot
      cfgif
  < HACMP startup > clstrmgr
    event node_up
      node_up_local
        get_disk_vg_fs
        acquire_service_addr
          telinit -a
            /etc/rc.tcpip
              daemons start
            /etc/rc.nfs
              daemons start
              exportfs

IPAT changes the init sequence


Figure 3-38. Changes to AIX Start Sequence AU546.0

Notes:

/etc/inittab changes
A node with a network configured for IPAT via IP replacement must not start inetd until
HACMP has had a chance to assign the appropriate IP addresses to the node's NICs.
Consequently, the AIX start sequence is modified slightly if a node has a resource
group which uses either form of IPAT.


Changes to /etc/inittab
init:2:initdefault:
brc::sysinit:/sbin/rc.boot 3 >/dev/console 2>&1 # Phase 3 of system boot
. . .
srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller
harc:2:wait:/usr/es/sbin/cluster/etc/harc.net # HACMP for AIX network startup
rctcpip:a:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
rcnfs:a:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
. . .
qdaemon:a:wait:/usr/bin/startsrc -sqdaemon
writesrv:a:wait:/usr/bin/startsrc -swritesrv
. . .
ctrmc:2:once:/usr/bin/startsrc -s ctrmc > /dev/console 2>&1
ha_star:h2:once:/etc/rc.ha_star >/dev/console 2>&1
dt:2:wait:/etc/rc.dt
cons:0123456789:respawn:/usr/sbin/getty /dev/console
xfs:0123456789:once:/usr/lpp/X11/bin/xfs
hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit # HACMP for AIX These must be the last entries of run level a in inittab!
pst_clinit:a:wait:/bin/echo Created /usr/es/sbin/cluster/.telinit > /dev/console # HACMP for AIX These must be the last entries of run level a in inittab!


Figure 3-39. Changes to /etc/inittab AU546.0

Notes:

HACMP 5.x changes to /etc/inittab


The visual shows excerpts from /etc/inittab from a system running AIX 5.3 and HACMP
5.3.

HACMP 5.1
HACMP 5.1 added the harc entry to the /etc/inittab file, which runs harc.net to
configure the network interfaces. Also, starting in HACMP 5.1, some of the other inittab
entries have been changed to run in run-level a. These are invoked by HACMP when it
is ready for the TCP/IP daemons to run. The final two lines use the touch command to
create a marker file when all of the run-level a items have been run. HACMP waits for
this marker file to exist so that it knows when the run-level a items have been
completed.
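A quick way to see which entries HACMP moved to run-level a is to filter on the second field of /etc/inittab. The sketch below runs against an embedded excerpt from the visual rather than the live file; on a real node you would read /etc/inittab itself.

```shell
# Excerpt from /etc/inittab (from the visual above).
inittab='srcmstr:23456789:respawn:/usr/sbin/srcmstr
rctcpip:a:wait:/etc/rc.tcpip > /dev/console 2>&1
rcnfs:a:wait:/etc/rc.nfs > /dev/console 2>&1
qdaemon:a:wait:/usr/bin/startsrc -sqdaemon
dt:2:wait:/etc/rc.dt'

# Print the identifiers of the entries at run level "a" -- these are the
# ones HACMP invokes via "telinit -a" once the service addresses are up.
echo "$inittab" | awk -F: '$2 == "a" { print $1 }'
```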

HACMP 5.3


HACMP 5.3 made some additional changes to the inittab file. In HACMP 5.3, the
HACMP daemons are running all the time, even before you start the cluster. These
daemons are started by the ha_star and hacmp entries in the inittab file.


Talk to Your Network Administrator


Explain how HACMP uses networks
Ask for what you need:
IPAT via IP Aliasing:
Service IP labels/addresses in the production network for client connections to
the cluster applications
Additional subnets for non-service interface (ODM) labels
One per network interface on the node with the most network adapters
These do not need to be routable
IPAT via IP Replacement:
Service IP labels/addresses
Interface IP label for each network adapter (one must be in the same subnet as
the service label)
A different subnet for each interface
One per adapter on the node with the most adapters
Only the subnet containing the service label need be routable
Persistent node IP label for each node on at least one network
(very useful but optional)
Ask early (getting subnets assigned may take some time)


Figure 3-40. Talk to Your Network Administrator AU546.0

Notes:

Getting IP addresses and subnets


Unless you happen to be the network administrator (in which case, you may feel free to
spend time talking to yourself), you need to get the network administrator to provide you
with IP addresses for your cluster. The requirements imposed by HACMP on IP
addresses are rather unusual and might surprise your network administrator, so be
prepared to explain both what you want and why you want it. Also, ask for what you
want well in advance of the date that you need it as it may take some time for the
network administrator to find addresses and/or subnets for you that meet your needs.
Do not accept IP addresses which do not meet the HACMP configuration rules. Even if
you can get them to appear to work, they almost certainly will not work at a point in time
when you can least afford a problem.

Adopt IP Address Numbering Conventions
HACMP clusters tend to have quite a few IP addresses associated
with them
If at all possible, adopt an IP address numbering convention
Requirements imposed by corporate IT policies or the network
administrators may make it impractical to follow any sort of
convention (do the best you can)


Figure 3-41. Adopt IP Address Numbering Conventions AU546.0

Notes:

Using IP address numbering conventions


The purpose of HACMP is to create a highly available environment for your
applications. While HACMP itself does not need any kind of address numbering
convention, adopting a numbering convention can make it easier for humans to
understand the configuration. This can reduce mistakes, leading to better availability.
We will look at some possible conventions in the next few pages.


Adopt Labeling/Naming Conventions


HACMP clusters also tend to have quite a few IP labels and other
names associated with them
Adopt appropriate labeling and naming conventions:
For example:
Node-resident labels should include the node's name
bondar-if1, bondar-if2, hudson-if1, hudson-if2
Service IP labels that move between nodes should describe the
application rather than the node
web1-svc, infodb-svc
Persistent IP labels should include the node name (since they won't
be moved to another node) and should identify that they are
persistent
bondar-per, hudson-per
Why?
Conventions avoid mistakes
Avoided mistakes improve availability!


Figure 3-42. Adopt Labeling/Naming Conventions AU546.0

Notes:

Using IP labeling/naming conventions


Again, the purpose of HACMP is to create a highly available environment for your
applications. A naming convention can make it easier for humans to understand the
configuration. This can reduce mistakes, leading to better availability.
Never EVER underestimate the value of a consistent labeling/naming convention. It can
avoid mistakes which can, in turn, avoid outages.
We will look at some possible conventions in the next few pages.

An IPAT via IP Aliasing Convention
Here's one possible IP label numbering convention for IPAT via IP aliasing
networks:
IP address is of the form AA.BB.CC.DD
AA.BB is assigned by the network administrator
CC indicates which interface or service IP label on each node:
15, 16 indicate non-service/interface IP labels
5 chosen for service labels
etc. (as required)
DD indicates which node:
29 indicates an IP address on bondar
31 indicates an IP address on hudson

Example:
bondar-if1 192.168.15.29
bondar-if2 192.168.16.29
hudson-if1 192.168.15.31
hudson-if2 192.168.16.31
xweb-svc 192.168.5.92
yweb-svc 192.168.5.70

Be flexible. For example, this convention uses DD=29 for bondar and
DD=31 for hudson because the network administrator assigned bondar-if1
to be 192.168.15.29 and hudson-if1 to be 192.168.15.31. Fortunately, the
network administrator could be convinced to use .29 and .31 for the other
bondar and hudson interface IP addresses.


Figure 3-43. An IPAT via IP Aliasing Convention AU546.0

Notes:

A possible address numbering convention for IP aliasing


The visual shows one possible numbering convention. Many others could be used. The
point is to come up with a convention which makes it easy to identify the purpose and
location of each address.

Limitations
It may not be possible to adopt a particularly consistent IP address convention. Make
sure that, as a minimum, you adopt a consistent IP labeling convention as you are
frequently looking at the labels rather than the numbers, and any consistent convention
generally avoids mistakes.
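A convention like this can even be checked mechanically. The sketch below is a hypothetical helper (not part of HACMP) that decodes an address under the example convention from the visual: the third octet identifies the subnet's role and the fourth identifies the node; the specific role and node mappings are assumptions taken from that example.

```shell
# Decode AA.BB.CC.DD under the example convention: CC gives the subnet
# role, DD gives the node. Mappings below are from the visual's example.
decode() {
  old_ifs=$IFS; IFS=.
  set -- $1            # split the dotted quad into $1..$4
  IFS=$old_ifs
  case $3 in
    15|16) role="interface subnet $3" ;;
    5)     role="service subnet" ;;
    *)     role="unknown subnet" ;;
  esac
  case $4 in
    29) node=bondar ;;
    31) node=hudson ;;
    *)  node=unknown ;;
  esac
  echo "$role, $node"
}

decode 192.168.15.29   # interface subnet 15, bondar
decode 192.168.5.92    # service subnet, unknown (service labels move between nodes)
```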


An IPAT via IP Replacement Convention


Here's one possible IP label numbering convention for IPAT via IP
replacement networks:
IP address is of the form AA.BB.CC.DD
AA.BB is assigned by the network administrator
CC indicates which adapter on each node:
15, 16 indicate non-service/interface IP labels (defined by the network
administrator)
15 also chosen for service labels
And so forth (as required)
DD indicates which node:
29 indicates bondar
31 indicates hudson

Example:
bondar-if1 192.168.15.29
bondar-if2 192.168.16.29
hudson-if1 192.168.15.31
hudson-if2 192.168.16.31
xweb-svc 192.168.15.92
yweb-svc 192.168.15.70


Figure 3-44. An IPAT via IP Replacement Convention AU546.0

Notes:

Possible addressing convention for IP replacement


Again, do your best, within the constraints imposed on you, to be consistent with the IP
addressing usage and the IP labeling usage.
Please keep in mind that these are just suggestions. There are a lot of other
conventions which are just as good. Try to find a set of conventions which works well for
your team.
You see a different set of conventions in the lab exercises. Treat these as another
example to use when developing your conventions.

The /etc/hosts file
All of the cluster's IP labels must be defined in every cluster node's
/etc/hosts file:

127.0.0.1 loopback localhost
# cluster explorers
# netmask 255.255.255.0

# bondar node
192.168.15.29 bondar-if1
192.168.16.29 bondar-if2

# hudson node
192.168.15.31 hudson-if1
192.168.16.31 hudson-if2

# persistent node IP labels
192.168.5.29 bondar-per
192.168.5.31 hudson-per

# Service IP labels
192.168.5.92 xweb-svc
192.168.5.70 yweb-svc

# test client node
192.168.5.11 test


Figure 3-45. The /etc/hosts file AU546.0

Notes:

/etc/hosts
Make sure that the /etc/hosts file on each cluster node contains all of the IP labels used
by the cluster (you do not want HACMP to be in a position where it must rely on an
external DNS server to do IP label to address mappings).

But I'm using DNS / NIS


If NIS or DNS is in operation, IP label lookup defaults to a nameserver system for name
and address resolution. However, if the nameserver was accessed through an interface
that has failed, the request does not complete and eventually times out. This can
significantly slow down HACMP event processing. To ensure that cluster events
complete successfully and quickly, HACMP disables NIS or DNS hostname resolution
by setting the following AIX 5L environment variable during service IP label swapping:
NSORDER=local


As a result, the /etc/hosts file of each cluster node must contain all HACMP-defined IP
labels for all cluster nodes.

Maintaining /etc/hosts
The easiest way to ensure that all of the /etc/hosts files contain all of the required
addresses is to get one /etc/hosts file set up correctly and then copy it to all of the other
nodes, or to use the file collections facility of HACMP 5.x.
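One way to catch an incomplete /etc/hosts before HACMP does is a simple check along these lines. The label list and file contents are the examples from the visual; on a real node you would grep /etc/hosts itself.

```shell
# Example /etc/hosts content (from the visual); a real check would read
# /etc/hosts directly.
hosts='192.168.15.29 bondar-if1
192.168.16.29 bondar-if2
192.168.15.31 hudson-if1
192.168.16.31 hudson-if2
192.168.5.29 bondar-per
192.168.5.31 hudson-per
192.168.5.92 xweb-svc
192.168.5.70 yweb-svc'

# Report any cluster IP label that is not defined in the file.
check_labels() {
  for label in "$@"; do
    echo "$hosts" | grep -qw "$label" || echo "MISSING: $label"
  done
}

check_labels bondar-if1 hudson-if1 xweb-svc yweb-svc   # prints nothing: all present
```

Pairing a check like this with a diff of the files between nodes (or with file collections) catches both missing labels and nodes that have drifted out of sync.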

Service IP Address Examples

Example 1:
IP addresses on first node: 192.168.5.1, 192.168.6.1
IP addresses on second node: 192.168.5.2, 192.168.6.2
Valid service IP addresses for IPAT via IP aliasing: 192.168.7.1,
192.168.8.1, 192.168.183.57, 198.161.22.1
Valid service IP addresses for IPAT via IP replacement:
(192.168.5.3 and 192.168.5.97) OR (192.168.6.3 and 192.168.6.97)

Example 2:
IP addresses on first node: 192.168.5.1, 192.168.6.1
IP addresses on second node: 192.168.5.2, 192.168.7.1
Valid service IP addresses for IPAT via IP aliasing: 192.168.183.57,
198.161.22.1
Valid service IP addresses for IPAT via IP replacement:
192.168.5.3 and 192.168.5.97

Example 3:
IP addresses on first node: 192.168.5.1, 192.168.6.14
IP addresses on second node: 192.168.5.98, 192.168.6.171
Valid service IP addresses for IPAT via IP aliasing: 192.168.7.1,
192.168.183.57, 198.161.22.1
Valid service IP addresses for IPAT via IP replacement:
(192.168.5.3 and 192.168.5.97) OR (192.168.6.3 and 192.168.6.97)

Example 4:
IP addresses on first node: 192.168.5.1, 192.168.6.1, 192.168.7.1
IP addresses on second node: 192.168.5.2, 192.168.6.2, 192.168.7.2
Valid service IP addresses for IPAT via IP aliasing: 192.168.4.1,
192.168.8.1, 192.168.9.1, 192.168.10.1, 192.168.183.57, 198.161.22.1
Valid service IP addresses for IPAT via IP replacement:
(192.168.5.3 and 192.168.5.97) OR (192.168.6.3 and 192.168.6.97) OR
(192.168.7.3 and 192.168.7.97)


Figure 3-46. Service IP Address Examples AU546.0

Notes:

Service IP address examples


The rules for service IP addresses can be somewhat difficult to keep straight. Here are
a few examples to consider. Notice that for a given set of IP addresses on NIC cards
(AIX ODM), service IP labels which are acceptable for IPAT via IP aliasing are not
acceptable for IPAT via IP replacement, and vice versa.
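The underlying test in all of these examples is simple subnet arithmetic. The sketch below is a hypothetical helper (not an HACMP tool) that applies it: for IPAT via IP aliasing a service address must fall outside every interface subnet, while for IPAT via IP replacement it must fall inside one of them. A 255.255.255.0 netmask is assumed, as in the examples.

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeeds (exit 0) when two addresses fall in the same subnet for a mask.
same_subnet() {
  a=$(ip_to_int "$1"); b=$(ip_to_int "$2"); m=$(ip_to_int "$3")
  [ $(( a & m )) -eq $(( b & m )) ]
}

# 192.168.7.1 is outside 192.168.5.0/24, so it would be usable for IPAT
# via IP aliasing on a node whose interface is 192.168.5.1:
if same_subnet 192.168.7.1 192.168.5.1 255.255.255.0; then
  echo "same subnet: candidate for IPAT via IP replacement"
else
  echo "different subnet: candidate for IPAT via IP aliasing"
fi
```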


Common TCP/IP Configuration Problems


Subnet masks are not consistent for all HA network adapters
Interface IP labels on one node are placed on the same subnet
Service and interface IP labels are placed in the same subnet in
IPAT via IP aliasing networks
Service and interface IP labels are placed in different subnets in
IPAT via IP replacement networks
Ethernet frame type is set to 802.3 (this includes EtherChannel)
Ethernet speed is not set uniformly or is set to autodetect
The contents of /etc/hosts differ between the cluster nodes


Figure 3-47. Common TCP/IP Configuration Problems AU546.0

Notes:

Configuration problems
The visual shows some common IP configuration errors to watch out for.

History
A common error in HACMP 4.x involves the /.rhosts file. HACMP 4.x relies upon the
ability to issue rsh commands between cluster nodes when cluster configuration
changes are being made. This results in a requirement that each cluster node trust
every other cluster node for root-owned rsh sessions. This requirement is met, often
grudgingly, by configuring appropriate /.rhosts files. These files are not needed except
when cluster configuration changes are being made so they can be removed or
renamed outside of maintenance windows.
HACMP 5.x takes a totally different approach to dealing with cluster configuration
changes which eliminates the need for /.rhosts files.

Single IP Adapter Nodes
Single IP Adapter nodes may appear attractive as they appear to
reduce the cost of the cluster
The cost reduction is an illusion:
1. A node with only a single adapter on a network is a node with a
single point of failure - the single adapter.
2. Clusters with unnecessary single points of failure tend to suffer more
outages
3. Unnecessary outages cost (potentially quite serious) money
One of the fundamental cluster design goals is to reduce
unnecessary outages by avoiding single points of failure
HACMP requires at least two NICs per IP network for failure
diagnosis
Clusters with fewer than two NICs per IP network are not supported*

* Certain Cluster 1600 SP Switch-based clusters are supported with only one
SP Switch adapter per network.

Figure 3-48. Single IP Adapter Nodes AU546.0

Notes:

Single IP adapter nodes


It is not unusual for a customer to try to implement an HACMP cluster in which one or
more of the cluster nodes have only a single network adapter (the motivation is usually
the cost of the adapter but the additional cost of a backup system with enough PCI slots
for the second adapter can also be the issue).
The situation is actually quite simple: with the exception of certain Cluster 1600 clusters
which use the SP Switch facility, any cluster with only one NIC on a node for a given
network has a single point of failure (the solitary NIC) and is not supported.
Nodes with only a single NIC on an IP network are, at best, a false economy. At worst,
they are a fiasco waiting to happen as the lack of a second NIC on one or more of the
nodes could lead to extended cluster outages and just generally strange behavior
(including HACMP failing to detect failures which would have been detected had all
nodes had at least two NICs per IP network).


Let's Review Topic 3


1. True or False?
A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.
2. True or False?
All networking technologies supported by HACMP support IPAT via IP aliasing.
3. True or False?
All networking technologies supported by HACMP support IPAT via IP
replacement.
4. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and
the right hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2
then which of the following are valid service IP addresses if IPAT via IP aliasing is
being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
5. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and
the right hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2
then which of the following are valid service IP addresses if IPAT via IP replacement
is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3

Figure 3-49. Let's Review Topic 3 AU546.0

Notes:

3.4 The Impact of IPAT on Clients


Topic 4 Objectives:
The Impact of IPAT on Clients
After completing this topic, you should be able to:
Explain how user systems are affected by IPAT related
operations
Describe the ARP cache issue
Explain how gratuitous ARP usually deals with the ARP
cache issue
Explain three ways to deal with the ARP cache issue if
gratuitous ARP does not provide a satisfactory resolution:
Configure clinfo on the client systems
Configure clinfo within the cluster
Configure Hardware Address Takeover within the cluster


Figure 3-50. Topic 4 Objectives: The Impact of IPAT on Clients AU546.0

Notes:

Topic 4 Objectives
This section looks at the impact of IPAT on client systems.

How Are Users Affected?
IP address moves/swaps within a node result in a short outage
Long-term connection-oriented sessions typically recover seamlessly
(the TCP layer deals with packet retransmission)
Resource group fallovers to a new node result in a longer outage and sever
connection-oriented sessions (long-term connections must be reestablished,
short-term connections retried)
In either case:
Short-lived TCP-based services like http and SQL queries experience
short server down outage
UDP-based services must deal with lost packets


Figure 3-51. How are Users Affected? AU546.0

Notes:

What users see


Users who are actively using the cluster's services at the time of a failure will notice an
outage while HACMP detects, diagnoses, and recovers from the failure.

How long does failure recovery take?


There are three components which each contribute to the duration of the outage:
- How long it takes HACMP to decide that something has failed
- How long it takes HACMP to diagnose the failure (determine what failed)
- How long it takes HACMP to recover from the failure
The first two of these generally take between about five and thirty seconds depending
on the exact failure involved. The third component can take another dozen or so
seconds when moving an IP address within a node, or a few minutes or more in the
case of a fallover.

Recovery without fallover


If the problem can be resolved without a fallover then the users generally notice a short
outage and then are able to continue with what they were doing. Their TCP/IP-based
sessions come back to life and everything appears to be fine again. Unless they are
actively using the cluster's applications at the time, they might not even notice the
outage.

Recovery with fallover


If the problem requires a fallover, then existing TCP/IP sessions eventually fail (usually
as soon as the service IP address comes up on the takeover node and AIX on that node
resets sessions that it gets packets for that it does not know about). Users are also
more likely to notice the outage because it typically takes a couple of minutes to
complete a fallover (much of this time is spent dealing with taking over volume groups,
checking file systems and recovering applications).
Each of these issues tends to be visible to the humans using the application in some
fashion or other. It may be that they see a short period of total silence followed by a
clean recovery, or they might have to reconnect to the application. What they actually
experience generally depends far more on how the client side of the application is
designed and implemented than on anything within the control of the cluster's
administrator.

What About the User's Computers?
An IPAT operation renders ARP cache entries on client systems
obsolete
Client systems must (somehow) update their ARP caches

Client ARP cache before takeover: xweb (192.168.5.1) at 00:04:ac:62:72:49
Client ARP cache after takeover: xweb (192.168.5.1) at 00:04:ac:48:22:f4

The xweb service alias (192.168.5.1) moves from the NIC whose base (ODM)
address is 192.168.10.1 (MAC 00:04:ac:62:72:49) to the NIC whose base
address is 192.168.11.1 (MAC 00:04:ac:48:22:f4), so the entry cached by
the client becomes stale.


Figure 3-52. What About the User's Computers? AU546.0

Notes:

ARP cache issues


Client systems which are located on the same physical network as the cluster may find
that their ARP cache entries are obsolete after an IP address moves to another NIC (on
the same node or on a different node).
The ARP cache is a table of IP addresses and the network hardware addresses (MAC
addresses) of the physical network cards that the IP addresses are assigned to. When
an IP address moves to a different physical network card, the client's ARP cache might
still have the old MAC address. It could take the client system a few minutes to realize
that its ARP cache is out-of-date and ask for an updated MAC address for the server's
IP address.
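On a client, the cache can be inspected and, if necessary, flushed by hand. The sketch below parses a hypothetical line of `arp -a` output (label and MAC taken from the visual, exact output format varies by platform); the `arp -d` command mentioned in the comment is what would force a refresh on a real client.

```shell
# Hypothetical 'arp -a' line captured on a client before a takeover:
arp_out='xweb-svc (192.168.5.1) at 0:4:ac:62:72:49 [ethernet]'

# Pull out the MAC the client currently has cached for the service label.
cached_mac=$(echo "$arp_out" | awk '/xweb-svc/ { print $4 }')
echo "$cached_mac"

# If this no longer matches the NIC now hosting 192.168.5.1, running
#   arp -d xweb-svc
# on the client deletes the stale entry and forces a fresh ARP request.
```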


Local or Remote Client?


If the client is remotely connected through a router, it is the
router's ARP cache which must be corrected.

Router's ARP cache (on the cluster's network): xweb (192.168.5.1) at
00:04:ac:62:72:49 before the takeover, then stale until refreshed
afterwards; client (192.168.8.3) at 00:04:ac:27:18:09.
Client's ARP cache (on the far side of the router): only the router's
near-side interface, router (192.168.8.1) at 00:04:ac:42:9c:e2, both
before and after the takeover.


Figure 3-53. Local or Remote Client? AU546.0

Notes:

ARP cache entries are always local


ARP cache entries are only maintained by a system for the physical network cards that
it communicates with directly. If there is a router between the client system and the
cluster, then the client system's ARP cache has an entry for the IP address and MAC
address of the router's network interface located on the client's side of the router. No
amount of IP address moves or node fallovers has any (positive or negative) impact on
what needs to be in the client's ARP cache.
Rather, it is the ARP cache entries of the router on the cluster's network which
must be up-to-date.
Most clusters have either a small handful or no client systems on the same physical
network as the cluster. Consequently, whatever ARP cache issues might exist in a
particular configuration, they do not usually affect very many systems.

Gratuitous ARP
AIX 5L supports a feature called gratuitous ARP
AIX sends out a gratuitous (that is, unrequested) ARP update
whenever an IP address is set or changed on a NIC
Other systems on the local physical network are expected to
update their ARP caches when they receive the gratuitous ARP
packet
Remember: only systems on the cluster's local physical network
must respect the gratuitous ARP packet
So ARP update problems have been minimized
Required if using IPAT via aliasing


Figure 3-54. Gratuitous ARP AU546.0

Notes:

Gratuitous ARP
AIX 5L supports a feature called gratuitous ARP. Whenever an IP address associated
with a NIC changes, AIX broadcasts out a gratuitous (in other words, unsolicited) ARP
update. This gratuitous ARP packet is generally received and used by all systems on
the cluster's local physical network to update their ARP cache entries.
The result is that all relevant ARP caches are updated almost immediately after the IP
address is assigned to the NIC.
The problem is that not all systems respond or even necessarily receive these
gratuitous ARP cache update packets. If a local system either does not receive or
ignores the gratuitous ARP cache packet then its ARP cache remains out-of-date.
Note that unless the network is VERY overloaded, local systems generally either
always or never act upon the gratuitous ARP update packet.


Gratuitous ARP Support Issues


Gratuitous ARP is supported by AIX on the following network
technologies:
Ethernet (all types and speeds)
Token-Ring
FDDI
SP Switch 1 and SP Switch 2
Gratuitous ARP is not supported on ATM
Operating systems are not required to support gratuitous ARP
packets
Practically every operating system does support gratuitous ARP
Some systems (for example, certain routers) can be configured
to respect or ignore gratuitous ARP packets


Figure 3-55. Gratuitous ARP Support Issues AU546.0

Notes:

Gratuitous ARP issues


Not all network technologies provide the appropriate capabilities to implement
gratuitous ARP. In addition, operating systems which implement TCP/IP are not
required to respect gratuitous ARP packets (although practically all modern operating
systems do).
Finally, support issues aside, an extremely overloaded network or a network which is
suffering intermittent failures might result in gratuitous ARP packets being lost (a
network which is sufficiently overloaded to be losing gratuitous ARP packets or which is
suffering intermittent failures which result in gratuitous ARP packets being lost is likely
to be causing the cluster and the cluster administrator far more serious problems than
the ARP cache issue involves).

What if Gratuitous ARP is Not Supported?
If the local network technology doesn't support gratuitous ARP or
there is a client system or router on the local physical network
which must communicate with the cluster and which does not
support gratuitous ARP packets:
clinfo can be used on the client to receive updates when changes occur.
clinfo can be used on the servers to ping a list of clients, forcing an
update to their ARP caches.
HACMP can be configured to perform Hardware Address Takeover
(HWAT).

Suggestion:
Do not get involved with using either clinfo or HWAT to deal with
ARP cache issues until you've verified that there actually are ARP
issues which need to be dealt with.


Figure 3-56. What if Gratuitous ARP is Not Supported? AU546.0

Notes:

If gratuitous ARP is not supported


HACMP supports three alternatives to gratuitous ARP. We will discuss these in the next
few pages.

Don't add unnecessary complexity


Cluster configurators should probably not simply assume that gratuitous ARP won't
provide a satisfactory solution, as each of the alternatives introduces additional,
possibly unnecessary complexity into the cluster.
If the cluster administrator or configurator decides that the probability of a gratuitous
ARP update packet being lost is high enough to be relevant, then they should proceed
as though their context does not support gratuitous ARP.


Option 1: clinfo on the Client


The cluster information daemon (clinfo) provides a facility to
automatically flush the ARP cache on a client system.
In this option, clinfo must execute on the client platform
clinfo executables are supplied for AIX
clinfo source code is provided with HACMP to facilitate porting clinfo to
other platforms
clinfo uses SNMP for communications with HACMP nodes
/usr/es/sbin/cluster/etc/clhosts on the client system must contain a
list of persistent node IP labels (one for each cluster node)
clinfo.rc is invoked to flush the local arp cache

[Slide diagram: client xweb runs clinfo and clinfo.rc; a cluster node runs snmpd
and clstrmgr. The service alias 192.168.5.1 is on a NIC with boot address
192.168.10.1 (MAC 00:04:ac:62:72:49); a second NIC has boot address 192.168.11.1
(MAC 00:04:ac:48:22:f4).]


Figure 3-57. Option 1: clinfo on the Client AU546.0

Notes:

clinfo on the client


The cluster information service may be run on any client system. clinfo can execute a
script that flushes the local ARP cache and pings the servers following failure. clinfo
can detect failure either by polling or receiving SNMP traps from within the cluster.
The clinfo source code is provided with HACMP so that it can, at least in theory, be
ported to non-AIX client operating systems.
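For illustration, a clhosts file might look like the following (the labels are invented; the real file should list one persistent node IP label or address per cluster node, and the shipped clhosts template documents the exact format):

```
# Hypothetical /usr/es/sbin/cluster/etc/clhosts on a client system:
# one persistent node IP label (or address) per cluster node
node1_per
node2_per
```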

Option 2: clinfo From Within the Cluster
clinfo may also be used on the cluster's nodes to force an ARP cache
update.
In this option, clinfo runs on every cluster node
If clinfo is run on only one cluster node, then that node becomes a
single point of failure!
clinfo flushes local ARP cache (on the cluster node) then pings a
defined list of clients listed in /usr/es/sbin/cluster/etc/clinfo.rc
Clients pick up the new IP address to hardware address relationship as a
result of the ping request

[Slide diagram: here clinfo and clinfo.rc run on the cluster node alongside snmpd
and clstrmgr; after the service alias 192.168.5.1 moves, the node pings client
xweb. The NICs keep boot addresses 192.168.10.1 (MAC 00:04:ac:62:72:49) and
192.168.11.1 (MAC 00:04:ac:48:22:f4).]


Figure 3-58. Option 2: clinfo From Within the Cluster AU546.0

Notes:

clinfo on the cluster nodes


clinfo is already compiled and ready to run on the cluster's servers. Once again,
clinfo may execute a script on the servers that flushes the local ARP cache and pings
the local clients. These in-bound ping packets contain the new IP address-to-MAC
address relationship, and are used by the client operating system to update its ARP
cache. Unfortunately, this is not a mandatory feature of TCP/IP, so it's possible
(although rather unusual) that a client operating system might fail to update its ARP
cache when the ping packet arrives.


clinfo.rc script (extract)


This script is located under /usr/es/sbin/cluster/etc and is present
on an AIX system if the cluster.client fileset has been installed.
A separate file /etc/cluster/ping_client_list can also contain a list
of client machines to ping
# Example:
#
# PING_CLIENT_LIST="host_a host_b 1.1.1.3"
#
PING_CLIENT_LIST=""

TOTAL_CLIENT_LIST="${PING_CLIENT_LIST}"

if [[ -s /etc/cluster/ping_client_list ]] ; then
#
# The file "/etc/ping_client_list" should contain only a line
# setting the variable "PING_CLIENT_LIST" in the form given
# in the example above. This allows the client list to be
# kept in a file that is not altered when maintenance is
# applied to clinfo.rc.
#
. /etc/cluster/ping_client_list

TOTAL_CLIENT_LIST="${TOTAL_CLIENT_LIST} ${PING_CLIENT_LIST}"
fi

#
# WARNING!!! For this shell script to work properly, ALL entries in
# the TOTAL_CLIENT_LIST must resolve properly to IP addresses or hostnames
# (must be found in /etc/hosts, DNS, or NIS). This is crucial.
. . .


Figure 3-59. clinfo.rc script (extract) AU546.0

Notes:

clinfo.rc
The clinfo.rc script must be edited manually on the cluster nodes which run clinfo.
There is no reason why clinfo cannot also be run on the client systems, although these
changes are only required on the cluster nodes which are running clinfo.rc.
Remember: all the cluster nodes should be running clinfo if clinfo is being used
within the cluster to deal with ARP cache issues, since you never know which cluster
nodes will survive whatever has gone wrong.
Edit the /usr/es/sbin/cluster/etc/clinfo.rc file on each server node. Add the IP
label or IP address of each system that accesses service IP addresses managed by
HACMP to the PING_CLIENT_LIST list. Then start the clinfo daemon (clinfo can be
started as part of starting HACMP on the cluster nodes).

/etc/cluster/ping_client_list
You can also provide the list of clients to be pinged in the file
/etc/cluster/ping_client_list. This is probably the best method as it ensures that the
list of clients to ping is not overlaid by future changes to clinfo.rc.
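For example, a hypothetical /etc/cluster/ping_client_list (the client names and address are placeholders) would contain only the assignment that clinfo.rc expects to source:

```shell
# /etc/cluster/ping_client_list -- sourced by clinfo.rc; entries must
# resolve via /etc/hosts, DNS, or NIS (names below are invented)
PING_CLIENT_LIST="client_a client_b 192.168.5.20"
```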

More details
This script is invoked by HACMP as follows:
clinfo.rc {join,fail,swap} interface_name
The next set of details likely will not make sense until we are further into the course.
When clinfo is notified that the cluster is stable after undergoing a failure recovery of
some sort, or when clinfo first connects to clsmuxpd (the SNMP part of HACMP), it
receives a new map (a description of the cluster's state). It checks for changed states of
interfaces:
- If a new state is UP, clinfo calls clinfo.rc join interface_name.
- If a new state is DOWN, clinfo calls clinfo.rc fail interface_name.
- If clinfo receives a node_down_complete event, it calls clinfo.rc with the fail
parameter for each interface currently UP.
- If clinfo receives a fail_network_complete event, it calls clinfo.rc with the
fail parameter for all associated interfaces.
- If clinfo receives a swap_complete event, it calls clinfo.rc swap
interface_name.
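The calling convention above can be sketched as a small ksh-style dispatcher (structure only; the real clinfo.rc also flushes the ARP cache and pings the entries in TOTAL_CLIENT_LIST, and the messages here are invented):

```shell
# Sketch of clinfo.rc-style argument handling; clinfo invokes the script
# as: clinfo.rc {join,fail,swap} interface_name
handle_event() {
    action=$1
    interface=$2
    case "$action" in
        join) echo "interface $interface came up" ;;
        fail) echo "interface $interface went down" ;;
        swap) echo "addresses swapped onto interface $interface" ;;
        *)    echo "usage: clinfo.rc {join,fail,swap} interface_name" >&2
              return 2 ;;
    esac
}

handle_event join en0    # prints: interface en0 came up
```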


Option 3: Hardware Address Takeover


HACMP can be configured to swap a service IP label's hardware
address between network adapters.
HWAT is incompatible with IPAT via IP aliasing because each
service IP address must have its own hardware address and a NIC
can support only one hardware address at any given time.
Cluster implementer designates a Locally Administered Address
(LAA) which HACMP assigns to the NIC which has the service IP
label.


Figure 3-60. Option 3: Hardware Address Takeover AU546.0

Notes:

Hardware address takeover (HWAT)


Hardware Address Takeover (HWAT) is the most robust method of dealing with the ARP
cache issue as it ensures that the hardware address associated with the service IP
address does not change (which avoids the whole issue of whether the client system's
ARP cache is out-of-date).
The essence of HWAT is that the cluster configurator designates a hardware address
which is to be associated with a particular service IP address. HACMP then ensures
that whichever NIC the service IP address is on also has the designated hardware
address.
HWAT is discussed in detail in Appendix C.

HWAT considerations
There are a few points which must be kept in mind when contemplating HWAT:

- The hardware address which is associated with the service IP address must be
unique within the physical network that the service IP address is configured for.
- HWAT is not supported by IPAT via IP aliasing because each NIC can have more
than one IP address but each NIC can only have one hardware address.
- HWAT is only supported for Ethernet, token ring and FDDI networks (MCA FDDI
network cards do not support HWAT). ATM networks do not support HWAT.
- HWAT increases the takeover time (usually by just a few seconds).
- HWAT is an optional capability which must be configured into the HACMP cluster
(we will see how to do that in detail in a later unit).
- Cluster nodes using HWAT on token ring networks must be configured to reboot
after a system crash as the token ring card will continue to intercept packets for its
hardware address until the node starts to reboot.


Checkpoint
1. True or False?
Clients are required to exit and restart their application
after a fallover.
2. True or False?
All client systems are potentially directly affected by the
ARP cache issue.
3. True or False?
clinfo must not be run both on the cluster nodes and
on the client systems.
4. If clinfo is run by cluster nodes to address ARP cache
issues, you must add the list of clients to ping to either the
__________________________ or the
__________________________ file


Figure 3-61. Checkpoint AU546.0

Notes:

Unit Summary (1 of 2)
HACMP uses networks to:
Provide highly available client access to applications in the cluster
Detect and diagnose NIC, node, and network failures using RSCT heartbeats
Communicate with HACMP daemons on other nodes
All HACMP clusters require a non-IP network
Differentiate between node, IP subsystem and network failures
Prevent cluster partitioning
HACMP networking terminology
Service IP label/address: HA address used by client to access application
Non-service IP label/address: Applied to NIC at boot time; stored in AIX ODM
Persistent IP label/address: Node bound HA address for admin access to a node
Communication interface: Association between a NIC and an IP label/address
Communication device: Device used in non-IP network
Communication adapter: X.25 adapter used in a HA communication link
IP Address Takeover (IPAT): Moves service IP address to working NIC after a
failure
IPAT via aliasing: Adds the service address to a NIC using IP aliasing
IPAT via replacement: Replaces the non-service address with the service address


Figure 3-62. Unit Summary (1 of 2) AU546.0

Notes:


Unit Summary (2 of 2)
HACMP has very specific requirements for subnets
IPAT via aliasing
NICs on a node must be on different subnets, which must all use the same subnet mask
There must be at least one subnet in common with all nodes
Service addresses must be on different subnet than any non-service address
A service address can be on same subnet with another service address
IPAT via replacement
NICs on a node must be on different subnets, which must all use the same subnet mask
Each service address must be in same subnet as one of the non-service addresses on
the highest priority node
Multiple service addresses must be in the same subnet
Heartbeating over IP alias (any form of IPAT)
Service and non-service addresses can coexist on the same subnet, or be on separate
subnets
One subnet required for heartbeating; does not need to be routed
HACMP can update local clients ARP cache after IPAT
Gratuitous ARP (default)
clinfo on clients
clinfo on server nodes
Hardware address takeover (HWAT)


Figure 3-63. Unit Summary (2 of 2) AU546.0

Notes:

Unit 4. Planning for Applications and Resource Groups

What This Unit Is About


This unit describes the considerations for making an application highly
available in an HACMP environment.

What You Should Be Able to Do


After completing this unit, you should be able to:
List and explain the requirements for an application to be
supported in an HACMP environment
Describe the HACMP start and stop scripts
Describe the resource group behavior policies supported by
HACMP
Enter the configuration information into the Planning Worksheets

How You Will Check Your Progress


Accountability:
Checkpoint Questions

References
SC23-4864-06 HACMP for AIX, Version 5.3: Concepts and Facilities
Guide
SC23-4861-06 HACMP for AIX, Version 5.3 Planning and Installation
Guide

Copyright IBM Corp. 1998, 2005 Unit 4. Planning for Applications and Resource Groups 4-1

Unit Objectives
After completing this unit, you should be able to:
List and explain the requirements for an application to be
supported in an HACMP environment
Describe the HACMP start and stop scripts
Describe the resource group behavior policies supported by
HACMP
Enter the configuration information into the Planning
Worksheets


Figure 4-1. Unit Objectives AU546.0

Notes:

How to Define an Application to HACMP
Two steps to define an Application to HACMP
Step 1 Create Resource
Application Server: Defines start and stop scripts
Step 2 Create Resource Group:

[Slide diagram: a resource group spans Node 1, Node 2, and Node 3 with a shared
disk. The resource group holds a list of nodes, policies (where to run), and
resources: Application Server, Service Address, Volume Group.]

Figure 4-2. How to Define an Application to HACMP AU546.0

Notes:

Two steps to define an application to HACMP


In order to have HACMP manage an application, you must do two things:
i. Create an HACMP resource called an application server.
The application server defines a start and a stop script for the application.
ii. Create an HACMP resource group. This in turn requires two steps:
1) The basic resource group definition:
- Defines the list of nodes where the application can run. The default
priority is the order in the list. The first node listed is called the home node.
- Names the policies that control which node the application
actually runs on.
2) Add resource(s) to the Resource Group. These are the resources that
HACMP will move during a fallover.
- Application server name, Service address, and Volume group.


Application Considerations
Automation
No intervention
Dependencies
Using names unique to one node
Other applications
Interference
Conflicts with HACMP
Robustness
Application can withstand problems
Implementation
Other aspects to plan for
Monitoring using HACMP


Figure 4-3. Application Considerations AU546.0

Notes:

Introduction
Many applications can be put under the control of HACMP but there are some
considerations that should be taken into account.

Automation
One key requirement for an application to function successfully under HACMP is that
the application be able to start and stop without any manual intervention. Since the
cluster daemons call the start and stop scripts, there is no option for interaction.
Additionally, upon an HACMP fallover, the recovery process calls the start script to bring
the application online on a standby node. This allows for a fully automated recovery.
Other requirements for start and stop scripts will be covered on the next visual.

Dependencies
Dependencies to be careful of when coding the scripts include:
i. Referring to a locally attached device.
ii. Hard-coded device names, such as /dev/tty0, which may not be the same on another node.
iii. Using a hostname which is not the same on other nodes.
iv. Software licensing: Software can be licensed to a particular CPU ID. If this is the
case with your application, it is important to realize that a fallover of the software
will not successfully restart. You may be able to avoid this problem by having a
copy of the software resident on all cluster nodes. Know whether your application
uses software that is licensed to a particular CPU ID.
Application dependencies:
Dependencies that you had to worry about in the past, but may not have to now:
i. One application must be up before another one
ii. Applications that must both run on the same node
These can now be handled by Run-Time Dependency options. An overview of these
is given later in this unit.

Interference
An application may execute properly on both the primary and standby nodes. However,
when HACMP is started, a conflict with the application or environment could arise that
prevents HACMP from functioning successfully. Two areas to look out for are using the
IPX/SPX protocol and manipulating network routes.

Robustness
Beyond basic stability, an application under HACMP should meet other robustness
characteristics, such as successful start after hardware failure and survival of real
memory loss. It should also be able to survive the loss of the kernel or processor state.

Implementation
There are a number of aspects of an application to consider as you plan for
implementing it under HACMP. Consider characteristics such as time to start, time to
restart after failure, and time to stop. Also consider:
Writing effective scripts.
Consider file storage locations.
Using inittab and cron table: inittab is processed before HACMP is started. The cron
table is local to each node. Time/date should be synchronized.
We will look at writing scripts and data locations in the following visuals.


Monitoring using HACMP


HACMP provides another run-time option called application monitoring. With
monitoring, failure of the application can generate a fallover. Also an availability analysis
tool is provided. These are covered in detail in the HACMP Administration II
Administration and Problem Determination course.

Writing Start and Stop Scripts
Items to check:
Environment is what is expected
Multiple instances issue
Location of scripts
Handle Errors from previous termination
Correct coding

Using Assists


Figure 4-4. Writing Start and Stop Scripts AU546.0

Notes:

Introduction:
Application start scripts should not assume the state of the environment; defensive
programming may correct any irregular conditions that occur. Remember that the cluster
manager spawns these scripts as a separate background job and carries on
processing. The application start scripts must be able to handle an unknown previous
shutdown state.

Items to check
- Environment:
Verify the environment. Are the prerequisite conditions satisfied? These may include
access to a file system, adequate paging space, IP labels and free file system
space. The start script should exit and run a command to notify system
administrators if the requirements are not met.


- Multiple instances issue:


When starting an application with multiple instances, only start the instances
applicable for each node. Certain database startup commands read a configuration
file and start all known databases at the same time. This may not be a desired
configuration for all environments.
- Location:
Scripts must be available and executable on all nodes of the resource group.
- Handle any previous state:
Was previous termination successful? Is data recovery needed? Always assume the
database is in an unknown state since the conditions that occurred to cause the
takeover cannot be assumed.
- Correct coding:
Scripts should start by declaring a shell (that is, #!/usr/bin/ksh)
Scripts should not kill an HACMP process (for example, via misuse of the grep
command)
Scripts should exit with RC=0
The stop script should make sure the application is really stopped.
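The checklist above can be sketched as a skeleton start script. Everything here (paths, application name, lock file, log location) is an invented placeholder, not part of the course material:

```shell
#!/usr/bin/ksh
# Skeleton HACMP application start script illustrating the checklist above.

start_app() {
    appdir=${1:-/app/bin}                 # hypothetical install directory

    # 1. Verify the environment; notify and bail out if prerequisites fail.
    if [ ! -x "$appdir/appserver" ]; then
        echo "appserver missing in $appdir" >&2   # a real script might mail root
        return 1
    fi

    # 2. Assume an unknown previous shutdown state: clean up stale artifacts.
    rm -f "$appdir/appserver.lock"

    # 3. Start this node's instance in the background -- no interaction possible.
    "$appdir/appserver" >> /tmp/appA.log 2>&1 &

    return 0                              # HACMP expects RC=0 on success
}
```

A matching stop script would verify that the application is really down (for example, by checking its process) before returning 0.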

Using Assists
IBM provides a priced feature for HACMP that provides all the code and monitoring for
three applications: WebSphere, DB2, and Oracle Real Application Clusters (RAC). In these
cases you would not have to write the scripts yourself.
There are also plug-in filesets that provide help for integrating print server, DHCP, and
DNS. These filesets are part of the base HACMP product.

Where Should Data Go?
Private Storage
Operating system components
Shared Storage
Dynamic data
Web server content
Log Files
Files updated by application
It Depends
Configuration files
Application binaries
License files


Figure 4-5. Where Should Data Go? AU546.0

Notes:

Introduction
Deciding where data should go deserves careful thought. For some data, the answer
is clear; for other cases, it depends. Putting data on shared storage means there is only
one copy, but it may not be available when needed. Putting data on private storage risks
the copies diverging, but upgrades can be done more easily.

Private storage
Private storage must be used for the operating system components. It may also be
used for configuration files, license files and application binaries subject to the
trade-offs mentioned in the introduction above.


Shared storage
Shared storage must be used for dynamic data, Web server content, data that is
updated by the application and application log files (be sure time is same on the nodes).
Again configuration files, application binaries, and license files could go here subject to
the trade-offs mentioned in the introduction above.

It depends
License files deserve a special mention. If the licenses are node-locked, then you should
use private storage. In any case, you must learn the license requirements of the application
to make a proper determination.

Shared Storage Questions
Some questions to ask your user or customer:

For each highly available application in the cluster:


How much shared storage is required?
Upon deployment of the cluster?
In six months?
In one year?
In two years?
How is data organized?
Files within file systems versus production database storage
Is the application:
I/O bandwidth intensive?
Random I/O intensive?
What's required to validate application data after a crash?
How important is REALLY fast recovery from failures?
How will it be backed up?
How much private storage is required?
Usually not enough to be a concern


Figure 4-6. Shared Storage Questions AU546.0

Notes:

Some questions to ask your user or customer


In order to plan how to configure shared storage, you are going to need information
about how the application deals with data. Start this process early as it can take a while
to work out some of the details. Also, expect to have to educate the user or customer to
a certain extent, as they are unlikely to have ever encountered questions like "which
data can reside in private storage and which data must reside in shared storage?"


Resource Group Policies

Three initial policies


Startup Policy
Fallover Policy
Fallback Policy

Additional run-time options


Settling time (Startup)
Delayed Fallback (Fallback)


Figure 4-7. Resource Group Policies AU546.0

Notes:

Three initial policies


In HACMP, you specify in the resource group definition three policies that control which
node a resource group (application) runs on:
1) Startup (of Cluster Services)
When Cluster Services starts up on a node, each resource group definition is
read to determine whether this node is listed and, if so, whether that resource
group has already been started on another node. If it has not, the startup
policy is examined to further determine whether Cluster Services should
activate the resource group and start the application.
2) Fallover
If there is a node failure, then the Fallover policy determines which other node
should take over and activate the resource group and start the application
there.

3) Fallback
If a node earlier in the list of nodes (that is, higher priority) for the resource
group is started after a fallover, then the Fallback policy determines if the
resource group should be stopped and started back up on the higher priority
node.

Additional Run-Time options


In addition to the policies, there are two run-time options that affect these policies:
1) Settling time affects one of the Startup policies
2) Delayed fallback timer affects how the Fallback policy works
Run time options are covered in more detail in the HACMP Administration II
Administration and Problem Determination course.


Startup Policy
Online on Home Node Only

Online on First Available Node


Run-time Settling Time may be set

Using Distribution Policy

On All Available Nodes


Figure 4-8. Startup Policy AU546.0

Notes:

Online on home node only


When starting Cluster Services on the nodes, only the Cluster Services on the home
node (first node listed in the resource group definition) will activate the resource group
(and start the application). This policy requires the home node to be available.

Online on first available node


When starting Cluster Services on the nodes, the first Cluster Services up on a node
that is in the list of nodes for the resource group will activate the resource group and
start the application.

Online using node distribution policy


Similar to Online on first available node, except that only one resource group can
be active on a given node. If the first node in the resource group's list of nodes already
has another resource group started on it, then the next node in the list of nodes is tried.

Online on all available nodes


Cluster Services on every node will activate the resource group and start the
application. This is equivalent to the concurrent resource group behavior in previous
releases of HACMP. If you select this option for the resource group, ensure that
resources in this group can be brought online on multiple nodes simultaneously.

Run-time settling time


A Settling Time value can be set for the Online on first available node policy. If you set
the settling time, Cluster Services waits up to the duration of the settling time interval
to see whether the home node joins the cluster, and at the end of the interval chooses the
highest-priority node, rather than simply activating the resource group on the first possible
node that reintegrates into the cluster. This keeps the resource group from bouncing.


Online On All Available Nodes


Application runs on all available nodes concurrently
No fallover/fallback, just fewer or more nodes running the application

Resource group restrictions:


No JFS or JFS2 filesystems (only raw logical volumes)
No service IP Labels / Addresses (which means no
IPAT)
Application must provide own lock manager

Potential to provide essentially zero downtime


Figure 4-9. Online On All Available Nodes AU546.0

Notes:

Application runs on all available nodes concurrently


If a node belongs to a resource group with this startup policy, when Cluster Services
start on the node, Cluster Services will start the application and make all the resources
mentioned available on this node. In this case, it does not matter if the resource group is
already active on another node so the application ends up being started on all nodes
where Cluster Services are started.
This policy is also referred to as concurrent.

Resource group restrictions


There are restrictions when defining a resource group that will use this policy. The data
cannot be part of a JFS type logical volume; it must be defined as a raw logical volume.
You cannot include a service address in the resource group definition. Finally, it is up to
the application to provide a lock manager to ensure data isn't being updated


simultaneously from multiple nodes. Oracle Real Application Clusters (RAC) is an
application that uses this type of startup policy.

Potential to provide essentially zero downtime


Because the application is running on multiple nodes, the loss of a node does not result
in the loss of the application.


Fallover Policy

Fallover To Next Priority Node In The List

Fallover Using Dynamic Node Priority

Bring Offline (On Error Node)


Figure 4-10. Fallover Policy AU546.0

Notes:

Fallover to next priority node in the list


In the case of fallover, a resource group that is online on only one node at a time follows
the list in the resource group's definition to find the next highest priority node currently
available.

Fallover using dynamic node priority


If you select this option for the resource group, you can choose one of the following
three methods to have HACMP choose the fallover node dynamically:
i. highest_mem_free (most available memory)
ii. highest_idle_cpu (most available processor time)
iii. lowest_disk_busy (least disk activity)
Dynamic node priority is useful in a cluster that has more than two nodes.


Bring offline (On Error Node Only)


Select this option to bring a resource group offline on a node during an error condition.
This option represents the behavior of a concurrent resource group and ensures that if
a particular node fails, the resource group goes offline on that node only, but remains
online on other nodes. Selecting this option as the fallover preference when the startup
preference is not Online On All Available Nodes may allow resources to become
unavailable during error conditions. If you do so, HACMP issues an error.


Fallback Policy

Fallback To Higher Priority Node In The List


Can use a run time Delayed Fallback Timer preference

Never Fallback


Figure 4-11. Fallback Policy AU546.0

Notes:

Fallback to higher priority node


When HACMP Cluster Services start on a node, HACMP checks whether there is a
resource group that includes this node in its list and that is currently active on another
node. If this node is higher in the list than the node the resource group is currently
running on, and this policy has been chosen, then the resource group is moved and the
application is started on this node.

Run-time delayed fallback timer


A run-time fallback timer policy can be set to a time in the future when the fallback
should happen. The following example describes a case when configuring a delayed
fallback timer would be beneficial. If a node in a cluster failed, and was later repaired,
you may want to integrate the node into a cluster during off-peak hours. Rather than
writing a script or a cron job to do the work, which are both time-consuming and prone
to error, you could set the delayed fallback timer for a specified resource group to the


appropriate time. After starting the node, HACMP automatically starts the resource
group fallback at the specified time.
Run-time policies will be covered in more detail in the HACMP Administration II
Administration and Problem Determination course.


Valid Combinations of policies


Figure 4-12. Valid Combinations of policies AU546.0

Notes:

Valid combinations
HACMP allows you to configure only valid combinations of startup, fallover, and fallback
behaviors for resource groups.

Preferences are not the only factor in determining node


In addition to the node preferences described in the previous table, other issues may
determine the resource groups that a node acquires. We will look at other issues in the
administration and event units later in this course.


Dependent Applications/Resource Groups

Node 1
Parent RG

Node 1 Node 2
Child/Parent Parent/Child
RG RG

Child RG

One resource group can be the parent of another resource group


HACMP 5.2 introduced start/stop/fallover dependencies
HACMP 5.3 adds location dependency
Implemented as Run-Time Policy


Figure 4-13. Dependent Applications/Resource Groups AU546.0

Notes:

One resource group can be a parent of another resource group


In HACMP 5.2 and higher, it is possible to have cluster-wide resource group
online/offline dependencies.
i. Parent will be brought online before child
ii. Parent will be brought offline after child
iii. Parent/child can be on different nodes
iv. Three levels of dependencies are supported
In HACMP 5.3 it is also possible to specify resource location dependencies
i. Online on same node
ii. Online on different nodes
iii. Online on same site


Implemented as Run-Time Policy


Run-Time Policies will be covered in more detail in the HACMP Administration II
Administration and Problem Determination course.


Checkpoint
1. True or False
Applications are defined to HACMP in a configuration file
that lists what binary to use.
2. What policies would be the best to use for a 2 node mutual
takeover cluster using IPAT to minimize both applications
running on the same node?
a. home, next, never
b. first, next, higher
c. distribution, next, never
d. all, error, never
e. home, next, higher
3. Which type of data should not be placed in private data
storage?
a. Log data
b. License file
c. Configuration files
d. Application binaries
4. Which policy is not a Run-time policy?
a. Settling
b. Delayed Fallback Timer
c. Dynamic Node Priority

Figure 4-14. Checkpoint AU546.0

Notes:


Unit Summary
To define an application to HACMP, you must:
Create an application server resource (start and stop scripts)
Create a resource group (node list, policies, resources)
Considerations for putting an application under HACMP control
Automation
Dependencies
Interference
Robustness
Implementation details
Monitoring
Shared storage requirements
Considerations for start and stop scripts:
Environment
Multiple instances
Script location
Error handling
Coding issues
Resource group policies control how HACMP manages the application
Startup policy (with optional Settling timer)
Fallover policy
Fallback policy (with optional Delayed fallback)


Figure 4-15. Unit Summary AU546.0

Notes:


Unit 5. HACMP Installation

What This Unit Is About


This unit describes the installation process for HACMP 5.3 for AIX 5L.

What You Should Be Able to Do


After completing this unit, you should be able to:
State where installation fits in the implementation process.
Describe how to install HACMP 5.3
List the prerequisites for HACMP 5.3
List and explain the purpose of the major HACMP 5.3 components.

How You Will Check Your Progress


Accountability:
Checkpoint
Machine Exercise

References
SC23-4864-06 HACMP for AIX, Version 5.3: Concepts and Facilities
Guide
SC23-4861-06 HACMP for AIX, Version 5.3 Planning and Installation
Guide
SC23-4862-06 HACMP for AIX, Version 5.3 Administration Guide
SC23-5177-00 HACMP for AIX, Version 5.3 Troubleshooting Guide
SC23-4867-05 HACMP for AIX, Version 5.3 Master Glossary
www.ibm.com/servers/eserver/pseries/library/hacmp_docs.html#hacmpv53 (HACMP Manuals)


Unit Objectives
After completing this unit, you should be able to:
State where installation fits in the implementation process
Describe how to install HACMP 5.3
List the prerequisites for HACMP 5.3
List and explain the purpose of the major HACMP 5.3
components


Figure 5-1. Unit Objectives AU546.0

Notes:

What this unit covers


This unit discusses the installation and the code components of HACMP 5.3.

A quick (and only somewhat accurate) history lesson:


In the early to mid 1990s, IBM developed a product called High Availability Cluster Multi
Processing for AIX which became known as HACMP for AIX or just HACMP (the long
name is still the official product name). This high availability product was developed to
run on conventional RS/6000 servers (in this context, conventional means not IBM
RS/6000 SP servers).
At some point after HACMP was a reasonably established product, a decision was
made to create a variant of HACMP which could be used within an SP cluster. Such a
variant would need to be different than the non-SP variant as the SP variant would need
to be able to co-exist with SP management software called PSSP. Since PSSP already


had a set of high availability daemons which provided most but not all of the
functionality required by HACMP, the SP variant of HACMP was developed to take
advantage of these high availability daemons within PSSP. The end result was that the
SP variant of HACMP did not include certain functionality, like the ability to monitor
networks and nodes using heartbeating, found in the non-SP variant, because the SP
variant used PSSP's high availability daemons to provide this functionality.
The SP variant of HACMP came to be known as HACMP enhanced scalability or
HACMP/ES. The other HACMP became known as classic.
Over the years, the non-SP variant of HACMP and the SP variant of HACMP continued
to evolve as essentially separate products. By about the year 2000, the two products
provided roughly equivalent functionality although they were implemented in quite
different ways. The major functional difference between the two was that HACMP/ES
supported the SP switch and the non-SP variant did not. Another major difference
between the two was that the HACMP/ES variant required PSSP and the non-SP
variant, of course, did not.
In about 1998, a decision was made to reunite the two products. At about this time, the
non-SP variant came to be called HACMP classic to distinguish it from the HACMP/ES
variant. In order to make it possible to run HACMP/ES on non-PSSP capable servers,
the portions of PSSP used by HACMP were extracted into a new entity called Reliable
Scalable Cluster Technology (RSCT) and ported to conventional (that is, non-SP)
servers.

Long term goal


An IBM Statement of Direction issued at about this time stated that the long-term goal
was to bring the two variants of HACMP to the point where they were essentially
identical from a feature perspective. Once this was achieved, the stated plan was to
eventually discontinue the classic variant entirely.
The goal of feature equivalency was achieved in about 2000 with the release of HACMP
4.3.1. With the release of HACMP 5.1 in 2003, the plan to discontinue the classic
variant of HACMP was also realized. Then came HACMP 5.2.
This brings us to where we are today.


5.1 Installing the HACMP 5.3 Software


Topic 1 Objectives:
Installing the HACMP Software
After completing this Topic, you should be able to:
State where installation fits in the implementation process
Describe how to install HACMP 5.3
List the prerequisites for HACMP 5.3


Figure 5-2. Topic 1 Objectives: Installing the HACMP Software AU546.0

Notes:
This topic covers the installation of the HACMP 5.3 filesets.


Steps for Successful Implementation
HACMP should not be installed upon a system which is in
production

Step Step Description Comments


1 Plan Use planning worksheets and documentation.
2 Assemble hardware Install adapters, connect shared disk and network.
3 Install AIX Ensure you update to the latest maintenance level.
4 Configure networks Requires detailed planning.
5 Configure shared storage Set up shared volume groups and filesystems.
6 Install HACMP Install on all nodes in the cluster (don't forget to install latest fixes).
7 Reboot each node Required after installing or patching HACMP.
8 Configure topology
Define/discover the cluster topology Review what you end up with to make sure that it is what you expected.
9 Configure application servers You will need to write your start and stop scripts.
10 Configure cluster resources Refer to your planning worksheets.
11 Synchronize the cluster Ensure you "actually" do this.
12 Start HACMP Watch the console for messages.
13 See comment Skip this step if you are superstitious.
14 Test the cluster Document your tests and results.


Figure 5-3. Steps for Successful Implementation AU546.0

Notes:

Steps to building a cluster


Here are the steps to building a successful cluster. Okay, so we could have included
more steps or combined a few steps, but the principle is that you should plan and follow
a methodical process which includes eventual testing and documentation of the cluster.
It is often best to configure the cluster's resources iteratively. Get basic resource groups
working first and then add the remaining resources gradually, testing as you go, until the
cluster does everything that it is supposed to do.

Different opinions
Different people have different ideas about the exact order in which a cluster should be
configured. For example, some people prefer to leave the configuration of the shared
storage (step 5 above) until after they've synchronized the cluster's topology (step 7), as


this allows them to take advantage of HACMP's C-SPOC facility to configure the shared
storage.
One other area where different views are common is exactly when to install and
configure the application. If the application is installed, configured and tested
reasonably thoroughly prior to installing and configuring HACMP then most issues
which arise during later cluster testing are probably HACMP issues rather than
application issues. The other common perspective is that HACMP should be installed
and configured prior to installing and configuring the applications as this allows the
applications to be installed into the exact context that they will ultimately run in. There is
no correct answer to this issue. When to install and configure the applications is just
one more point that will have to be resolved during the cluster planning process.

Where there is agreement


There is general agreement among the experts that the first step in configuring a
successful cluster is to plan the cluster carefully. For a more comprehensive discussion
of the process of planning and implementing a cluster, refer to:
- SC23-4861-06 HACMP for AIX 5L, Version 5.3 Planning and Installation Guide
- SC23-4862-06 HACMP for AIX, Version 5.3 Administration Guide
- SC23-5177-00 HACMP for AIX, Version 5.3 Troubleshooting Guide

Using a standalone system as the first HACMP node


A word of advice: one is often tempted to take a standalone system which is already
running an application in production and use the system as the first node of a two node
cluster. Unless the user community is comfortable with a significant outage (at least a
few days and possibly longer), this is almost certainly a mistake. Trying to configure and
test an HACMP cluster in a few hours is a very high-risk exercise. HACMP clusters tend
to be quite complex, and taking the time to get it done right AND performing thorough
testing to ensure that it was done right is very important to the long-term success of the
cluster.
If you do embark on turning an existing standalone production system into the first node
of a two node cluster, make sure that you take a complete backup of the stand-alone
system at the very start of the outage period, and that you leave yourself enough time at
the end of the outage period to do a complete restore from the backup should things not
go according to plan.


Where Are We in the Implementation?
Plan for network, storage, and application resource groups
Eliminate single points of failure
Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP networks and
devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP

Figure 5-4. Where Are We in the Implementation? AU546.0

Notes:

What we have done so far


In the units 2, 3, and 4 we planned and built the storage, network, and application
environments for our cluster. So we are now ready to install the HACMP filesets.


Before All Else Fails . . .


Study the appropriate HACMP manuals:

HACMP for AIX Planning and Installation Guide SC23-4861-06


Contains Planning Worksheets in Appendix A
Can be installed as single fileset from the CD
Online Planning Worksheets
Can be installed as single fileset from the CD
Release notes:
On the CD as README_hacmp5.3
Installed into /usr/es/sbin/cluster as release_notes


Figure 5-5. Before All Else Fails... AU546.0

Notes:

There are other references


There are other HACMP manuals which may prove useful. Check out the references at
the start of this unit for a complete list.


What Is On the CD?
README_hacmp53
Directories
AIX5.2,5.3, pubs
Installp/ppc, usr/sys/inst.images
cluster.adt.es cluster.msg.<lang>.cspoc
cluster.doc.en_US.es cluster.msg.<lang>.es
cluster.es cluster.msg.<lang>.hativoli
cluster.es.cfs cluster.msg.<lang>.haview
cluster.es.clvm
cluster.es.cspoc
cluster.es.plugins rsct.core.lprm.2.4.2.1.bff
cluster.es.worksheets rsct.core.rmc.2.4.2.3.bff
cluster.hativoli rsct.core.sec.2.4.2.3.bff
cluster.haview rsct.core.sensorrm.2.4.2.1.bff
cluster.license rsct.core.utils.2.4.2.3.bff
cluster.man.en_US.es.data rsct.core.cimrm.2.4.2.1.bff


Figure 5-6. What Is On the CD? AU546.0

Notes:

Files on the CD
This visual shows the files that are on the CD. They will be expanded to show the table
of contents when using SMIT to do the install.


Install the HACMP Filesets (1 of 2)


Here's a listing of the HACMP 5.3 filesets:
bos.clvm.enh.5.3.0.31.U rsct.basic.hacmp.2.3.3.0.U
bos.rte.lvm.5.3.0.31.U rsct.basic.hacmp.2.3.3.1.U
cluster.adt.es.5.3.0.0.I rsct.basic.rte.2.3.3.0.U
cluster.doc.en_US.es.5.3.0.0.I rsct.basic.rte.2.3.3.1.U
cluster.es.5.3.0.0.I rsct.basic.sp.2.3.3.0.U
cluster.es.cfs.5.3.0.0.I rsct.compat.basic.hacmp.2.3.3.0.U
cluster.es.clvm.5.3.0.0.I rsct.compat.basic.rte.2.3.3.0.U
cluster.es.cspoc.5.3.0.0.I rsct.compat.basic.sp.2.3.3.0.U
cluster.es.plugins.5.3.0.0.I rsct.compat.clients.hacmp.2.3.3.0.U
cluster.es.worksheets.5.3.0.0.I rsct.compat.clients.rte.2.3.3.0.U
cluster.hativoli.5.3.0.0.I (requires Tivoli) rsct.compat.clients.sp.2.3.3.0.U
cluster.haview.4.5.0.0.I (requires Netview) rsct.core.auditrm.2.3.3.0.U
cluster.license.5.3.0.0.I rsct.core.errm.2.3.3.0.U
cluster.man.en_US.es.5.3.0.0.I rsct.core.errm.2.3.3.1.U
cluster.msg.en_US.cspoc.5.3.0.0.I ALSO cluster.msg.En*, Ja*, ja*
cluster.msg.en_US.es.5.3.0.0.I
cluster.msg.en_US.hativoli.5.3.0.0.I (requires Tivoli)
cluster.msg.en_US.haview.4.5.0.0.I (requires Netview)

Your requirements may vary!


Figure 5-7. Install the HACMP Filesets (1 of 2) AU546.0

Notes:

Fileset considerations
Listed above are the filesets that you see when doing smit install_all in HACMP 5.3.
Using smit update_all will not show the msg filesets, so you should use install_all and
select the filesets.
The same filesets should be installed on all nodes, or Verify will give warnings every
time it executes.
You should install the documentation filesets on at least one non-cluster node (having
the HACMP PDF-based documentation available even if none of the cluster nodes will
boot could prove REALLY handy someday).
Notice that some of the filesets require other products such as Tivoli or NetView. You
should not install these filesets unless you have these products. HAView is never
installed on the cluster node, it is installed on the NetView server.
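A quick way to confirm that all nodes carry identical HACMP filesets is to capture each node's inventory and compare. This is only a sketch: the node names (halifax, toronto) and file locations are hypothetical, and each inventory file is assumed to have been produced on its own node with lslpp.

```shell
# Sketch: compare HACMP fileset inventories captured from two nodes.
# Each file is assumed to come from:
#   lslpp -Lc "cluster.*" > /tmp/filesets.<nodename>
check_filesets() {
    sort "$1" > /tmp/fs.a
    sort "$2" > /tmp/fs.b
    comm -3 /tmp/fs.a /tmp/fs.b   # any output means the nodes differ
}
# Example (hypothetical node names):
#   check_filesets /tmp/filesets.halifax /tmp/filesets.toronto
```

Any lines printed identify filesets or levels present on only one node, which is exactly what Verify would warn about.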


The cluster.es.cfs fileset can only be used if GPFS is installed. You may not need the
plug-ins. The cluster.es.clvm fileset is required for Enhanced Concurrent Mode volume
group support and this fileset requires an RSCT fileset which you will see in lab.

Example of basic install -- will be used in our lab


- cluster.adt.es
- cluster.doc.en_US.es
- cluster.es
- cluster.es.clvm
- cluster.es.cspoc
- cluster.license
- cluster.man.en_US.es
- cluster.msg.en_US.cspoc
- cluster.msg.en_US.es
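The basic install list above can be applied in a single installp pass. This is a sketch, not the only supported method (smit install_all works equally well); it assumes the install media is mounted as /dev/cd0, so adjust the -d argument to your actual install source.

```shell
# Sketch: install the basic HACMP 5.3 fileset list from CD.
#   -a apply    -X expand filesystems as needed    -Y accept licenses
#   -g pull in requisites                          -d install source
installp -aXYgd /dev/cd0 \
    cluster.adt.es cluster.doc.en_US.es cluster.es cluster.es.clvm \
    cluster.es.cspoc cluster.license cluster.man.en_US.es \
    cluster.msg.en_US.cspoc cluster.msg.en_US.es
```

Remember that each node must be rebooted after HACMP is installed or patched.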


Install the HACMP Filesets (2 of 2)


cluster.es:
5.3.0.0 ES Base Server Runtime cluster.es.server.rte 5.3.0.0
5.3.0.0 ES Client Libraries cluster.es.client.lib 5.3.0.0
5.3.0.0 ES Client Runtime cluster.es.client.rte 5.3.0.0
5.3.0.0 ES Client Utilities cluster.es.client.utils 5.3.0.0
5.3.0.0 ES Cluster Test Tool cluster.es.server.testtool 5.3.0.0
5.3.0.0 ES Server Diags cluster.es.server.diag 5.3.0.0
5.3.0.0 ES Server Events cluster.es.server.events 5.3.0.0
5.3.0.0 ES Server Utilities cluster.es.server.utils 5.3.0.0
5.3.0.0 ES Two-Node Configuration Assistant cluster.es.server.cfgast 5.3.0.0
5.3.0.0 Web based Smit cluster.es.server.wsm 5.3.0.0


Figure 5-8. Install the HACMP Filesets (2 of 2) AU546.0

Notes:

The cluster.es fileset


The visual shows all the components of the cluster.es fileset. Web based smit is not
to be confused with WebSM. Web based smit is a Web application that allows you to see
the HACMP smit configuration screens and to see status.


Don't Forget the Prerequisites
AIX:
AIX 5L V5.2 ML4
AIX 5L V5.3 ML2
RSCT
rsct.compat.basic.hacmp 2.4.2.0 (AIX 5L V5.3) 2.3.6.0 (AIX 5L V5.2)
rsct.compat.clients.hacmp 2.4.2.1 (AIX 5L V5.3) 2.3.6.1 (AIX 5L V5.2)
rsct.core.sec 2.4.2.0 (AIX 5L V5.3) 2.3.6.0 (AIX 5L V5.2)
rsct.core.rmc 2.4.2.1 (AIX 5L V5.3) 2.3.6.1 (AIX 5L V5.2)
PSSP 3.5 on SP systems
Otherwise optional AIX filesets
(see student notes below):
Other prerequisites
Enhanced concurrent mode:
bos.rte.lvm 5.1.0.25 or higher
bos.clvm.enh 5.2.0.11
CSPOC with vpath
SDD 1.3.1.3 or later
Online Planning Worksheets
AIX 5L Java Runtime Environment

Figure 5-9. Don't Forget the Prerequisites AU546.0

Notes:

Installation suggestions
You might be able to get HACMP 5.3 installed without all of the above prerequisites, but
you are unlikely to enjoy the experience of working with HACMP 5.3 without the
appropriate prerequisites.
Since you are unlikely to want to upgrade a new cluster anytime soon, it is generally
wisest to start with the latest available AIX and HACMP patches (the URL for checking
on the latest patches is on the next foil).
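The levels above can be verified from the command line before you begin. A sketch; the expected oslevel output strings correspond to the maintenance levels listed in the visual.

```shell
# Sketch: check the AIX maintenance level and key RSCT prerequisites.
oslevel -r    # expect 5200-04 (AIX 5L V5.2) or 5300-02 (AIX 5L V5.3)
lslpp -L rsct.compat.basic.hacmp rsct.compat.clients.hacmp \
      rsct.core.sec rsct.core.rmc
```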

Other AIX filesets


bos.adt.lib bos.data bos.rte.libpthreads
bos.rte.libc bos.net.tcp.client bos.rte.odm
bos.adt.libm bos.net.tcp.server bos.rte.SRC
bos.adt.syscalls bos.rte.libcur


Verify That You Have the Required APARs


The following APARs are required as noted:
AIX
IY72852, IY72916, IY71500 (AIX 5L V5.3) IY72082, IY72946
(AIX 5L V5.2)
RSCT
IY61770 (for power 5 hardware)
HACMP
IY68370 (or higher), IY68387, IY72928 (DARE for sites)
DLPAR
power 5: IY73050, IY73051
Hardware power 5:
AIX 5.3: IY60930, IY61034, IY62191
AIX 5.2: IY56554, IY61014
Virtual devices:
IY70082, IY72974, iFix IY71303
VIOS Fixpack 6.2

The above lists are almost certainly out of date. Check
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html


Figure 5-10. Verify That You Have the Required APARs AU546.0

Notes:

Obtaining fixes
Use the above URL to check for and download the latest fixes.
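instfix can report whether a given APAR is already installed before you go looking for downloads. A sketch using a few of the AIX 5L V5.3 APAR numbers from the visual:

```shell
# Sketch: -i reports installation status, -k names the APAR keywords.
instfix -ik "IY72852 IY72916 IY71500"
```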


Some Final Things to Check
Code installation
Each node must be rebooted once HACMP has been installed
Correct filesets and prerequisites have been installed
Documentation is installed and accessible

Network setup
/etc/hosts file is configured on all nodes correctly
Name resolution works
IP and non-IP networks are configured
Subnets configured correctly
The subnet mask is identical
All interfaces on different subnets
Routing configured correctly
Test connectivity
Shared storage configured correctly

You have a written plan describing configuration and testing procedures!

Figure 5-11. Some Final Things to Check AU546.0

Notes:

Description of checklist
This is a checklist of items which you should verify before starting to configure an
HACMP cluster. It is not a complete list as each situation is different. It would probably
be wise to develop your own checklist during the cluster planning process and then
verify it just before embarking on the actual HACMP configuration of the cluster.

Code installation
Correct filesets includes making sure that the same HACMP filesets are installed on
each node. Documentation can be installed before installing HACMP. The
documentation is delivered as either html or pdf.


Network setup
The /etc/hosts file should have entries for all IP labels and all nodes. The file should be
the same on all nodes. Name resolution should be tested on all labels and nodes. To do
this you can use the host command. You should test address to name and name to
address and verify that they are the same on all nodes. You should ensure that a route
exists to all logical networks from all nodes. Finally, you should test connectivity by
pinging all nodes from all nodes on all interfaces.
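One of the checks above, that /etc/hosts is correct and consistent, lends itself to a small script. The function below is a hypothetical helper, not an HACMP tool; it flags any IP address that appears on more than one line of a hosts file.

```shell
# Sketch: print any IP address that occurs more than once in a hosts file.
dup_ips() {
    awk '!/^[[:space:]]*#/ && NF { print $1 }' "$1" | sort | uniq -d
}
# Example:  dup_ips /etc/hosts    (any output means duplicate entries)
```

Run it on every node; comparing the output (and the files themselves) across nodes also catches the "same on all nodes" requirement.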

Shared storage
Check that the disks are configured and recognized identically (if possible) and can be
accessed from all nodes that will share them.

Install HACMP Client Machine
Setup Network
Configure network interface to reach cluster server node
Same subnet as service address
/etc/hosts file updated everywhere
Install prerequisites:
bos.adt.libm
bos.adt.syscalls
bos.data
Install HACMP client filesets:
cluster.adt.es
cluster.es
ES Client Libraries
ES Client Runtime
ES Client Utilities
cluster.msg.en_US.es
cluster.man.en_US.es
Configure /usr/es/sbin/cluster/clhosts
Can copy /usr/es/sbin/cluster/etc/clhosts.client
Test connectivity


Figure 5-12. Install HACMP Client Machine AU546.0

Notes:

Client machine properties


A client machine is a node running AIX and only the client filesets from HACMP. It can
be used to monitor the cluster nodes, to test connectivity to an application during
fallover, or simply to access a highly available application.

Installing and setting up the client machine


Make sure the network is set up so that the client machine can access the cluster nodes.
If the client machine is on the same LAN, then choose an address that is in the same
subnet as the service address of the application that you want to monitor.
Also make sure clinfo is set up (clhosts file) to be able to find the cluster node(s). In
HACMP 5.3 a clhosts file is generated for clients when you synchronize the cluster. The
name of the file is /usr/es/sbin/cluster/etc/clhosts.client.
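Priming the client's clhosts file from that server-generated copy can be wrapped in a small helper; a sketch (the copy mechanism for fetching clhosts.client from a cluster node is an assumption, the two paths are the ones named above):

```shell
# Copy a server-generated clhosts.client into place on the client and
# report how many non-comment entries (IP labels) clinfo will poll.
prime_clhosts() {  # usage: prime_clhosts <source-copy> <destination>
    cp "$1" "$2" || return 1
    grep -c -v '^#' "$2"
}

# On a real client, after fetching clhosts.client from a cluster node:
# prime_clhosts /tmp/clhosts.client /usr/es/sbin/cluster/etc/clhosts
```

The entry count is a quick sanity check that the file is not empty before starting clinfo.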


Let's Review
1. What is the first step in implementing a cluster?
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap
2. True or False?
HACMP 5.3 is compatible with any version of AIX 5L V5.x.
3. True or False?
Each cluster node must be rebooted after the HACMP software is installed.
4. True or False?
You should take careful notes while you install and configure HACMP so
that you know what to test when you are done.


Figure 5-13. Let's Review AU546.0

Notes:

5.2 What Was Installed


Topic 2 Objectives: What Was Installed


After completing this Topic, you should be able to:
Describe the purpose of the major HACMP 5.3
components


Figure 5-14. Topic 2 Objectives: What Was Installed AU546.0

Notes:

The Layered Look
Here are the layers of software on an HACMP 5.3 cluster node:

Application Layer
Contains the highly available applications
that use HACMP services

HACMP Layer
Provides highly available services to
applications

RSCT, RMC Layer


Provides monitoring, event management and
coordination of subsystems for HACMP clusters

AIX Layer
Provides operating system services (SRC, snmpd)

LVM Layer TCP/IP Layer


Manages disk space Manages communication
at the logical level at the logical level

Copyright IBM Corporation 2005

Figure 5-15. The Layered Look AU546.0

Notes:

The application layer


The top most layer of the software stack is the application layer. Any application or
service which the cluster node is making highly available is considered to be running at
the application layer (in a sense, this includes rather low level AIX facilities like NFS
when the cluster is acting as a highly available NFS server).

The HACMP layer


The next layer is the HACMP layer. This layer is responsible for providing a number of
services to the application layer including:
- Tracking the state of the cluster in cooperation with the other cluster nodes
- Initiating fallovers and other recovery actions as required
- (optionally) monitoring the applications and initiating recovery procedures when
they fail


- Doing whatever else it takes to make the applications highly available


Note that since most applications are not aware of how they are started and stopped,
whether they are being monitored and recovered, or even whether they are being made
highly available, the applications running within the application layer are, as a rule,
blissfully unaware of the existence of the HACMP layer or even the RSCT layer.
In order to make the applications highly available and in order to know when to start and
stop and, if configured, monitor and recover the applications, the HACMP layer must
be aware of the overall status of the cluster including the state of the topology (which
nodes, networks and network interfaces are in working order) and the resources (which
resources are being made available where).
The HACMP layer relies upon the RSCT layer to provide a number of key services
including topology status information and a reliable messaging service.

The RSCT layer


The RSCT layer includes daemons responsible for monitoring the state of the cluster's
topology, recognizing when the state of the cluster changes (for example, a node
crashes), coordinating the response to these events, and keeping RSCT-aware clients
informed as to the state of the cluster (HACMP is itself an RSCT-aware client). RSCT
itself is distributed with AIX.

The AIX and below layers


The AIX layer represents all of the operating system services provided by AIX to
programs running on the operating system. These programs include, of course, the
programs at the application layer, the programs at the HACMP layer and the programs
at the RSCT layer.
The AIX layer takes advantage of all sorts of facilities provided by the AIX kernel. The
two that are highlighted in the diagram are the Logical Volume Manager or storage
management facility and the TCP/IP or IP networking facility. As should be clear from
the rather heavy emphasis given to storage and networking in this course so far, these
are cornerstone facilities from the perspective of HACMP.
Finally, please keep in mind that the layers in the software stack illustrated above are,
in many respects, more apparent than real. All of the layers above the AIX layer tend to
interact heavily and directly with the AIX layer regardless of whether there are layers
between them and the AIX layer. The same can be said in many respects about the
LVM and the TCP/IP components - all of the layers above them tend to interact heavily
although usually not quite as directly with the LVM and TCP/IP components.

HACMP Components and Features
The HACMP software has the following components:
Cluster Manager
Cluster Secure Communication Subsystem
IBM RS/6000 Reliable Scalable Cluster Technology Availability
Services (RSCT and RMC)
snmpd monitoring programs
Cluster Information Program
Highly Available NFS Server
Shared External Disk Access


Figure 5-16. HACMP Components and Features AU546.0

Notes:

Pre-version 5 traditional HACMP components


HACMP has historically consisted of the following components:
A cluster manager
RSCT
Note that RSCT was not part of the pre version 5 classic releases
SNMP related facilities
The Cluster Information Program
A highly available NFS server
Note that NFS server support was not part of all pre version 5 releases
Shared external disk access
The (optional) concurrent resource manager facility


New version 5 components


HACMP 5 systems also have a component called the Cluster Secure Communication
Subsystem and the Concurrent Resource Manager facility is no longer optional.

Cluster Manager
A subsystem/daemon which runs on each cluster node
Primarily responsible for responding to unplanned events:
Recover from software and hardware failures
Respond to user-initiated events:
Request to online/offline a node
Request to move/online/offline a resource group
And so forth
A client to RSCT and RMC
Provides snmp retrievable status information
Implemented by the subsystem clstrmgrES
Started in /etc/inittab


Figure 5-17. Cluster Manager AU546.0

Notes:

The cluster manager's role


The cluster manager is, in essence, the heart of the HACMP product. Its primary
responsibility is to respond to unplanned events. From this responsibility flows most of
the features and facilities of HACMP. For example, in order to respond to unexpected
events, it is necessary to know when they occur. Monitoring for these failures is the job
of the RSCT component.
In HACMP 5.3 the clstrmgrES subsystem is always running. It runs even when the
cluster services are stopped.


Cluster Secure Communication Subsystem

Provides communication infrastructure for HACMP


HACMP 5.3 provides two authentication security options:
Connection Authentication
Standard
Uses /usr/es/sbin/cluster/rhosts file
Kerberos (SP only)
Kerberos used with PSSP.
Virtual Private Networks (VPN) using persistent labels.
VPNs are configured within AIX.
HACMP is then configured to use VPNs
Message Authentication and/or Message Encryption
HACMP provides methods for key distribution
Implemented using the clcomdES subsystem


Figure 5-18. Cluster Secure Communication Subsystem AU546.0

Notes:

Introduction to the cluster communication subsystem


The cluster secure communication subsystem is part of HACMP 5 systems. It provides
connection level security for all HACMP related communication, eliminating the need for
either /.rhosts files or a Kerberos configuration on each cluster node. Although only
necessary when the configuration of the cluster is being changed, these /.rhosts files
have been a source of some concern for many customers.
This facility goes beyond eliminating the need for /.rhosts files by providing the ability to
send all cluster communication through a Virtual Private Network (VPN) using
persistent labels. Although unlikely to be necessary in most clusters, this capability will
allow HACMP to operate securely in hostile environments.
In addition, it is possible to make use of Message level authentication and/or Message
Encryption in HACMP 5.2 and later. You can have HACMP generate and/or distribute
keys.

Only supported for HACMP generated requests


Finally, this subsystem is not supported for use by user commands outside of the
cluster manager and CSPOC. For these commands the administrator must configure
their own remote command method.


Cluster Communication Daemon (clcomd)


Removes requirement for /.rhosts

Caches coherent copies of other nodes' ODMs

Long term socket connections on tcp port 6191

Implements the principle of least privilege:


Nodes no longer require root access to each other

Started via /etc/inittab entry

Managed by the SRC


startsrc, stopsrc, refresh


Figure 5-19. Cluster Communication Daemon (clcomd) AU546.0

Notes:

clcomd basics
The most obvious part of the cluster secure communication facility is the cluster
communication daemon (clcomd). This daemon replaces a number of ad hoc
communication mechanisms with a single facility thus funneling all cluster
communication through one point. This funneling, in turn, makes it feasible to then use
a VPN to actually send the traffic between nodes and to be sure that all the traffic is
going through the VPN.

Improves performance
The clcomd daemon's approach to supporting the verification and synchronization of cluster
configuration changes has an important additional benefit - by eliminating numerous rsh
calls across the cluster during the verification and synchronization operation and
replacing them with a purpose-built facility, the time that it takes to verify and
synchronize a cluster configuration change has improved noticeably.

Other aspects of clcomd's implementation which further improve performance include:
- Caching coherent copies of each node's ODMs, which reduces the amount of
information which must be transmitted across the cluster during a verification
operation
- Maintaining long-term socket connections between nodes, which avoids the need to
constantly create and destroy the short-term sessions that are a natural result of
using rsh and other similar mechanisms
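A quick health check for clcomd might look like the following sketch (the subsystem name clcomdES and port 6191 come from the slide; the exact lssrc and netstat output formats are assumptions about typical AIX output):

```shell
# Return success if the clcomdES subsystem is active and something is
# listening on clcomd's TCP port, 6191.
check_clcomd() {
    lssrc -s clcomdES | grep -q active || return 1
    netstat -an | grep '6191' | grep -q LISTEN
}
```

On a healthy node, `check_clcomd && echo "clcomd up"` should succeed as soon as the inittab entry has run.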


clcomd Standard Connection Authentication


Look for source IP address in:
Special rhosts file: /usr/es/sbin/cluster/etc/rhosts
HACMP adapter ODM
Take the following action:
Block communication if the special rhosts file is missing
Assume new cluster if the special rhosts file is empty
Else use the rhosts file or HACMP Adapter ODM file for
authentication
Authentication done as follows:
Connect back and ask for the hostname
Connection is considered authentic if the hostname matches; otherwise the
connection is rejected
Must be empty if cluster unconfigured
Security hole at installation time


Figure 5-20. clcomd Standard Connection Authentication AU546.0

Notes:

How clcomd authentication works


The clcomd daemon authenticates each in-bound session by checking the session's
source IP address against a list of addresses in /usr/es/sbin/cluster/etc/rhosts and the
addresses configured into the cluster itself (in other words, in the HACMPadapter and
HACMPnode ODM files). In order to defeat any attempt at IP-spoofing (a very
timing-dependent technique which involves faking a session's source IP address), each
non-callback session is checked by connecting back to the source IP address and
verifying who the sender is.
The action taken on a request depends on the state of the /usr/es/sbin/cluster/etc/rhosts
file as shown in the visual. If a cluster node is being moved to a new cluster or if the
entire cluster configuration is being redone from scratch, it may be necessary to empty
/usr/es/sbin/cluster/etc/rhosts or manually populate it with the appropriate IP addresses
for the new cluster.

Security hole at installation time


The empty /usr/es/sbin/cluster/etc/rhosts file provides a window of opportunity between
installation and when HACMP is configured. To further reduce this window it is possible to
edit this file just after the installation if it is felt that this window will be a problem.
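Editing the file right after installation can be scripted; a sketch (the label list passed in is an example — supply your own cluster's boot-time IP labels or addresses):

```shell
# Write one IP label/address per line into an rhosts-style file and
# restrict its permissions. Pass the file path first, then the entries.
populate_rhosts() {  # usage: populate_rhosts <file> <label-or-addr> ...
    file=$1; shift
    for entry in "$@"; do
        echo "$entry"
    done > "$file"
    chmod 600 "$file"
}

# On a cluster node the file would be /usr/es/sbin/cluster/etc/rhosts:
# populate_rhosts /usr/es/sbin/cluster/etc/rhosts bondar-boot hudson-boot
```

Populating the file with only the cluster's own addresses closes the window during which an empty file would accept the first cluster-definition request from anywhere.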


RSCT
Included with AIX (originally part of PSSP for SP systems)
Provides:
Scalability to large clusters
Cluster Failure notification
Coordination of changes
Key components:
Topology Services
Heartbeat services
Group Services
Coordinates/monitors state changes of an application in the cluster
RMC: Resource Monitoring and Control
Provides resource threshold monitoring and allows applications to be
notified of resource state changes
HACMP's Cluster Manager is an RSCT client/application


Figure 5-21. RSCT AU546.0

Notes:

What RSCT provides


RSCT's role in an HACMP cluster is to provide:
- Failure detection and diagnosis for topology components (nodes, networks and
network adapters)
- Notification to the cluster manager of events which it has expressed an interest in -
primarily events related to the failure and recovery of topology components
- Coordination of the recovery actions involved in dealing with the failure and recovery
of topology components (in other words, fallovers, fallbacks and dealing with
individual NIC failures by moving or swapping IP addresses)

HACMP from an RSCT Perspective

[Diagram: the RSCT environment on a cluster node. The AIX process monitor, HA
database resource monitor, and switch resource monitor feed the RSCT RMC manager
(ctrmc); the RMC manager notifies the HACMP cluster manager, whose recovery driver
runs the recovery programs, recovery commands, and HACMP event scripts; RSCT
Topology Services supplies processor and LAN heartbeats and membership
information, and RSCT Group Services coordinates the recovery steps across nodes.]


Figure 5-22. HACMP from an RSCT Perspective AU546.0

Notes:

The RSCT environment


This diagram includes all of the major RSCT components plus the HACMP cluster
manager and event scripts. It also illustrates how they communicate with each other.

Monitors
The monitors in the upper left of the diagram monitor various aspects of the local nodes
state including the status of certain processes (for example, the application if
application monitoring has been configured), database resources and the SP Switch (if
one is configured on the node). These monitors report state changes related to
monitored entities to the RSCT RMC Manager.


Topology Services
Also reporting state changes to the RSCT RMC Manager is the RSCT Topology
Services component which is responsible for failure detection/diagnosis of topology
components and the transmission of any RSCT-related messages between cluster
nodes.

Group Services
Associated with RSCT Topology Services is the RSCT Group Services daemon which
is responsible for coordinating and monitoring changes to the state of an application
running on multiple nodes. In the HACMP context, the application running on multiple
nodes is the HACMP cluster manager.

RMC Manager
The RSCT RMC Manager receives notification of events from the monitors and from
RSCT Topology Services. It analyzes these events and notifies RSCT clients of those
events which they have expressed an interest in.
The HACMP cluster manager, an RSCT client, registers itself with both the RSCT RMC
Manager and the RSCT Group Services components. The key interface is with the
RSCT RMC Manager which notifies the HACMP Cluster Manager of events that the
Cluster Manager has told the RMC Manager that it is interested in (for example, node
failures).

Cluster Manager
Once an event has been reported to the HACMP Cluster Manager, it responds using
HACMP's recovery commands and event scripts. The scripts are coordinated via the
RSCT Group Services component.

Heartbeat Rings

[Diagram: five interfaces, 25.8.60.2 through 25.8.60.6, arranged in a ring.]

Heartbeat packets flow one way around the ring, in order of high to low IP address.


Figure 5-23. Heartbeat Rings AU546.0

Notes:

RSCT Topology Services functions


The RSCT Topology Services component is responsible for the detection and diagnosis
of topology component failures. As discussed in the networking unit, the mechanism
used to detect failures is to send heartbeat packets between interfaces. Rather than
send heartbeat packets between all combinations of interfaces, the RSCT Topology
Services component sorts the IP addresses of the interfaces on a given logical IP
subnet and then arranges to send heartbeats in a round robin fashion from high to low
IP addresses in the sorted list.

Example
For example, the IP addresses in the foil can be sorted as 25.8.60.6, 25.8.60.5,
25.8.60.4, 25.8.60.3 and 25.8.60.2. This ordering results in the following heartbeat path:
25.8.60.6 --> 25.8.60.5-->25.8.60.4-->25.8.60.3-->25.8.60.2-->25.8.60.6
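The same ordering can be reproduced with a numeric sort on each octet; a small sketch using the addresses from the example:

```shell
# Sort the example interface addresses from high to low, octet by octet,
# to recover the heartbeat ring order; the last address wraps back to
# the first to close the ring.
addrs="25.8.60.2 25.8.60.6 25.8.60.3 25.8.60.5 25.8.60.4"
printf '%s\n' $addrs | sort -t. -k1,1nr -k2,2nr -k3,3nr -k4,4nr
# prints:
# 25.8.60.6
# 25.8.60.5
# 25.8.60.4
# 25.8.60.3
# 25.8.60.2
```

Note that a plain lexical sort would misorder addresses with mixed octet widths (for example 25.8.60.9 versus 25.8.60.10), which is why each octet is compared numerically.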


HACMP's SNMP Support


HACMP uses SNMP to provide:
Notification of cluster events
Cluster configuration/state information
Support in HACMP 5.3 is provided by the cluster manager
A client (smux peer) of AIX's snmpdv3
Support consists of:
Maintaining a management information base (MIB)
Responding to SNMP queries for HACMP information
Generating SNMP traps
Used by clinfoES and HAView
Available to any snmp manager and the snmpinfo command


Figure 5-24. HACMP's SNMP Support AU546.0

Notes:

HACMP support of SNMP


In HACMP 5.3, SNMP support is provided by the cluster manager component, which
acts as a SMUX peer of the AIX snmpd daemon. This allows the cluster to be monitored
via SNMP queries and SNMP traps. In addition, HACMP includes an extension to the
Tivoli NetView product called HAView, which can be used to make Tivoli NetView
HACMP-aware. The clinfo daemon, as well as any SNMP manager and the snmpinfo
command, can interface with this support. This is discussed in more detail in the course
HACMP Administration II: HACMP Administration and Problem Determination.

Cluster Information Daemon (clinfo)
An SNMP-aware client to the cluster manager
Provides
A cluster information API to the HACMP SNMP manager
Focused on providing HACMP cluster information
Easier to work with than the SNMP APIs
Support for ARP cache issues
Is used by:
The clstat command
Customer written utility/monitoring tools
Implemented as the clinfoES subsystem


Figure 5-25. Cluster Information Daemon (clinfo) AU546.0

Notes:

What the clinfo daemon provides


The clinfo daemon provides an interface (covered in Unit 3) for dealing with ARP cache
related issues as well as an Application Program Interface (API) which can be used to
write C and C++ programs which meet customer-specific needs related to monitoring
the cluster.

Where clinfo runs


The clinfo daemon can run on HACMP cluster server nodes or on any machine which
has the clinfo code installed.

Clinfo is required for some status commands


Clinfo must be running on a node or client machine in order to use any of the clstat
related commands (clstat, xclstat, clstat.cgi)


Starting Clinfo
Starting Clinfo on an HACMP server node:
The clinfo daemon can be started in a number of ways (see the HACMP
Administration Guide) but probably the best way is to start it along with the rest of
the HACMP daemons by setting the Startup Cluster Information Daemon? field to
true when using the smit Start Cluster Services screen (which will be discussed in
the next unit).
Starting Clinfo on a Client:
Use the /usr/es/sbin/cluster/etc/rc.cluster script or the standard AIX startsrc
command (startsrc -s clinfoES) to start clinfo on a client.
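Starting and then confirming clinfo on a client can be wrapped as in this sketch (the subsystem name clinfoES comes from the text; the exact lssrc output format is an assumption about typical AIX SRC output):

```shell
# Start the clinfoES subsystem and confirm the SRC reports it active.
start_clinfo() {
    startsrc -s clinfoES || return 1
    lssrc -s clinfoES | grep -q active
}
```

Once `start_clinfo` succeeds, clstat and the other clinfo-based monitors on that machine have a daemon to talk to.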

Highly Available NFS Server Support
Cluster administrator can:
Define NFS exports at directory level to all clients
Define NFS mounts and network to HACMP nodes
Specify export options for HACMP to set
HACMP preserves file locks and dupcache across fallovers
Limitations
Lock support is limited to two node clusters
Resource group is only active on one node at a time


Figure 5-26. Highly Available NFS Server Support AU546.0

Notes:

HACMP NFS Support


The HACMP software provides the following availability enhancements to NFS
operations:
- Reliable NFS server capability that allows a backup processor to recover current
NFS activity should the primary NFS server fail, preserving the locks on NFS
filesystems and the duplicate request cache
- Ability to specify a network for NFS mounting
- Ability to define NFS exports and mounts at the directory level
- Ability to specify export options for NFS-exported directories and filesystems

Limitations
- The locking function is available only for two-node clusters
- The resource group must behave as non-concurrent - active on one node at a time


Shared External Disk Access


Provides two types of shared disk support:
Serially reusable shared disks:
Varied on by one node at a time under the control of HACMP
LVM or RSCT ensures no access by 2 nodes at once
Two types of volume groups:
non-concurrent mode
Enhanced Concurrent Mode running in non-concurrent mode
Concurrent access shared disks:
Used by concurrent applications writing to raw logical volumes
Two types of volume groups:
SSA Concurrent Mode Volume Groups
Enhanced Concurrent Mode running in concurrent mode
clvm fileset required for Concurrent/Enhanced Concurrent


Figure 5-27. Shared External Disk Access AU546.0

Notes:

Shared disk support


As you know by now, HACMP supports shared disks. See the shared storage unit for
more information on HACMP's shared external disk support. Recall that enhanced
concurrent mode can be used in a non-concurrent mode to provide heartbeat over disk
and fast disk takeover for resource group policies where the resource group is active on
only one node at a time.
Note that the clvm fileset is required for enhanced concurrent support even if using it in
non-concurrent mode.
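Whether that fileset is present can be checked with lslpp; a sketch (the fileset name bos.clvm.enh is an assumption about the usual AIX packaging of enhanced concurrent LVM — substitute whatever clvm fileset your AIX level ships):

```shell
# Return success if the enhanced concurrent LVM fileset is installed.
have_clvm() {
    lslpp -L bos.clvm.enh > /dev/null 2>&1
}

# Typical use before creating an enhanced concurrent volume group:
# have_clvm || echo "install the clvm fileset first"
```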

Checkpoint
1. What component detects an adapter failure?
a. Cluster manager
b. RSCT
c. clcomd
d. clinfo
2. What component provides SNMP information?
a. Cluster manager
b. RSCT
c. clsmuxpd
d. clinfo
3. What component is required for clstat to work?
a. Cluster manager
b. RSCT
c. clcomd
d. clinfo

4. What component removes the requirement for the /.rhosts file?


a. Cluster manager
b. RSCT
c. clcomd
d. clinfo


Figure 5-28. Checkpoint AU546.0

Notes:


Unit Summary

Having completed this unit, you should be able to:


State where installation fits in the implementation process
Describe how to install HACMP 5.3
List the prerequisites for HACMP 5.3
Describe the installation process for HACMP 5.3
List and explain the purpose of the major HACMP 5.3
components


Figure 5-29. Unit Summary AU546.0

Notes:

Unit 6. Initial Cluster Configuration

What This Unit Is About


In this unit, you will learn how to configure a cluster using the SMIT
HACMP interface. You will learn how to perform simple and more
advanced cluster configuration. You will also learn how and when to
verify and synchronize your cluster.

What You Should Be Able to Do


After completing this unit, you should be able to:
Configure a standby HACMP 5.3 cluster
- Use the 2 node Configuration Assistant
- Use the standard path
Configure a Mutual Takeover HACMP 5.3 cluster
Configure Topology to include:
- IP Address Takeover via alias
- Non-IP networks (rs232, diskhb)
- Persistent address
Verify, synchronize and test a cluster
Start and stop cluster services
Save a cluster configuration

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4861-06 HACMP for AIX, Version 5.3 Planning and Installation
Guide
SC23-4862-06 HACMP for AIX, Version 5.3 Administration Guide
SC23-5177-00 HACMP for AIX, Version 5.3 Troubleshooting Guide


Unit Objectives
After completing this unit, you should be able to:

Configure a standby HACMP 5.3 cluster


Use the two-node cluster configuration assistant
Use the standard path
Configure a mutual takeover HACMP 5.3 cluster
Configure HACMP topology to include:
IP address takeover via alias
Non-IP networks (rs232,diskhb)
Persistent address
Verify, synchronize and test a cluster
Start and stop cluster services
Save cluster configuration


Figure 6-1. Unit Objectives AU546.0

Notes:

Objectives
This unit will show how to configure a two-node mutual takeover HACMP cluster with a
heartbeat over disk and rs232 non-IP network. It will then demonstrate how to startup
and shutdown HACMP, modify the configuration of the cluster by adding and deleting
nodes, adding and deleting resource groups, implementing hardware address takeover
and configuring target-mode SSA non-IP networks.

What We Are Going to Achieve?

Now: A two node hot standby configuration


resource group xwebserver_group with bondar as its home
(primary) node and hudson as its backup node

bondar hudson

X X

Then: A two node hot mutual takeover configuration


2nd resource group with hudson as its home (primary) node
and bondar as its backup node

Figure 6-2. What We Are Going to Achieve? AU546.0

Notes:

First we will configure a standby configuration


During this course, you and your team will configure a two-node cluster, initially with one
resource group in a hot standby mode.
The X in the figure represents the application xwebserver and the arrow represents
what happens on a fallover.

What happens next


The cluster will then be expanded to add a second resource group in a mutual takeover
configuration.
The cluster will be tested for reaction to node, network, and network adapter failure and
later in the week we will also configure additional features including NFS export and
cross-mount.


Where Are We in the Implementation?


✓ Plan for network, storage, and application
Eliminate single points of failure
✓ Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP)
Application start and stop scripts
✓ Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks
Resources, resource group, attributes:
Resources: Application Server, service label
Resource group: Identify name, nodes, policies
Attributes: Application Server, service label, VG, filesystem
Synchronize
Start HACMP
Test configuration
Save configuration

Figure 6-3. Where Are We in the Implementation? AU546.0

Notes:

Ready for configuration


Now that the HACMP filesets are installed and the systems have been rebooted, we
can start to configure HACMP.

Where do we go from here


First, as mentioned on the previous visual, we will configure a standby configuration
with one application using the Two-Node Cluster Configuration Assistant. We will
compare this to what has to be done when using the Standard Configuration method.
Then we will use the standard path to configure a second resource group to create a
mutual takeover configuration.
Finally, in this topic, we will use the extended path to deal with some initial configuration
choices that cannot be done with the standard path.

The Topology Configuration
Here's the key portion of the /etc/hosts file that we'll be using in this
unit:
192.168.15.29 bondar-if1 # bondar's first interface IP label
192.168.16.29 bondar-if2 # bondar's second interface IP label
192.168.5.29 bondar-per # persistent node IP label on bondar
192.168.15.31 hudson-if1 # hudson's first interface IP label
192.168.16.31 hudson-if2 # hudson's second interface IP label
192.168.5.31 hudson-per # persistent node IP label on hudson
192.168.5.92 cxweb # the IP label for the application
# normally resident on bondar
Hostnames: bondar, hudson
bondar's network configuration (defined via smit chinet) :
en0 - 192.168.15.29
en1 - 192.168.16.29
hudson's network configuration:
en0 - 192.168.15.31
en1 - 192.168.16.31
These network interfaces are all connected to the same
physical network
The subnet mask is 255.255.255.0 on all networks/NICs
An enhanced concurrent mode volume group "ecmvg" has
been created to support the xweb application and will be
used for a disk non-IP heartbeat network

Figure 6-4. The Topology Configuration AU546.0

Notes:

A sample network configuration


Every discussion must occur within a particular context. The above network
configuration is the context within which the first phase of this unit will occur. Please
refer back to this page as required over the coming visuals.
It is important to physically identify which enX corresponds to each interface address in
order to properly attach the Ethernet cables to the node(s).
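Before configuring HACMP, it is worth sanity-checking the name resolution entries. The sketch below writes the /etc/hosts fragment from the figure to a temporary file (for illustration only; on the real nodes the entries live in /etc/hosts) and verifies that each cluster IP label appears exactly once, since a duplicated or missing label will cause confusing failures later:

```shell
# Write the cluster's host entries (from the figure) to a scratch file.
cat > /tmp/cluster_hosts <<'EOF'
192.168.15.29 bondar-if1
192.168.16.29 bondar-if2
192.168.5.29 bondar-per
192.168.15.31 hudson-if1
192.168.16.31 hudson-if2
192.168.5.31 hudson-per
192.168.5.92 xweb
EOF

# Each label should resolve to exactly one address.
for label in bondar-if1 bondar-if2 bondar-per \
             hudson-if1 hudson-if2 hudson-per xweb
do
    count=$(awk -v l="$label" '$2 == l' /tmp/cluster_hosts | wc -l)
    if [ "$count" -eq 1 ]; then
        echo "$label: OK"
    else
        echo "$label: appears $count times"
    fi
done
```

On the cluster nodes themselves, you would point the same loop at /etc/hosts on each node and confirm the files agree.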


Configuration Methods
HACMP provides two menu paths with three methods to
configure topology and resources:
Initialization and Standard Configuration
Two-node cluster configuration assistant
Does everything (Topology, Resources, Resource Group)
Only supports a 2 node standby cluster
Standard configuration
Topology done in one step
You then must configure resource groups and synchronize
Extended Configuration
More flexible but with all the options


Figure 6-5. Configuration Methods AU546.0

Notes:

Configuration methods
- Standard Configuration
With this method you must do the following:
i. Topology (simplified via Add Nodes to an HACMP Cluster)
ii. Configure Resources and Resource Groups
iii. Verify and Synchronize
- Two-Node Cluster Configuration Assistant
With this method, all the steps of Standard Configuration are done at once including
adding a non-IP disk heartbeat network if you created an enhanced concurrent
volume group.
- Extended Configuration
With this method you follow similar steps as the Standard Configuration but
Topology has more steps and there are many more options. Some options can only
be done using this method such as adding a non-IP network.

Plan Two-Node Configuration Assistant
Plan communication path to the other node = hudson-if1
The local node (bondar) will be the home (primary) node
Plan Application Server name = xwebserver
Used to name the cluster and the resource group
Ensure Application Server start and stop scripts exist and
are placed on bondar (where the assistant will be run from):

/mydir/xweb_start
/mydir/xweb_stop

Plan service IP Label = xweb


Recommended: an enhanced concurrent volume group


Figure 6-6. Plan Two-Node Configuration Assistant AU546.0

Notes:

Setting up for using the two-node Configuration Assistant


We start with the Two-Node Cluster Configuration Assistant. Ensure that the
start and stop scripts are placed on bondar, and choose an interface
name/address for the other node (hudson). As you will see shortly, the application
server name will be used to generate the HACMP names for the cluster and resource
group.
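The course does not show the contents of /mydir/xweb_start and /mydir/xweb_stop. Below is a minimal sketch of the shape such scripts usually take; the echo lines stand in for the real (hypothetical) application commands, and the sketch writes under /tmp/mydir so it is self-contained:

```shell
# Minimal start/stop script skeletons. HACMP invokes these as root
# with no terminal attached; the exit status tells HACMP whether the
# event succeeded, so exit 0 only on success.
mkdir -p /tmp/mydir

cat > /tmp/mydir/xweb_start <<'EOF'
#!/bin/sh
echo "starting xweb"   # replace with the real application start command
exit 0
EOF

cat > /tmp/mydir/xweb_stop <<'EOF'
#!/bin/sh
echo "stopping xweb"   # replace with the real application stop command
exit 0
EOF

chmod +x /tmp/mydir/xweb_start /tmp/mydir/xweb_stop
/tmp/mydir/xweb_start    # prints: starting xweb
```

In practice the scripts must also tolerate being run when the application is already up (start) or already down (stop), since HACMP may call them in either state.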


Starting at the Very Beginning . . .


System Management
Move cursor to desired item and press Enter.
Software Installation and Maintenance
Software License Management
Devices
System Storage Management (Physical & Logical Storage)
Security & Users
Communications Applications and Services
Print Spooling
Problem Determination
Performance & Resource Scheduling
System Environments
Processes & Subsystems
Applications
Installation Assistant
Cluster System Management
Using SMIT (information only)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-7. Starting at the Very Beginning... AU546.0

Notes:

Starting at the main smit menu


Here's the top-level smit screen, which everyone familiar with AIX will know. HACMP
is under the Communications Applications and Services selection because,
presumably, it is considered to be primarily a networking or communications-related
product.
More often, this menu (and the one shown on the next visual) would be skipped by
entering the command smit hacmp or smitty hacmp, but for the sake of completeness
we will start at the beginning of smit.

Almost There . . .

Communications Applications and Services


Move cursor to desired item and press Enter.
TCP/IP
NFS
HACMP for AIX

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-8. Almost There... AU546.0

Notes:

HACMP shows up on the Communications smit menu


Once you've installed the HACMP software, a new menu selection appears in the
Communications Applications and Services menu.


The Top-Level HACMP smit Menu


# smit hacmp

HACMP for AIX

Move cursor to desired item and press Enter.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-9. The Top-Level HACMP smit Menu AU546.0

Notes:

The main HACMP smit menu


This is the top-level HACMP smit menu. You'll often find it simplest to get here using the
smit fastpath shown above.
As implied by the # prompt, there is little point in being here if you don't have root
privileges!

The Standard Configuration Menu
Initialization and Standard Configuration

Move cursor to desired item and press Enter.

Configuration Assistants
Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-10. The Standard Configuration Menu AU546.0

Notes:

The Installation and Standard Configuration menu


HACMP provides a "playpen" model for doing configuration work on
the cluster. Configuration changes made using the HACMP smit screens do not take
effect until they are verified and synchronized (see the third selection from the bottom of
this menu). Instead, they are kept on the node from which the configuration work is
performed. During synchronization, the files are propagated to the other nodes and will,
in most cases, cause HACMP to be dynamically reconfigured. More about dynamic
reconfiguration appears in a later unit.
Note, however, that the Two-Node Cluster Configuration Assistant does do the
synchronization step.


Caution
If changes are made on one node but not synchronized, and more changes are then
made on a second node and synchronized, the changes made on the first
node are lost.
If you want to avoid losing work, make sure that you don't flip back and forth between
nodes while doing configuration work (that is, work on only one node, at least until
you've synchronized your changes).
Also be careful of having two people work on two nodes at the same time. The first
synchronization will wipe out the second person's changes.

Recommendation
Pick one of your cluster nodes to be the one node that you use to make changes.

Configuration Assistants
Besides the Two-Node Cluster Configuration Assistant, HACMP 5.3 provides, via an
additional feature, assistants for WebSphere, Oracle, and DB2.

Two-Node Cluster Configuration Assistant
Two-Node Cluster Configuration Assistant
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Communication Path to Takeover Node [hudson-if1] +
* Application Server Name [xwebserver]
* Application Server Start Script [/mydir/xweb_start]
* Application Server Stop Script [/mydir/xweb_stop]
* Service IP Label [xweb] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-11. Two-Node Cluster Configuration Assistant AU546.0

Notes:

The Two-Node Cluster Configuration Assistant smit menu


Well, here is the menu that we planned for a few pages back. If your network is set up
correctly and you have configured a shared enhanced concurrent mode volume group,
then HACMP will use this menu to build a complete two-node cluster, including
topology, resources, a resource group, and a non-IP network using heartbeat over disk.
Synchronization is also done, and you are all ready to start cluster services on both
nodes.


Let's See What We've Done


# /usr/es/sbin/cluster/utilities/cltopinfo
Cluster Name: xwebserver_cluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 2 network(s) defined
NODE bondar:
Network net_ether_01
bondar-if1 192.168.15.29
bondar-if2 192.168.16.29
Network net_diskhb_01
bondar_hdisk5_01 /dev/hdisk5
NODE hudson:
Network net_ether_01
hudson-if1 192.168.15.31
hudson-if2 192.168.16.31
Network net_diskhb_01
hudson_hdisk5_01 /dev/hdisk5
Resource Group xwebserver_group
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node in List
Fallback Policy Never Fallback
Participating Nodes bondar hudson
Service IP Label xweb


Figure 6-12. Let's See What We've Done AU546.0

Notes:

Seeing what was done


One utility that displays what was done is the cltopinfo command. This command
displays the cluster's topology. Notice that each node's IP labels on the physical
Ethernet have been defined on the net_ether_01 HACMP network. The non-IP diskhb
network was also configured and appears with communication devices (/dev/hdisk5) on
each of the two nodes. Notice which policies you get automatically configured when
using this approach.
Another utility is cldisp. This command shows what is configured from the application
point of view.
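Because cltopinfo produces plain text, it is easy to post-process in scripts. As an illustration, the sketch below pulls the HACMP network names out of a saved copy of the output (the sample here is an abbreviated copy of the output in the figure; on a real node you would capture `cltopinfo > /tmp/cltopinfo.out` instead):

```shell
# Abbreviated copy of the cltopinfo output shown in the figure.
cat > /tmp/cltopinfo.out <<'EOF'
Cluster Name: xwebserver_cluster
NODE bondar:
Network net_ether_01
bondar-if1 192.168.15.29
Network net_diskhb_01
bondar_hdisk5_01 /dev/hdisk5
NODE hudson:
Network net_ether_01
hudson-if1 192.168.15.31
Network net_diskhb_01
hudson_hdisk5_01 /dev/hdisk5
EOF

# Print each distinct HACMP network name once.
awk '/^Network/ { print $2 }' /tmp/cltopinfo.out | sort -u
```

A check like this makes a quick test that both the IP network and the diskhb network came out of the assistant as expected.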

Points to observe


- The Two-Node Configuration Assistant did everything: created topology objects,
including a non-IP heartbeat-over-disk network when it saw an enhanced concurrent
volume group, created the resource group, and verified and synchronized the cluster.
- The Two-Node Configuration Assistant assigns names, so you will have to decide if
you like them.
- The assistant also assigns to HACMP all the network adapters it finds. You may have to
remove any interfaces that you don't want HACMP to use.
- Only one application and two nodes are supported.
- You need to pre-configure the shared volume group. If it is enhanced concurrent
mode, then a non-IP heartbeat-over-disk network is configured. Otherwise you are on
your own to configure a non-IP network.
- The Fallback policy is set to Never Fallback.


The Standard Configuration Method


Initialization and Standard Configuration

Move cursor to desired item and press Enter.

Configuration Assistants
Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-13. The Standard Configuration Method AU546.0

Notes:

Standard Configuration Menu


If the Two-Node Cluster Configuration Assistant is not appropriate, then one alternative
may be the Standard Configuration method. The topology is simplified here and you
have more control over names but you have to do the work of building the resource
groups and synchronizing.
The starting place is to choose Add Nodes to an HACMP Cluster.

Add Nodes to an HACMP Cluster
Configure Nodes to an HACMP Cluster (standard)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Cluster Name [mycluster]

New Nodes (via selected communication paths) [bondar-if1 hudson-if1] +


Currently Configured Node(s)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-14. Add Nodes to an HACMP Cluster AU546.0

Notes:

Input for the standard configuration method


Assuming your network planning and setup were done correctly, you need only decide on
a name for the cluster and choose one IP interface for each node that will be in the
cluster, including the node where you see this screen.
Notice that you can select the interfaces from a pick list and that, at this point,
the Currently Configured Node(s) field is empty.


What Did We Get


# /usr/es/sbin/cluster/utilities/cltopinfo
Cluster Name: mycluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 2 network(s) defined
NODE bondar:
Network net_ether_01
bondar-if1 192.168.15.29
bondar-if2 192.168.16.29

NODE hudson:
Network net_ether_01
hudson-if1 192.168.15.31
hudson-if2 192.168.16.31

No resource groups defined


Figure 6-15. What Did We Get AU546.0

Notes:

Output from standard configuration


Notice that there is no non-IP network and there are no resources and no resource
groups yet when using the standard configuration method.

What Is Left To Do?
Initialization and Standard Configuration

Move cursor to desired item and press Enter.

Configuration Assistants
Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-16. What Is Left To Do? AU546.0

Notes:

Not done yet


Since Add Nodes to an HACMP Cluster only does the topology, there is more to do
when using the Standard Configuration method:
- Application Server and Service Address resources must be created using
Configure Resources to Make Highly Available
- A resource group with policies and attributes must be created using Configure
HACMP Resource Groups
- The Extended Configuration method must be used to add non-IP heartbeat network(s)
- The cluster definitions must be propagated to the other node(s) using Verify and
Synchronize


Game plan
We are going to assume for the moment that these additional steps were done in order
to avoid duplication in the lecture material. We will show the menus for these steps as
part of defining the second resource group for the mutual takeover environment.
For the moment we will assume the Two-Node Cluster Configuration Assistant was
used to accomplish these additional steps.
So, now we have a cluster configured.

Where Are We in the Implementation?
✓ Plan for network, storage, and application
Eliminate single points of failure
✓ Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP)
Application start and stop scripts
✓ Install the HACMP filesets and reboot
✓ Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks
Resources, resource group, attributes:
Resources: Application Server, service label
Resource group: Identify name, nodes, policies
Attributes: Application Server, service label, VG, filesystem
Synchronize
Start HACMP
Test configuration
Save configuration

Figure 6-17. Where Are We in the Implementation? AU546.0

Notes:

Standby configuration is implemented


Wow, all done except for starting HACMP.


Starting Cluster Services (1 of 4)


HACMP for AIX

Move cursor to desired item and press Enter.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

Figure 6-18. Starting Cluster Services (1 of 4) AU546.0

Notes:

How to start HACMP Cluster Services


Starting HACMP involves a trip to the top-level HACMP menu, since we need to go
down into the System Management (C-SPOC) part of the tree. C-SPOC will be covered
in more detail in the next unit.
It is worth pointing out that if you use the Web-based smit for HACMP fileset, there
is a navigation menu that allows you to skip from one menu path to another
without having to go back to the top.
After a few times, you will probably learn to use the command smit clstart or smitty
clstart to bypass this menu and the next two menus.

Starting Cluster Services (2 of 4)
System Management (C-SPOC)

Move cursor to desired item and press Enter.


Manage HACMP Services
HACMP Communication Interface Management
HACMP Resource Group and Application Management
HACMP Log Viewing and Management
HACMP Security and Users Management
HACMP Logical Volume Management
HACMP Concurrent Logical Volume Management
HACMP Physical Volume Management

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

Figure 6-19. Starting Cluster Services (2 of 4) AU546.0

Notes:

The C-SPOC menu


The Manage HACMP Services choice is where we want to go next.


Starting Cluster Services (3 of 4)


Manage HACMP Services
Move cursor to desired item and press Enter.

Start Cluster Services


Stop Cluster Services
Show Cluster Services

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-20. Starting Cluster Services (3 of 4) AU546.0

Notes:

The Manage HACMP Services menu


We're almost there...

Starting Cluster Services (4 of 4)
# smit clstart
Start Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both now +
Start Cluster Services on these nodes [bondar,hudson] +
BROADCAST message at startup? true +
Startup Cluster Information Daemon? true +
Reacquire resources after forced down ? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-21. Starting Cluster Services (4 of 4) AU546.0

Notes:

Startup choices
There are a few choices to make. For the moment, we recommend the defaults,
except for selecting both nodes and turning on the Cluster Information Daemon. The
options are discussed in more detail in the next unit.

Remember the fast path


Notice the smit clstart fastpath. This is often much faster than working your way
through the menu tree.


Stopping Cluster Services


# smit clstop

Stop Cluster Services


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [bondar] +
BROADCAST cluster shutdown? true +
* Shutdown mode graceful +
+--------------------------------------------------------------------------+
Shutdown mode

Move cursor to desired item and press Enter.

graceful
takeover
forced

F1=Help F2=Refresh F3=Cancel
F1 F8=Image F10=Exit Enter=Do
F5 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-22. Stopping Cluster Services AU546.0

Notes:

Stopping HACMP Cluster Services on one node


We're not actually going to stop HACMP right now, but here's the screen to do it from.
Note that the clstrmgrES subsystem continues to run after cluster services are stopped;
this is new in HACMP 5.3.
The fields on this screen present us with a number of options. For the moment we will
use the defaults. A more detailed discussion is included in the next unit. With the
default options, there is no fallover of the resource groups currently running on this
node.

Notice the fast path


Notice the smit clstop fastpath. This is often much faster than working your way
through the menu tree.

Configuring Mutual Takeover
Now let's add a second resource group called adventure,
which uses hudson as the primary node and bondar as the
backup.
(Diagram: application X runs on bondar and falls over to hudson; application A runs on
hudson and falls over to bondar)


Figure 6-23. Configuring Mutual Takeover AU546.0

Notes:

Ready for configuring a mutual takeover environment


We've got a simple two-node cluster. It is now time to add a second resource group.
The figure shows the new A application running normally on the hudson node but
falling over to the bondar node. Each application in a mutual takeover environment
normally runs on its own node, except when there is a fallover, in which case both run
on the same node. You must plan, therefore, for the case that both applications may be
running on the same node.
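That planning can start as a back-of-envelope capacity check: will one node carry both applications at once? The sketch below illustrates the arithmetic; every sizing figure in it is an assumption for illustration, to be replaced with real measurements from your own nodes:

```shell
# Back-of-envelope check that one node can carry both applications
# after a fallover. Every figure here is an assumption.
node_mem_mb=4096      # memory per node (assumed)
xweb_mem_mb=1200      # xweb working set (assumed)
yweb_mem_mb=1500      # yweb working set (assumed)
os_mem_mb=512         # AIX + HACMP overhead (assumed)

needed=$((xweb_mem_mb + yweb_mem_mb + os_mem_mb))
if [ "$needed" -le "$node_mem_mb" ]; then
    echo "fallover fits: ${needed} MB needed of ${node_mem_mb} MB"
else
    echo "fallover does NOT fit: ${needed} MB needed of ${node_mem_mb} MB"
fi
```

The same reasoning applies to CPU, paging space, and adapter bandwidth: size each node for the combined load, not just its normal load.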

Real life issues


In real life, you would almost certainly stop before adding the second resource group,
do some serious testing of the xwebserver resource group, and save the
configuration. In this unit, however, we will cover testing and saving the configuration
after configuring the mutual takeover environment.


Prepare for Adventure RG Resources


Create start, stop scripts for yweb
Create ywebvg, filesystem, add yweb service label to /etc/hosts
Run discovery
Extended Configuration
Move cursor to desired item and press Enter.
Discover HACMP-related Information from Configured Nodes
Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets
Extended Verification and Synchronization
HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-24. Prepare for Adventure RG Resources AU546.0

Notes:

You may need to run Discovery


If you did not prepare for the second resource group before starting the first resource
group, then you need to re-generate the pick lists. This requires using the Extended
Configuration menu shown above. This applies to network objects as well as LVM
objects.

Configure the Resources
smit hacmp -> Initialization and Standard Configuration

Configure Resources to Make Highly Available

Move cursor to desired item and press Enter.

Configure Service IP Labels/Addresses


Configure Application Servers
Configure Volume Groups, Logical Volumes and Filesystems
Configure Concurrent Volume Groups and Logical Volumes

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-25. Configuring the Resources AU546.0

Notes:

The plan for creating the second resource group


We're about to define our second resource group. This time we will use the standard
menus, which are listed below the Two-Node Configuration Assistant selection. Some
people prefer to define the resources first and then define the resource group; others
prefer to define the resource group before defining the resources.
We're going to start by defining the resources first. This menu selection can be found
using the smit path: smit hacmp -> Initialization and Standard Configuration ->
Configure Resources to Make Highly Available.
We will continue to use IPAT via aliasing, as we did with the first resource group.


Adding the Second Service IP Label

Configure Service IP Labels/Addresses


Move cursor to desired item and press Enter.

Add a Service IP Label/Address


Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-26. Adding the Second Service IP Label AU546.0

Notes:

The standard configuration path menu


This is the menu for managing service IP labels and addresses within the standard
configuration path.

Adding Adventure Service Label (1 of 3)
Add a Service IP Label/Address (standard)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* IP Label/Address [] +
* Network Name [] +
+--------------------------------------------------------------------------+
IP Label/Address

Move cursor to desired item and press Enter.

(none) ((none))
bondar (192.168.5.29)
hudson (192.168.5.31)
yweb (192.168.5.70)
xweb (192.168.5.92)

F1=Help F2=Refresh F3=Cancel
F1 F8=Image F10=Exit Enter=Do
F5 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-27. Adding Adventure Service Label (1 of 3) AU546.0

Notes:

Selecting the service label


This is the HACMP smit screen for adding a service IP label in the standard
configuration path.
The popup for the IP Label/Address field gives us a list of the IP labels which were
found in /etc/hosts but not associated with NICs.This could be quite a long list
depending on how many entries there are in the /etc/hosts file. Although, in practice, the
list is fairly short as /etc/hosts on cluster nodes tends to only include IP labels which are
important to the cluster.
The service IP label that we intend to associate with the adventure resource groups
application is yweb.
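As an illustrative sketch (addresses taken from the popup in the previous visual), the /etc/hosts entries that feed this pick list might look like the following fragment:

```
# /etc/hosts fragment (illustrative) -- IP labels known to the cluster.
# Labels not currently bound to a NIC are offered as service IP labels.
192.168.5.29   bondar
192.168.5.31   hudson
192.168.5.70   yweb
192.168.5.92   xweb
```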


Adding Adventure Service Label (2 of 3)


Add a Service IP Label/Address (standard)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* IP Label/Address [yweb] +
* Network Name [] +

+--------------------------------------------------------------------------+
Network Name

Move cursor to desired item and press Enter.

net_ether_01 (192.168.15.0/24 192.168.16.0/24)

F1=Help F2=Refresh F3=Cancel
F1 F8=Image F10=Exit Enter=Do
F5 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-28. Adding Adventure Service Label (2 of 3) AU546.0

Notes:

Choosing the network name


Next, we need to specify which network the service IP label will be used on. The
automatically generated network names are a bother to type, so we've used the popup
list, which contains the only IP network defined on this cluster.
Notice that the popup list entry names the network and indicates the IP subnets
associated with it. This is potentially useful information at this point, as we must
specify a service IP label which is not in either of these subnets in order to satisfy
the rules for IPAT via IP aliasing.
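This subnet rule can be checked mechanically. Here is a minimal sketch in plain shell (the function names are our own for illustration, not HACMP utilities); it confirms that a candidate service address such as yweb's 192.168.5.70 falls in neither boot subnet:

```shell
# ip_to_int ADDR -- print a dotted-quad IPv4 address as a 32-bit integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# same_subnet A B BITS -- succeed (exit 0) when A and B share a /BITS subnet.
same_subnet() {
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# The service label must NOT share a subnet with net_ether_01's boot subnets:
if same_subnet 192.168.5.70 192.168.15.0 24 ||
   same_subnet 192.168.5.70 192.168.16.0 24
then
    echo "conflict: pick a different service address"
else
    echo "ok: valid for IPAT via aliasing"
fi
```

Running this prints "ok: valid for IPAT via aliasing", since 192.168.5.70 masks to 192.168.5.0, which matches neither 192.168.15.0 nor 192.168.16.0.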

Adding Adventure Service Label (3 of 3)
Add a Service IP Label/Address (standard)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* IP Label/Address [yweb] +
* Network Name [net_ether_01] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-29. Adding Adventure Service Label (3 of 3) AU546.0

Notes:

Menu filled in
This screen shows the parameters for the Adventure resource group's service IP label.
Once we're sure that this is what we intend to do, press Enter to define the service IP
label. The label is then available from a pick list when you add resources to a resource
group later.


Add Adventure Application Server (1 of 2)

Configure Application Servers


Move cursor to desired item and press Enter.

Add an Application Server


Change/Show an Application Server
Remove an Application Server

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-30. Adding Adventure Application Server (1 of 2) AU546.0

Notes:

Configuring the Application Server resource


We've now got to define an application server for the adventure resource group. This
Configure Application Servers menu appears under the Configure Resources to
Make Highly Available menu in the standard configuration path.

Add Adventure Application Server (2 of 2)
Add Application Server
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Server Name [ywebserver]
* Start Script [/usr/local/scripts/startyweb]
* Stop Script [/usr/local/scripts/stopyweb]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-31. Adding Adventure Application Server (2 of 2) AU546.0

Notes:

Filling out the Add Application Server menu


An application server has a name and consists of a start script and a stop script. Use
full paths for the script names. The server name is then available from a pick list when
adding resources to a resource group later.

Review of start and stop scripts


The start script is invoked by HACMP when it needs to start the application. The stop
script is invoked when HACMP needs to stop the application (typically as part of a
fallback to a higher priority node).
The start script should first verify that all the required resources are actually available
and log a clear and useful message if it detects a problem. If the start script doesn't
check for the required resources, then the application might appear to function for quite
some time before someone realizes that a critical resource isn't available.


The start script should then start the application, and it should probably wait until it is
sure that the application has completely started.
The stop script's responsibility is to stop the application. It must not exit until the
application is totally stopped, as HACMP will start to unmount filesystems and release
other resources as soon as the stop script terminates. The attempt to release these
resources might fail if there are remnants of the application still running.
The start and stop scripts must exist on all cluster nodes defined in the resource group
(that is, they must reside on a local non-shared filesystem) or you will not be able to
verify and synchronize the cluster.
HACMP 5.2 and later provides a file collection facility to help keep the start and stop
scripts in sync.
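As a hedged sketch of what a start script along these lines can look like (the directory layout, the log path, and the commented-out ywebd daemon are assumptions for illustration, not part of the course's configuration):

```shell
#!/bin/sh
# Sketch of /usr/local/scripts/startyweb -- illustrative only.

start_yweb() {
    appdir=$1      # a filesystem mounted from the shared volume group ywebvg
    log=$2

    # 1. Verify that the required resources really are available, and log
    #    a clear, useful message if they are not.
    if [ ! -d "$appdir" ]; then
        echo "startyweb ERROR: $appdir is not mounted" >>"$log"
        return 1
    fi

    # 2. Start the application -- a real script would run something like
    #    /usr/local/yweb/bin/ywebd -d "$appdir" &  at this point.
    echo "startyweb: yweb started from $appdir" >>"$log"

    # 3. Before returning to HACMP, a production script would poll until
    #    the application actually answers (for example, until its TCP port
    #    accepts connections) rather than exit immediately.
    return 0
}

start_yweb "${1:-/tmp}" "${2:-/tmp/startyweb.log}"
```

The matching stop script would follow the same shape, but must not return until every process of the application is gone, for the reasons given above.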

Adding the Second Resource Group
smit hacmp -> Initialization and Standard Configuration

Configure HACMP Resource Groups


Move cursor to desired item and press Enter.
Add a Resource Group
Change/Show a Resource Group
Remove a Resource Group
Change/Show Resources for a Resource Group (standard)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-32. Adding the Second Resource Group AU546.0

Notes:

Menu to choose Add a Resource Group


Now, we are ready to create the Adventure Resource Group definition.


Setting Name, Nodes, and Policies

Add a Resource Group

*Resource Group Name [adventure]


*Participating Nodes(Default Node Priority) [hudson bondar]
Startup Policy Online On Home Node O> +
Fallover Policy Fallover To NextPrio> +
Fallback Policy Fallback To Higher Pr> +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-33. Setting Name, Nodes, and Policies AU546.0

Notes:

Filling out the Add a Resource Group Menu


We'll call this resource group adventure. It will be defined to operate on two nodes,
bondar and hudson. This time the order is reversed, with hudson being the home, or
highest priority, node.
The policies will be chosen as listed in the visual. Depending on the type of resource
group and how it is configured, the relative priority of nodes within the resource group
might be quite important.
Note that we must also change the Fallback Policy of the xwebserver resource group to
match this visual in order to completely implement mutual takeover.


Adding Resources to the Adventure RG (1)


Configure HACMP Resource Groups
Move cursor to desired item and press Enter.
Add a Resource Group
Change/Show a Resource Group
Remove a Resource Group
Change/Show Resources for a Resource Group (standard)

+--------------------------------------------------------------------------+
Select a Resource Group

Move cursor to desired item and press Enter.

xwebserver_group
| adventure

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-34. Adding Resources to the Adventure RG (1 of 2) AU546.0

Notes:

Selecting the resource group


Here's the Configure HACMP Resource Groups menu in the standard configuration
path. This menu is found under the standard configuration path's top-level menu.
Select Change/Show Resources for a Resource Group (standard) to get started.
When the Select a Resource Group popup appears, select the resource group you
want to work with and press Enter.


Adding Resources to the Adventure RG (2)

Change/Show All Resources and Attributes for a Resource Group


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Custom Resource Group Name adventure
Participating Node Names (Default Node Priority) hudson bondar
Startup Behavior Online On First Avail>
Fallover Behavior Fallover To Next Prio>
Fallback Behavior Fallback To Higher Pr>
Service IP Labels/Addresses [yweb] +
Application Servers [ywebserver] +
Volume Groups [ywebvg] +
Use forced varyon of volume groups, if necessary false +
Filesystems (empty is ALL for VGs specified) [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-35. Adding Resources to the Adventure RG (2 of 2) AU546.0

Notes:

Filling out the Change/Show All Resources and Attributes for a Resource Group menu


This is the screen for showing/changing resources in a resource group within the
standard configuration path. There really aren't a lot of choices to be made: yweb is the
service IP label we created earlier and ywebserver is the application server that we just
defined. ywebvg is a shared volume group containing a few filesystems needed by the
ywebserver application. We could specify the list of filesystems in the Filesystems field,
but the default is to mount all filesystems in the volume group, which is both what we
want and easier to maintain in the long run.
Don't forget to press Enter to actually add the resources to the resource group.

Synchronize and Test the Changes
Initialization and Standard Configuration
Move cursor to desired item and press Enter.
Two-Node Cluster Configuration Assistant
Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-36. Synchronize and Test the Changes AU546.0

Notes:

Using the Standard Configuration to synchronize and test


Once you've defined or changed the cluster's topology and/or resources, you need to
request that HACMP:
- Verify and synchronize your changes
- Test your configuration

Verify and Synchronize


These menu choices act immediately in the Standard Configuration. Their actions can
be customized in the Extended Configuration menus, which we will not do here.
The verification process collects AIX configuration information from each cluster node
and uses this information, the cluster's current configuration (if there is one) and the
proposed configuration to verify that the proposed configuration (and the change it
represents, if this is not the first synchronization) is valid. It is possible to override

verification errors, but only if you are using Extended Configuration. Deciding to do so
is a decision which must be approached with the greatest of care, as it is VERY unusual
for a verification error to occur which can be safely overridden.
Also, remember the earlier discussion about synchronization: any HACMP
configuration changes made on any other cluster node will be lost if you complete a
synchronization on this cluster node.

HACMP Cluster Test Tool


This test facility is disruptive to the cluster, so you want to run it when cluster services
are not running. Thus, application downtime is required.
The Standard Configuration automated test procedure performs four sets of tests in the
following order:
1. General topology tests
2. Resource group tests on non-concurrent resource groups
3. Resource group tests on concurrent resource groups
4. Catastrophic failure test
The Cluster Test Tool discovers information about the cluster configuration, and
randomly selects cluster components, such as nodes and networks, to be used in the
testing.
See the Administration Guide Chapter 7 for more details.

Extended Configuration
Extended Configuration
Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured


Nodes
Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets
Extended Verification and Synchronization
HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-37. Extended Configuration AU546.0

Notes:

Reasons to use Extended Path


Here's the top-level extended configuration path menu. We need to pop over to this
path in order to perform some steps that cannot be done using the Standard
Configuration, such as defining a non-IP network, adding a persistent label, and saving
the configuration data. We will explore these steps in this unit.
Extended Configuration is also required for configuring IPAT via Replacement and
Hardware Address Takeover, as well as for defining an SSA heartbeat network. These
will be explored in the next unit.
Finally, other reasons for using the Extended Path will be covered in the course HACMP
Administration II: Administration and Problem Determination.


Extended Topology Configuration Menu


Extended Topology Configuration
Move cursor to desired item and press Enter.

Configure an HACMP Cluster


Configure HACMP Nodes
Configure HACMP Sites
Configure HACMP Networks
Configure HACMP Communication Interfaces/Devices
Configure HACMP Persistent Node IP Label/Address
Configure HACMP Global Networks
Configure HACMP Network Modules
Configure Topology Services and Group Services
Show HACMP Topology

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-38. Extended Topology Configuration Menu AU546.0

Notes:

Getting to the non-IP network configuration menus


Non-IP networks are elements of the cluster's topology, so we're in the topology section
of the extended configuration path's menu hierarchy.
A non-IP network is defined by specifying the network's end-points. These end-points
are called communication devices, so we have to head down into the Communication
Interfaces/Devices part of the extended topology screens.

Communication Interfaces and Devices
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System
Settings

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-39. Communication Interfaces and Devices AU546.0

Notes:

The Communication Interfaces and Devices menu


This is the communication interfaces and devices part of the extended configuration
path. We will select the Add Communication Interfaces/Devices option.


Defining a Non-IP Network (1 of 3)


Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System
Settings

+--------------------------------------------------------------------------+
Select a category

Move cursor to desired item and press Enter.

Add Discovered Communication Interface and Devices
Add Predefined Communication Interfaces and Devices

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+

Don't risk a potentially catastrophic partitioned cluster by using cheap rs232 cables!


Figure 6-40. Defining a Non-IP Network (1 of 3) AU546.0

Notes:

Deciding which Add to choose


The first question we encounter is whether we want to add discovered or pre-defined
communication interfaces and devices. The automatic discovery that was done when
we added the cluster nodes earlier would have found the rs232 devices, so we pick the
Discovered option.

6-46 HACMP Implementation Copyright IBM Corp. 1998, 2005


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V3.0
Student Notebook

Uempty
Defining a Non-IP Network (2 of 3)
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System
Settings

+--------------------------------------------------------------------------+
Select a category

Move cursor to desired item and press Enter.

# Discovery last performed: (Feb 12 18:20)
Communication Interfaces
Communication Devices

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-41. Defining a Non-IP Network (2 of 3) AU546.0

Notes:

Is it an interface or a device?
Now we need to indicate whether we are adding a communication interface or a
communication device. Non-IP networks use communication devices as end-points
(/dev/tty1, for example), so select Communication Devices to continue.


Defining a Non-IP Network (3 of 3)


Press Enter and HACMP defines a new non-IP network with these communication
devices.
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
+--------------------------------------------------------------------------+
Select Point-to-Point Pair of Discovered Communication Devices to Add

Move cursor to desired item and press F7. Use arrow keys to scroll.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

# Node Device Device Path Pvid
bondar hdisk5 /dev/hdisk5 000b4a7cd1...
hudson hdisk5 /dev/hdisk5 000b4a7cd1...
> bondar tty1 /dev/tty1
> hudson tty1 /dev/tty1
bondar tmssa1 /dev/tmssa1
hudson tmssa2 /dev/tmssa2

F1=Help F2=Refresh F3=Cancel
F7=Select F8=Image F10=Exit
F1 Enter=Do /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 6-42. Defining a Non-IP Network (3 of 3) AU546.0

Notes:
We're now presented with a list of the discovered communication devices.
You can choose to add either an rs232 network (using the /dev/tty entries) or a diskhb
network (using the /dev/hdisk entries). We will cover SSA in the next unit.

rs232 networks
a. /dev/tty1 on bondar is connected to /dev/tty1 on hudson using a fully wired rs232
null-modem cable (don't risk a potentially catastrophic partitioned cluster by failing to
configure a non-IP network or by using cheap cables). Select these two devices and
press Enter to define the network.
b. Before you use this smit screen to define the non-IP network, make sure that you
verify that the link between the two nodes is actually working.
c. For our example, the non-IP rs232 network connecting bondar to hudson can be
tested as follows:
i. Issue the command stty < /dev/tty1 on one node. The command should
hang.

ii. Issue the command stty < /dev/tty1 on the other node. The command
should immediately report the tty's status, and the command that was hung on
the first node should also immediately report its tty's status.
iii. These commands should not be run while HACMP is using the tty.
iv. If you get the behavior described above (especially including the hang in the first
step that recovers in the second step), then the ports are probably connected
together properly (check the HACMP log files once the cluster is up to be sure). If
you get any other behavior, then you are probably using the wrong cable or the
rs232 cable isn't connected the way that you think it is.

diskhb networks
a. Make sure you choose a pair of entries (such as /dev/hdisk5 shown in the figure),
one on each of the two nodes. Note that it is actually the PVIDs that must match.
b. You can test the connection using the command /usr/sbin/rsct/bin/dhb_read as
follows:
- On Node A, enter: dhb_read -p hdisk5 -r
- On Node B, enter: dhb_read -p hdisk5 -t
- You should then see on both nodes: Link operating normally

Handling more than two nodes


In a cluster with more than two nodes, the serial network must form a loop. For
example, in a three-node cluster the RS232 cables may run from:
- serial port 0 on node A to serial port 1 on node B, then from
- serial port 0 on node B to serial port 1 on node C, then from
- serial port 0 on node C to serial port 1 on node A.
Such a configuration would require the definition of three serial networks, since each
serial network can only connect two nodes.


Defining Persistent Node IP Labels (1 of 3)


Configure HACMP Persistent Node IP Label/Addresses
Move cursor to desired item and press Enter.

Add a Persistent Node IP Label/Address


Change / Show a Persistent Node IP Label/Address
Remove a Persistent Node IP Label/Address

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 6-43. Defining Persistent Node IP Labels (1 of 3) AU546.0

Notes:

Benefits and risks of using persistent IP labels


Defining a persistent node IP label on each cluster node allows the cluster
administrators to contact specific cluster nodes (or write scripts which access specific
cluster nodes) without needing to worry about whether the service IP address is
currently available or which node it is associated with.
The (slight) risk associated with persistent node IP labels is that users might start using
them to access applications within the cluster. You should discourage this practice, as
the application might move to another node. Instead, users should be encouraged to
use the IP address associated with the application (that is, the service IP label that you
configure into the application's resource group).

Defining Persistent Node IP Labels (2 of 3)
Configure HACMP Persistent Node IP Label/Addresses
Move cursor to desired item and press Enter.
Add a Persistent Node IP Label/Address
Change / Show a Persistent Node IP Label/Address
Remove a Persistent Node IP Label/Address

+--------------------------------------------------------------------------+
Select a Node

Move cursor to desired item and press Enter.

bondar
hudson

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
/=Find n=Find Next
+--------------------------------------------------------------------------+


Figure 6-44. Defining Persistent Node IP Labels (2 of 3) AU546.0

Notes:

First you Select a Node


Selecting the Add a Persistent Node IP Label/Address choice gets us this prompt for
the node we'd like to define the address on.


Defining Persistent Node IP Labels (3 of 3)


Press Enter and then repeat for the hudson persistent IP label on the hudson node.
Add a Persistent Node IP Label/Address
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Name bondar
* Network Name [net_ether_01] +
* Node IP Label/Address [bondar-per] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 6-45. Defining Persistent Node IP Labels (3 of 3) AU546.0

Notes:

Filling out the Add a Persistent Node IP Label/Address menu


Once you're on this screen, use the pick lists to select the appropriate IP network in the
Network Name field and the label you want to use in the Node IP Label/Address field.
You can repeat these menus to choose a persistent label for the other node(s).
Press Enter to finish the operation.

Synchronize
smitty hacmp -> Extended Configuration

Extended Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?

* Force synchronization if verification fails? [No] +


* Verify changes only? [No] +
* Logging [Standard] +


Figure 6-46. Synchronize AU546.0

Notes:

The Extended Verification and Synchronization menu


This time the extended configuration path's HACMP Verification and Synchronization
screen was chosen. When the extended path version is chosen, it presents a
customization menu (shown above), which the standard path does not:
Verify, Synchronize or Both - this option is useful to verify a change without
synchronizing it (you might want to make sure that what you are doing makes sense
without committing to actually using the changes yet). Synchronizing without verifying is
almost certainly a foolish idea except in the most exotic of circumstances.
Automatically correct errors found during verification? - this option is discussed in
the unit on problem determination. It is a feature of HACMP 5.2 which can fix certain
errors that clverify detects. By default it is turned off. This option only appears if cluster
services are not started.


Force synchronization if verification fails? - this is almost always a very bad idea.
Make sure that you really and truly MUST set this option to Yes before doing so.
Verify changes only? - setting this option to Yes will cause the verification to focus on
aspects of the configuration which have changed since the last synchronization. As a
result, the verification will run slightly faster. This might be useful during the early to mid
stages of cluster configuration; it seems rather risky once the cluster is in production.
Logging - you can increase the amount of logging related to this verification and
synchronization by setting this option to Verbose. This can be quite useful if you are
having trouble figuring out what is going wrong with a failed verification.

Save Configuration: snapshot
Snapshot Configuration
Move cursor to desired item and press Enter.
Add a Cluster Snapshot
Change/Show a Cluster Snapshot
Remove a Cluster Snapshot
Apply a Cluster Snapshot
Configure Custom Snapshot Method
Convert Existing Snapshot For Online Planning worksheets

Add a Cluster Snapshot


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Cluster Snapshot Name [] /
Custom Defined Snapshot Methods [] +
Save Cluster Log Files in snapshot No +
* Cluster Snapshot Description []


Figure 6-47. Save Configuration: snapshot AU546.0

Notes:

Saving the cluster configuration


You can save the cluster configuration to a snapshot file or to an xml file. The cluster
can be restored either from the snapshot file or the xml file. The xml file can also be
used with the online planning worksheets and potentially with other applications. This
visual looks at the snapshot method and the next visual looks at the xml method.

Creating a snapshot
smit hacmp --->Extended Configuration --->Snapshot Configuration
A snapshot captures the HACMP ODM files, which allows you to recover the cluster
definitions. There is also an info file. The info file is discussed further in the AU57
course HACMP Administration II: Administration and Troubleshooting.
If necessary, there is another option on the Snapshot Configuration menu to restore
(apply) a snapshot.
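A snapshot's .odm file is plain text, one stanza per HACMP ODM object, so it can be inspected with ordinary tools. The sketch below fabricates a tiny file just to show the stanza format (the stanza contents are invented for illustration); a real snapshot's files land under /usr/es/sbin/cluster/snapshots by default.

```shell
# Fabricated example of the stanza format inside a snapshot's .odm file.
# A real snapshot's files live under /usr/es/sbin/cluster/snapshots by default.
cat > /tmp/mysnap.odm <<'EOF'
HACMPcluster:
        id = 1106245917
        name = "xwebserver_cluster"

HACMPnode:
        name = "bondar"
EOF

# List which HACMP ODM classes the snapshot captured
grep '^HACMP' /tmp/mysnap.odm
```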


Save Configuration: xml file


smitty hacmp ->Extended Configuration
Export Definition File for Online Planning Worksheets

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* File Name [/var/hacmp/log/cluster.haw] /
Cluster Notes []

Snapshot Configuration
Move cursor to desired item and press Enter.
Add a Cluster Snapshot
Change/Show a Cluster Snapshot
Remove a Cluster Snapshot
Apply a Cluster Snapshot
Configure Custom Snapshot Method
Convert Existing Snapshot For Online Planning Worksheets

Figure 6-48. Save Configuration: xml file AU546.0

Notes:

Creating the xml file


Using Extended Configuration, you can save the cluster configuration directly to an xml
file via the menu Export Definition File for Online Planning Worksheets or from a
snapshot via the Snapshot Configuration menu Convert Existing Snapshot For Online
Planning Worksheets.
Once created, you can use the Online Planning worksheets to get an updated view of
the configuration and/or change the configuration. The xml file can potentially be used
by other applications or manually to create and display configuration information. This
will be discussed further in the AU57 course HACMP Administration II: Administration
and Troubleshooting. For the moment, in case you want to know, the command to apply
an xml file is /usr/es/sbin/cluster/utilities/cl_opsconfig.

Removing a Cluster
Use Extended Topology Configuration

Configure an HACMP Cluster

Move cursor to desired item and press Enter.

Add/Change/Show an HACMP Cluster


Remove an HACMP Cluster
Reset Cluster Tunables

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

cat /dev/null > /usr/es/sbin/cluster/etc/rhosts



Figure 6-49. Removing a Cluster AU546.0

Notes:

Starting over
If you have to start over, you can:
- Stop cluster services on all nodes
- Use Extended Configuration, as shown above to remove the cluster (on all nodes)
- Remove the entries (but not the file) from /usr/es/sbin/cluster/etc/rhosts (on all
nodes)
If you really want to start over, then you can:
- installp -u cluster
- rm -r /usr/es/* (be very careful here)
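The third step above empties the rhosts file without deleting it, which a shell redirection does on its own. This sketch exercises the same idiom against a throwaway path (not the real /usr/es/sbin/cluster/etc/rhosts), so it is safe to try anywhere:

```shell
# Throwaway stand-in for /usr/es/sbin/cluster/etc/rhosts
rhosts=/tmp/demo.rhosts
printf '192.168.15.29\n192.168.15.30\n' > "$rhosts"   # pretend cluster entries

# Remove the entries but keep the file (same effect as cat /dev/null > file)
: > "$rhosts"

ls "$rhosts"        # the file still exists...
wc -c < "$rhosts"   # ...and is now 0 bytes
```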


We're There!
We've configured a two-node cluster with two resource
groups:
Each resource group has a different home (primary) node
Each resource group falls back to its home node on recovery
This is called a two-node mutual takeover cluster

bondar hudson

D D

A A

Each resource group is also configured to use IPAT via IP aliasing.


This particular style of cluster (mutual takeover with IPAT) is, by
far, the most common style of HACMP cluster.

Figure 6-50. We're There! AU546.0

Notes:

Mutual Takeover completed


We've finished configuring a two-node HACMP cluster with two resource groups
operating in a mutual takeover configuration. The term mutual takeover derives from
the fact that each node is the home node for one resource group and provides fallover
(that is, takeover) services to the other node.
This is, without a doubt, the most common style of HACMP cluster as it provides a
reasonably economical way to protect two separate applications. It also keeps the folks
with budgetary responsibility happier since each of the systems is clearly doing
something useful all the time (many would argue that a system which is just acting as
a standby for a critical application IS doing something useful but it is a lot easier to
make the case if both systems are actually running an important application at all
times).
The cluster even has the mandatory non-IP network!

Checkpoint
1. True or False?
It is possible to configure a recommended simple two-node cluster
environment using just the standard configuration path.
2. In which of the top level HACMP menu choices is the menu for starting and
stopping cluster nodes?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools
3. In which of the top level HACMP menu choices is the menu for defining a non-
IP heartbeat network?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools

4. True or False?
It is possible to configure HACMP faster by having someone help you on
the other node.
5. True or False?
You must specify exactly which filesystems you want mounted when you
put resources into a resource group.


Figure 6-51. Checkpoint AU546.0

Notes:


Break Time!


Figure 6-52. Break Time! AU546.0

Notes:
Some notes from the developer :-)
This is a photograph of Lake Louise in the Canadian Rocky Mountains (located about a
90 minute drive west of Calgary). If you are ever there, make sure that you rent one of
the canoes in the photograph and go for a paddle out on the lake. There's also a
number of quite spectacular and not particularly strenuous hikes that start from near the
point that this photograph was taken. The hike that goes up to the tea house is
definitely worth an afternoon (you can pay money to go up on horseback if you don't
feel like walking for free).
Also, can you read this? ;-)
Aoccdrnig to a rscheearchr at an Elingsh uinervtisy, it deosn't mttaer in waht oredr the
ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer is at the rghit
pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is
bcuseae we do not raed ervey lteter by it slef but the wrod as a wlohe.

Unit Summary

Having completed this unit, you should be able to:

Configure a Standby HACMP 5.3 cluster


Use the Two-Node Cluster Configuration Assistant
Use the Standard Path
Configure a Mutual Takeover HACMP 5.3 cluster
Configure HACMP Topology to include:
IP-based networks enabled for address takeover via
alias
Non-IP networks (rs232, diskhb)
Persistent address
Verify and Synchronize a cluster
Start and Stop Cluster Services
Save cluster configuration


Figure 6-53. Unit Summary AU546.0

Notes:

Unit 7. Basic HACMP Administration

What This Unit Is About


This unit describes basic administration tasks for HACMP for AIX.

What You Should Be Able to Do


After completing this unit, you should be able to:
Topic 1: Topology and Resource Group Management
- Use the SMIT Standard and Extended menus to make topology
and resource group changes
Topic 2: Cluster Single Point of Control
- Describe the benefits and capabilities of C-SPOC
- Perform routine administrative changes using C-SPOC
- Start and stop cluster services
- Perform resource group move operations
Topic 3: Dynamic Automatic Reconfiguration Event Facility
- Discuss the benefits and capabilities of DARE
- Use the snapshot facility to return to a previous cluster
configuration or to rollback changes
Topic 4: Implementing WebSMIT
- Configure and use WebSMIT

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4867-05 HACMP for AIX: HACMP Master Glossary
SC23-4864-06 HACMP for AIX: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX: Planning and Installation Guide
SC23-4862-06 HACMP for AIX: Administration Guide
SC23-5177-00 HACMP for AIX: Troubleshooting Guide


Unit Objectives
After completing this unit, you should be able to:
Topic 1: Topology and resource group management
Use the SMIT Standard and Extended menus to make topology
and resource group changes
Topic 2: Cluster Single Point of Control
Describe the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
Start and stop cluster services
Perform resource group move operations
Topic 3: Dynamic Automatic Reconfiguration Event facility
Discuss the benefits and capabilities of DARE
Use the snapshot facility to return to a previous cluster
configuration or to rollback changes
Topic 4: Implementing Web SMIT
Configure and use Web SMIT

Figure 7-1. Unit Objectives AU546.0

Notes:


7.1 Topology and Resource Group Management


Topic 1 Objectives:
Topology and Resource Group Management

After completing this topic, you should be able to:


Add a resource group and resources to an existing cluster
Remove a resource group from a cluster
Add a new node to an existing cluster
Remove a node from an existing cluster
Configure a target-mode SSA non-IP heartbeat network.
Configure a non-IP disk heartbeat network


Figure 7-2. Topic 1 Objectives: Topology and Resource Group Management AU546.0

Notes:

Yet Another Resource Group
The users have asked that a third application be added to the
cluster
The application uses very little CPU or memory and there's
money in the budget for more disk drives in the disk enclosure
Minimizing downtime is particularly important for this application
The resource group is called ballerina (nobody seems to know
why)
bondar hudson

D D

A A

B B


Figure 7-3. Yet Another Resource Group AU546.0

Notes:

Introduction
We're now going to embark on a series of hypothetical scenarios to illustrate a number
of routine cluster administration tasks. Some of these scenarios are more realistic than
others.

Add a resource group


In this first scenario, we're going to add a resource group to the cluster. This new
resource group is called ballerina (seriously, nobody knows why it is called ballerina).
This resource group's application has been reported to use very little in the way of
system resources, and there is a strong desire to avoid unnecessary ballerina outages.


Adding a Third Resource Group


We'll change the startup policy to "Online On First Available Node" so that
the resource group comes up when bondar is started even if hudson is down.

Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [ballerina]
* Participating Node Names (Default Node Priority) [bondar hudson] +

Startup Policy Online On First Avail> +


Fallover Policy Fallover To Next Prio> +
Fallback Policy Never Fallback +
avoid startup delay by starting on first available node

avoid fallback outage by never falling back

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Does the order in which the node names are specified matter?

Figure 7-4. Adding the Third Resource Group AU546.0

Notes:

Add a resource group


We use the Extended path (note the extra span in the bridge icon). It is configured to
start up on whichever node is available first and to never fall back when a node rejoins
the cluster. The combination of these two parameters should go a long way towards
minimizing this resource group's downtime.
If you're familiar with the older terminology of cascading and rotating resource groups,
this resource group's policies make it essentially identical to a cascading without
fallback resource group.

Adding a Third Service IP Label (1 of 2)
The extended configuration path screen for adding a service IP
label provides more options. We choose those which mimic the
standard path.

Configure HACMP Service IP Labels/Addresses


Move cursor to desired item and press Enter.

Add a Service IP Label/Address


Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

+--------------------------------------------------------------------------+
Select a Service IP Label/Address type

Move cursor to desired item and press Enter.

Configurable on Multiple Nodes
Bound to a Single Node

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-5. Adding a Third Service IP Label (1 or 2) AU546.0

Notes:

Introduction
We need to define a service IP label for the ballerina resource group.

IPAT via IP aliasing required


Creating a third resource group on a cluster with one network and two nodes requires
the use of IPAT via IP aliasing. A cluster which only uses IPAT via IP replacement is for
all practical purposes restricted to one resource group with a service IP label per node
per IP network. Since our cluster has only one IP network, it would not be able to
support three different resource groups with service IP labels if it used IPAT via
replacement.


Resource group limits


HACMP 5.2 and above supports a maximum of 64 resource groups and 256 IP
addresses known to HACMP (for example, service and interface IP addresses). There
are no other limits on the number of resource groups with service labels that can be
configured on an IPAT via IP aliasing network (although, eventually, you run out of CPU
power or memory or something for all the applications associated with these resource
groups).

Service IP Label/Address type


Bound to a Single Node is used with IBM's General Parallel File System (GPFS).

Network Name
The next step is to associate this Service Label with one of the HACMP networks. This
is not shown in the visual.

Adding a Third Service IP Label (2 of 2)
The Alternate Hardware Address ... field is used for hardware address takeover
(which we'll configure later).

Add a Service IP Label/Address configurable on Multiple Nodes (extended)


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* IP Label/Address [zweb] +
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-6. Adding a Third Service IP Label (2 of 2) AU546.0

Notes:

Adding a service IP label


The visual shows the entry fields for this panel.


Adding a Third Application Server


The Add Application Server screen is identical in both
configuration paths.
Add Application Server
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Server Name [zwebserver]
* Start Script [/usr/local/scripts/startzweb]
* Stop Script [/usr/local/scripts/stopzweb]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-7. Adding a Third Application Server AU546.0

Notes:

Add an application server


You must give it a name and specify a start and stop script.
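HACMP runs the start and stop scripts as root with no arguments and treats a nonzero exit status as a failure, so even a minimal pair should track the process it launched and exit cleanly. A sketch, using a sleep as a stand-in daemon and a hypothetical pid file path (a real script would start and stop the actual application):

```shell
#!/bin/sh
# Minimal start/stop script pair sketch; the daemon is a sleep stand-in and
# the pid file path is hypothetical.
PIDFILE=/tmp/zweb.pid

start_zweb() {
    nohup sleep 60 >/dev/null 2>&1 &
    echo $! > "$PIDFILE"
}

stop_zweb() {
    [ -f "$PIDFILE" ] || return 0           # already stopped: still exit 0
    kill "$(cat "$PIDFILE")" 2>/dev/null
    rm -f "$PIDFILE"
}

start_zweb
kill -0 "$(cat "$PIDFILE")"                 # the "application" is running
stop_zweb
```

Note that the stop function exits 0 even when nothing is running; a stop script that fails when the application is already down can block cluster event processing.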

Adding Resources to the Third RG (1 of 2)
The extended path's SMIT screen for updating the contents of a
resource group is MUCH more complicated!

Change/Show All Resources and Attributes for a Resource Group


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
Resource Group Name ballerina
Resource Group Management Policy custom
Inter-site Management Policy ignore
Participating Node Names (Default Node Priority) hudson bondar
Startup Behavior Online On First Avail>
Fallover Behavior Fallover To Next Prio>
Fallback Behavior Fallback To Higher Pr>
Fallback Timer Policy (empty is immediate) [] +
Service IP Labels/Addresses [zweb] +
Application Servers [zwebserver] +
Volume Groups [zwebvg] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +
Filesystems (empty is ALL for VGs specified) [] +
Filesystems Consistency Check fsck +
[MORE...17]
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-8. Adding Resources to the Third RG (1 of 2) AU546.0

Notes:

Adding resources to a resource group (Extended Path)


This is the first of two screens to show the Extended Path menu for adding attributes.
Unlike the Standard path, it contains a listing of all the possible attributes.


Adding Resources to the Third RG (2 of 2)


Even more choices!
Fortunately, only a handful tend to be used in any given context.
Change/Show All Resources and Attributes for a Resource Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[MORE...17] [Entry Fields]
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured false +
Filesystems/Directories to Export [] +
Filesystems/Directories to NFS Mount [] +
Network For NFS Mount [] +
Tape Resources [] +
Raw Disk PVIDs [] +
Fast Connect Services [] +
Communication Links [] +
Primary Workload Manager Class [] +
Secondary Workload Manager Class [] +
Miscellaneous Data []
[BOTTOM]
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-9. Adding Resources to the Third RG (2 of 2) AU546.0

Notes:

Adding resources to a resource group (Extended Path)


More choices.

Synchronize Your Changes
The extended configuration path provides verification and
synchronization options.
HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Don't forget to verify that you actually implemented what was


planned by executing your test plan.


Figure 7-10. Synchronize Your Changes AU546.0

Notes:

Extended path synchronization


This is the Extended path screen to show the Synchronization menu options which are
not shown in the Standard path.


Expanding the Cluster


The users "find" money in the budget and decide to "invest" it
to improve the availability of the adventure and discovery
applications
Nobody seems to be too worried about the ballerina application

bondar hudson jones

D D D

A A A

B B


Figure 7-11. Expanding the Cluster AU546.0

Notes:

Expanding the cluster


In this scenario, we'll look at adding a node to a cluster.

Adding a New Cluster Node
1. Physically connect the new node
Connect to IP networks
Connect to the shared storage subsystem
Connect to non-IP networks to create a ring encompassing all nodes
2. Configure the shared volume groups on the new node
3. Add the new node's IP labels to /etc/hosts on one existing node
4. Copy /etc/hosts from this node to all other nodes
5. Install AIX, HACMP and application software on the new node:
Install patches required to bring the new node up to the same level as the
existing cluster nodes
Reboot the new node (always reboot after installing or patching HACMP)
6. Add the new node to the existing cluster (from one of the
existing nodes)
7. Add non-IP networks for the new node
8. Synchronize your changes
9. Start HACMP on the new node
10.Add the new node to the appropriate resource groups
11.Synchronize your changes again
12.Run through your (updated) test plan

Figure 7-12. Adding a New Cluster Node AU546.0

Notes:

Adding a new cluster node


Adding a node to an existing cluster isn't all that difficult from the HACMP perspective
(as we see shortly). The hard work involves integrating the node into the cluster from an
AIX and from an application perspective.
We'll be discussing the HACMP part of this work.
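Steps 3 and 4 of the list above (add the new node's IP labels to /etc/hosts on one node, then copy the file out) might be sketched as follows. The labels and addresses are invented to match the course's example nodes, and the scp commands are echoed rather than executed, so the sketch has no side effects:

```shell
# Stand-in for /etc/hosts (a throwaway copy, so nothing real is touched)
hosts=/tmp/hosts.demo
: > "$hosts"

# Step 3: append the new node's IP labels on one existing node
# (addresses and label names are invented for illustration)
cat >> "$hosts" <<'EOF'
192.168.15.31  jones-if1
192.168.16.31  jones-if2
EOF

# Step 4: push the file to every other node (echoed here, not executed)
for node in bondar hudson jones; do
    echo scp "$hosts" "$node":/etc/hosts
done
```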


Add Node -- Standard Path


Configure Nodes to an HACMP Cluster (standard)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Cluster Name [xwebserver_cluster]
New Nodes (via selected communication paths) [jones-if1] +
Currently Configured Node(s) bondar hudson

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-13. Add Node -- Standard Path AU546.0

Notes:

Add node, standard path


This operation and any other SMIT HACMP operations must be performed from an
existing cluster node. The jones node won't become an existing cluster node until we
synchronize our changes in a few pages, so use an existing node until the cluster is
synchronized.
Cluster Name
SMIT fills this field in based on the previous value. Leave as is or change. The name
that you assign to your cluster is pretty much arbitrary. It appears in log files and the
output of commands.
New Nodes
The new node(s) are specified by giving the IP label or IP address of one currently
active network interface on each node. Use F4 to generate a list, or type one resolvable
IP label or IP address for each node. If more than one node, they should be space

separated.
This path will be taken to initiate communication with the node.
The command launched by this SMIT screen contacts the clcomd at each address and
asks them to come together in a new cluster. Obviously, HACMP must already be
installed on the new node(s).


Add Node -- Standard Path (In Progress)


Here's the output shortly after pressing Enter:
COMMAND STATUS
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.
[TOP]
Communication path jones-if1 discovered a new node. Hostname is jones. Adding
it to the configuration with Nodename jones.
Discovering IP Network Connectivity
Retrieving data from available cluster nodes. This could take a few
minutes....

F1=Help F2=Refresh F3=Cancel F6=Command


F8=Image F9=Shell F10=Exit /=Find
n=Find Next

Figure 7-14. Add Node -- Standard Path (In Progress) AU546.0

Notes:

Add Node -- Standard Path (In Progress)


Once the Enter key is pressed on the previous SMIT screen, HACMP's automatic
discovery process begins. Once the nodes have been identified, the discovery process
retrieves the network and disk configuration information from each of the cluster nodes
and builds a description of the new cluster. The network configuration information is
used to create the initial IP network configuration.
The remainder of the output from this SMIT operation isn't particularly interesting
(unless something goes wrong), so we'll just ignore it for now. You will get an
opportunity to add a node in the lab exercises.

Add Node -- Extended Path
Add a Node to the HACMP Cluster
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Name [jones]
Communication Path to Node [jones_if1] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-15. Add Node -- Extended Path AU546.0

Notes:

Add Node -- Extended Path


The Extended Path is essentially the same as the Standard Path in this case.


Define the Non-IP rs232 Networks (1 of 2)


We've added (and tested) a fully wired rs232 null modem cable
between jones' tty1 and bondar's tty2 so we define that as a non-IP
rs232 network.
Configure HACMP Communication Interfaces/Devices
+--------------------------------------------------------------------------+
Select Point-to-Point Pair of Discovered Communication Devices to Add

Move cursor to desired item and press F7. Use arrow keys to scroll.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

# Node Device Device Path Pvid
bondar tty0 /dev/tty0
hudson tty0 /dev/tty0
jones tty0 /dev/tty0
bondar tty1 /dev/tty1
hudson tty1 /dev/tty1
> jones tty1 /dev/tty1
> bondar tty2 /dev/tty2
hudson tty2 /dev/tty2
jones tty2 /dev/tty2

F1=Help F2=Refresh F3=Cancel
F7=Select F8=Image F10=Exit
F1 Enter=Do /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-16. Define the Non-IP rs232 Networks (1 of 2) AU546.0

Notes:

Introduction
This visual, and the next one, show how to add two more non-IP networks to our cluster.
Make sure that the topology of the non-IP networks that you describe to HACMP
corresponds to the actual topology of the physical rs232 cables.
In the following notes, we discuss why we need to add two more non-IP links.

Minimum non-IP network configuration: ring


At minimum, the non-IP networks in a cluster with more than two nodes should form a
ring encompassing all the nodes; that is, each node is connected to its two directly
adjacent neighbors. A ring provides redundancy (two non-IP heartbeat paths for every
node) and is simple to implement.

Mesh configuration


The most redundant configuration would be a mesh, each node connected to every
other node. However, if you have more than three nodes, this means extra complexity
and can mean a lot of extra hardware, depending on which type of non-IP network you
are using.
Note: For a three node cluster, a ring and a mesh are the same.

Star configuration not recommended


While the HACMP for AIX Planning and Installation Guide discusses using a star, ring
or mesh configuration for non-IP networks, a star is not a good choice. A star means
that the center node is a SPOF for the non-IP networks; losing the center node means
that all the other nodes lose non-IP network connectivity.

Three node example


In the example in the visual, we already have a non-IP network between bondar and
hudson so we need to configure one between jones and bondar (on this page) and
another one between hudson and jones (on the next page).
If, for example, we left out the hudson and jones non-IP network then the loss of the
bondar node would leave the hudson and jones nodes without a non-IP path between
them.

Five node example


In even larger clusters, it is still only necessary to configure a ring of non-IP networks.
For example, if the nodes are A, B, C, D and E then five non-IP networks would be the
minimum requirement: A to B, B to C, C to D, D to E and E to A being one possibility. Of
course, other possibilities exist like A to B, B to D, D to C, C to E and E to A.
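The ring pairs for any node list can be generated mechanically; the following sketch (illustrative only, reusing the node names from the example above) prints the N links of a ring:

```shell
# Print the point-to-point non-IP links forming a ring over the
# given nodes: each node is paired with the next in the list, and
# the last node wraps around to the first.
nodes="A B C D E"
set -- $nodes        # word-split the list into positional parameters
first=$1
prev=$1
shift
for node in "$@"; do
    echo "$prev to $node"
    prev=$node
done
echo "$prev to $first"
```

For the five nodes above this prints "A to B" through "D to E" and finally "E to A", the minimum five links.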


Define the Non-IP rs232 Networks (2 of 2)


We've also added (and tested) a fully wired rs232 null-modem
cable between hudson's tty2 and jones' tty2 so we define that as a
non-IP rs232 network.

Configure HACMP Communication Interfaces/Devices


+--------------------------------------------------------------------------+
Select Point-to-Point Pair of Discovered Communication Devices to Add

Move cursor to desired item and press F7. Use arrow keys to scroll.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

# Node Device Device Path Pvid
bondar tty0 /dev/tty0
hudson tty0 /dev/tty0
jones tty0 /dev/tty0
bondar tty1 /dev/tty1
hudson tty1 /dev/tty1
jones tty1 /dev/tty1
bondar tty2 /dev/tty2
> hudson tty2 /dev/tty2
> jones tty2 /dev/tty2

F1=Help F2=Refresh F3=Cancel
F7=Select F8=Image F10=Exit
F1 Enter=Do /=Find n=Find Next
F9+--------------------------------------------------------------------------+

Copyright IBM Corporation 2005

Figure 7-17. Define the Non-IP rs232 Networks (2 of 2) AU546.0

Notes:

Define non-IP networks


Make sure that the topology of the non-IP networks that you describe to HACMP
corresponds to the actual topology of the physical rs232 cables.

Synchronize Your Changes
HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-18. Synchronize Your Changes AU546.0

Notes:

Synchronize
Once we've synchronized our changes, the jones node is an official member of the
cluster.


Start HACMP on the New Node


# smitty clstart
Start Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both now +
Start Cluster Services on these nodes [jones] +
BROADCAST message at startup? true +
Startup Cluster Lock Services? false +
Startup Cluster Information Daemon? false +
Reacquire resources after forced down ? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-19. Start HACMP on the New Node AU546.0

Notes:

Start HACMP on the new node


Now that jones is an official member of the cluster, we can start HACMP on the node.
This and all future SMIT HACMP operations can be performed from any of the three
cluster nodes.

Add the Node to a Resource Group
Change/Show a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name adventure
New Resource Group Name []
Participating Node Names (Default Node Priority) [hudson bondar jones] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
Fallback Policy Fallback To Higher Pr> +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Repeat for the discovery resource group.



Figure 7-20. Add the Node to a Resource Group AU546.0

Notes:

Add the node to a resource group


Remember that adding the new jones node to the HACMP configuration is the easy
part. You would not perform any of the SMIT HACMP operations shown so far in this
scenario until you were CERTAIN that the jones node was actually capable of running
the application.


Synchronize Your Changes


Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-21. Synchronize Your Changes AU546.0

Notes:

Synchronize and test


Although the HACMP configuration work is now done, the task of adding the new jones
node to the cluster is not finished until the (updated) cluster test plan has been
executed successfully.

Shrinking the Cluster
The Auditors aren't impressed with the latest investment
and force the removal of the jones node from the cluster so that it
can be transferred to a new project (some users suspect that
political considerations may have been involved)

bondar hudson jones

D D

A A

B B


Figure 7-22. Shrinking the Cluster AU546.0

Notes:

Removing a node
In this scenario, we take a look at how to remove a node from an HACMP cluster.


Removing a Cluster Node


1. Using any cluster node, move resource groups to other nodes
2. Remove the departing node from all resource groups and
synchronize your changes
Ensure that each resource group is left with at least two nodes
3. Stop HACMP on the departing node
4. Using one of the cluster nodes which is not being removed:
Remove the departing node from the cluster's topology
Remove a Node from the HACMP Cluster (Extended Configuration)
Synchronize
Once the synchronization is completed successfully, the departing node is
no longer a member of the cluster
5. Remove the departed node's IP addresses from
/usr/es/sbin/cluster/etc/rhosts on the remaining nodes
Prevents departed node from interfering with HACMP on remaining nodes
6. Physically disconnect the (correct) rs232 cables
7. Disconnect the departing node from the shared storage
subsystem
Strongly recommended as it makes it impossible for the departed node to
screw up the cluster's shared storage
8. Run through your (updated) test plan


Figure 7-23. Removing a Cluster Node AU546.0

Notes:

Removing a node
While removing a node from a cluster is another fairly involved process, some of the
work has little if anything to do with HACMP itself.
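Step 5 on the visual, removing the departed node's IP addresses from the rhosts file, can be done with any editor; a scripted equivalent might look like the sketch below. The path and addresses here are stand-ins for illustration; the real file is /usr/es/sbin/cluster/etc/rhosts on each remaining node.

```shell
# Remove a departed node's IP addresses from a copy of an
# rhosts-style file (one address per line). The file path and the
# addresses are invented for this example.
RHOSTS=/tmp/rhosts.example
printf '192.168.10.1\n192.168.10.2\n192.168.10.3\n' > "$RHOSTS"

# Addresses that belonged to the departed node (hypothetical)
for addr in 192.168.10.3; do
    # -F: fixed string, -x: whole-line match, -v: keep non-matching lines
    grep -Fxv "$addr" "$RHOSTS" > "$RHOSTS.tmp" && mv "$RHOSTS.tmp" "$RHOSTS"
done
cat "$RHOSTS"
```

After the loop, only the addresses of the remaining nodes are left in the file.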

Removing an Application
The zwebserver application has been causing problems and a
decision has been made to move it out of the cluster

bondar hudson

D D

A A

B B


Figure 7-24. Removing an Application AU546.0

Notes:

Removing an application
In this scenario, we remove a resource group.
It looks like this imaginary organization could do with a bit more long-range planning.


Removing a Resource Group (1 of 3)


1. Take the resource group offline
2. OPTIONAL: Take a cluster snapshot
3. Using any cluster node and either configuration path:
Remove the departing resource group using the
Remove a Resource Group SMIT screen
Remove any service IP labels previously used by the departing resource
group using the
Remove Service IP Labels/Addresses SMIT screen
Synchronize your changes
This will shut down the resource group's applications using the application server's
stop script and release any resources previously used by the resource group
4. Clean out anything that is no longer needed by the cluster:
Export any shared volume groups previously used by the application.
Consider deleting service IP labels from the /etc/hosts file
Uninstall the application
5. Run through your (updated) test plan


Figure 7-25. Removing a Resource Group (1 of 3) AU546.0

Notes:

Introduction
The procedure for removing a resource group is actually fairly straightforward.

Cluster snapshot
HACMP supports something called a cluster snapshot. This would be an excellent time
to take a cluster snapshot, just in case we decide to go back to the old configuration.
We will discuss snapshots later in this unit.

Remove unused resources


Do not underestimate the importance of removing unused resources like service IP
labels and volume groups. They will only clutter up the cluster's configuration and, in

the case of shared volume groups, tie up physical resources which could presumably
be better used elsewhere.
A cluster should not have any useless resources or components as anything which
simplifies the cluster tends to improve availability by reducing the likelihood of human
error.


Removing a Resource Group (2 of 3)


HACMP Extended Resource Group Configuration
Move cursor to desired item and press Enter.
Add a Resource Group
Change/Show a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Show All Resources by Node or Resource Group

+--------------------------------------------------------------------------+
Select a Resource Group

Move cursor to desired item and press Enter.

adventure
ballerina
discovery

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-26. Removing a Resource Group (2 of 3) AU546.0

Notes:

Removing a resource group


Make sure that you delete the correct resource group.

Removing a Resource Group (3 of 3)

HACMP Extended Resource Group Configuration


Move cursor to desired item and press Enter.
Add a Resource Group
Change/Show a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Show All Resources by Node or Resource Group

+--------------------------------------------------------------------------+
ARE YOU SURE?

Continuing may delete information you may want
to keep. This is your last chance to stop
before continuing.
Press Enter to continue.
Press Cancel to return to the application.

F1=Help F2=Refresh F3=Cancel
F1 F8=Image F10=Exit Enter=Do
F9+--------------------------------------------------------------------------+

Press enter (if you are sure).


Figure 7-27. Removing a Resource Group (3 of 3) AU546.0

Notes:

Are you sure?


Pause to make sure you know what you are doing. If you aren't sure, it's easy to go
back and step through the process again.


Synchronize Your Changes


Synchronize the changes and run through the test plan.
HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-28. Synchronize Your Changes AU546.0

Notes:

Synchronize
As we will discuss later in this unit, it's a good idea to synchronize after each change,
rather than making many changes and synchronizing once at the end.

Implementing Target Mode SSA
The serial cable being used to implement the rs232 non-IP network
has been borrowed by someone and nobody noticed
A decision has been made to implement a target mode SSA (tmssa)
non-IP network as it won't fail unless complete access to the shared
SSA disks is lost by one of the nodes (and someone is likely to
notice that)

bondar hudson

D D

A A


Figure 7-29. Implementing Target Mode SSA AU546.0

Notes:

Target mode SSA or heartbeat over disk networks


Sadly, the premise behind this scenario is all too real. The problem with rs232 non-IP
networks is that if they become disconnected or otherwise disabled, then it is entirely
possible that nobody notices even though HACMP logs the failure of the connection
when it happens and reports it in the logs if it is down at HACMP startup time. In
contrast, a target mode SSA network won't fail until all paths between the two nodes
fail. Since such a failure will cause one or both nodes to lose access to some or all of
the shared disks, such a failure is MUCH less likely to go unnoticed. The same can be
said for heartbeat over disk, but we focus on SSA in this scenario as we have discussed
heartbeat over disk earlier in the course.


Setting the SSA Node Number


The first step is to give each node a unique SSA node number.
We'll set bondar's ssa node number to 1 and hudson's to 2.
Change/Show the SSA Node Number For This System
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
SSA Node Number [1] +#

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Use the smitty ssaa fastpath to get to AIX's SSA Adapters menu.

Figure 7-30. Setting the SSA Node Number AU546.0

Notes:

Required software
Target mode SSA support requires that the devices.ssa.tm.rte file set be installed on
all cluster nodes.

SSA node number and HACMP node ID


The first step in configuring a target mode SSA network is to assign a unique SSA node
number to each node. Earlier versions of HACMP required that the SSA node number
be the same as the node's HACMP node ID. HACMP 5.1 (and above) does not have
this requirement (and does not expose the HACMP node ID to the administrator). We
assign 1 as the SSA node number for bondar and 2 as the SSA node number for
hudson.

Requirements for SSA node numbers


The minimum requirements for HACMP 5.x are that the SSA node numbers be
non-zero and unique for each node within the cluster. Strictly speaking, the SSA node
numbers must also be unique across all systems with shared access to the SSA
subsystem. This is usually not a concern as allowing non-cluster nodes to have any
form of access to a cluster's shared disks is an unnecessary risk that few cluster
administrators would ever accept.
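Since a duplicate SSA node number is easy to introduce by accident, it can be worth cross-checking the numbers gathered from each machine. The following sketch (the listing is invented; in practice the numbers come from each machine's SSA Adapters SMIT panel) flags duplicates:

```shell
# Flag duplicate SSA node numbers in a "node number" listing.
# The node names and numbers below are made up for illustration.
cat > /tmp/ssa_numbers <<'EOF'
bondar 1
hudson 2
jones 2
EOF

# Take the number column, sort it, and report values seen more than once.
awk '{print $2}' /tmp/ssa_numbers | sort | uniq -d |
while read -r dup; do
    echo "duplicate SSA node number: $dup"
done
```

With the sample data this reports the number 2, which hudson and jones both claim.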


Configuring the tmssa Devices


This is a three-step process for a two-node cluster as
each node needs tmssa devices which refer to the other
node:
1. run cfgmgr on one of the nodes (bondar)
bondar is now ready to respond to tmssa queries
2. run cfgmgr on the other node (hudson)
hudson is now ready to respond to tmssa queries
hudson also knows that bondar supports tmssa and has created
the tmssa devices (/dev/tmssa1.im and /dev/tmssa1.tm) which
refer to bondar
3. run cfgmgr again on the first node (bondar)
bondar now also knows that hudson supports tmssa and has
created the tmssa devices (/dev/tmssa2.im and /dev/tmssa2.tm)
which refer to hudson
bondar now has /dev/tmssa2.im /dev/tmssa2.tm
devices which refer to hudson


Figure 7-31. Configuring the tmssa Devices AU546.0

Notes:

Introduction
Once each node has a unique SSA node number, the AIX configuration manager needs
to be used to define the tmssa devices. Each node must have tmssa devices which
refer to each of the other nodes that they can see via the SSA loops. When cfgmgr is
run on a node, it sets up the node to accept tmssa packets, and it then defines tmssa
devices referring to any other nodes which respond to tmssa packets. In order for this to
all work, the other nodes must all be set up to accept and respond to tmssa packets.

Procedure
The end result is that the following procedure gets all the required tmssa devices
defined:

1. Run cfgmgr on each cluster node in turn. This sets up each node to handle tmssa
packets, and defines the tmssa devices on each node to refer to nodes which have
already been setup for tmssa.
2. Run cfgmgr on each node in turn again (depending upon exactly what order you do this
in, it is actually possible to skip running cfgmgr on one of the nodes, but it is probably
not worth the trouble of being sure that the last cfgmgr run wasn't required).
3. Verify the tmssar devices exist:
Run
# lsdev -C | grep tmssa
on each node. There should be a tmssar device (which is actually a target mode SSA
router acting as a pseudo device) configured on each node.
4. Verify the tmssa devices exist:
Run
# ls /dev/tmssa*
on each node. Note that each node has target mode SSA devices called
/dev/tmssa#.im and /dev/tmssa#.tm where # refers to the other node's node number.
5. Test the target mode connection:
Enter the following command on the node with id 1 (make sure you specify the tm suffix
and not the im suffix):
# cat < /dev/tmssa2.tm
(This command should hang)
On the node with ID 2, enter the following command (make sure that you specify the im
suffix and not the tm suffix):
# cat /etc/hosts > /dev/tmssa1.im
(The /etc/hosts file should be displayed on the first node)
This validates that the target mode serial network is functional. Please note that any
text file may be substituted for /etc/hosts and you have to specify different tmssa
device names if you configured different SSA node numbers for each node. This is
simply an example.
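The cat test works because the .tm and .im sides behave like the two ends of a pipe: the reader blocks until a writer on the other node sends data. The same pattern can be seen with a named pipe on any UNIX system (purely an analogy, no SSA hardware involved):

```shell
# Illustrate the blocking reader/writer pattern behind the tmssa
# cat test, using a named pipe in place of the /dev/tmssa*.tm and
# /dev/tmssa*.im device pair.
PIPE=/tmp/tmssa_demo
rm -f "$PIPE" /tmp/tmssa_demo.out
mkfifo "$PIPE"

# "Node 1": the reader blocks, like cat < /dev/tmssa2.tm
cat < "$PIPE" > /tmp/tmssa_demo.out &

# "Node 2": the writer sends data, like cat /etc/hosts > /dev/tmssa1.im
echo "hello from node 2" > "$PIPE"
wait
cat /tmp/tmssa_demo.out
rm -f "$PIPE"
```

The background cat sits blocked on the open pipe until the echo supplies data, just as the cat on node 1 hangs until node 2 writes to its .im device.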


Rediscover the HACMP Information


Next, we need to get HACMP to know about the new communication
devices so we run the auto-discovery procedure again on one of the nodes.

Extended Configuration
Move cursor to desired item and press Enter.
Discover HACMP-related Information from Configured Nodes
Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Extended Verification and Synchronization

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-32. Rediscover the HACMP Information AU546.0

Notes:

HACMP discover
By discovering the new devices, they will appear in SMIT pick lists when we configure
the tmssa non-IP network. Strictly speaking, it is not necessary to rerun the HACMP
discovery as it is possible to configure tmssa networks by entering in the tmssa device
names explicitly. As this is a rather error-prone process, it is probably best to use the
HACMP discovery mechanism to discover the devices for us.

Defining a Non-IP tmssa Network (1 of 3)
This should look very familiar as it is the same procedure that was
used to define the non-IP rs232 network earlier.
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
Select a category

Move cursor to desired item and press Enter.

Add Discovered Communication Interface and Devices
Add Predefined Communication Interfaces and Devices

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-33. Defining a Non-IP tmssa Network (1 of 3) AU546.0

Notes:

Defining a non-IP tmssa network


The procedure for defining a non-IP tmssa network is pretty much identical to the
procedure used earlier to define the non-IP rs232 network.


Defining a Non-IP tmssa Network (2 of 3)


Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
Select a category

Move cursor to desired item and press Enter.

# Discovery last performed: (Feb 12 18:20)
Communication Interfaces
Communication Devices

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-34. Defining a Non-IP tmssa Network (2 of 3) AU546.0

Notes:

Defining a Non-IP tmssa Network (3 of 3)
Now, we need to define the tmssa network using the same process as before.
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
+--------------------------------------------------------------------------+
Select Point-to-Point Pair of Discovered Communication Devices to Add

Move cursor to desired item and press F7. Use arrow keys to scroll.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

# Node Device Device Path Pvid
> hudson tmssa1 /dev/tmssa1
> bondar tmssa2 /dev/tmssa2
bondar tty0 /dev/tty0
hudson tty0 /dev/tty0
bondar tty1 /dev/tty1
hudson tty1 /dev/tty1

F1=Help F2=Refresh F3=Cancel
F7=Select F8=Image F10=Exit
F1 Enter=Do /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-35. Defining a Non-IP tmssa Network (3 of 3) AU546.0

Notes:

Final step
Select the tmssa devices on each node and press Enter to define the network.
Refer to Chapter 13 of the HACMP v5.3 Planning and Installation Guide for information
on configuring all supported types of non-IP networks.


Synchronize Your Changes


Synchronize the changes and run through the test plan
HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-36. Synchronize Your Changes AU546.0

Notes:

Removing a Cluster
Use Extended Topology Configuration

Configure an HACMP Cluster

Move cursor to desired item and press Enter.

Add/Change/Show an HACMP Cluster


Remove an HACMP Cluster
Reset Cluster Tunables

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Make /usr/es/sbin/cluster/etc/rhosts a null file:


cat /dev/null > /usr/es/sbin/cluster/etc/rhosts

Figure 7-37. Removing a Cluster AU546.0

Notes:

Before you remove the cluster


For our final scenario, we'll remove the cluster.
The resource groups must be offline first.
Also, you may want to take a snapshot before you remove the cluster - just in case you
discover you need to go back. We will discuss cluster snapshots later in this unit.

Removing the cluster


Use the SMIT panel shown in the visual. The SMIT path to this menu is:
smitty hacmp -> Extended Configuration -> Extended Topology Configuration
-> Configure an HACMP Cluster
Finally, you should clear the /usr/es/sbin/cluster/etc/rhosts file.


Let's Review: Topic 1


1. True or False?
It is impossible to add a node while HACMP is running.
2. Which of the following are not supported by HACMP 5.1? (select all that apply)
a. Cascading resource group with IPAT via IP aliasing.
b. Custom resource group with IPAT via IP replacement.
c. HWAT in a resource group which uses IPAT via IP aliasing.
d. HWAT in a custom resource group.
e. More than three custom resource groups in a two node cluster.
3. Which of the following sequences of steps implement HWAT in a cluster
currently using custom resource groups?
a. Delete custom RGs, define cascading RGs, places resources in new RGs,
disable IPAT via IP aliasing on network, delete old service IP labels, define
new service IP labels, synchronize
b. Delete custom RGs, define cascading RGs, places resources in new RGs,
delete old service IP labels, disable IPAT via IP aliasing on network, define
new service IP labels, synchronize
c. Delete custom RGs, disable IPAT via IP aliasing on network, delete old
service IP labels, define new service IP labels, define cascading RGs,
places resources in new RGs, synchronize
d. Delete custom RGs, delete old service IP labels, disable IPAT via IP
aliasing on network, define new service IP labels, define cascading RGs,
places resources in new RGs, synchronize


Figure 7-38. Let's Review: Topic 1 AU546.0

Notes:

7.2 Cluster Single Point of Control


Topic 2 Objectives:
Cluster Single Point of Control
After completing this topic, you should be able to:
Discuss the need for change management when using
HACMP
Describe the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
Start and stop cluster services
Perform resource group move operations


Figure 7-39. Topic 2 Objectives: Cluster Single Point of Control AU546.0

Notes:

Administering a High Availability Cluster
Administering an HA cluster is different from administering a
stand-alone server:
Changes made to one node need to be reflected on the
other node
Poorly considered changes can have far reaching
implications
Beware the law of unintended consequences
Aspects of the cluster's configuration could be quite
subtle and yet critical
Scheduling downtime to install and test changes can be
challenging
Saying oops while sitting at a cluster console could get
you fired!


Figure 7-40. Administering a High Availability Cluster AU546.0

Notes:

Introduction
You must develop good change management procedures for managing an HACMP
cluster. As you will see, C-SPOC utilities can be used to help, but do not do the job by
themselves. Having well documented and tested procedures to follow, as well as
restricting who can make changes (for example, you should not have more than two or
three persons with root privileges), minimizes loss of availability when making changes.
The snapshot utility should be used before any change is made.


Recommendations
Implement and adhere to a change control/management
process
Wherever possible, use HACMP's C-SPOC facility to make
changes to the cluster (details to follow)
Document routine operational procedures in a step-by-step
list fashion (for example, shutdown, startup, increasing size
of a filesystem)
Restrict access to the root password to trained High
Availability cluster administrators
Always take a snapshot (explained later) of your existing
configuration before making a change


Figure 7-41. Recommendations AU546.0

Notes:

Some beginning recommendations


These recommendations should probably be considered to be the minimum acceptable
level of cluster administration. There are additional measures and issues which should
probably be carefully considered (for example, problem escalation procedures should
be documented, and both hardware and software support contracts should either be
kept current or a procedure developed for authorizing the purchase of time and
materials support during off hours should an emergency arise).


Change Control or Change Management
A real change control or management process requires a
serious commitment on the part of the entire organization:
Every change must be carefully considered
The onus should be on the requester of the change to
demonstrate that it is necessary
Not on the cluster administrators to demonstrate that it is unwise.
Management must support the process
Defend cluster administrators against unreasonable requests or
pressure
Not allow politics to affect a change's priority or schedule
Every change, even the minor ones, must follow the
process
The cluster administrators must not sneak changes past the process
The notion that a change might be permitted without following the
process must be considered to be absurd

The alternative is that the process rapidly becomes a farce



Figure 7-42. Change Control or Change Management AU546.0

Notes:

Introduction
The visual describes some of the serious commitments that are required to successfully
implement a highly available cluster over time.
Many organizations have little or no experience with the sort of long-term commitment
and discipline which is required to successfully manage a highly available cluster over
the long term. The temptation to slip a little change through might seem to be worth the
risk (it almost certainly is not worth the risk) and political considerations can make it
very difficult to withstand pressure from certain interest groups for IMMEDIATE
CHANGES.


Choices
There is a choice to be made:
- There is a real long-term commitment by all parties to the change control process (or
the management backbone to stand up to those who lack the commitment).
This path has a strong likelihood of leading to a stable cluster with a satisfied user
community.
- Without a real long-term commitment, the change control process rapidly becomes
a farce.
This second path leads to disaster (the question is when, not if, disaster will strike).


Change Considerations
Every change must be carefully considered:
Is the change necessary?
How urgent is the change?
How important is the change? (not the same thing as urgent)
What impact does the change have on other aspects of the
cluster?
What is the impact if the change is not allowed to occur?
Are all of the steps required to implement the change clearly
understood and documented?
How is the change to be tested?
What is the plan for backing out the change if necessary?
Is the appropriate expertise available should problems develop?
When is the change scheduled?
Have the users been notified?
Does the maintenance period include sufficient time for a full set
of backups prior to the change and sufficient time for a full
restore afterwards should the change fail testing?

Figure 7-43. Change Considerations AU546.0

Notes:

Introduction
Develop a process which encompasses at least the above set of issues. This process
may and probably should include a form which requires appropriate approvals before
the change can go ahead.

Maintenance window
The point about requiring sufficient time during the scheduled outage window for a full
backup, the change, the testing and a full restore may seem a little extreme. Suffice it to
say that there are real customer organizations out there that consider the alternative to
be utter lunacy.


Masking or Eliminating Planned Downtime


Elimination of Downtime

Continuous
Availability

Continuous High
Operations Availability

Masking or elimination of Masking or elimination of


planned downtime through unplanned downtime
change management

Figure 7-44. Masking or Eliminating Planned Downtime AU546.0

Notes:

Introduction
As you can see, HACMP addresses two aspects of availability: unplanned downtime
and planned downtime.

Planned downtime
Planned downtime is managed using one of the following three methods:
- C-SPOC --
for AIX definition changes (such as users, disk, LVM) which have to be
made/modified on one node and then propagated to the other node
- C-SPOC: Resource Group and Application management --
for manually moving a resource group offline/online or to another node


- Synchronization/DARE --
for HACMP definition (configuration) changes where the HACMP ODM files must be
updated on all the nodes
In this topic, we address the first two methods. DARE is addressed in a later topic in this
unit.
The end result is overall continuous availability.
Without C-SPOC functionality, the system administrator must spend time executing
administrative tasks individually on each cluster node. For example, to add a user you
usually must perform this task on each node. With C-SPOC, a command executed on
one node is also executed on the other nodes. Thus, the use of C-SPOC minimizes
administrative overhead (repetition) and reduces the possibility of inconsistent node
states.
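The repetition that C-SPOC eliminates can be pictured as a per-node loop. This is a sketch only: the node names and the mkuser arguments are taken from the course examples, and echo stands in for the remote execution that C-SPOC actually performs for you.

```shell
# Without C-SPOC, every administrative change must be repeated on each node.
# Node names and the mkuser call are examples; echo stands in for remote execution.
for node in bondar hudson; do
    echo "on $node: mkuser id=500 home=/home/danny danny"
done
```

With C-SPOC, the equivalent change is entered once and propagated to the other nodes automatically.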

Unplanned downtime
Unplanned downtime is addressed via HACMP monitoring and resource group fallover
processing.


Cluster Single Point of Control (C-SPOC)


C-SPOC provides facilities for performing common cluster-wide
administration tasks from any node within the cluster.
HACMP 4.x requires either /.rhosts or kerberos to be
configured on all nodes
HACMP 5.x uses the clcomdES socket based subsystem.
C-SPOC operations fail if any target node is down at the time of
execution or selected resource is not available
Any change to a shared VGDA is synchronized automatically if
C-SPOC is used to change a shared LVM component
C-SPOC uses a script parser called the command execution
language
Target Target
node node

Initiating
node

Target Target
node node


Figure 7-45. Cluster Single Point of Control (C-SPOC) AU546.0

Notes:

C-SPOC command execution


C-SPOC commands first execute on the initiating node. Then the AIX rsh command is
used to propagate the command to the target nodes.

HACMP 4.X
In order to use rsh, you must create the /.rhosts file, adding the cluster IP labels for all
cluster nodes. If Kerberos has been implemented, the /.rhosts file is not required.
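For HACMP 4.x, a minimal /.rhosts granting root access between the course's example nodes might look like the fragment below. The labels are assumptions; list every cluster IP label used for C-SPOC communication, and keep the file owned by root with restrictive permissions.

```
bondar root
hudson root
```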

HACMP 5.X
The clcomdES subsystem provides secure communication between nodes; /.rhosts or
Kerberos is no longer required. This daemon provides secure communication between
cluster nodes for all cluster utilities such as verification and synchronization and system


management (C-SPOC). The clcomd daemon is started automatically at boot time by
the init process.

More details
All the nodes in the resource group must be available or the C-SPOC command will not
be successful on any node (all or none).
As you saw in the LVM unit, LVM changes, if made through C-SPOC, are synchronized
automatically.

C-SPOC command line


C-SPOC commands can be executed from the command line (or through SMIT, of
course).
Error messages and warnings returned by the commands are based on the underlying
AIX-related commands.
Appendix C: HACMP for AIX 5L Commands in the HACMP for AIX Administration
Guide provides a list of all C-SPOC commands provided with the HACMP for AIX 5L
software.

Command Execution Language (CEL)


C-SPOC commands are written as execution plans in CEL. Each plan contains
constructs to handle one or more underlying AIX 5L tasks (a command, executable, or
script) with a minimum of user input.
An execution plan becomes a C-SPOC command when the
/usr/es/sbin/cluster/utilities/celpp utility converts it into a cluster-aware ksh
script, meaning the script uses the C-SPOC distributed mechanism (the C-SPOC
Execution Engine) to execute the underlying AIX 5L commands on cluster nodes to
complete the defined tasks.
CEL is a programming language that lets you integrate dsh's distributed functionality
into each C-SPOC script the CEL preprocessor (celpp) generates. When you invoke a
C-SPOC script from a single cluster node to perform an administrative task, the script is
automatically executed on all nodes in the cluster. The language is described further in
Appendix E of the HACMP for AIX Administration and Troubleshooting Guide.


The Top-Level C-SPOC Menu

System Management (C-SPOC)

Move cursor to desired item and press Enter.

Manage HACMP Services


HACMP Communication Interface Management
HACMP Resource Group and Application Management
HACMP Log Viewing and Management
HACMP File Collection Management
HACMP Security and Users Management
HACMP Logical Volume Management
HACMP Concurrent Logical Volume Management
HACMP Physical Volume Management

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-46. The Top-Level C-SPOC Menu AU546.0

Notes:

Top-level C-SPOC menu


The top-level C-SPOC menu is one of the four top-level HACMP menus.
C-SPOC scripts are used for Users, LVM, CLVM, and Physical Volume Management.
RGmove is used for Resource Group management.
The other functions are included here as a logical place to put these system
management facilities.


Adding a User to the Cluster (1 of 2)
Add a User to the Cluster

Type or select a value for the entry field.


Press Enter AFTER making all desired changes.

[Entry Fields]
Select nodes by Resource Group [] +
*** No selection means all nodes! ***

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-47. Adding a User to the Cluster (1 of 2) AU546.0

Notes:

User management
For users of an HACMP for AIX cluster, system administrators must create duplicate
accounts on each cluster node. The user account information stored in the /etc/passwd
file and in other files stored in the /etc/security directory should be consistent on all
cluster nodes. For example, if a cluster node fails, users should be able to log on to the
surviving nodes without experiencing problems caused by mismatches in the user or
group IDs.
System administrators typically keep user accounts synchronized across cluster nodes
by copying the key system account and security files to all cluster nodes whenever a
new account is created or an existing account is changed. The C-SPOC utility simplifies
the cluster-wide synchronization of user accounts by propagating the new account or
changes to an existing account across all cluster nodes automatically.
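One hedged way to spot UID/GID drift before it causes problems at fallover time is to compare the password files from each node. The sketch below assumes each node's /etc/passwd has already been copied locally as passwd.bondar and passwd.hudson (hypothetical file names):

```shell
# Compare user name, UID, and GID fields across two nodes' passwd files.
# passwd.bondar and passwd.hudson are local copies (names are examples).
awk -F: '{ print $1 ":" $3 ":" $4 }' passwd.bondar | sort > uids.bondar
awk -F: '{ print $1 ":" $3 ":" $4 }' passwd.hudson | sort > uids.hudson
if diff uids.bondar uids.hudson > /dev/null; then
    echo "user/UID/GID entries match"
else
    echo "MISMATCH: resolve before the next fallover"
    diff uids.bondar uids.hudson
fi
```

A mismatch here is exactly the kind of inconsistency that breaks file ownership when a resource group moves to the surviving node.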


The SMIT screens for managing users are found under HACMP Security and Users
Management in the top-level C-SPOC menu.

Add a User to the Cluster (1 of 2)


User operations are performed on a resource group basis. To add a user, all the nodes
in the resource group must be available.
If you do not select a resource group, then all cluster nodes will be affected.
If all of the nodes in the cluster are in the resource group then this is the same as all
nodes.


Adding a User to the Cluster (2 of 2)
Add a User to the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Select nodes by Resource Group adventure
*** No selection means all nodes! ***

* User NAME [danny]


User ID [500] #
ADMINISTRATIVE USER? false +
Primary GROUP [] +
Group SET [] +
ADMINISTRATIVE GROUPS [] +
Another user can SU TO USER? true +
SU GROUPS [ALL] +
HOME directory [/home/danny]
Initial PROGRAM []
[MORE...34]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-48. Adding a User to the Cluster (2 of 2) AU546.0

Notes:

Add a User to the Cluster (2 of 2)


The second screen allows you to define the attributes for the new user. All of the usual
user attributes are available.


Removing a User from the Cluster


Remove a User from the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Select nodes by Resource Group
*** No selection means all nodes! ***

* User NAME [paul] +


Remove AUTHENTICATION information? Yes +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-49. Removing a User from the Cluster AU546.0

Notes:

Remove a User from the Cluster


In this menu, you first indicate the set of nodes for a resource group (or all nodes). All
the nodes must be available. Then, the User Name and Remove AUTHENTICATION
information lines appear.

AUTHENTICATION information
Remove AUTHENTICATION information indicates if the system should delete the user's
password and other user authentication information from the /etc/security/passwd file.
To change this value, use the Tab key to toggle the Yes/No values.


Passwords in an HACMP Cluster
Passwords in an HACMP cluster

Move cursor to desired item and press Enter.

Change a User's Password in the Cluster


HACMP 5.2-- Change Current Users Password
HACMP 5.2-- Manage List of Users Allowed to Change Password
HACMP 5.2-- Modify System Password Utility

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do


Figure 7-50. Passwords in an HACMP Cluster AU546.0

Notes:

Introduction
C-SPOC provides a utility to change user passwords across the cluster. As usual when
managing passwords, there are a number of considerations.

NIS or DCE
If you manage user accounts with a utility such as Network Information Service (NIS) or
Distributed Computing Environment (DCE) Manager, do not use HACMP user
management. Using HACMP user management in this environment might cause
serious system inconsistencies in the database.


Security
The security of the password propagated to other nodes is only as secure as the
network used to distribute the password. (In other words, the passwords are sent in
clear text.)

User passwords, prior to HACMP V5.2


Prior to HACMP V5.2, no mechanism was provided for keeping passwords in sync
across the cluster or across the nodes in a resource group other than this screen. If
users are changing their passwords via some mechanism other than NIS+ or an
equivalent central registry manager, the cluster administrator has to implement a way to
keep the passwords in sync.

HACMP V5.2 and later:


Allowing users to change their own passwords across the cluster
In HACMP version 5.2 and up, system administrators can enable the new Cluster
Password (clpasswd) utility. This utility, when enabled, lets authorized users change
their own password across a resource group or cluster (as configured), rather than
having to change their password on each node in the cluster.

Modify System Password Utility


This menu lets you configure the system password utility on cluster nodes. You can
either:
- Set /bin/passwd to be the standard AIX command
- Link /bin/passwd to clpasswd, a new command that allows cluster-wide or
resource-group-wide password changes
Like the other user management C-SPOC commands, you select which nodes by
resource group.

Manage List of Users Allowed to Change Password


This menu lets you select which users are allowed to change their password across the
cluster. You can select ALL_USERS, or a set of individual users. By default the list is
empty (only root is allowed).

Change Current Users Password


Authorized users can change their password using:
- The C-SPOC SMIT menu: Change Current Users Password
- The clpasswd command from the command line


Both of these call the AIX 5L passwd command. The clpasswd command uses the
same arguments as the passwd command. For more information about the clpasswd
command, see its man page.
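Since clpasswd takes the same arguments as passwd, its command-line use is simple. The dry-run below just prints the calls rather than executing them; the user name is an example, and whether a call changes the password cluster-wide depends on the authorization described under "How it works":

```shell
# Dry-run: what an authorized user (or root) would type.
# plan() prints the command instead of running it.
plan() { echo "PLAN: $*"; }
plan clpasswd            # change your own password across the cluster
plan clpasswd danny      # root changing user danny's password (example name)
```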

How it works
The table that follows shows the expected behavior for non-root users running either
the clpasswd command, the AIX passwd command, or the AIX passwd command when
linked to clpasswd.

Authorized?   clpasswd is run, or the       AIX passwd is run        /bin/passwd linked
              "Change Current Users         (/bin/passwd not         to clpasswd
              Password" SMIT screen         linked to clpasswd)
              is used
-----------   ---------------------------   ----------------------   ---------------------
Yes           The password is changed       The password is          The password is
              on all cluster nodes.         changed only on the      changed on all
                                            local node.              cluster nodes.
No            Password not changed.         The password is          The password is
                                            changed only on the      changed only on the
                                            local node.              local node.


Adding a Physical Disk to a Cluster

Add an SSA Logical Disk

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Name(s) to which disk is attached bondar,hudson +
Device type disk +
Disk Type hdisk
Disk interface ssar
Description SSA Logical Disk Drive
Parent ssar
* CONNECTION address [] +
Location Label []
ASSIGN physical volume identifier yes +
RESERVE disk on open yes +
Queue depth [] +
Maximum Coalesce [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-51. Adding a Physical Disk to a Cluster AU546.0

Notes:

Adding a disk definition to cluster nodes using C-SPOC


The nodes must be configured as part of the cluster. Only SSA and SCSI disks are
currently supported for this utility.
Take the following steps to add a raw disk on selected cluster nodes:
1) From the top-level C-SPOC screen:
HACMP Physical Volume Management -> Add a Disk to the Cluster
2) SMIT displays a list of nodes in the cluster and prompts you to select the
nodes where the disk definition should be added.
3) Select one or more nodes where you want to have the new disk configured.
The system generates a list of available disk types based on those disk types
known to the first node in the list (above).
4) Select the type of disk you want to add to the cluster. The set of screens that
follow depend on the disk type selected.


Managing Shared LVM Components

HACMP Logical Volume Management

Move cursor to desired item and press Enter.

Shared Volume Groups


Shared Logical Volumes
Shared File Systems
Synchronize Shared LVM Mirrors
Synchronize a Shared Volume Group Definition

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-52. Managing Shared LVM Components AU546.0

Notes:

Introduction
This is the menu for using C-SPOC to perform LVM change management and
synchronization. As was mentioned in the LVM unit, you can make changes in AIX
directly and then synchronize manually, or, if you make the changes using the C-SPOC
utilities, the synchronization is automatic.

C-SPOC simplifies the process


Once you've configured the cluster's topology and added a resource group, you can
configure your shared disks using this part of the C-SPOC hierarchy (available directly
from the top level C-SPOC SMIT menu). You will generally find that shared disk
configuration and maintenance is considerably easier and less prone to errors if you
use C-SPOC for this work.


How it works
Once you create a shared volume group, you must rerun the discovery mechanism
(refer to the top-level menu in the Extended Configuration path) to get HACMP to know
about the volume group. You must then add the volume group to a resource group
before you can use C-SPOC to add shared logical volumes or filesystems.

Synchronization
Note that you only need to add the volume group to a resource group using SMIT from
one of the cluster nodes, and then you can start working with C-SPOC from the same
node. You do not need to synchronize the cluster between adding the volume group to a
resource group and working with it using C-SPOC unless you want to use C-SPOC
from some other node. Keep in mind that the volume group is not really a part of the
resource group until you synchronize the addition of the volume group to the resource
group.

Volume group must be online on at least one node


C-SPOC generally refuses to work with a volume group which is not online on at least
one cluster node (the qualifier "at least" appears in this sentence because concurrent
volume groups can be online on more than one node at a time).


Creating a Shared Volume Group
Create a Shared Volume Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Names bondar,hudson
PVID 00055207bbf6edab 0000>
VOLUME GROUP name [bernhardvg]
Physical partition SIZE in megabytes 64 +
Volume group MAJOR NUMBER [207] #
Enable Cross-Site LVM Mirroring Verification false +
HACMP 5.2--
Warning :
Changing the volume group major number may result
in the command being unable to execute
successfully on a node that does not have the
major number currently available. Please check
for a commonly available major number on all nodes
before changing this setting.

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-53. Creating a Shared Volume Group AU546.0

Notes:

Creating a shared volume group


You can use C-SPOC to create a volume group but be aware that you must then add
the volume group name to a resource group and synchronize. This is one case of using
C-SPOC where synchronization is not automatic.
Before creating a shared volume group for the cluster using C-SPOC check that:
- All disk devices are properly attached to the cluster nodes
- All disk devices are properly configured on all cluster nodes and the device is listed
as available on all nodes
- Disks have a PVID
(C-SPOC lists the disks by their PVIDs. This ensures that we are using the same
disk on all nodes, even if the hdisk names are not consistent across the nodes.
However, for your sanity, make sure that hdisk names ARE consistent!)
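A hedged way to check the PVID prerequisite from the command line: lspv prints "none" in its second column for a disk with no PVID. The sketch below parses a captured listing rather than running lspv itself; the file name lspv.out and the disk names are assumptions.

```shell
# Flag disks that lack a PVID; on AIX you would assign one with:
#   chdev -l hdiskN -a pv=yes
awk '$2 == "none" { print $1 ": no PVID; run chdev -l " $1 " -a pv=yes" }' lspv.out
```

Run this against each node's lspv output before creating the shared volume group, so C-SPOC can match the disks by PVID.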


Discover, Add VG to a Resource Group


Extended Configuration

Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured Nodes


Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets

Extended Verification and Synchronization


HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do


Figure 7-54. Discover, Add VG to Resource Group AU546.0

Notes:

Discover and add VG to resource group


After creating a volume group, you must discover it so that the new volume group will be
available in pick lists for future actions, like adding it to a resource group, and so forth.
You must use the Extended Configuration menu for both of these actions.


Creating a Shared File System (1 of 2)
First create mirrored logical volumes for the filesystem and jfslog. Do not forget to
logform the jfslog logical volume.

Add a Shared Logical Volume

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Resource Group Name adventure
VOLUME GROUP name bernhardvg
Reference node
* Number of LOGICAL PARTITIONS [200] #
PHYSICAL VOLUME names
Logical volume NAME [norbertfs]
Logical volume TYPE [jfs]
POSITION on physical volume middle +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
to use for allocation
Number of COPIES of each logical 2 +
partition
[MORE...11]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

The volume group must be online somewhere and listed in a resource group or it does not appear in
the pop-up list

Figure 7-55. Creating a Shared File System (1 of 2) AU546.0

Notes:

Creating a shared file system using C-SPOC


It is generally preferable to control the names of all of your logical volumes.
Consequently, it is generally best to explicitly create a logical volume for the file system.
If the volume group does not already have a JFS log, then you must also explicitly
create a logical volume for the JFS log and format it with logform.
However, C-SPOC allows you to add a journaled file system to either:
- A shared volume group (no previously defined cluster logical volume)
SMIT checks the list of nodes that can own the resource group that contains the
volume group, creates the logical volume (on an existing log logical volume if
present, otherwise it creates a new log logical volume) and adds the file system to
the node where the volume group is varied on. All other nodes in the resource group
run an importvg -L.


- A previously defined cluster logical volume (on a shared volume group)


SMIT checks the list of nodes that can own the resource group which contains the
volume group where the logical volume is located. It adds the file system to the node
where the volume group is varied on. All other nodes in the resource group will run
an importvg -L.
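The explicit route the text recommends (create the log and data logical volumes yourself, then the filesystem) can be sketched as a dry-run plan. The LV names, sizes, and mount point come from the figures, except the log LV name bernhardlog, which is a hypothetical example; plan() prints each AIX command instead of executing it.

```shell
# Dry-run of the manual steps behind the C-SPOC screens.
plan() { echo "PLAN: $*"; }
plan mklv -t jfslog -y bernhardlog bernhardvg 1     # log LV (name is an assumption)
plan logform /dev/bernhardlog                       # format the JFS log
plan mklv -t jfs -c 2 -y norbertfs bernhardvg 200   # mirrored data LV (Figure 7-55)
plan crfs -v jfs -d norbertfs -m /norbert -A no     # filesystem (Figure 7-56)
```

Doing it through C-SPOC runs the equivalent commands on the node where the volume group is varied on, then updates the other resource group nodes for you.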

7-72 HACMP Implementation Copyright IBM Corp. 1998, 2005


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V3.1.0.1
Student Notebook

Creating a Shared File System (2 of 2)
Then create the filesystem in the now "previously defined logical volume"

Add a Standard Journaled File System

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Names bondar,hudson
LOGICAL VOLUME name norbertfs
* MOUNT POINT [/norbert]
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 4096 +
Number of bytes per inode 4096 +
Allocation Group Size (MBytes) 8 +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-56. Creating a Shared File System (2 of 2) AU546.0

Notes:

Creating a shared file system, step 2


Once you've created the logical volume, create a file system on it.


LVM Change Management


Historically, lack of LVM change management has been a
major cause of cluster failure during fallover. There are
several methods available to ensure LVM changes are
correctly synced across the cluster.
Manual updates to each node to synchronize the ODM records
Lazy update
C-SPOC synchronization of ODM records
C-SPOC LVM operations - cluster enabled equivalents of the
standard SMIT LVM functions
RSCT for Enhanced Concurrent Volume Groups

VGDA = ODM


Figure 7-57. LVM Change Management AU546.0

Notes:

The importance of LVM change management


LVM change management is critical for successful takeover in the event of a node
failure.
Information regarding LVM constructs is held in a number of different locations:
- physical disks: VGDA, LVCB
- AIX files: primarily the ODM, but also /usr/sbin/cluster/etc/vg, files in the /dev
directory and /etc/filesystems
- physical RAM: kernel memory space
This information must be kept in sync on all nodes which may access the shared
volume group in order for takeover to work.


How to keep LVM synchronized across the cluster


There are a number of ways to ensure this information is kept in sync:
1) Manual update
2) Lazy Update
3) C-SPOC VG synchronization utility
4) C-SPOC LVM operations
5) RSCT (for enhanced concurrent mode volume groups)


LVM Changes, Manual


To perform manual changes the Volume Group must be
varied on to one of the nodes
1. Make necessary changes to the volume group or filesystem
2. Unmount filesystems and varyoff the vg
On all the other nodes that share the volume group
1. Export the volume group from the ODM
2. Import the information from the VGDA
3. Change the auto vary on flag
4. Correct the permissions and ownerships on the logical
volumes as required
5. Repeat to all other nodes
#chfs -a size=+8192 /sharedfs
#unmount /sharedfs
#varyoffvg sharedvg

#
#importvg -V123 -L sharedvg hdisk3
#chvg -an sharedvg
#varyoffvg sharedvg


Figure 7-58. LVM Changes, Manual AU546.0

Notes:
After making a change to an LVM component such as increasing the size of a file system
as shown in the figure, you must propagate the change to the other nodes in the cluster
which are sharing the volume group, using the steps above. Make sure that auto-activation
is turned off (chvg -an sharedvg) after the importvg command is executed, since the
cluster manager controls the use of the varyonvg command on the node where the
volume group should be varied on.
Other than the sheer complexity of this procedure, the real problem with it is that it requires
that the resource group be down while the procedure is being carried out.
Fortunately, there are better ways...
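The manual procedure can be condensed into a script-style sketch. This is illustrative only: the names (sharedvg, /sharedfs, hdisk3) and the major number 123 are taken from the figure, and since the commands are AIX-specific, the sketch simply prints what it would run (a dry run) rather than executing anything.

```shell
# Dry-run sketch of the manual LVM change-propagation procedure.
# Names (sharedvg, /sharedfs, hdisk3, major number 123) come from the figure.
VG=sharedvg; FS=/sharedfs; DISK=hdisk3; MAJOR=123

run() { echo "$@"; }   # print instead of execute; a real cluster would run these

# On the node where the volume group is varied on:
run chfs -a size=+8192 "$FS"                # make the LVM change
run umount "$FS"                            # unmount the file system
run varyoffvg "$VG"                         # release the volume group

# Then, on every other node sharing the volume group:
run exportvg "$VG"                          # remove the stale ODM definition
run importvg -V "$MAJOR" -y "$VG" "$DISK"   # re-import from the VGDA
run chvg -an "$VG"                          # HACMP, not AIX, controls activation
run varyoffvg "$VG"                         # leave it offline until HACMP needs it
```

On a real cluster, the logical volume permissions and ownerships would also need to be corrected after the import, as step 4 in the figure notes.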

LVM Changes, Lazy Update
At fallover time, lazy update compares the time stamp value
in the VGDA with one stored in
/usr/sbin/cluster/etc/vg/<vgname>. If the time stamps are
the same, then the varyonvg proceeds.
If the timestamps do not agree, then HACMP does the
export/import cycle similar to a manual update.
NOTE: HACMP does change the VG auto vary on flag
AND it preserves permissions and ownership of the logical
volumes.

[Figure: two clock faces, representing the saved and current VGDA timestamps]


Figure 7-59. LVM Changes, Lazy Update AU546.0

Notes:
HACMP has a facility called Lazy Update that it uses to ensure LVM changes are updated
during a fallover.
This works by HACMP keeping a copy of the timestamp from the volume group's VGDA.
AIX updates this timestamp whenever the LVM component is modified. When another
cluster node attempts to vary on the volume group, HACMP for AIX compares its copy of
the timestamp (kept in a file under /usr/sbin/cluster/etc/vg) with the timestamp in the VGDA on
the disk. If the values are different, the HACMP for AIX software exports and re-imports the
volume group before activating it. If the timestamps are the same, HACMP for AIX
activates the volume group without exporting and re-importing. The time needed for
takeover expands by a few minutes if a Lazy Update occurs. A Lazy Update is always
performed the first time a takeover occurs in order to create the timestamp file on the
takeover node.
This method requires no downtime although, as indicated above, it does increase the
fallover time somewhat for the first fallover after the LVM change was made.
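The decision lazy update makes can be illustrated with ordinary files. This is a portable sketch of the idea only, not HACMP code; the paths under /tmp stand in for the timestamp in the VGDA and HACMP's saved copy in /usr/sbin/cluster/etc/vg/<vgname>.

```shell
# Sketch of the lazy-update check: act only when the saved timestamp copy
# differs from (or does not yet exist for) the current one.
DEMO=/tmp/lazy_update_demo
SAVED="$DEMO/saved_timestamp"           # stand-in for /usr/sbin/cluster/etc/vg/<vgname>
CURRENT="$DEMO/vgda_timestamp"          # stand-in for the timestamp in the VGDA
mkdir -p "$DEMO"
echo "2005-10-01.12:00:00" > "$CURRENT"
echo "2005-09-30.08:15:00" > "$SAVED"   # stale copy from before the LVM change

if [ -f "$SAVED" ] && cmp -s "$SAVED" "$CURRENT"; then
    action="varyonvg"                           # timestamps match: just activate
else
    action="exportvg + importvg, then varyonvg" # differ or first takeover: rebuild ODM
    cp "$CURRENT" "$SAVED"                      # refresh the saved copy
fi
echo "$action"
```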


LVM Changes, C-SPOC Synchronization


Manually make your change to the LVM on one node
Use C-SPOC to propagate the changes to all nodes in the
resource group
This is only available for volume groups that are inactive
everywhere in the cluster (not VARYed on)
Downtime is experienced for the volume group

[Figure: update the VG constructs on one node; C-SPOC updates the ODM on the
other nodes using syncvg and the time stamp file]


Figure 7-60. LVM Changes, C-SPOC Synchronization AU546.0

Notes:

Using C-SPOC to synchronize manual LVM changes


In this method, you manually make your change to the LVM on one node and then
invoke C-SPOC to propagate the change.
Note: The volume group must be inactive everywhere in the cluster, so downtime will be
experienced for this application.
This facility is accessed by using the following SMIT path in HACMP:
smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume
Management --> Synchronize a Shared Volume Group Definition
This approach requires downtime, but is relatively fast compared to the manual
approach.

The Best Method: C-SPOC LVM Changes

Journaled File Systems

Move cursor to desired item and press Enter.

Add a Journaled File System


Add a Journaled File System on a Previously Defined Logical Volume
List All Shared File Systems
Change / Show Characteristics of a Shared File System
Remove a Shared File System

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-61. The Best Method: C-SPOC LVM Changes AU546.0

Notes:
You can use C-SPOC to both make the change and to distribute the change.
This approach has two major advantages: no downtime is required and you can be
confident that the nodes are in sync. It might take a little longer to run than a plain chfs
invocation, but it is well worth the wait.
Other C-SPOC screens exist for pretty much any operation that you are likely to want to do
with a shared volume group.


RSCT (for ECM Volume Groups)


RSCT updates LVM information automatically for enhanced
concurrent mode (ECM) volume groups
Happens immediately on all nodes running cluster
services
Nodes that are not running cluster services will be
updated when cluster services are started
Limitations
Only works on ECM volume groups
/etc/filesystems not updated


Figure 7-62. RSCT (for ECM Volume Groups) AU546.0

Notes:

RSCT as LVM change management


With enhanced concurrent mode (ECM) volume groups, RSCT will automatically
update the ODM on all the nodes which share the volume group when an LVM change
occurs on one node.
However, since it is limited to only ECM volume groups and since /etc/filesystems is
not updated, it's probably better to explicitly use C-SPOC to make LVM changes.

LVM Changes, Select Your Filesystem
Journaled File Systems

Move cursor to desired item and press Enter.

Add a Journaled File System


Add a Journaled File System on a Previously Defined Logical Volume
List All Shared File Systems
Change / Show Characteristics of a Shared File System
Remove a Shared File System

+--------------------------------------------------------------------------+
File System Name

Move cursor to desired item and press Enter.

# Resource Group File System
adventure /norbert
discovery /ron

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 7-63. LVM Changes, Select Your File system AU546.0

Notes:

Changing a shared file system using C-SPOC


We have to provide the name of the file system which we want to change. The file
system must be in a volume group which is currently online somewhere in the cluster
and is already configured into a resource group.


Update the Size of a Filesystem

Change/Show Characteristics of a Shared File System in the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name discovery
File system name /ron
NEW mount point [/ron]
SIZE of file system [4000000]
Mount GROUP []
Mount AUTOMATICALLY at system restart? no +
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 4096
Number of bytes per inode 4096
Compression algorithm no

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-64. Update the Size of a File system AU546.0

Notes:

Changing file system size


Specify a new file system size, in 512-byte blocks, and press Enter. The file system is
resized and the relevant LVM information is updated on all cluster nodes configured to
use the file system's volume group.
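Since the SIZE field is in 512-byte blocks, it is worth sanity-checking what a value converts to. Using the 4000000 shown in the visual:

```shell
# Convert a file system size given in 512-byte blocks to megabytes
blocks=4000000                   # the value shown in the SMIT panel
bytes=$((blocks * 512))
mb=$((bytes / 1024 / 1024))
echo "$blocks blocks = $mb MB"   # 4000000 blocks is roughly 1953 MB (~1.9 GB)
```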

Stopping Cluster Services

Stop Cluster Services

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [hanode1] +
BROADCAST cluster shutdown? true +
* Shutdown mode graceful +


Shutdown mode

Move cursor to desired item and press Enter.

graceful
takeover
forced

F1=Help F2=Refresh F3=Cancel
F1 F8=Image F10=Exit Enter=Do
F5 /=Find n=Find Next
F9


Figure 7-65. Stopping Cluster Services AU546.0

Notes:

Shutdown mode
When stopping cluster services, you need to tell HACMP exactly what you want it to do:
- Graceful
Local machine shuts itself down gracefully. Remote machine(s) interpret this as a
graceful down and DO NOT take over resources.
- Takeover
Local machine shuts itself down gracefully. Remote machine(s) interpret this as a
non-graceful down and TAKE OVER resources. This mode is useful for system
maintenance.
- Forced
Local machine shuts down cluster services without releasing any resources.
Remote machine(s) DO NOT take over resources.
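The three modes boil down to a mapping from the mode chosen on the local node to what happens cluster-wide. This little sketch simply restates the list above (the mode names are those in the SMIT pick list):

```shell
# Summarize each HACMP shutdown mode; an illustrative restatement of the notes
describe_mode() {
    case "$1" in
        graceful) echo "local: release resources; remote: no takeover" ;;
        takeover) echo "local: release resources; remote: acquire resources" ;;
        forced)   echo "local: keep resources, stop cluster services; remote: no takeover" ;;
        *)        echo "unknown mode: $1"; return 1 ;;
    esac
}
describe_mode takeover
```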


HACMP Resource Group Operations


HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

Bring a Resource Group Online


Bring a Resource Group Offline
Move a Resource Group to Another Node

Suspend/Resume Application Monitoring


Application Availability Analysis

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do


Figure 7-66. HACMP Resource Group Operations AU546.0

Notes:

HACMP Resource Group and Application Management


This visual shows the selections for managing resource groups.

Priority Override Location (POL)
Assigned during a resource group move operation.
The destination node for a resource group online, offline or
move request becomes the resource group's POL
Remains in effect until:
A move to "Restore_Node_Priority_Order" is done
Cluster is restarted (unless option chosen to persist)
HACMP 4.x has the notion of a sticky location which is similar to
the notion of a persistent POL
Can be viewed with the command:
/usr/es/sbin/cluster/utilities/clRGinfo -p

*This foil describes how priority override locations work for nonconcurrent
resource groups. Please refer to chapter 14 of the HACMP for AIX 5L
Administration Guide for information on how priority override locations work
for concurrent access resource groups.


Figure 7-67. Priority Override Location (POL) AU546.0

Notes:

Priority override location (POL)


HACMP 5.x introduces the notion of a priority override location. A Priority Override
Location overrides all other fallover/fallback policies and possible locations for the
resource group.
A resource group does not normally have a priority override location. The destination
node that you specify for a resource group move, online or offline request (see next
couple of foils) becomes the priority override location for the resource group. The
resource group remains on that node in an online state (if you moved or on-lined it
there) or offline state (if you off-lined it there) until the priority override location is
cancelled.
For those familiar with the HACMP 4.x terminology, priority override is very similar to the
notion of a sticky location for a resource group.


Persistent and non-persistent POL


Priority override locations can be persistent and non-persistent.
- A persistent priority override location remains in effect until explicitly cancelled.
- A non-persistent priority override location is cancelled either explicitly or implicitly
when the HACMP daemons are shut down on all the nodes in the cluster
simultaneously.

Concurrent access resource groups


The behavior of priority override location varies depending on whether the resource
group is a concurrent access resource group or not. The discussion here refers to the
behavior of non-concurrent access resource groups. Please refer to chapter 14 of the
HACMP for AIX 5L Administration Guide for information on how priority override
locations work for concurrent access resource groups.

Moving a Resource Group

Move a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group to be Moved adventure
Destination Node hudson
Persist Across Cluster Reboot? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-68. Moving a Resource Group AU546.0

Notes:

Moving a resource group


You can request that a resource group be moved to any node that is in the resource
group's list of nodes.
In HACMP 5.x systems, the new clRGmove utility program is used, which can also be
invoked from the command line. See the man page for details.
The destination node that you specify becomes the resource group's priority override
location.

Working with the POL


A resource group's priority override location can be cancelled by selecting a destination
node of Restore_Node_Priority_Order.
If Persist Across Cluster Reboot is set to true, then the priority override location will
be persistent. Otherwise, it will be non-persistent.


HACMP 4.5 comparison


People familiar with HACMP 4.5, where it was frequently necessary to request a
sticky move when relocating cascading resource groups, should be aware
that it is not usually necessary to specify that the move of a cascading resource group
should persist across cluster reboots.

Concurrent access resource groups


The discussion here refers to the behavior of non-concurrent access resource groups.
Please refer to chapter 14 of the HACMP for AIX Administration Guide for information
on how priority override locations work for concurrent access resource groups.

Bring a Resource Group Offline

HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

Bring a Resource Group Online


Bring a Resource Group Offline
Move a Resource Group to Another Node / Site

Suspend/Resume Application Monitoring


Application Availability Analysis

-
Select an Online Node

Move cursor to desired item and press Enter.

t2toronto

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9-


Figure 7-69. Bring a Resource Group Offline AU546.0

Notes:

Bring a Resource Group Offline -> Select an Online Node


After making this selection, you must select the resource group you wish to take offline
and select an online node where you want the resource group brought offline.
Note that Restore_Node_Priority_Order is not a choice in this menu.


Taking a Resource Group Offline

Bring a Resource Group Offline

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group to Bring Offline adventure
Node On Which to Bring Resource Group Offline bondar
Persist Across Cluster Reboot? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-70. Taking a Resource Group Offline AU546.0

Notes:

Taking a resource group offline


When you bring a resource group offline on a node, the node becomes the resource
group's priority override location. Refer to the earlier discussion of priority override
locations (a couple of pages back) for more information.
If Persist Across Cluster Reboot is set to true, then the priority override location will
be persistent. Otherwise, it will be non-persistent.
There is no way to bring a resource group offline and remove a priority override
location at the same time.
The discussion here refers to the behavior of non-concurrent access resource groups.
Please refer to chapter 14 of the HACMP for AIX Administration Guide for information
on how priority override locations work for concurrent access resource groups.

Bring a Resource Group Back Online

HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

Bring a Resource Group Online



Select a Destination Node

Move cursor to desired item and press Enter.

# To choose the highest priority available node for the
# resource group, and to remove any Priority Override Location
# that is set for the resource group, select
# "Restore_Node_Priority_Order" below.
Restore_Node_Priority_Order

# To choose a specific node, select one below.
hanode1
hanode2

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9


Figure 7-71. Bring a Resource Group Back Online AU546.0

Notes:

Bring a resource group online


When a resource group is brought online on a node, that node becomes the resource
group's priority override location.

Priority override location


If a resource group acquires a priority override location as a result of this request then it
comes online on that node.
A resource group's priority override location can be cancelled by selecting a destination
node of Restore_Node_Priority_Order.
If a node loses its priority override location as a result of specifying a destination node of
Restore_Node_Priority_Order with this request, then it will come online according to
the type or configuration of the resource group.


The discussion here refers to the behavior of non-concurrent access resource groups.
Please refer to chapter 14 of the HACMP for AIX Administration Guide for information
on how priority override locations work for concurrent access resource groups.

Log Files Generated by HACMP
/usr/es/adm/cluster.log "High level view" of cluster activity.

/usr/es/sbin/cluster/history/cluster.mmddyyyy Cluster history files generated daily.


/tmp/cspoc.log Generated by C-SPOC commands.
/tmp/dms_loads.out Stores log messages every time HACMP loads
the deadman switch kernel extension.
/var/hacmp/clverify/clverify.log Contains verbose messages from clverify
(cluster verification utility).
/tmp/emuhacmp.out Output of emulated events.
/tmp/hacmp.out /tmp/hacmp.out.<1-7> Output of today's HACMP event scripts.
AIX error log All sorts of stuff!
/var/ha/log/topsvcs Tracks execution of topology services daemon.
/var/ha/log/grpsvcs Tracks execution of group services daemon.
/var/ha/log/grpglsm Tracks execution of grpglsm daemon.
/tmp/clstrmgr.debug Tracks internal execution of the cluster
manager.
/var/hacmp/clcomd/clcomd.log Tracks activity of clcomd.
/var/hacmp/clcomd/clcomddiag.log Tracks more detailed activity of clcomd when
tracing is turned on.
/var/adm/clavan.log Output of application availability analysis tool.
HACMP 5.2 adds the following under /var/hacmp/log/:
clconfigassist.log - Two-Node Cluster Configuration Assistant
clutils.log - Generated by utilities and file propagation
cl_testtool.log - Generated by the cluster test tool


Figure 7-72. Log Files Generated by HACMP AU546.0

Notes:

Log files
The visual summarizes the HACMP log files.
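When chasing an event script problem, /tmp/hacmp.out is usually the first log to check. A defensive sketch (the path is from the table above; the EVENT START / EVENT COMPLETED strings are the usual event delimiters in that log):

```shell
# Check the most recent HACMP event activity, coping with a missing log file
LOG=/tmp/hacmp.out
if [ -r "$LOG" ]; then
    summary="last events: $(grep 'EVENT START\|EVENT COMPLETED' "$LOG" | tail -3)"
else
    summary="no $LOG on this node"
fi
echo "$summary"
```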


Let's Review: Topic 2


1. True or False?
Using C-SPOC reduces the likelihood of an outage by reducing the
likelihood that you will make a mistake.
2. True or False?
C-SPOC reduces the need for a change management process.
3. C-SPOC cannot do which of the following administration tasks?
a. Add a user to the cluster.
b. Change the size of a filesystem.
c. Add a physical disk to the cluster.
d. Add a shared volume group to the cluster.
e. Synchronize existing passwords.
f. None of the above.
4. True or False?
It does not matter which node in the cluster is used to initiate a C-
SPOC operation.
5. Which log file provides detailed output on HACMP event script
execution?
a. /tmp/clstrmgr.debug
b. /tmp/hacmp.out
c. /var/adm/cluster.log


Figure 7-73. Lets Review Topic 2 AU546.0

Notes:


7.3 Dynamic Automatic Reconfiguration Event Facility


Topic 3 Objectives: Dynamic


Automatic Reconfiguration Event Facility

After completing this topic, you should be able to:


Discuss the benefits and capabilities of DARE
Make changes to cluster topology and resources in an
active cluster
Use the snapshot facility to return to a previous cluster
configuration or to rollback changes


Figure 7-74. Topic 3 Objectives: Dynamic Automatic Reconfiguration Event Facility AU546.0

Notes:

Dynamic Automatic Reconfiguration Event (DARE)


In this topic, we examine HACMP's capability to perform changes to the cluster
configuration while the cluster is running. This capability is known as Dynamic
Automatic Reconfiguration Event, or DARE for short.

Dynamic Reconfiguration
HACMP provides a facility that allows changes to cluster
topology and resources to be made while the cluster is active. This
facility is known as DARE or, to give it its full name, Dynamic
Automatic Reconfiguration Event. This requires three copies of the
HACMP ODM.

DCD - Default Configuration Directory, which is updated by
SMIT/command line: /etc/objrepos

SCD - Staging Configuration Directory, which is used during
reconfiguration: /usr/es/sbin/cluster/etc/objrepos/staging

ACD - Active Configuration Directory, from which the clstrmgr
reads the cluster configuration:
/usr/es/sbin/cluster/etc/objrepos/active

All three directories reside in rootvg.


Figure 7-75. Dynamic Reconfiguration AU546.0

Notes:

How it works
Dynamic Reconfiguration is made possible by the fact that HACMP holds three copies
of the ODM, known as the Default, Staging and Active configuration directory. By
holding three copies of the ODM, HACMP can make changes on one node and
propagate them to other nodes in the cluster while an active configuration is currently
being used.
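The three copies live at fixed paths, shown in the visual. On AIX, the ODM commands honor the ODMDIR environment variable, so a specific copy can be examined directly; since odmget exists only on AIX, this sketch just prints the commands one would use:

```shell
# The three HACMP ODM copies and how one would query them on AIX.
# Paths are from the visual; odmget is an AIX command, so we only echo here.
DCD=/etc/objrepos                                   # default (updated by SMIT)
SCD=/usr/es/sbin/cluster/etc/objrepos/staging       # staging (used during DARE)
ACD=/usr/es/sbin/cluster/etc/objrepos/active        # active (read by clstrmgr)

for dir in "$DCD" "$SCD" "$ACD"; do
    echo "ODMDIR=$dir odmget HACMPcluster"
done
```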


What Can DARE Do?


DARE allows changes to be made to most cluster topology
and nearly all resource group components without the need to stop
HACMP, take the application offline or reboot a node. All changes
must be synchronized in order to take effect.
Here are some examples of the tasks that DARE can complete for
Topology and Resources without having to bring HACMP down.

Topology Changes
Adding or removing cluster nodes
Adding or removing networks
Adding or removing communication interfaces or
devices
Swapping a communication interface's IP address
Resource Changes
All resources can be changed

Figure 7-76. What Can DARE Do? AU546.0

Notes:

What can DARE do?


The visual shows some of the changes which can be made dynamically using DARE.

What Limitations Does DARE Have?
DARE cannot change all cluster topology and resource group
components without the need to stop HACMP, take the application
offline or reboot a node
Here are some examples that require a stop and restart of HACMP
for the change to be made
Topology Changes
Change the name of the cluster
Change the cluster ID*
Change the name of a cluster node
Change a communication interface attribute
Changing whether or not a network uses IPAT via IP aliasing or via IP replacement
Change the name of a network module*
Add a network interface module*
Removing a network interface module*
Resource Changes
Change the name of a resource group
Change the name of an application server
Change the node relationship

DARE cannot run if two nodes are not at the same HACMP level

Figure 7-77. What Limitations Does DARE Have? AU546.0

Notes:

Limitations
Some changes require a HACMP restart.
Also, DARE requires that all nodes are at the same HACMP level.


So How Does DARE Work?


DARE uses the three separate copies of the ODM in order to
allow changes to be propagated to all nodes whilst the
cluster is active

[Figure: the five-step DARE flow across the DCD, SCD, and ACD copies of the ODM]
1. Change topology or resources in SMIT (updates the DCD).
2. Synchronize topology or resources in SMIT.
3. A snapshot is taken of the current ACD.
4. The cluster manager reads the new ACD and refreshes.
5. The SCD is deleted.

Figure 7-78. So How Does DARE Work? AU546.0

Notes:

How it works
DARE uses three copies of the HACMP ODM to propagate live updates to the cluster
topology or resource configuration across the cluster. This is done in five steps detailed
above. Although it is possible to make a nearly arbitrarily large set of changes to the
configuration and then synchronize them all in one operation, it is usually better to make
a modest change, synchronize it, verify that it works, and then move on to more
changes.
Note that many changes are incompatible with the cluster's current AIX configuration.
Such changes are, therefore, not possible to synchronize using DARE. Instead, the
cluster has to be taken down while the appropriate AIX configuration changes are
applied (it is sometimes possible to remove some resources from a resource group,
synchronize, change the AIX configuration of the resources, add them back into the
resource group and synchronize again, although there is likely to be little point in running
the resource group without the resources).


HACMP 5.x synchronizes both topology changes and resource changes whenever it is
run. This is a change from previous releases of HACMP.


Verifying and Synchronizing (Standard)


Initialization and Standard Configuration
Move cursor to desired item and press Enter.
Two-Node Cluster Configuration Assistant
Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

Figure 7-79. Verifying and Synchronizing (Standard) AU546.0

Notes:

Verifying and Synchronizing (Standard)


This foil highlights the Verify and Synchronize HACMP Configuration menu entry in the
top-level Standard Configuration path's SMIT menu.
Invoking this menu entry initiates an immediate verification and synchronization of the
HACMP configuration from the local node's DCD (there is no opportunity provided to
modify the process in any way).

Verifying and Synchronizing (Extended)
HACMP Verification and Synchronization (Active Cluster on a Local Node)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
(When NODE DOWN -- HACMP 5.2) [Entry Fields]
* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?
* Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +
(When NODE UP)
* Emulate or Actual [Actual] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-80. Verifying and Synchronizing (Extended) AU546.0

Notes:

Verifying and Synchronizing (Extended)


When the Extended Verification and Synchronization entry in the extended
configuration path's top-level menu is selected, the SMIT screen above appears. It
allows the cluster administrator to modify the default verification and synchronization
procedure somewhat:

Emulate or Actual
The default of Actual causes the changes being verified and synchronized to take
effect (become the actual cluster configuration) if the verification succeeds. Setting this
field to Emulate causes HACMP to verify and then go through the motions of a
synchronization without actually causing the changes to take effect. This is useful to get a
sense of what side effects the synchronization is likely to have. For example, if the
proposed change would trigger a fallover or a fallback (because node priorities have


changed) then this would be apparent by looking at /tmp/emuhacmp.out or
/var/adm/cluster.log.
Note: Since, in this case, no fallover or fallback actually occurs, it is not possible to
determine whether the hypothetical fallback would work if actually performed (it might fail
for any number of subtle reasons which simply cannot be discovered by an emulated
synchronization).

Force synchronization if verification fails?


Setting this to true requests that HACMP accept configurations which it does not
consider to be entirely valid. This is potentially a very dangerous request and should not
be made without CONSIDERABLE planning and analysis to ensure that the impact is
acceptable.

Verify changes only?


Setting this to true causes the proposed change to be verified but not synchronized.
This can be used to see if a change is valid without actually putting it into effect.

Logging
This field can be set to Standard to request the default level of logging or to Verbose to
request a more detailed level of logging. If a change fails to verify and it is not clear why,
setting the logging level to Verbose may provide additional information which proves
useful.

HACMP 4.x
HACMP 4.x provided an extra option in the Verify and Configure Resources SMIT
screen called Un/configure Cluster Resources. The default of yes resulted in a
verification and synchronization essentially as described above. A value of no caused
the verification and synchronization to proceed as usual except that the operation
stopped just before the step of loading the new ACD into the current cluster managers
and actually acting on the changes. This resulted in the change being staged. It would
then be caused to take effect by stopping HACMP on all cluster nodes and then
restarting HACMP on each cluster node (stopping and then restarting HACMP on one
cluster node while a staged change existed and while other cluster nodes remain
running would almost certainly result in an inconsistent cluster configuration across
cluster nodes which would, in turn, result in one or more nodes crashing).
This facility was used in a number of rather exotic cluster upgrade scenarios. It is no
longer available in either the standard or extended paths verification and
synchronization mechanisms. It is still available in the SMIT screen used to apply a
cluster snapshot (under Snapshot Configuration in the top level menu of the extended
configuration path).


Note that a configuration which has been staged as described above blocks all further
synchronizations until the cluster is brought down (HACMP daemons stopped on
all cluster nodes) and then brought back up again. This is called a dynamic
reconfiguration lock. We will see how to remove such a lock shortly.


Discarding Unwanted Changes


Problem Determination Tools
Move cursor to desired item and press Enter.
HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Cluster Test Tool
HACMP Trace Facility
HACMP Event Emulation
HACMP Error Notification
Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-81. Discarding Unwanted Changes AU546.0

Notes:

Rolling back an unwanted change that has not yet been synchronized
If you have made changes which you have decided not to synchronize, they can be
discarded using the Restore HACMP Configuration Database from Active Configuration
menu entry shown above. It is located under the Problem Determination Tools menu
(accessible from the top-level HACMP SMIT menu).
Prior to rolling back the DCD on all nodes, the current contents of the DCD on the node
used to initiate the rollback are saved as a snapshot (in case they should prove useful in
the future). The snapshot will have a rather long name similar to:
Restored_From_ACD.Sep.18.19.33.58
This name can be interpreted to indicate that the snapshot was taken at 19:33:58 on
September 18th (the year is not preserved in the name).
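The embedded timestamp can be pulled out with standard shell tools; the following is only a small illustration, using the example name from the text:

```shell
# Auto-generated snapshot names encode month.day.hour.minute.second (no year).
name=Restored_From_ACD.Sep.18.19.33.58
echo "$name" | awk -F. '{printf "taken %s %s at %s:%s:%s\n", $2, $3, $4, $5, $6}'
# taken Sep 18 at 19:33:58
```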
Since the change being discarded is sometimes a change which has been emulated,
this operation is sometimes called rolling back an emulated change. This is a misnomer


as the operation rolls back ANY change which has not yet been verified and
synchronized by restoring all nodes' DCDs to the contents of the currently active cluster
configuration.


Rolling Back from a DARE Operation


Apply a Cluster Snapshot
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Cluster Snapshot Name jami
Cluster Snapshot Description Cuz -- he did the lab>
Un/Configure Cluster Resources? [Yes] +
Force apply if verify fails? [No] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 7-82. Rolling Back from a DARE Operation AU546.0

Notes:

Rolling back an unwanted change that has been synchronized


If you find that a DARE change does not give the desired result, then you may wish to
roll it back. DARE takes a snapshot of the active configuration immediately prior to
committing the new configuration. This snapshot is named active.x.odm (where x is
0...9, 0 being the most recent). It can be used to restore the cluster to an earlier state.

Manual snapshots are useful


If many changes have been made in reasonably rapid succession, then you may lose
track of which active.x snapshot is the one that you want. To defend yourself against
this possibility, it is best to manually take a snapshot before embarking on a series of
changes. This allows you to rollback to a known point rather than having to guess which
active.x snapshot is the right one!


Snapshots are stored in the directory /usr/es/sbin/cluster/snapshots by default (the
default can be overridden by setting the SNAPSHOTPATH environment variable).
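From the shell, the effective snapshot directory can be checked or overridden as follows (a minimal sketch; the fallback idiom is generic shell, not something HACMP itself requires):

```shell
# Use SNAPSHOTPATH if it is set, otherwise fall back to the documented default.
SNAPSHOTPATH=${SNAPSHOTPATH:-/usr/es/sbin/cluster/snapshots}
echo "$SNAPSHOTPATH"

# To store snapshots elsewhere, export the variable before invoking SMIT, e.g.:
# export SNAPSHOTPATH=/hacmp/snapshots
```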


What If DARE Fails?


If a dynamic reconfiguration should fail due to an unexpected
cluster event, then the staging configuration directory might still
exist. This prevents further changes being made to the cluster.

[Diagram: the five steps of a DARE synchronization -- (1) change topology or
resources in SMIT (into the DCD); (2) synchronize topology or resources in SMIT;
(3) a snapshot is taken of the current ACD; (4) each node's cluster manager reads
the SCD into its ACD and refreshes; (5) the SCD is deleted. A failure ("Bang!")
before step 5 leaves the SCD in place on some or all nodes.]


Figure 7-83. What If DARE Fails? AU546.0

Notes:

What if DARE fails?


If a node failure should occur while a synchronization is taking place, then the Staging
Configuration Directory (SCD) may not be cleared on all nodes. The presence of the SCD
prevents further configuration changes from being performed. If the SCD is not cleared
at the end of a synchronize, then this indicates that the DARE operation did not
complete or was not successful and hence the SCD acts as a lock against further
changes being made.
Note that the SCD copies are made before the change is copied by each node's cluster
manager into each node's ACD. If there is an SCD when HACMP starts up on a node
then it copies it to the ACD, deletes the SCD and uses the new ACD as its
configuration. Since a node failure at any point after any of the SCDs exists could result
in only some of the nodes having the updated SCD, the SCDs must be removed before
HACMP is restarted on any node (or you risk different cluster nodes running with


different configurations, a situation which results in one or more cluster nodes
crashing).


Dynamic Reconfiguration Lock


Problem Determination Tools
Move cursor to desired item and press Enter.
HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Cluster Test Tool
HACMP Trace Facility
HACMP Event Emulation
HACMP Error Notification
Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 7-84. Dynamic Reconfiguration Lock AU546.0

Notes:

Clearing dynamic reconfiguration locks


The SMIT menu entry Release Locks Set By Dynamic Reconfiguration clears out
the SCD and allows further synchronizations to be made to the cluster configuration. If
an SCD exists on any cluster node, then no further synchronizations are permitted until
it is deleted using the above SMIT menu entry.


Let's Review: Topic 3
1. True or False?
DARE operations can be performed while the cluster is running.
2. Which operations can DARE not perform (select all that apply)?
a. Changing the name of the cluster.
b. Removing a node from the cluster.
c. Changing a resource in a resource group.
d. Change whether a network uses IPAT via IP aliasing or via IP
replacement.
3. True or False?
It is possible to roll back from a successful DARE operation using an
automatically generated snapshot.
4. True or False?
Running a DARE operation requires three separate copies of the
HACMP ODM.
5. True or False?
Cluster snapshots can be applied while the cluster is running.
6. What is the purpose of the dynamic reconfiguration lock?
a. To prevent unauthorized access to DARE functions.
b. To prevent further changes being made until a DARE operation
has completed.
c. To keep a copy of the previous configuration for easy rollback.


Figure 7-85. Let's Review: Topic 3 AU546.0

Notes:



7.4 WebSMIT


Topic 4 Objectives:
Implementing WebSMIT

After completing this topic, you should be able to:


Configure and use WebSMIT


Figure 7-86. Topic 4 Objectives: Implementing WebSMIT AU546.0

Notes:


Web-enabled SMIT (WebSMIT)
HACMP 5.2 and up includes a web-enabled user interface
that provides easy access to:
HACMP configuration and management functions
Interactive cluster status display and manipulation
HACMP online documentation
The WebSMIT interface is similar to the ASCII SMIT
interface. You do not need to learn a new user interface or
terminology and can easily switch between ASCII SMIT and
WebSMIT
To use the WebSMIT interface, you must configure and run
a Web server process on the cluster nodes to be
administered


Figure 7-87. Web-enabled SMIT (WebSMIT) AU546.0

Notes:

Introduction
WebSMIT combines the advantages of SMIT with the ease of access from any system
which runs a browser. In addition, it adds a few useful new features, not available in
text-based SMIT.


WebSMIT Index Page


Figure 7-88. WebSMIT Index Page AU546.0

Notes:

Introduction
To connect to WebSMIT, point your browser to the cluster node that you have
configured for WebSMIT.
WebSMIT uses port 42267 by default.
After authentication, this will be the first screen that you see. WebSMIT provides three
basic functions:
- Cluster Status
- Cluster Configuration and Management
- Online Documentation


WebSMIT
Cluster Configuration and Management


Figure 7-89. WebSMIT Cluster Configuration and Management AU546.0

Notes:

Cluster Configuration and Management


This selection takes you to the top of the HACMP SMIT menus.


WebSMIT Tree View


Figure 7-90. WebSMIT Tree View AU546.0

Notes:

Tree View
If you click the Tree View button, a new window pops up. You can use the Tree View
window to quickly navigate between SMIT panels without having to step through all the
intervening menus. It may also be useful, if you have forgotten where to find a particular
panel.


WebSMIT Online Documentation


Figure 7-91. WebSMIT Online Documentation AU546.0

Notes:

Online Documentation
This screen allows you to view the HACMP manuals in either HTML or PDF format. You
must install the HACMP documentation file sets.


WebSMIT Cluster Status Page


Figure 7-92. WebSMIT Cluster Status Page AU546.0

Notes:

Viewing clstat with WebSMIT


The WebSMIT version of clstat (wsm_clstat.cgi) displays cluster information using a
slightly different layout and organization from the ASCII or X Window versions. Each
cluster node is displayed along with a dynamic icon. Expanding the icon reveals
additional information about the node including networks, interfaces, and active resource
groups.

Navigation from the Cluster Status page


wsm_clstat.cgi is integrated with WebSMIT, thus allowing direct navigation from the
status display to related tasks. WebSMIT displays popup menus when you hold the
cursor over the specific item. Each popup contains options specific to the item. For
example, in the visual, the popup menu for a resource group is shown. (The cursor
does not show up in this screen shot.)


WebSMIT Fast Path


Figure 7-93. WebSMIT FastPath AU546.0

Notes:

WebSMIT FastPath
You can use the FastPath function to navigate to almost any SMIT panel. There are
exceptions due to the client-server nature of an HTTP interface, but in general WebSMIT
will process nearly any valid SMIT panel.
In the example, we have used the top_menu fast path to navigate to the top of the
SMIT menus.


WebSMIT Configuration
/usr/es/sbin/cluster/wsm/README
Configure and run a Web server on cluster nodes
httpd.conf
Copy or link files
Security considerations
wsm_smit.conf
wsm_cmd_exec
Log files
wsm_smit.log
wsm_smit.script
Customizing WebSMIT status display
wsm_clstat.conf
Controlling which SMIT panels can be used
wsm_smit.allow
wsm_smit.deny
wsm_smit.redirect
Setting up WebSMIT online documentation
Install cluster.doc.en_US.es.html and cluster.doc.en_US.es.pdf
Create link


Figure 7-94. WebSMIT Configuration AU546.0

Notes:

Documentation
The primary source for information on configuring WebSMIT is the WebSMIT README
file as shown in the visual. The HACMP Planning and Installation Guide provides some
additional information on installation and the HACMP Administration Guide provides
information on using WebSMIT.

Web server
To use WebSMIT, you must configure one (or more) of your cluster nodes as a Web
server. You can use any Apache-compliant server including the IBM HTTP Server
(IBMIHS). Refer to the specific documentation for the Web server you choose.
Once you have set up your server, you need to configure it to work with WebSMIT.
Nearly all modifications to Apache are made in the httpd.conf file. In addition, the Web
server software must be able to access the WebSMIT cgi-bin and htdoc files.


httpd.conf
If you are using IBMIHS, this file is found in the /usr/HTTPServer/conf directory.
HACMP provides a sample httpd.conf file:
/usr/es/sbin/cluster/wsm/httpd.conf.sample.
You can either copy this file to your Web server's directory, or you can copy relevant
portions to an existing httpd.conf file.
See the README file for details.
WebSMIT cgi-bin and htdocs directories
You can either copy these files to your Web server's directory, or link them. Creating
links is probably preferred since that allows any updates to the WebSMIT software to be
automatically applied. It also keeps the WebSMIT log files in their default location so
that they will be picked up by snap -e.
See the README file for details.
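A sketch of the link-based approach is shown below; the two base directories are the ones named in this section, but the link names under the server tree are illustrative assumptions -- take the exact targets from the README:

```shell
# Link (rather than copy) the WebSMIT directories into the Web server tree, so
# WebSMIT updates are picked up automatically and the log files stay where
# snap -e expects them. The link names under $HTTPD are hypothetical.
WSM=/usr/es/sbin/cluster/wsm      # WebSMIT install directory
HTTPD=/usr/HTTPServer             # IBM HTTP Server install directory

ln -s "$WSM/cgi-bin" "$HTTPD/wsm-cgi-bin"
ln -s "$WSM/htdocs"  "$HTTPD/wsm-htdocs"
```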

WebSMIT security
Since WebSMIT gives you root access to all the nodes in your cluster, you must
carefully consider the security implications.
WebSMIT uses a configuration file, wsm_smit.conf, that contains settings for
WebSMIT's security-related features. This file is installed as
/usr/es/sbin/cluster/wsm/wsm_smit.conf, and it may not be moved to another
location. The default settings used provide the highest level of security in the default
AIX/Apache environment. However, you should carefully consider the security
characteristics of your system before putting WebSMIT to use. It may be possible to use
different combinations of security settings for AIX, Apache, and WebSMIT to improve
the security of the application in your environment.
WebSMIT uses the following configurable mechanisms to implement a secure
environment:
- Non-standard port
- Secure http (https)
- User authentication
- Session time-out
- wsm_cmd_exec setuid program
Use non-standard port
WebSMIT can be configured to allow access only over a specified port using the
wsm_smit.conf AUTHORIZED_PORT setting. If you do not specify an AUTHORIZED_PORT,
or specify a port of 0, then any connections via any port will be accepted. It is strongly
recommended that you explicitly specify the AUTHORIZED_PORT, and that you use a
non-standard port. The default setting for this configuration variable is 42267.


Allow only secure http


If your http server supports secure http, it is strongly recommended that you require all
WebSMIT connections to be established via https. This will ensure that you are not
transmitting sensitive information about your cluster over the Internet in plain text.
WebSMIT can be configured to require secure http access using the wsm_smit.conf
REDIRECT_TO_HTTPS setting. If the value for this setting is 1, then users connecting to
WebSMIT via an insecure connection will be redirected to a secure http connection.
The default value for REDIRECT_TO_HTTPS is 1.
Note: Regarding the REDIRECT_TO_HTTPS variable, the README file states:
This variable will only function correctly if the AUTHORIZED_PORT feature is disabled.
This did not appear to be true in our testing.
Require user authentication
If Apache's built-in authentication is not being used, WebSMIT can be configured to use
AIX authentication using the wsm_smit.conf file REQUIRE_AUTHENTICATION setting. If
the value for this setting is 1 and there is no .htaccess file controlling access to
WebSMIT, the user will be required to provide AIX authentication information before
gaining access.
(Refer to the documentation included with Apache for more details about Apache's
built-in authentication.)
The default value for REQUIRE_AUTHENTICATION is 1. If REQUIRE_AUTHENTICATION is
set, then the HACMP administrator must specify one or more users who are allowed to
access the system. This can be done using the wsm_smit.conf ACCEPTED_USERS
setting. Only users whose names are specified will be allowed access to WebSMIT, and
all ACCEPTED_USERS will be provided with root access to the system. By default, only the
root user is allowed access via the ACCEPTED_USERS setting.
Warning: Because AIX authentication mechanisms are in use, login failures can cause
an account to be locked. It is recommended that a separate user be created for the sole
purpose of accessing WebSMIT. If the root user has a login failure limit, failed WebSMIT
login attempts could quickly lock the root account.
Session time-out
Continued access to WebSMIT is controlled through the use of a non-persistent session
cookie. Cookies must be enabled in the client browser in order to use AIX
authentication for access control. If the session is used continuously, then the cookie
will not expire. However, the cookie is designed to time out after an extended period of
inactivity. WebSMIT allows the user to adjust the time-out period using the
wsm_smit.conf SESSION_TIMEOUT setting. This configuration setting must have a value
expressed in minutes. The default value for SESSION_TIMEOUT is 20 (minutes).
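Pulling these settings together, a wsm_smit.conf hardened along the lines of this section might look like the sketch below. The values are the documented defaults, except ACCEPTED_USERS, where a dedicated non-root user (the hypothetical wsmadmin) is substituted per the warning above about locking the root account:

```shell
# /usr/es/sbin/cluster/wsm/wsm_smit.conf (sketch -- defaults except ACCEPTED_USERS)
AUTHORIZED_PORT=42267       # accept connections only on this non-standard port
REDIRECT_TO_HTTPS=1         # redirect insecure connections to https
REQUIRE_AUTHENTICATION=1    # require AIX authentication (when no .htaccess is used)
ACCEPTED_USERS=wsmadmin     # dedicated WebSMIT user; all listed users get root access
SESSION_TIMEOUT=20          # minutes of inactivity before the session cookie expires
```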


Controlling access to wsm_cmd_exec (setuid)


A setuid program is supplied with WebSMIT that allows non-root users to execute
commands with root permissions (wsm_cmd_exec). The setuid bit for this program must
be turned on in order for the WebSMIT system to function.
It is also very important for security reasons that wsm_cmd_exec does not have read
permission for non-root users. It should not be made possible for a non-root user to
copy the executable to another location or to decompile the program.
Thus the utility wsm_cmd_exec (located in /usr/es/sbin/cluster/wsm/cgi-bin/) must be
set with 4511 permissions.
See the README for details.
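Setting and checking the mode can be done as below; a temporary file stands in for the real target (/usr/es/sbin/cluster/wsm/cgi-bin/wsm_cmd_exec) so the sketch is runnable anywhere:

```shell
# On a cluster node the real command would be:
#   chmod 4511 /usr/es/sbin/cluster/wsm/cgi-bin/wsm_cmd_exec
# 4511 = setuid + r-x for owner, --x for group/other: no read access for non-root.
f=$(mktemp)                  # stand-in for wsm_cmd_exec
chmod 4511 "$f"
ls -l "$f" | cut -c1-10      # -r-s--x--x
rm -f "$f"
```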
Care must be taken to limit access to this executable. WebSMIT allows the user to
dictate the list of users who are allowed to use the wsm_cmd_exec program using the
wsm_smit.conf REQUIRED_WEBSERVER_UID setting. The real user ID of the process
must match the UID of one of the users listed in wsm_smit.conf in order for the
program to carry out any of its functionality. The default value for
REQUIRED_WEBSERVER_UID is nobody.
By default, a Web server CGI process runs as user nobody, and by default it is not
possible for non-root users to execute programs as user nobody. If your http server
configuration executes CGI programs as a different user, it is important to ensure that
the REQUIRED_WEBSERVER_UID value matches the configuration of your Web server. It is
strongly recommended that the http server be configured to run CGI programs as a user
who is not authorized to open a login shell (as with user nobody).

Log files
All operations of the WebSMIT interface are logged to the wsm_smit.log file and are
equivalent to the logging done with smitty -v. Script commands are also captured in
the wsm_smit.script log file.
WebSMIT log files are created by the CGI scripts using a relative path of <../logs>. If
you copy the CGI scripts to the default location for the IBM HTTP Server, the final path
to the logs is /usr/HTTPServer/logs.
The WebSMIT logs are not subject to manipulation by the HACMP Log Viewing and
Management SMIT panel. Also, just like smit.log and smit.script, the files grow
indefinitely.
The snap -e utility captures the WebSMIT log files if you leave them in the default
location (/usr/es/sbin/cluster/wsm/logs); but if you install WebSMIT somewhere else,
snap -e will not find them.


Customizing the WebSMIT status panel


wsm_clstat.cgi displays cluster information in the WebSMIT status panel. You can
customize the status display by changing the
/usr/es/sbin/cluster/wsm/cgi-bin/wsm_clstat.conf file. This file allows you to configure
logging and the popup menus for the WebSMIT status panel.

Controlling which SMIT screens can be used


As mentioned earlier, WebSMIT will process just about any valid SMIT panel. You can
limit the set of panels that WebSMIT will process by configuring one or more of these
files.
- wsm_smit.allow
If this file exists on the server, it will be checked before any SMIT panel is
processed. If the SMIT panel id (fast path) is not contained in the file, the http
request will be rejected. Use this file to limit WebSMIT to a specific set of SMIT
panels. A sample file is provided which contains all the SMIT panel ids for HACMP.
Simply rename this file to wsm_smit.allow if you want to limit access to just the
HACMP SMIT panels.
- wsm_smit.deny
Entering a SMIT panel id in this file will cause WebSMIT to deny access to that
panel. If the same SMIT panel id is stored in both the .allow and .deny files, .deny
processing takes precedence.
- wsm_smit.redirect
Instead of simply rejecting access to a specific page, you can redirect the user to a
different page. The default .redirect file has entries to redirect the user from specific
HACMP SMIT panels that are not supported by WebSMIT.

Using the online documentation feature


To use the online documentation feature, you must install the file sets shown in the
visual and you must link them to your Web server's htdocs directory.
See the README file for details.


Checkpoint
1. True or False?
A star configuration is a good choice for your non-IP networks.
2. True or False?
Using DARE, you can change from IPAT via aliasing to IPAT via
replacement without stopping the cluster.
3. The ___________ utility allows users to change their passwords
across all nodes
4. True or False?
RSCT will automatically update /etc/filesystems when using enhanced
concurrent mode volume groups
5. True or False?
A resource group's priority override location can be cancelled by
selecting a destination node of Restore_Node_Priority_Order.
6. The basic steps to configure WebSMIT are:
a. Install a web server and edit the ____________ file to configure it to
serve WebSMIT pages
b. Link the WebSMIT _________ and _________ directories to the
web server's directory
c. Edit the _________ file to configure WebSMIT security
d. Set 4511 permissions on the ______________ file.


Figure 7-95. Checkpoint AU546.0

Notes:


Unit Summary
Implementing procedures for change management is a
critical part of administering an HACMP cluster
C-SPOC provides facilities for performing common cluster
wide administration tasks from any node within the cluster
Perform routine administrative changes
Start and stop cluster services
Perform resource group move operations
Start and stop cluster services
The SMIT Standard and Extended menus are used to make
topology and resource group changes
The Dynamic Automatic Reconfiguration Event facility
(DARE) provides the mechanism to make changes to
cluster topology and resources without stopping the cluster
The Cluster Snapshot facility allows the user to save and
restore a cluster configuration
WebSMIT provides access to HACMP SMIT menus from
any system with a web browser

Figure 7-96. Unit Summary AU546.0


Unit 8. Events

What This Unit Is About


This unit describes event processing in HACMP.

What You Should Be Able to Do


After completing this unit, you should be able to:
Describe what an event is
Describe the sequence of events when
- The first node starts in a cluster
- A new node joins an existing cluster
- A node leaves a cluster voluntarily
Explain what happens when HACMP processes an event
Describe how to customize the event flow
State how to monitor other devices

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4861-06 HACMP for AIX, Version 5.3 Planning and Installation
Guide
SC23-4862-06 HACMP for AIX, Version 5.3 Administration Guide


Unit Objectives
After completing this unit, you should be able to:
Describe what an HACMP event is
Describe the sequence of events when:
The first node starts in a cluster
A new node joins an existing cluster
A node leaves a cluster voluntarily
Explain what happens when HACMP processes an event
Describe how to customize the event flow
State how to monitor other devices


Figure 8-1. Unit Objectives AU546.0

Notes:


8.1 HACMP Events


Topic 1 Objectives: HACMP Events


After completing this topic, you should be able to:
Describe what an HACMP event is
Explain what happens when HACMP processes an event
Describe the sequence of events when:
The first node starts in a cluster
A new node joins an existing cluster
A node leaves a cluster voluntarily


Figure 8-2. Topic 1 Objectives: HACMP Events AU546.0

Notes:


What is an HACMP Event?
An HACMP event is an incident of interest to HACMP:
A node joins the cluster
A node crashes
A NIC fails
A NIC recovers
Cluster administrator requests a resource group move
Cluster administrator requests a configuration change
(synchronization)
An HACMP event script is a script invoked by a recovery
program to perform the recovery function required.
node_up
node_down
fail_interface
join_interface
rg_move
reconfig_topology_start


Figure 8-3. What is an HACMP Event? AU546.0

Notes:

What the term HACMP event means


The term HACMP event has two contexts:
- An incident which is of interest to the cluster, like the failure of a node or the recovery
of a NIC
- A script that is used by HACMP to actually deal with one of these incidents
Unfortunately, it is not all that uncommon for the word script to be left off in a discussion
of event scripts. Fortunately, which meaning is appropriate is almost certainly obvious
from the context of the discussion.


HACMP Basic Event Flow

[Diagram: the HACMP Cluster Manager, sitting on top of Group Services/ES
and Topology Services/ES, uses the rules in the rules.hacmprd file (for
example, the Node Up event definition maps TE_JOIN_NODE to
/usr/sbin/cluster/events/node_up.rp) to select a recovery program; the
recovery program in turn runs the event scripts.]

Figure 8-4. HACMP Basic Event Flow AU546.0

Notes:

How an event script is triggered


Most HACMP events result from the detection and diagnostic capabilities of
RSCT's Topology Services component. They arrive at the Cluster Manager, which
then uses recovery programs which, in turn, call the event scripts to actually
deal with the event.
The coordination and sequencing of the recovery programs is actually handled
by RSCT Group Services. The rules for how these recovery programs should be
coordinated and sequenced are described in the
/usr/es/sbin/cluster/events/rules.hacmprd file.
The RMC subsystem is used for implementing Dynamic Node Priority and DLPAR.
Dynamic Node Priority is one of the fallover policies, and DLPAR refers to the
Dynamic LPAR capability of HACMP.


Recovery Programs

cluster_notify.rp
external_resource_state_change.rp
external_resource_state_change_complete.rp
fail_interface.rp
fail_standby.rp
join_interface.rp
join_standby.rp
migrate.rp
network_down.rp
network_up.rp
node_down.rp
node_down_dependency.rp
node_down_dependency_complete.rp
node_up.rp
node_up_dependency.rp
node_up_dependency_complete.rp
reconfig_configuration.rp
reconfig_configuration_dependency_acquire.rp
reconfig_configuration_dependency_complete.rp
reconfig_configuration_dependency_release.rp
reconfig_resource.rp
reconfig_topology.rp
resource_state_change.rp
resource_state_change_complete.rp
rg_move.rp
rg_offline.rp
rg_online.rp
server_down.rp
server_restart.rp
site_down.rp
site_isolation.rp
site_merge.rp
site_up.rp
swap_adapter.rp

Figure 8-5. Recovery Programs AU546.0

Notes:

Recovery Programs
This visual lists the recovery programs that are called by the resource manager
component of the cluster manager services. These form the first step in processing an
event.


Recovery Program Example

site_up.rp

# This file contains the HACMP/ES recovery program for site_up events
#
# format:
# relationship command to run expected status NULL
#
other "site_up" 0 NULL
#
barrier
#
event "site_up" 0 NULL
#
barrier
#
all "site_up_complete" 0 NULL


Figure 8-6. Recovery Program Example AU546.0

Notes:

Format of a recovery program


The first type of line specifies where the event script should run and the
name of the script.
The second type of line is the word barrier. This is a synchronization point,
handled by Group Services, so that other nodes may complete their processing
before the next step of this recovery program runs.


Event Scripts

Called by recovery programs:
site_up, site_up_complete, site_down, site_down_complete
site_merge, site_merge_complete
node_up, node_up_complete, node_down, node_down_complete
network_up, network_up_complete, network_down, network_down_complete
swap_adapter, swap_adapter_complete
swap_address, swap_address_complete
fail_standby, join_standby
fail_interface, join_interface
rg_move, rg_move_complete
rg_online, rg_offline
event_error
config_too_long
reconfig_topology_start, reconfig_topology_complete
reconfig_resource_release, reconfig_resource_acquire, reconfig_resource_complete
reconfig_configuration_dependency_acquire
reconfig_configuration_dependency_complete
reconfig_configuration_dependency_release
node_up_dependency, node_up_dependency_complete
node_down_dependency, node_down_dependency_complete
migrate, migrate_complete
external_resource_state_change
server_down, server_restart

Called by other events:
node_up_local, node_up_remote
node_down_local, node_down_remote
node_up_local_complete, node_up_remote_complete
node_down_local_complete, node_down_remote_complete
acquire_aconn_service
acquire_service_addr
acquire_takeover_addr
start_server, stop_server
get_disk_vg_fs
get_aconn_rs
release_service_addr, release_takeover_addr
release_vg_fs, release_aconn_rs
swap_aconn_protocols
releasing, acquiring
rg_up, rg_down, rg_error
rg_temp_error_state
rg_acquiring_secondary
rg_up_secondary
rg_error_secondary
resume_appmon
suspend_appmon

Figure 8-7. Event Scripts AU546.0

Notes:

Event Scripts
This is the list of HACMP events which are managed by HACMP 5.3.
The events in the first group are called directly by recovery programs in
response to unexpected happenings. The events in the second group are invoked
by primary or other secondary events on an as-needed basis.
Each of these events can have an optional notify command, one or more
pre-event scripts, one or more post-event scripts, and an optional recovery
command associated with it.


First Node Starts HACMP

[Diagram: cluster services are started on the first node. The Event Manager
in clstrmgrES runs, in order:
1) node_up, which calls node_up_local, which in turn runs
acquire_service_addr, acquire_takeover_addr and get_disk_vg_fs
2) node_up_complete, which calls node_up_local_complete, which runs
start_server to run the application start script]

Figure 8-8. First Node Starts HACMP AU546.0

Notes:

Startup processing
Implicit in this example is the assumption that there is actually a resource group to start
on the node. If there are no resource groups to start on the node, then node_up_local
and node_up_local_complete do very little processing at all.


Another Node Joins the Cluster

[Diagram: one node is already running cluster services; cluster services are
started on a second node. The two Event Managers exchange messages, and the
sequence of events is:
1) node_up on the already-running node: node_up_remote runs stop_server
(run stop script), release_takeover_address and release_vg_fs
2) node_up on the joining node: node_up_local runs acquire_service_address,
acquire_takeover_address and get_disk_vg_fs
3) node_up_complete on the already-running node: node_up_remote_complete
4) node_up_complete on the joining node: node_up_local_complete runs
start_server to run the application start script]

Figure 8-9. Another Node Joins the Cluster AU546.0

Notes:

Another node joins the cluster


When another node starts up, it must first join the cluster. After that, the
determination is made whether to move an already active resource group to the
new node (this is the assumption in this visual). If that is the case,
node_up processing on the old node (1) must deactivate the resource group
before node_up processing on the new node (2) can acquire and activate it.


Node Leaves the Cluster (Stopped)

[Diagram: cluster services are stopped on one node while the other node keeps
running. The sequence of events is:
1) node_down on the stopping node: node_down_local runs stop_server
(run stop script), release_takeover_addr, release_vg_fs and
release_service_addr
2) node_down_complete on the stopping node: node_down_local_complete
3) node_down on the remaining node: node_down_remote runs
acquire_service_addr, acquire_takeover_addr and get_disk_vg_fs
4) node_down_complete on the remaining node: node_down_remote_complete runs
start_server]

Figure 8-10. Node Leaves the Cluster (Stopped) AU546.0

Notes:

Node down processing normal with takeover


Implicit in this example is the assumption that there is actually a resource
group on the departing node which must be moved to one of the remaining nodes.

Node failure
The situation is only slightly different if the node on the right had failed
suddenly. Since it is not in a position to run any events, the events listed
under the right-hand node do not get run, and the events listed under the
left-hand node might need to be somewhat more forceful (for example, the
get_disk_vg_fs event may need to break some disk reservations; see the
earlier shared storage unit for details).


Let's Review
1. True or False?
HACMP 5.x supports a maximum of five pre- and five post-events per HACMP
event.
2. Which of the following are examples of primary HACMP events (select all that
apply)?
a. node_up
b. node_up_local
c. node_up_complete
d. start_server
e. rg_up
3. When a node joins an existing cluster, what is the correct sequence for these
events?
a. node_up on new node, node_up on existing node, node_up_complete on new
node, node_up_complete on existing node
b. node_up on existing node, node_up on new node, node_up_complete on new
node, node_up_complete on existing node
c. node_up on new node, node_up on existing node, node_up_complete on existing
node, node_up_complete on new node
d. node_up on existing node, node_up on new node, node_up_complete on existing
node, node_up_complete on new node


Figure 8-11. Let's Review AU546.0

Notes:


8.2 Cluster Customization


Topic 2 Objectives: Event Customization


After completing this topic, you should be able to:
Describe how to customize the event flow
State how to handle devices outside the control of HACMP


Figure 8-12. Topic 2 Objectives: Event Customization AU546.0

Notes:
In this topic, we examine how to customize events in HACMP.


Event Processing Customization

[Diagram: when the Event Manager processes an event, the flow is: Notify
Command, then Pre-Event Script (1) through (n), then the HACMP event itself
(run through clcallev, using the HACMP ODM classes). If the event script's
return code is non-zero and the Recovery Counter is greater than 0, the
Recovery Command is run and the event is retried; if the counter is
exhausted, the event fails. After the event succeeds, Post-Event Script (1)
through (n) and then the Notify Command are run.]

Figure 8-13. Event Processing Customization AU546.0

Notes:

Event processing without customization


Once a decision is made to run a particular HACMP event script on a
particular node, the above event processing logic takes control. If no
event-related customization has been done on the cluster, then the HACMP
event itself (in other words, the HACMP event script) is run, and whether or
not it works is noted (if it worked, then everyone is happy; if not, then you
had better go look at the problem determination unit which is coming up later
in the week).
Events are logged in the /usr/es/adm/cluster.log file and the /tmp/hacmp.out file.

Event processing with customization


The rather simple procedure described in the last paragraph can be modified by the
cluster configurator or administrator to deal with cluster requirements and/or


environmental issues beyond the normal scope of HACMP. These customization
opportunities are as follows:
- Each HACMP event can have a single optional Notify Command associated with it.
This command is run once at the very start of processing the event and once again right
as the last step in processing the event. This is the oldest form of HACMP event-related
customization. It is not used all that often anymore because better mechanisms now
exist. It is still supported in order to avoid breaking long existing clusters that rely upon
it.
- Each HACMP event can have zero or more pre-event scripts associated with it. Each
of these pre-event scripts is run after the optional notify command (if it has been
configured). Once all of the pre-event scripts have been executed, the HACMP event
script itself is executed.
- A recovery command can be specified for each HACMP event. This recovery
command is run if the HACMP event script fails. Once the recovery command
completes, the HACMP event script is run again. Associated with each recovery
command is a count of the maximum number of times that the HACMP event script may
fail in a single overall attempt to run the event before HACMP should declare the failure
as not fixable by the recovery command.
- Each HACMP event can have zero or more post-event scripts associated with it. Each
of these is run after the HACMP event script itself completes and before the optional
notify command.

Location of event processing scripts


The HACMP event scripts are stored in /usr/es/sbin/cluster/events. Note that
there are .rp scripts (recovery programs) which call the event scripts. The
event scripts may then call other event scripts.


Adding/Changing Cluster Events (1 of 3)
Extended Event Configuration

Move cursor to desired item and press Enter.

Configure Pre/Post-Event Commands


Change/Show Pre-Defined HACMP Events
Configure User-Defined Events
Configure Pager Notification Methods
Change/Show Time Until Warning

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 8-14. Adding/Changing Cluster Events (1 of 3) AU546.0

Notes:

Path to smit menu


smitty hacmp -> Extended Configuration -> Extended Event Configuration

Pre- and post-event scripts


To customize event processing with pre- and post-event scripts, you must
first create a custom event object which points to your script. We start here
with the SMIT menu which manages custom cluster events.


Adding/Changing Cluster Events (2 of 3)


Add a Custom Cluster Event
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Cluster Event Name [stop_printq]
* Cluster Event Description [stop the print queues]
* Cluster Event Script Filename [/usr/local/cluster/events/stop_printq]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 8-15. Adding/Changing Cluster Events (2 of 3) AU546.0

Notes:

Path to smit menu


smitty hacmp -> Extended Configuration -> Extended Event Configuration ->
Configure Pre/Post-Event Commands -> Add a Custom Cluster Event

Example of creating a pre/post custom cluster event object


In this example, we add a new custom cluster event called stop_printq. This
event runs a script of our own creation which in this case resides in
/usr/local/cluster/events (a directory created for this purpose by the HACMP
administrator). The custom event has a description which allows us to
identify what the script does when, six months down the line, we have
forgotten why we wrote the script or the system administrator for the cluster
has changed.
Custom events are given a name, rather than being referenced directly by the
script path. This makes it easy to reuse the same custom event script for
multiple HACMP events.


Script considerations


HACMP does not develop the script content for you, neither does it
synchronize the script content between cluster nodes (indeed, the content can
be different on each node). The only requirements that HACMP imposes are that
the script must exist on each node in a local (non-shared) location, be
executable, and have the same path and name on every node.
Of course, an additional requirement is that the script perform as required
under all circumstances!
In HACMP 5.2 and later, there is a file collections feature if you wish to
have your changes kept in sync.


Adding/Changing Cluster Events (3 of 3)


Change/Show Cluster Events
Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Event Name
node_down
Description Script run after the >
* Event Command [/usr/es/sbin/cluster/>
Notify Command []
Pre-event Command [] +
Post-event Command [stop_printq] +
Recovery Command []
* Recovery Counter [0] #

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 8-16. Adding/Changing Cluster Events (3 of 3) AU546.0

Notes:

The path to the menu


smitty hacmp -> Extended Configuration -> Extended Event Configuration ->
Change/Show Pre-Defined HACMP Events -> node_down

Associating a custom cluster event with the node_down event


Notice that in the menu path you choose Pre-Defined to see the list of
standard HACMP events. On this visual, we see our new custom event object,
stop_printq, being added as a post-event to the HACMP event script node_down.
Because we are simply referencing the script by its name, we can run more
than one pre- and post-event script by stringing their names together in the
pre- or post-event script field.
Note that for the commands (other than pre and post) on this menu you need
not create a custom object first -- you would come directly to this menu.


Recovery Commands
Recovery commands can be executed if an event script does not exit 0.

[Diagram: if the HACMP event exits with a non-zero return code and the
Recovery Counter is greater than 0, the Recovery Command is run and the
HACMP event is retried.]

Figure 8-17. Recovery Commands AU546.0

Notes:

Recovery command event customization


Recovery commands are another customization that can be made in order to recover
from the failure of an HACMP event script.


Adding/Changing Recovery Commands


Change/Show Cluster Events
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Event Name start_server
Description Script run to start a>
* Event Command [/usr/es/sbin/cluster/>
Notify Command []
Pre-event Command [] +
Post-event Command [] +
Recovery Command [/usr/local/bin/recover]
* Recovery Counter [3]
#

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 8-18. Adding/Changing Recovery Commands AU546.0

Notes:

Recovery command menu


Here we see an example of a recovery command being added to the start_server
event script. This might handle an incorrect application startup.
Recovery commands do not execute unless the recovery counter is greater than 0.
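A minimal sketch of such a recovery command follows. The stale-lock scenario, the lock path and the log file are all assumptions invented for illustration; the HACMP-relevant parts are only that the command must run non-interactively and must itself exit 0 so that the event is retried:

```shell
#!/bin/ksh
# Hypothetical /usr/local/bin/recover recovery command for start_server.
# HACMP runs it when start_server exits non-zero and then re-runs the event,
# up to Recovery Counter times in a single attempt to process the event.
LOCK=/tmp/app.lock      # assumed stale lock left behind by a crashed app
LOG=/tmp/recover.log

if [ -f "$LOCK" ]; then
    # Clean up the state that made the previous start attempt fail.
    rm -f "$LOCK"
    echo "recover: removed stale lock $LOCK" > "$LOG"
else
    echo "recover: nothing to clean up" > "$LOG"
fi
# The recovery command itself must succeed (end with exit 0); otherwise
# HACMP declares the failure not fixable by the recovery command.
```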


HACMP Events One More Time

Called by clstrmgrES recovery programs:
site_up, site_up_complete, site_down, site_down_complete
site_merge, site_merge_complete
node_up, node_up_complete, node_down, node_down_complete
network_up, network_up_complete, network_down, network_down_complete
swap_adapter, swap_adapter_complete
swap_address, swap_address_complete
fail_standby, join_standby
fail_interface, join_interface
rg_move, rg_move_complete
rg_online, rg_offline
event_error
config_too_long
reconfig_topology_start, reconfig_topology_complete
reconfig_resource_release, reconfig_resource_acquire, reconfig_resource_complete
reconfig_configuration_dependency_acquire
reconfig_configuration_dependency_complete
reconfig_configuration_dependency_release
node_up_dependency, node_up_dependency_complete
node_down_dependency, node_down_dependency_complete
migrate, migrate_complete

Called by other events:
node_up_local, node_up_remote
node_down_local, node_down_remote
node_up_local_complete, node_up_remote_complete
node_down_local_complete, node_down_remote_complete
acquire_aconn_service
acquire_service_addr
acquire_takeover_addr
start_server, stop_server
get_disk_vg_fs
get_aconn_rs
release_service_addr, release_takeover_addr
release_vg_fs, release_aconn_rs
swap_aconn_protocols
releasing, acquiring
rg_up, rg_down, rg_error
rg_temp_error_state
rg_acquiring_secondary
rg_up_secondary
rg_error_secondary
resume_appmon
suspend_appmon

Figure 8-19. HACMP Events One More Time AU546.0

Notes:

Each event can be customized


We show the list of events one more time to make the point that each of these can have
notify, pre-event, post-event and recovery script/command associated with them. Some
of them, in fact, probably exist in order to provide a point at which a pre-event,
post-event or recovery script/command can be associated!


Points to Note
The execute bit must be set on all pre-, post-, notify and recovery
scripts.
Synchronization does not copy pre- and post-event script content
from one node to another.
You need to copy all your pre- and post-event scripts to all nodes.
Your pre- and post-event scripts must handle non-zero exit codes.
All scripts must declare the shell they will run in, like:
#!/bin/ksh
Test your changes very carefully, as a mistake is likely to cause a
fallover to abort.


Figure 8-20. Points to Note AU546.0

Notes:

Test your changes


Without a doubt, the most important point to note is the last one - test your changes
very carefully. An error in a pre-, post- or recovery script/command generally becomes
apparent during a fallover. In other words, at a point in time when you can least afford it
to happen!
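Since synchronization does not copy these scripts between nodes, a quick per-node sanity check can catch the two most common mistakes noted above: a missing execute bit and a missing shell declaration. The sketch below builds its own demo directory with one good and one broken script, so every path and file name in it is fabricated for illustration:

```shell
#!/bin/sh
# Build a small demo directory: one correct event script, one broken one.
DIR=/tmp/event_check_demo
LOG=/tmp/event_check.log
mkdir -p "$DIR"
printf '#!/bin/ksh\nexit 0\n' > "$DIR/stop_printq"
chmod +x "$DIR/stop_printq"
printf 'echo oops\n' > "$DIR/broken"   # no '#!' line, not executable

# Check every script for an execute bit and a shell declaration.
n=0
: > "$LOG"
for s in "$DIR"/*; do
    [ -x "$s" ] || { echo "not executable: $s" >> "$LOG"; n=$((n + 1)); }
    head -1 "$s" | grep -q '^#!' || { echo "no shell declared: $s" >> "$LOG"; n=$((n + 1)); }
done
echo "problems found: $n" >> "$LOG"
```

Run on each node (against the real local script directory), this reports any script that would silently fail when HACMP tries to run it during a fallover.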

Use the CSPOC file collection facility


In HACMP 5.2 and later, you can implement the file collections feature to synchronize
your scripts across the cluster. This facility is covered in more depth in the HACMP
course Administration II: Administration and Problem Determination.


Editing an HACMP Event Script (1 of 2)
It is not recommended that you modify an HACMP event
script.
If you do, please note the following:

All HACMP event scripts are written in the Korn shell
All scripts are located in /usr/es/sbin/cluster/events
HACMP event scripts are VERY complex, as they must
operate in a wide variety of circumstances
Be particularly careful about the event emulation mechanism
Do not interfere with it
Make sure your changes emulate it or do it as required
Consider changing the location of the edited event script,
as this prevents the modified script from being overwritten
by an HACMP patch
Refer to the Change/Show Cluster Event screen a few foils back


Figure 8-21. Editing an HACMP Event Script (1 of 2) AU546.0

Notes:

Not a good idea


In general, it is a bad idea to do this. If this is done, it should be well documented and
extremely well tested.


Editing an HACMP Event Script (2 of 2)


When changing an HACMP event script:

1. Copy the source event script to a different directory
2. Ideally, use subroutines to isolate your code
3. Thoroughly document any changes that you make to
the HACMP event script
4. Thoroughly test the HACMP event script behavior in
all fallover scenarios


Figure 8-22. Editing an HACMP Event Script (2 of 2) AU546.0

Notes:

Points to consider if editing a pre-defined event script


The event scripts are stored in /usr/es/sbin/cluster/events. You should keep
a copy of the original script somewhere else. You need to understand all the
scenarios in which this event will be called. If possible, isolate your code
by using subroutines or routines called from the original event script.


RG_Move Event and Selective Fallover
Selective fallover allows fallover of a single resource group
The Cluster Manager uses the rg_move event for selective fallover
C-SPOC can also be used to cause an rg_move event
Selective fallover can happen for the following failures:
NIC failures
Applications
Communication links
Volume groups
Selective fallover can be customized by resource group


Figure 8-23. RG_Move Event and Selective Fallover AU546.0

Notes:

Selective Fallover logic


In general, the following scenarios and utilities can lead HACMP to
selectively move an affected resource group, using the Selective Fallover
logic:
- In cases of service IP label failures, Topology Services, which monitors
the health of the service IP labels, starts a network_down event. This
causes the selective fallover of the affected resource group.
- In cases of application failures, the application monitor informs the
Cluster Manager about the failure of the application, which causes the
selective fallover of the affected resource group.
- In cases of WAN connection failures, the Cluster Manager monitors the
status of the SNA links and captures some of the types of SNA link
failures. If an SNA link failure is detected, the selective fallover
utility moves the affected resource group.


- In cases of volume group failures, the occurrence of the AIX error label
LVM_SA_QUORCLOSE indicates that a volume group went off-line on a node in the
cluster. This causes the selective fallover of the affected resource group.
Remember that in each case when HACMP uses Selective Fallover, an rg_move event
is launched as a response to a resource failure. You can recognize that HACMP uses
Selective Fallover when you identify that an rg_move event is run in the cluster.
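As a quick illustration of that last point, here is a small shell sketch that counts rg_move events in a saved copy of the cluster event log. The /tmp/hacmp.out default path and the exact "EVENT START: rg_move" line format are assumptions to verify against your own system's log:

```shell
# Count rg_move events in an HACMP event log file.
# Assumes log lines of the form "EVENT START: rg_move ..." as typically
# found in /tmp/hacmp.out on HACMP 5.x systems (verify locally).
count_rg_moves() {
    grep -c "EVENT START: rg_move" "$1"
}
```

Each rg_move found this way indicates that the Cluster Manager moved a resource group, whether through Selective Fallover or an explicit C-SPOC request.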


Customizing Event Flow for Other Devices
HACMP provides smit screens for managing the AIX error logging
facility's error notification mechanism.

Examples: disk adapters, disks, CPU, disk subsystems, and other shared devices


Figure 8-24. Customizing Event Flow for Other Devices AU546.0

Notes:

Dealing with other failures detected by AIX


Remember that, by default, HACMP natively monitors only nodes, networks, and network
adapters. If you wish to monitor other devices, you can use error notification methods.
Error notification is a facility of AIX which allows the administrator to map an entry in the
AIX error log to a command to execute.
HACMP provides a smit menu to simplify the process.


Error Notification within smit


HACMP Error Notification
Move cursor to desired item and press Enter.
Configure Automatic Error Notification
Add a Notify Method
Change/Show a Notify Method
Remove a Notify Method
Emulate Error Log Entry

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 8-25. Error Notification within smit AU546.0

Notes:

Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification

What HACMP provides


This is the smit menu that HACMP provides for managing error notification methods.
- HACMP provides error notification methods that you can add by selecting the option
Configure Automatic Error Notification above. However, in HACMP 5.3, these
Automatic Error Notification methods are automatically added during verification and
synchronization.
- HACMP provides Add a Notify Method to handle any AIX error label which may
not be detected by HACMP.
- Finally, HACMP provides a tool to Emulate an Error Log Entry
We will look at these options in this and the subsequent visuals.


Configuring Automatic Error Notification
Configure Automatic Error Notification

Move cursor to desired item and press Enter.

List Error Notify Methods for Cluster Resources


Add Error Notify Methods for Cluster Resources
Remove Error Notify Methods for Cluster Resources

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 8-26. Configuring Automatic Error Notification AU546.0

Notes:

Removing Automatic Error Notify Methods


Since in HACMP 5.3 the Automatic Error Notify Methods are added automatically, you
can come here to remove them. However, this is not recommended: they are re-added
at the next verification and synchronization, so you would have to come back here
and remove them again.


Listing Automatic Error Notification


COMMAND STATUS
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.
[TOP]
bondar:
bondar: HACMP Resource Error Notify Method
bondar:
bondar: hdisk0 /usr/es/sbin/cluster/diag/cl_failover
bondar: scsi0 /usr/es/sbin/cluster/diag/cl_failover
bondar: hdisk11 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk5 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk9 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk7 /usr/es/sbin/cluster/diag/cl_logerror
bondar: ssa0 /usr/es/sbin/cluster/diag/cl_logerror
hudson:
hudson: HACMP Resource Error Notify Method
[MORE...9]
F1=Help F2=Refresh F3=Cancel F6=Command
F8=Image F9=Shell F10=Exit /=Find
n=Find Next


Figure 8-27. Listing Automatic Error Notification AU546.0

Notes:

Listing the Automatic Event Notification methods


Here's the full output from this screen for a sample cluster:
bondar:
bondar: HACMP Resource Error Notify Method
bondar:
bondar: hdisk0 /usr/es/sbin/cluster/diag/cl_failover
bondar: scsi0 /usr/es/sbin/cluster/diag/cl_failover
bondar: hdisk11 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk5 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk9 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk7 /usr/es/sbin/cluster/diag/cl_logerror
bondar: ssa0 /usr/es/sbin/cluster/diag/cl_logerror
hudson:
hudson: HACMP Resource Error Notify Method
hudson:
hudson: hdisk0 /usr/es/sbin/cluster/diag/cl_failover
hudson: scsi0 /usr/es/sbin/cluster/diag/cl_failover
hudson: hdisk10 /usr/es/sbin/cluster/diag/cl_logerror


hudson: hdisk4 /usr/es/sbin/cluster/diag/cl_logerror
hudson: hdisk8 /usr/es/sbin/cluster/diag/cl_logerror
hudson: hdisk6 /usr/es/sbin/cluster/diag/cl_logerror
hudson: ssa0 /usr/es/sbin/cluster/diag/cl_logerror


Adding Error Notification Methods


Add a Notify Method
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Notification Object Name []
* Persist across system restart? No +
Process ID for use by Notify Method [] +#
Select Error Class None +
Select Error Type None +
Match Alertable errors? None +
Select Error Label [] +
Resource Name [All] +
Resource Class [All] +
Resource Type [All] +
* Notify Method []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 8-28. Adding Error Notification Methods AU546.0

Notes:

Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification ->
Add a Notify Method

The error notify stanza


This is an example of a stanza from /etc/objrepos/errnotify. Notice that the
screen above is designed to create a stanza like this; the en_method line at the
end is the command to execute.

errnotify:
        en_pid = 0
        en_name = ""
        en_persistenceflg = 1
        en_label = ""
        en_crcid = 849857919
        en_class = ""
        en_type = ""
        en_alertflg = ""
        en_resource = ""
        en_rtype = ""
        en_rclass = ""
        en_symptom = ""
        en_err64 = ""
        en_dup = ""
        en_method = "/usr/lib/ras/notifymeth -l $1 -t CHECKSTOP"

Parameters passed to the error notify method


One or more error notification methods can be added for every error that can appear in
the AIX error log.
The $ parameters that can be used with the en_method are:
$1 Sequence Number
$2 Error ID
$3 Error CLASS
$4 Error Type
$5 Alert Flag
$6 Resource Name
$7 Resource Type
$8 Resource Class
$9 Error Label
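A notify method is simply a command that receives these parameters. The following is a minimal sketch of such a method, written as a shell function; the log file location and the choice of fields to record are arbitrary illustrations, not part of HACMP:

```shell
# Minimal notify method sketch: record selected error-log fields.
# $1=sequence, $2=error ID, $3=class, $6=resource name, $9=error label
log_hacmp_error() {
    printf 'seq=%s id=%s class=%s resource=%s label=%s\n' \
        "$1" "$2" "$3" "$6" "$9" >> "${ERRNOTIFY_LOG:-/tmp/errnotify.log}"
}
```

Installed as an en_method, AIX would invoke the script with the nine positional parameters listed above.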


Emulating Errors (1 of 2)
HACMP Error Notification
Mo+--------------------------------------------------------------------------+
Error Label to Emulate

Move cursor to desired item and press Enter.

[TOP]
SSA_DISK_ERR3 SSA_DISK_DET_ER
LVM_SA_QUORCLOSE bernhardvg
LVM_SA_QUORCLOSE xwebvg
LVM_SA_QUORCLOSE rootvg
SERVICE_EVENT diagela_SE
FCP_ARRAY_ERR6 fcparray_err
DISK_ARRAY_ERR2 ha_hdisk0_0
DISK_ARRAY_ERR3 ha_hdisk0_1
DISK_ARRAY_ERR5 ha_hdisk0_2
DISK_ERR2 ha_hdisk0_3
[MORE...39]

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 8-29. Emulating Errors (1 of 2) AU546.0

Notes:

Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification ->
Emulate Error Log Entry

Emulating an error log entry


HACMP provides a menu to allow you to emulate an error log entry. This screen shows
part of the list of error labels which is provided when the Emulate Error Log Entry is
selected in the HACMP Error Notification menu (this menu appears a few foils back).
We are going to generate an emulated loss of quorum on the xwebvg volume group.
This will generate an example of the error LVM_SA_QUORCLOSE in the AIX error log
and run the script associated with the error notification method quorum_lost.
This mechanism for emulating errors allows you to do basic testing of an error
notification method. If at all possible to do so without actually damaging the equipment,


it would be best to cause the actual hardware error that is of concern in order to verify
that the error notification method has been associated with the correct AIX error label.
Note that the emulated error does not have the same resource name as an actual
record but otherwise passes the same arguments to the method as the actual one.


Emulating Errors (2 of 2)
Emulate Error Log Entry
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Error Label Name LVM_SA_QUORCLOSE
Notification Object Name xwebvg
Notify Method /usr/es/sbin/cluster/>

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 8-30. Emulating Errors (2 of 2) AU546.0

Notes:

Kicking off the emulation


Use this screen to start the emulation process.


What Will This Cause?
# errpt -a
---------------------------------------------------------------------------
LABEL: LVM_SA_QUORCLOSE
IDENTIFIER: CAD234BE
Date/Time: Fri Sep 19 13:58:05 MDT
Sequence Number: 469
Machine Id: 000841564C00
Node Id: bondar
Class: H
Type: UNKN
Resource Name: LVDD
Resource Class: NONE
Resource Type: NONE
Location:
Description
QUORUM LOST, VOLUME GROUP CLOSING
Probable Causes
PHYSICAL VOLUME UNAVAILABLE
Detail Data
MAJOR/MINOR DEVICE NUMBER
00C9 0000
QUORUM COUNT
0
ACTIVE COUNT
0
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------

... and a fallover of the discovery resource group to hudson.



Figure 8-31. What Will This Cause? AU546.0

Notes:

Example emulated error record


Here is an example of the output produced by running such an emulated event. The top
of the screen is the truncated output of the error template associated with the
LVM_SA_QUORCLOSE error, which gives a brief indication of the nature of the error.
The output of an emulation will have the value Resource Name: EMULATE. If your
notify method depends on this field, this complicates testing: you may have to
temporarily change the command to execute while testing via emulation.
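One way around this is to branch on the Resource Name parameter ($6) inside the notify method itself, so that an emulated run only logs instead of triggering real recovery. The structure and messages below are illustrative:

```shell
# Guard a notify method against emulated error records: an emulated
# entry carries "EMULATE" as the Resource Name ($6), a real one does not.
handle_error() {
    if [ "$6" = "EMULATE" ]; then
        echo "emulated ${9}: logging only"
    else
        echo "real ${9}: invoking recovery"
    fi
}
```

Such a guard lets you exercise the method through the Emulate Error Log Entry menu without risking a real fallover action.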


Checkpoint
1. True or False?
HACMP event scripts are binary executables and cannot be
easily modified.
2. Which of the following runs if an HACMP event script fails? (select
all that apply)
a. Pre-event scripts.
b. Post-event scripts.
c. Error notification methods.
d. Recovery commands.
e. Notify methods.
3. How does an event script get started?
a. Manually by an administrator
b. Called by cluster manager
c. Called by a recovery program
d. Called by the topology services daemon
4. True or False?
Pre-event scripts are automatically synchronized.
5. True or False?
Writing error notification methods is a normal part of configuring a
cluster.


Figure 8-32. Checkpoint AU546.0

Notes:


Unit Summary

Having completed this unit, you should be able to:


Describe what an HACMP event is
Describe the sequence of events when:
The first node starts in a cluster
A new node joins an existing cluster
A node leaves a cluster voluntarily
Explain what happens when HACMP processes an event
Describe how to customize the event flow
State how to monitor other devices


Figure 8-33. Unit Summary AU546.0

Notes:



Unit 9. Integrating NFS into HACMP

What This Unit Is About


This unit covers the concepts of using Sun's Network File System in a
highly available cluster. You learn how to configure NFS in an HACMP
environment for maximum availability.

What You Should Be Able to Do


After completing this unit, you should be able to:
Explain the concepts of Network File System (NFS)
Configure HACMP to support NFS
Discuss why Volume Group major numbers must be unique when
using NFS with HACMP
Outline the NFS configuration parameters for HACMP

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4867-05 HACMP for AIX: HACMP Master Glossary
SC23-4864-06 HACMP for AIX: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX: Planning and Installation Guide
SC23-4862-06 HACMP for AIX: Administration Guide
SC23-5177-00 HACMP for AIX: Troubleshooting Guide


Unit Objectives
After completing this unit, you should be able to:
Explain the concepts of Network File System (NFS)
Configure HACMP to support NFS
Discuss why Volume Group major numbers must be unique
when using NFS with HACMP
Outline the NFS configuration parameters for HACMP


Figure 9-1. Unit Objectives AU546.0

Notes:

Objectives
In this unit, we examine how NFS can be integrated into HACMP in order to provide a
Highly Available Network File System.


So, What Is NFS?
The Network File System (NFS) is a client/server
application that lets a computer user view and optionally
store and update files on a remote computer as though they
were on the user's own computer

(Diagram: an NFS server JFS-mounts a filesystem from shared_vg locally and exports
it; NFS clients NFS-mount it over the network read-write or read-only; a system can
act as both NFS client and server at the same time.)

Figure 9-2. So, What Is NFS? AU546.0

Notes:

NFS
NFS is a suite of protocols which allow file sharing across an IP network. An NFS server
is a provider of file service (that is, a file, a directory or a file system). An NFS client is a
recipient of a remote file service. A system can be both an NFS client and server at the
same time.


NFS Background Processes


NFS uses TCP/IP and a number of background processes to
allow clients to access disk resource on a remote server
Configuration files are used on the client and server to
specify export and mount options
(Diagram: the NFS server runs nfsd and mountd daemons and lists its exports in
/etc/exports; the NFS client runs biod daemons and can record its mounts in
/etc/filesystems; a node acting as both client and server runs all of these.)


Figure 9-3. NFS Background Processes AU546.0

Notes:

NFS processes
The NFS server uses a process called mountd to allow remote clients to mount a local
disk or CD resource across the network. One or more nfsd processes handle I/O on the
server side of the relationship.
The NFS client uses the mount command to establish a mount to a remote storage
resource which is offered for export by the NFS server. One or more block I/O
daemons, biod, run on the client to handle I/O on the client side.
The server maintains details of data resources offered to clients in the /etc/exports file.
Clients can automatically mount network file systems using the /etc/filesystems file.
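As a sketch of what these two files might contain for the /fsa and aservice example used later in this unit (entries are illustrative; check the exports and filesystems file formats on your own AIX level):

```
# /etc/exports on the NFS server: offer /fsa read-write
/fsa -rw

# /etc/filesystems stanza on an NFS client: mount aservice:/fsa on /a
/a:
        dev      = /fsa
        vfs      = nfs
        nodename = aservice
        mount    = true
        options  = bg,hard,intr
```

With a stanza like this in place, the client can mount the filesystem with a simple mount /a.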


Combining NFS with HACMP
NFS exports can be made highly available by using the HACMP
resource group to specify NFS exports and mounts
The A resource group specifies:
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
The client system runs # mount aservice:/fsa /a and so sees /fsa as /a
(Diagram: Bondar runs # mount /fsa and exports /fsa via aservice; Hudson stands by)

Figure 9-4. Combining NFS with HACMP AU546.0

Notes:

Combining NFS with HACMP


We can combine NFS with HACMP in order to achieve a Highly Available Network File
System. One node in the cluster mounts the disk resource locally and offers that disk
resource for export across the IP network. Clients optionally mount the disk resource. A
second node is configured to take over the NFS export in the event of node failure.
There is one unusual aspect to the above configuration which should be discussed. The
HACMP cluster is exporting the /fsa file system via the aservice service IP label. The
client is mounting the aservice:/fsa file system on the local mount point /a. This is
somewhat unusual in the sense that client systems usually use a local mount point
which is the same as the NFS file systems name on the server.
In the configuration shown above, there is no particularly good reason why the client is
using a different mount point than /fsa and, in fact, the client is free to use whatever
mount point it wishes to use including, of course, /fsa. Why this example is using a
local mount point of /a will become clear shortly.


NFS Fallover with HACMP


In this scenario, the resource group moves to the surviving node in
the cluster, which exports /fsa. Clients see NFS server not
responding during fallover
The A resource group specifies:
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
The client system runs # mount aservice:/fsa /a and so "sees" /fsa as /a
(Diagram: after Bondar fails, Hudson mounts /fsa, exports it, and takes over the
aservice label)

Figure 9-5. NFS Fallover with HACMP AU546.0

Notes:

Fallover
If the node offering the NFS export should fail, a standby node takes over the shared
disk resource, locally mounts the file system, and exports the file system or directory for
remote mount.
If the client was not accessing the disk resource during the period of the fallover, then it
is not aware of the change in which node is serving the NFS export.
Note that the aservice service IP label is in the resource group which is exporting /fsa.
The HACMP NFS server support requires that resource groups which export NFS
filesystems be configured to use IPAT since the client system is not capable of dealing
with two different IP addresses for its NFS server depending on which node the NFS
server service happens to be running on.


Configuring NFS for High Availability
Change/Show All Resources and Attributes for a Resource Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[MORE...10] [Entry Fields]
Volume Groups [aaavg] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +
Filesystems (empty is ALL for VGs specified) [/fsa] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured true +
Filesystems/Directories to Export [/fsa] +
Filesystems/Directories to NFS Mount [] +
Network For NFS Mount [] +
[MORE...10]
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 9-6. Configuring NFS for High Availability AU546.0

Notes:

Configuring NFS for High Availability


The visual shows the resource group attributes which are important for configuring an
NFS file system.
- Filesystems/Directories to Export
Specifies the filesystems to be NFS exported.
- Filesystems mounted before IP configured
When implementing NFS support in HACMP, you should also set this option. This
prevents access from a client before the filesystems are ready.
- Filesystems (empty is ALL for VGs specified)
This particular example also explicitly lists the /fsa filesystem as a resource to be
included in the resource group (see the Filesystems (empty is ALL for VGs specified)
field). This is not necessary, as this field could have been left blank to indicate that all


the filesystems in the aaavg volume group should be treated as resources within the
resource group.

Only non-concurrent access resource groups


The resource group policy cannot be concurrent (On Line On All Available Nodes).


Cross-mounting NFS Filesystems (1 of 3)
A filesystem configured in a resource group can be made
available to all the nodes in the resource group:
One node has the resource group and acts as an NFS
server
Mounts the filesystem (/fsa)
Exports the filesystem (/fsa)
All nodes act as NFS clients
Mount the NFS filesystem (aservice:/fsa) onto a local mount point (/a)

(Diagram: the node holding the resource group acts as the NFS server and exports
/fsa; every node in the resource group, including the server itself, acts as an NFS
client and runs # mount aservice:/fsa /a)

Figure 9-7. Cross-mounting NFS Filesystems (1 of 3) AU546.0

Notes:

Cross-mounting
We can use HACMP to mount an NFS exported filesystem locally on all the nodes
within the cluster. This allows two or more nodes to have access to the same disk
resource in parallel. An example of such a configuration might be a shared repository
for the product manuals (read only) or a shared /home filesystem (read-write). One
node mounts the filesystem locally, then exports the filesystem. All nodes within the
resource group then NFS mount the filesystem.
By having all nodes in the resource group act as an NFS client including the node which
holds the resource group, it is not necessary for the takeover node to unmount the
filesystem before becoming the NFS server.


Concurrent access limitations


While the NFS file system can be mounted read-write by multiple nodes, it should be
noted that all of the NFS caching issues that exist with a regular NFS configuration (one
not involving HACMP in any way) still exist. Parallel or concurrent writes are not
supported. For example, applications running on the two cluster nodes should not
attempt to update the same NFS served file as only one of them is likely to succeed with
the other getting either stale NFS file handle problems or mysterious loss of changes
made to the file. This is a fundamental issue with NFS.

True concurrent access


Clusters wishing to have true concurrent access to the same filesystem for reading and
writing purposes should use the IBM GPFS (General Parallel File System) product
instead of NFS to share the filesystem across the cluster nodes.


Cross-mounting NFS Filesystems (2 of 3)
When a fallover occurs, the role of NFS server moves with the
resource group
All (surviving) nodes continue to be NFS clients

(Diagram: after fallover, the surviving node acts as the NFS server, exports /fsa,
and continues to act as an NFS client with # mount aservice:/fsa /a)


Figure 9-8. Cross-mounting NFS Filesystems (2 of 3) AU546.0

Notes:

Fallover with a cross-mounted file system


If the left-hand node fails then HACMP on the right hand node initiates a fallover of the
resource group. This primarily consists of:
- Assigning or aliasing (depending on which flavor of IPAT is being used) the
aservice service IP label to a NIC
- Varying on the shared volume group and mounting the /fsa journaled filesystem
- NFS exporting the /fsa filesystem
Note that the right hand node already has the aservice:/fsa filesystem NFS mounted
on /a.


Cross-mounting NFS Filesystems (3 of 3)


Here's a more detailed look at what is going on:
The A resource group specifies:
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
/fsa as a NFS filesystem to mount on /a
The client system runs # mount aservice:/fsa /a and so "sees" /fsa as /a
(Diagram: Bondar runs # mount /fsa and exports /fsa via aservice; both Bondar and
Hudson run # mount aservice:/fsa /a)


Figure 9-9. Cross-mounting NFS Filesystems (3 of 3) AU546.0

Notes:

Cross-mounting details
The key change, compared to the configuration which did not use cross-mounting, is
that this configuration's resource group lists /fsa as a NFS filesystem and specifies that
it is to be mounted on /a. This causes every node in the resource group to act as an
NFS client with aservice:/fsa mounted at /a. Only the node which actually holds the
resource group acts as an NFS server for the /fsa filesystem.


Choosing the Network for Cross-mounts
In a cluster with multiple IP networks, it may be useful to specify
which network should be used by HACMP for cross-mounts
This is usually done as a performance enhancement
The A resource group specifies:
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
/fsa as a NFS filesystem to mount on /a
net_ether_01 is the network for NFS mounts

(Diagram: the cluster has two IP networks, net_ether_01 and net_ether_02; the
aservice label is on net_ether_01, so both Bondar and Hudson run
# mount aservice:/fsa /a over net_ether_01, while Bondar runs # mount /fsa and
exports /fsa)

Figure 9-10. Choosing the Network for Cross-mounts AU546.0

Notes:

Network for NFS mount


HACMP allows you to specify which network should be used for NFS exports from this
resource group.
In this scenario, we have an NFS cross-mount within a cluster which has two IP
networks. For some reason, probably that the net_ether_01 network is either a faster
networking technology or under a lighter load, the cluster administrator has decided to
force the cross-mount traffic to flow over the net_ether_01 network.
This field is relevant only if you have filled in the Filesystems/Directories to NFS
Mount field. The Service IP Labels/IP Addresses field should contain a service label
which is on the network you select.
If the network you have specified is unavailable when the node is attempting to NFS
mount, it will seek other defined, available IP networks in the cluster on which to
establish the NFS mount.


Configuring HACMP for Cross-mounting


Change/Show All Resources and Attributes for a Resource Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[MORE...10] [Entry Fields]
Volume Groups [aaavg] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +
Filesystems (empty is ALL for VGs specified) [/fsa] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured true +
Filesystems/Directories to Export [/fsa] +
Filesystems/Directories to NFS Mount [/a;/fsa] +
Network For NFS Mount [net_ether_01] +
[MORE...10]
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 9-11. Configuring HACMP for Cross-mounting AU546.0

Notes:

Configuring HACMP for cross-mounting


The directory or directories to be cross-mounted are specified in the
Filesystems/Directories to NFS Mount field. The network to be used for NFS
cross-mounts is optionally specified in the Network for NFS Mount field.

Cross-mount syntax
Note the rather strange /a;/fsa syntax for specifying the directory to be
cross-mounted. This rather unusual syntax is explained in the next foil.
Note that the resource group must include a service IP label which is on the
net_ether_01 network (aservice in the previous foil).


Syntax for Specifying Cross-mounts
/a;/fsa
The part before the semicolon (/a) is where the filesystem should be mounted over;
the part after it (/fsa) is what the filesystem is exported as.
What HACMP does (on each node in the resource group):
# mount aservice:/fsa /a
(on each node in the resource group)


Figure 9-12. Syntax for Specifying Cross-mounts AU546.0

Notes:

Syntax for specifying cross-mounts


The inclusion of a semi-colon in the Filesystems/Directories to NFS Mount field
indicates that the newer (and easier to work with) approach to NFS cross-mounting
described in this unit is in effect. The local mount point to be used by all the nodes in the
resource group when they act as NFS clients is specified before the semi-colon. The
NFS filesystem which they are to NFS mount is specified after the semi-colon.
Since the configuration specified in the last HACMP smit screen uses net_ether_01 for
cross-mounts and the service IP label on the net_ether_01 network is aservice (see
the diagram a couple of foils back showing the two IP networks), each node in the
resource group will mount aservice:/fsa on their local /a mount point directory.
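The field value can be decomposed mechanically. The shell sketch below (using the aservice label from the example above) shows how the two halves of the /a;/fsa entry map onto the resulting mount command; it is only an illustration of the syntax, not something HACMP itself runs:

```shell
# Split a cross-mount entry of the form <local-mount-point>;<exported-filesystem>.
entry='/a;/fsa'
mountpoint=${entry%%;*}    # before the semicolon: where the NFS clients mount it
exported=${entry##*;}      # after the semicolon: what the server exports
# Each node in the resource group effectively performs:
echo "mount aservice:$exported $mountpoint"
```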


Ensuring the VG Major Number Is Unique


Any Volume Group which contains a filesystem that is
offered for NFS export to clients or other cluster nodes must
use the same VG major number on every node in the cluster
To display the current VG major numbers, use:
# ls -l /dev/*webvg
crw-rw---- 1 root system 201, 0 Sep 04 23:23 /dev/xwebvg
crw-rw---- 1 root system 203, 0 Sep 05 18:27 /dev/ywebvg
crw-rw---- 1 root system 205, 0 Sep 05 23:31 /dev/zwebvg

The command lvlstmajor will list the available major numbers for each
node in the cluster
For example:
# lvlstmajor
43,45...99,101...

The VG major number may be set at the time of creating the VG using SMIT
mkvg or by using the -V flag on the importvg command, for example:

# importvg -V100 -y shared_vg_a hdisk2

C-SPOC will "suggest" a VG major number which is unique across the nodes
when it is used to create a shared volume group


Figure 9-13. Ensuring the VG Major Number Is Unique AU546.0

Notes:

VG major numbers
Volume group major numbers must be the same for any given volume group across all
nodes in the cluster. This is a requirement for any volume group that has filesystems
which are NFS exported to clients (either within or without the cluster).
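As a quick sanity check, the major number in ls -l output can be extracted and compared across nodes. This is only a sketch of the parsing step, run here against a saved sample line rather than a live device node:

```shell
# Sample ls -l output for a VG device node; the fifth field holds the major
# number followed by a comma.
line='crw-rw----    1 root     system      201,  0 Sep 04 23:23 /dev/xwebvg'
major=$(printf '%s\n' "$line" | awk '{ sub(",", "", $5); print $5 }')
echo "xwebvg major number: $major"
# Repeat on every node (for example via rsh/ssh) and verify the numbers match.
```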

NFS with HACMP Considerations
Some points to note...

Resource groups which export NFS filesystems MUST


1 implement IPAT.

The Filesystems mounted before IP configured resource group


2 attribute must be set to true.

HACMP does not use /etc/exports and the default is to export


filesystems rw to the world. Specify NFS export options in
3
/usr/es/sbin/cluster/etc/exports if you want better control (AIX 5.2
provides an option to specify this path)
HACMP only preserves NFS locks if the NFS exporting resource
4 group has no more than two nodes.


Figure 9-14. NFS with HACMP Considerations AU546.0

Notes:

HACMP exports file


As mentioned in the visual, if you need to specify NFS options, you must use the
HACMP exports file, not the standard AIX exports file. You can use AIX smit mknfsexp
to build the HACMP exports file:
Add a Directory to Exports List
* PATHNAME of directory to export [] /
* MODE to export directory read-write
HOSTS & NETGROUPS allowed client access []
Anonymous UID [-2]
HOSTS allowed root access []
HOSTNAME list. If exported read-mostly []
Use SECURE option? no +
Public filesystem? no +
* EXPORT directory now, system restart or both both +
PATHNAME of alternate Exports file [/usr/es/sbin/cluster/etc/exports]
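For reference, the alternate exports file uses the same entry syntax as the standard AIX /etc/exports file. A sample line might look like the following (the client names nodea and nodeb are hypothetical):

```
/fsa -ro,root=nodea:nodeb
```

Each line names the exported directory followed by its export options.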


Checkpoint
1. True or False?
HACMP supports all NFS export configuration options.
2. Which of the following is a special consideration when using
HACMP to NFS export filesystems? (select all that apply)
a. NFS exports must be read-write.
b. Secure RPC must be used at all times.
c. A cluster may not use NFS Cross-mounts if there are client
systems accessing the NFS exported filesystems.
d. A volume group which contains filesystems which are NFS
exported must have the same major device number on all cluster
nodes in the resource group.
3. What does [/abc;/xyz] mean when specifying a directory to cross-
mount?
a. /abc is the name of the filesystem which is exported and /xyz is
where it should be mounted at
b. /abc is where the filesystem should be mounted at and /xyz is the
name of the filesystem which is exported
4. True or False?
HACMP's NFS exporting feature only supports clusters of two
nodes.
5. True or False?
IPAT is required in resource groups which export NFS
filesystems.

Figure 9-15. Checkpoint AU546.0

Notes:

Unit Summary
HACMP provides a means to make Network File System
(NFS) highly available
Configure Filesystem/Directory to Export and Filesystems
mounted before IP started in resource group
VG major number must be the same on all nodes
Clients NFS mount using service address
In case of node failure, takeover node acquires the service address, acquires
the disk resource, mounts the file system and NFS exports the file system
Clients see NFS server not responding during the fallover
NFS file systems can be cross-mounted across all nodes
Faster takeover: takeover node does not have to unmount the file system
A preferred network can be selected
Really only for read only file systems: NFS cross-mounted file systems can be
mounted read-write, but concurrent write attempts will produce inconsistent
results
Use GPFS for true concurrent access
Non-default export options can be specified in
/usr/es/sbin/cluster/etc/exports


Figure 9-16. Unit Summary AU546.0

Notes:


Unit 10. Problem Determination and Recovery

What This Unit Is About


This unit describes the problem determination and recovery tools and
techniques for diagnosing problems that may occur in your cluster.

What You Should Be Able to Do


After completing this unit, you should be able to:
List reasons why HACMP can fail
Identify configuration and administration errors
Understand why the Dead Man's Switch invokes
Know when the System Resource Controller kills a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4867-03 HACMP for AIX, Version 5.2: Master Glossary
SC23-4864-03 HACMP for AIX, Version 5.2: Concepts and Facilities
Guide
SC23-4861-03 HACMP for AIX, Version 5.2 Planning and Installation
Guide
SC23-4862-03 HACMP for AIX, Version 5.2: Administration and
Troubleshooting Guide


Unit Objectives
After completing this unit, you should be able to:
List reasons why HACMP can fail
Identify configuration and administration errors
List the problem determination tools available in smit
Understand why the Dead Man's Switch invokes
Know when the System Resource Controller kills a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support


Figure 10-1. Unit Objectives AU546.0

Notes:
In this unit we examine some of the reasons why HACMP might fail, and how to perform
basic problem determination in order to recover from failure.

Why Do Good Clusters Turn Bad?
Common reasons why HACMP fails:
A poor cluster design and lack of thorough planning
Basic TCP/IP and LVM configuration problems
HACMP cluster topology and resource configuration problems
Absence of change management discipline in a running cluster
Lack of training for staff administering the cluster
Performance/capacity problems

X
X
A

Halifax Vancouver


Figure 10-2. Why Do Good Clusters Turn Bad? AU546.0

Notes:

Root causes
Often the root cause of problems with HACMP is the absence of design and planning at
the outset, or poor design and planning. As you will have now figured out, a couple of
hours spent in planning HACMP reaps rewards later on in terms of how easy it is to
configure, administer, and diagnose problems with the cluster.
HACMP verifies all topology and resource configuration parameters and most IP
configuration parameters before synchronization takes place. This means that provided
the cluster synchronizes and starts successfully, the cluster should remain stable.
The prime reason for cluster failure once the environment is in production is
administrative mistakes and an absence of change control.
Typically HACMP clusters are very stable. During the writing of this course, a customer
complained to IBM that his HACMP cluster had failed on him because a node had failed
and his workload did not get taken over by the standby node. Upon investigation it was


proven that in fact an earlier (undetected) failure had resulted in the standby node
taking over the workload and a subsequent component failure resulted in a second
point of failure. How many points of failure does HACMP handle?

Test Your Cluster before Going Live!
Careful testing of your production cluster before going live reduces the
risk of problems later.
An example test plan might include:
Test Item How to test Checked
Node Fallover
Network Adapter Swap
IP Network Failure
Storage Adapter Failure
Disk Failure
Clstrmgr Killed
Serial Network Failure
SCSI Adapter for rootvg Failure
Application Failure
Node re-integration
Partitioned Cluster

Figure 10-3. Test Your Cluster before Going Live! AU546.0

Notes:

Importance of testing
Every cluster should be thoroughly tested before going live. It is important that you
develop and document a cluster test plan for your environment. Start by taking your
cluster diagram and highlighting all the things that could go wrong, then write down
what you expect the cluster to do in response to that failure. Periodically test your
cluster to ensure that fallover works correctly and correct your test plan if your
assumptions about what will happen differ from that which HACMP actually performs
(for example shutdown -F does not cause fallover). HACMP 5.2 and later provide a
test tool which will be discussed later in this unit.
Although it is recommended that Graceful with Takeover testing of the cluster services
be performed, it is especially important to conduct this testing if HACMP is to be used to
reduce Planned Downtime (for upgrades/maintenance) as this will be the cluster
function that will be used. This Graceful with Takeover testing, however, should not
replace the testing of a node failure due to crash (for example, cat hello > /dev/kmem).


All efforts should be made to verify application functions (user level testing) as the
cluster function tests are being performed. Verifying that the cluster functions correctly
without verifying that the application functions correctly as part of the cluster function
test is not recommended. Getting the end-user commitment is sometimes the hardest
part of this process.

Use of emulation
You can emulate some common cluster status change events. Remember that
whenever you make a change to cluster configuration, test the change before putting
the cluster back into production if at all possible.
You should always emulate a DARE change before actually doing it. If a DARE change
does not succeed during emulation, then it will definitely not succeed when you actually
do it.

Tools to Help You Diagnose a Problem
Most problems related to IP, LVM and cluster configuration errors
Tools:
Automatic Cluster Configuration Monitoring
Automatic Error Correction during verify
HACMP Cluster Test Tool
Emulation Tools
HACMP Troubleshooting manual
Log files: hacmp.out, cluster.log, clverify.log, clstrmgr.debug

Simple AIX and HACMP commands:

df -k mount lsfs netstat -i


no -a lsdev lsvg [<ecmvg>] lsvg -o

lslv lspv ifconfig

clRGinfo cltopinfo clcheck_server clstat


Figure 10-4. Tools to Help You Diagnose a Problem AU546.0

Notes:

Some key tools


Some of the key tools to aid you in diagnosing a problem in the cluster are detailed
above. Most problems are simple configuration issues and hence the commands used
to diagnose them are also straightforward. Also, especially useful are the
/tmp/hacmp.out and /usr/es/adm/cluster.log files which document all of the output
that the HACMP event scripts generate.
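Since most diagnosis starts with these logs, a first pass often just pulls the event boundary lines out of hacmp.out. The sketch below runs the filter against a short illustrative excerpt; the log text here is made up for the example, as real files interleave much more script output:

```shell
# Illustrative hacmp.out-style excerpt (fabricated for the example).
log='Mar  1 12:00:01 EVENT START: node_down hudson
Mar  1 12:00:02 node_down: acquiring resources
Mar  1 12:00:05 EVENT COMPLETED: node_down hudson 0'
# Show only the event boundaries to get a quick timeline of what ran.
printf '%s\n' "$log" | grep 'EVENT'
```

On a real node, the same grep would be run against /tmp/hacmp.out itself.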

Don't forget the documentation


Useful help on errors generated by HACMP and diagnosing problems with the cluster
may be found in the HACMP for AIX Administration Guide and the HACMP for AIX
Troubleshooting Guide.


Tools Available From smit Menu


Problem Determination Tools
Move cursor to desired item and press Enter.

HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Cluster Test Tool
HACMP Trace Facility
HACMP Event Emulation
HACMP Error Notification

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do


Figure 10-5. Tools Available From smit Menu AU546.0

Notes:

Tools available from the Problem Determination Tools smit menu


We will be looking at some of these tools on the following pages. Not covered are:
View Current State. This tool executes the /usr/es/sbin/cluster/utilities/cldump
command, which gives the state of the cluster as long as at least one node has
cluster manager services running.
HACMP Log Viewing and Management. This tool allows you to watch or
scan the HACMP log files, and to set options on the /tmp/hacmp.out file
to see event summaries or to view the file in searchable HTML format.
Watching is basically a tail -f operation, while scanning views the entire file.
Restore HACMP Configuration Database from Active Configuration.
Release Locks Set By Dynamic Reconfiguration. This was covered in Unit 7.
Clear SSA Disk Fence Registers.
HACMP Trace Facility.
HACMP Error Notification. This was covered in Unit 8.


Automatic Cluster Configuration Monitoring

HACMP Verification

Move cursor to desired item and press Enter.

Verify HACMP Configuration


Configure Custom Verification Method
Automatic Cluster Configuration Monitoring

Automatic Cluster Configuration Monitoring

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Automatic cluster configuration verification Enabled +
Node name Default +
* HOUR (00 - 23) [00] +#


Figure 10-6. Automatic Cluster Configuration Monitoring AU546.0

Notes:

How it works
The clverify utility runs on one user-selectable HACMP cluster node once every 24
hours. By default, the first node in alphabetical order runs the verification at midnight.
When automatic cluster configuration monitoring detects errors in the cluster configuration,
clverify triggers a general_notification event. The output of this event is logged in
hacmp.out throughout the cluster on each node that is running cluster services.
clverify maintains the log file /var/hacmp/clverify/clverify.log.
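A habit that pairs well with the nightly run is scanning that log for error lines. The sketch below shows the filtering step against a fabricated two-line sample (the message text is hypothetical); on a real node you would run the same grep against /var/hacmp/clverify/clverify.log itself:

```shell
# Fabricated clverify-style output, for illustration only.
clv='Verifying Cluster Topology...
ERROR: node hudson: /etc/hosts is missing an HACMP-managed address'
# Count the lines that begin with ERROR; non-zero means investigate.
errors=$(printf '%s\n' "$clv" | grep -c '^ERROR')
echo "clverify error lines: $errors"
```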


Automatic Correction
HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?
* Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


Esc+5=Reset Esc+6=Command Esc+7=Edit

Also automatic synchronization during cluster start (HACMP 5.3)



Figure 10-7. Automatic Correction AU546.0

Notes:

Autocorrection of some verification errors during Verify


You can run automatic corrective actions during cluster verification on an inactive
cluster. Automatic correction of clverify errors is not enabled by default. You can
choose to run this useful utility in one of two modes. If you select Interactively, when
clverify detects a correctable condition related to importing a volume group or to
exporting and re-importing mount points and filesystems, you are prompted to authorize
a corrective action before clverify continues error checking. If you select Yes, when
clverify detects that any of the conditions listed below exists, it takes the corrective
action automatically without a prompt.
The following errors are detected and fixed:
- Required /etc/services entries are missing on a node
- HACMP shared volume group time stamps are not up to date on a node
- The /etc/hosts file on a node does not contain all HACMP-managed IP addresses
- SSA concurrent volume groups need unique SSA node numbers
- A filesystem is not created on a node, although disks are available

- Disks are available, but the volume group has not been imported to a node.
- Required HACMP snmpd entries are missing on a node.
Note that the autocorrection selection will not appear if cluster services are running.
Instead the top line of the menu will look like:
HACMP Verification and Synchronization (Active Cluster Nodes Exist)

Additional autocorrection in HACMP 5.3


1. RSCT instance number synchronized properly across all nodes
2. Ensure boot-time IP addresses are configured on the network interfaces that RSCT expects
3. Ensure active shared volume groups are not set to auto-varyon
4. Ensure filesystems are not set to auto-mount

Additional verification in HACMP 5.3


In HACMP 5.3, the following are added to verification
Incompatibilities between network and network adapter types
Shared volume groups defined as auto-varyon
Certain Network Options (no command settings) are different in cluster nodes or will
be modified by RSCT during cluster startup
MTU sizes are different on cluster nodes
RSCT software levels are different for the same AIX levels
HACMP WAN support configured and WAN software is missing
Certain volume group settings are different
Disks are not accessible before the cluster startup
There are resource groups with site policies defined, but no XD software is installed
There are resource groups with site policies defined, but no sites configured
Issue an error instead of the warning when a volume group that is set up for cross site
mirroring does not have copies of the logical volumes at both sites
Resource group contains a volume group set up for cross site mirroring, and forced
varyon is not set

Automatic verification during cluster start


In HACMP 5.3 there is an additional automatic verification and correction done during
cluster start:


Currently, if a user attempts to start cluster services on a node on which the HACMP
topology has not yet been synchronized, the following error will be displayed
immediately upon entering SMIT:
smitty clstart
Local node not properly configured for HACMP for AIX.
This message appears when the HACMP cluster ODM stanza does not contain a node
name in the nodename field. Rather than continue to display this message, the existing
clstart SMIT interface will be updated to include a message that indicates verification /
synchronization will occur prior to starting cluster services. This message will appear in
the SMIT interface in place of the existing message.
The assumption is there is a valid cluster configuration on the local node the user is
attempting to start, but the user has not synchronized which leaves the HACMP cluster
node handle field blank and results in the error message, above. If cluster services are
not running on any node in the cluster (known to the local node) then the local cluster
configuration will be synchronized to all nodes attempting to start cluster services after
successfully verifying the local DCD configuration. If, however cluster services are
running on a node in the cluster then the local DCD will be compared against an ACD of
a running cluster node where the local node participates in the ACD's configuration. If
the DCD and ACD match, then verification is run. If the DCD and ACD do not match,
then a snapshot is made of the DCD, and the active node's ACD will be copied to the
DCD on the local node and verification will be run prior to starting cluster services.
This feature can be disabled such that verification and synchronization does not occur
during cluster startup. The smit path to disable is:
smitty hacmp -> Extended Configuration -> Extended cluster services

HACMP Cluster Test Tool
HACMP Cluster Test Tool

Move cursor to desired item and press Enter.

Execute Automated Test Procedure


Execute Custom Test Procedure

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do

THESE TESTS ARE DISRUPTIVE


Figure 10-8. HACMP Cluster Test Tool AU546.0

Notes:

Test Tool description


The Cluster Test Tool utility lets you test an HACMP cluster configuration to evaluate
how a cluster operates under a set of specified circumstances, such as when cluster
services on a node fail or when a node loses connectivity to a cluster network. You can
start a test, let it run unattended, and return later to evaluate the results of your testing.
You should run the tool under both low load and high load conditions to observe how
system load affects your HACMP cluster.
The Cluster Test Tool discovers information about the cluster configuration, and
randomly selects cluster components, such as nodes and networks, to be used in the
testing.


How to run the Test Tool


You run the Cluster Test Tool from SMIT on one node in an HACMP cluster. For testing
purposes, this node is referred to as the control node. From the control node, the tool
runs a series of specified tests (some on other cluster nodes), gathers information
about the success or failure of the tests processed, and stores this information in the
Cluster Test Tool log file for evaluation or future reference.
These tests are disruptive. They should not be done in production mode.
1) General topology tests
2) Resource group tests on non-concurrent resource groups
3) Resource group tests on concurrent resource groups
4) Catastrophic failure test

Event Emulation Tools
HACMP provides tools to emulate common cluster events.
Only certain events are emulated
Multiple events cannot be emulated
Each event runs in isolation; results do not impact upon the next
emulated event
The results are logged in /tmp/emuhacmp.out
If an event fails when emulated, it's not going to work when it
happens for real
Failed/Joined Network
Swap Adapter Failed/Joined Standby

Failed/Joined Node
A

Halifax Vancouver


Figure 10-9. Event Emulation Tools AU546.0

Notes:

What emulation does


HACMP now provides a simple tool which emulates common cluster status changes
(events). Event emulation is limited in so far as it does not actually change any device
under the control of HACMP, that is, it does not run chdev, mkdev or rmdev. Events
cannot be stacked up; that is, you cannot test what happens if an adapter swaps and
then a node fails, as each event runs in isolation.

Only certain events can be emulated


The following events can be emulated:
Node Up Event, Node Down Event
Network Up Event, Network Down Event
Fail Standby Event, Join Standby Event
Swap Adapter Event


Listing the Events That Can Be Emulated


HACMP Event Emulation
Move cursor to desired item and press Enter.
Node Up Event
Node Down Event
Network Up Event
Network Down Event
Fail Standby Event
Join Standby Event
Swap Adapter Event

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 10-10. Listing the Events That Can Be Emulated AU546.0

Notes:

Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Event Emulation

Listing the events that can be emulated


Here is the SMIT panel that lists all the cluster events that can be emulated. As you can
see, this covers nodes, networks and network adapters - the three cluster components
that HACMP monitors.

Emulating a Node Down Event
Emulate Node Down Event
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Name [hudson] +
* Node Down Mode graceful +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 10-11. Emulating a Node Down Event AU546.0

Notes:

Node down example


In this example, we emulate the orderly departure of the hudson node (it is not possible
to emulate disorderly departures of nodes).
Note that any pre- and post-events or notify commands are not run when an HACMP
event is emulated.


Emulating a Network Down Event


Emulate Network Down Event

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Network Name [net_ether_01] +
Node Name [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 10-12. Emulating a Network Down Event AU546.0

Notes:

Network down example


In this example, a network is selected in order to simulate a network failure. If a node is
specified, then a local failure is emulated; otherwise, a global network failure is
emulated. Either way, HACMP's default action is to log the network failure and
possibly do selective fallover (in the case of a local network failure).
A local network failure is a failure of all heartbeat communications on a given node for a
particular network.

Checking Cluster Subsystems (1 of 2)
# /usr/es/sbin/cluster/utilities/clcheck_server \
clstrmgrES ; echo $?

# lssrc -ls clstrmgrES | grep state

# lssrc -g cluster (NO LONGER VALID)


Subsystem Group PID Status
clstrmgrES cluster 21032 active
clinfoES cluster 21676 active

Mandatory clstrmgrES

Cluster
Components clinfoES

Optional


Figure 10-13. Checking Cluster Processes (1 of 2) AU546.0

Notes:

clstart subsystems
Listed here are the processes that are listed in the startup smit menu for HACMP. It's
interesting to note that these cluster subsystems are not displayed by lssrc
when they are inactive. This was a display option (or, probably better, a non-display
option) that HACMP chose to use when the subsystems were defined during the install
process. This option can be changed (one subsystem at a time) using the chssys -s
subsystem_name -a -D command.

Checking for cluster services up


Starting in HACMP 5.3, you must make a distinction between the clstrmgrES
subsystem and cluster services. The clstrmgrES subsystem is always running, even if
cluster services are not running. So to check whether cluster services are running, the
supported command is /usr/es/sbin/cluster/utilities/clcheck_server clstrmgrES. This
command returns 0 (for down) or 1 (for up), so you will need to check the return code.


An easier alternative, which works in HACMP 5.3 but is not guaranteed for the future,
is lssrc -ls clstrmgrES | grep state. Another command that will give you
state information in HACMP 5.3 is /usr/es/sbin/cluster/utilities/cldump.
Finally, you can use the smit path: Problem Determination Tools -> View Current
State
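Because clcheck_server signals state through its exit status rather than its output, any monitoring script has to test the return code explicitly. Below is a minimal sketch of such a wrapper; since the real command exists only on an HACMP node, true and false stand in for it here to demonstrate the exit-status logic:

```shell
# Wrap a state-checking command whose exit status is 1 when services are up
# and 0 when they are down (as described for clcheck_server in the notes).
check_up() {
    "$@" >/dev/null 2>&1
    if [ $? -eq 1 ]; then
        echo "cluster services running"
    else
        echo "cluster services not running"
    fi
}
# On a real node you would call:
#   check_up /usr/es/sbin/cluster/utilities/clcheck_server clstrmgrES
check_up false    # 'false' exits 1, standing in for the "up" case
```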

Checking Cluster Subsystems (2 of 2)
Check rsct, clcomd, ctrmc subsystems

#
# lssrc -a | grep svc
topsvcs topsvcs 258248 active
grpsvcs grpsvcs 434360 active
emsvcs emsvcs 335994 active
emaixos emsvcs 307322 active

# lssrc -s clcomdES
Subsystem Group PID Status
clcomdES clcomdES 13420 active

# lssrc -s ctrmc
Subsystem Group PID Status
ctrmc rsct 2954 active
#


Figure 10-14. Checking Cluster Processes (2 of 2) AU546.0

Notes:

Supporting subsystems
Listed here are the additional processes we would expect to find running on an HACMP
5.2 and later cluster node.
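A quick way to confirm that all of these supporting subsystems are active is to scan the lssrc output for anything that is not in the active state. The sketch below embeds sample lssrc -a output so the parsing logic can be shown end to end (the clcomdES line is deliberately shown inoperative for illustration); on a real node you would pipe lssrc -a itself into the same awk check.

```shell
# Stand-in for `lssrc -a`, with sample output modeled on the screen above.
lssrc_output() {
cat <<'EOF'
topsvcs          topsvcs          258248   active
grpsvcs          grpsvcs          434360   active
emsvcs           emsvcs           335994   active
emaixos          emsvcs           307322   active
clcomdES         clcomdES         13420    inoperative
EOF
}

# Warn about any expected subsystem that is not active (or is missing).
check_subsystems() {
    for sub in topsvcs grpsvcs emsvcs emaixos clcomdES; do
        state=$(lssrc_output | awk -v s="$sub" '$1 == s { print $NF }')
        [ "$state" = "active" ] || echo "WARNING: $sub is ${state:-missing}"
    done
}

check_subsystems    # prints: WARNING: clcomdES is inoperative
```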


Testing Your Network Connections


To test your IP network:
ping (interfaces)
netstat -rn (routing)
host (name resolution)
netstat -i and ifconfig (addresses, subnet mask)

To test your non-IP networks:


RS232
stty < /dev/tty# (on 2 connected nodes)

Target mode SSA network:


cat < /dev/tmssa#.tm, echo test > /dev/tmssa#.im

Heartbeat over disk:


/usr/sbin/rsct/bin/dhb_read -p hdiskx -r (receive is done first)
/usr/sbin/rsct/bin/dhb_read -p hdiskx -t

Do not perform these tests while HACMP is running


Figure 10-15. Testing Your Network Connections AU546.0

Notes:

Testing your IP network


- Ping between all pairs of interfaces on the same subnet.
- Check the entries in the routing table on each node (netstat -rn).
- Check names are resolvable (host). For example, host vancouver_service.
- Check addresses and subnet mask (netstat -i, ifconfig).

Testing your non-IP networks


- For RS232
On one node, execute the command stty < /dev/tty#. This will hang at the
command line.
On the other connected node, execute the command stty < /dev/tty#.
This causes the tty settings to be displayed on both nodes.


- For Target Mode SSA
On one node, execute the command cat < /dev/tmssa#.tm, where # is the node ID of
the target node.
On the other connected node, execute the command echo test > /dev/tmssa#.im,
where # is the node ID of the source node.
This causes the word test to appear on the first node.
- For Heartbeat over Disk
On one node, execute the command /usr/sbin/rsct/bin/dhb_read -p hdiskx -r.
This causes the message waiting for response to appear.
On the other connected node, execute the command
/usr/sbin/rsct/bin/dhb_read -p hdiskx -t.
This causes the message Link operating normally to appear on both nodes.
These tests can be used to validate that network communications are functioning
between cluster nodes over the defined cluster networks.
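The IP-side checks can be scripted in the same spirit. The loop below is a sketch only: ADDRS is a hypothetical placeholder you would fill with your cluster's interface addresses, and the ping options are kept minimal so the same line works on AIX.

```shell
# Ping each configured interface address once and report reachability.
# ADDRS is a placeholder; set it to your own addresses, for example:
#   ADDRS="192.168.1.1 192.168.2.1"
ADDRS="${ADDRS:-}"

check_interfaces() {
    for a in $ADDRS; do
        if ping -c 1 "$a" >/dev/null 2>&1; then
            echo "$a reachable"
        else
            echo "$a NOT reachable"
        fi
    done
    echo "interface check complete"
}

check_interfaces
```

Run it from each node in turn, since a pair of interfaces may be reachable from one side of the cluster but not the other.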


Deadman Switch (DMS) Timeout


888 LED code -> possible DMS timeout.

Why?
Clstrmgr starved of CPU
Excessive I/O traffic
Excessive TCP/IP traffic over an interface

Was it DMS?
Copy the system dump to a file
kdb on the dump file
stat subcommand
look for 'HACMP dms timeout halting...'


Figure 10-16. Dead Man's Switch (DMS) Time-out AU546.0

Notes:

Dead Man Switch (DMS)


The deadman switch is the AIX 5L kernel extension that halts a node when it enters a
hung state that extends beyond a certain time limit. This enables another node in the
cluster to acquire the hung node's resources in an orderly fashion, avoiding possible
contention problems. If the deadman switch is not reset in time, it can cause a system
panic and dump under certain cluster conditions.
The dead man's switch should not trigger if your cluster is not overloaded with I/O
traffic. There are steps that can be taken to reduce the chances of the DMS triggering,
but often a DMS time-out is the result of the machine being fundamentally overloaded.

Avoiding Deadman Switch Timeouts
Steps to avoid DMS timeout problems

1. Isolate the cause of excessive I/O or TCP/IP traffic and fix it, and if that
does not work...
2. Turn on I/O pacing, and if that does not work...
3. Increase the frequency of the syncd, and if that does not work...
4. Reduce the failure detection rate for the slowest network, and if that does
not work...
5. Buy a bigger machine


Figure 10-17. Avoiding Deadman Switch Time-outs AU546.0

Notes:

Causes of DMS time-outs


Most dead man's switch problems are the result of either an extremely overloaded
cluster node or a sequence of truly bizarre cluster configuration misadventures (for
example, DMS time-outs have been known to occur when the disk subsystem is
sufficiently broken that AIX encounters difficulties accessing any disks at all).
Large amounts of TCP traffic over an HACMP-controlled service interface may cause
AIX 5L to experience problems when queuing and later releasing this traffic. When
traffic is released, it generates a large CPU load on the system and prevents
timing-critical threads from running, thus causing the Cluster Manager to issue a DMS
time-out.
HACMP, via Topology Services, logs an AIX error if the timer gets close to expiring.
The error label is TS_DMS_WARNING_ST, and you can set an error notify method to
notify you when this occurs.


The command /usr/sbin/rsct/bin/hatsdmsinfo can be used to see how often the DMS
timer is being reset.
Although we don't recommend changing the DMS time-out value, we are sometimes
asked how to increase the time-out period on the deadman switch in order to
make it less likely that the DMS will trigger and crash the node. There is no explicit
time-out setting; it is monitored by RSCT and is calculated as twice the
longest failure detection time of all configured HA networks in the cluster. If, for example,
you have two networks, an Ethernet and a disk heartbeat network, the Ethernet has the
longer failure detection time, 10 seconds versus 8 for the diskhb network, so the DMS
time-out is set to 2*10, or 20 seconds. If the failure detection rate is being modified to
extend the DMS time-out, it is best to ensure that all networks have the same failure
detection period. To set the DMS time-out value to roughly 30 seconds, while making the failure
detection the same for both networks, the custom NIM settings would be:
Ethernet: Failure Cycle 16, Interval between Heartbeats (seconds) 1
diskhb: Failure Cycle 8, Interval between Heartbeats (seconds) 2
This would increase the DMS time-out from 20 seconds to 32. It would also increase the
amount of time necessary to detect a network failure by the same amount. Note that
since the DMS time-out period is directly tied to failure detection rates, increasing the
DMS time-out period will necessarily increase the delay before the secondary node
starts to acquire resources in the event of a node failure, node hang, or the loss of all
network connectivity.
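The arithmetic above can be checked with a few lines of shell. This assumes, consistent with the numbers in the notes, that a network's failure detection time is its Failure Cycle multiplied by its heartbeat interval, and that the DMS time-out is twice the longest such time.

```shell
# Failure detection time = Failure Cycle x heartbeat interval (seconds),
# per the worked example above; the DMS time-out is twice the longest one.
fd_time() { echo $(( $1 * $2 )); }

ether=$(fd_time 16 1)      # Ethernet: cycle 16, 1-second heartbeats -> 16s
diskhb=$(fd_time 8 2)      # diskhb:   cycle 8,  2-second heartbeats -> 16s

longest=$ether
if [ "$diskhb" -gt "$longest" ]; then longest=$diskhb; fi

echo "DMS time-out: $(( 2 * longest )) seconds"   # prints 32, as in the notes
```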

Setting Performance Tuning Parameters
Extended Performance Tuning Parameters Configuration
Move cursor to desired item and press Enter.
Change/Show I/O pacing
Change/Show syncd frequency

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do


Figure 10-18. Setting Performance Tuning Parameters AU546.0

Notes:

Extended Performance Tuning Parameter Configuration


This is the menu for changing the I/O pacing and syncd frequency


Enabling I/O Pacing


The HACMP documentation recommends a high water mark of 33
and a low water mark of 24.
Change/Show I/O pacing
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
HIGH water mark for pending write I/Os per file [33] +#
LOW water mark for pending write I/Os per file [24] +#

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 10-19. Enabling I/O Pacing AU546.0

Notes:

Setting the I/O pacing values


Remember, I/O pacing and other tuning parameters should only be set to values other
than the defaults after a system performance analysis indicates that doing so will lead to
both the desired and acceptable side effects.
Although the most efficient high- and low-water marks vary from system to system, an
initial high-water mark of 33 and a low-water mark of 24 provide a good starting point.
These settings only slightly reduce write times and consistently generate correct
fallover behavior from the HACMP software.
See the AIX 5L Performance Monitoring & Tuning Guide for more information on I/O
pacing.
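Outside SMIT, the same marks live as attributes of the sys0 device, so they can be inspected (and, cautiously, set) with standard AIX commands. This is a sketch: the SMIT panel shown above is the preferred path, and the guard below just skips cleanly on a non-AIX machine.

```shell
# Inspect the current I/O pacing marks on an AIX node; chdev (commented out)
# would set the recommended 33/24 values directly.
if [ -x /usr/sbin/lsattr ]; then
    out=$(/usr/sbin/lsattr -El sys0 -a maxpout -a minpout)
    # /usr/sbin/chdev -l sys0 -a maxpout=33 -a minpout=24
else
    out="AIX lsattr not found; run this on a cluster node"
fi
echo "$out"
```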

Changing the Frequency of syncd
The HACMP documentation recommends a value of 10.
Change/Show syncd frequency
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
syncd frequency (in seconds) [10] #

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 10-20. Changing the Frequency of syncd AU546.0

Notes:

Setting the syncd frequency


The syncd setting determines the frequency with which the I/O disk-write buffers are
flushed.
Frequent flushing of these buffers reduces the chance of deadman switch time-outs.
The AIX 5L default value for syncd, as set in /sbin/rc.boot, is 60. It is recommended to
change this value to 10. Note that the I/O pacing parameters should be changed
first.


SRC Halts a Node


Under what circumstances does the SRC halt a node?
The cluster manager was killed or has crashed

Proving that SRC halted a node:


Check the AIX error log
Look for abnormal termination of clstrmgr daemon

To avoid SRC halts in the first place:


Don't give untrained staff access to the root password
Consider modifying /etc/cluster/hacmp.term


Figure 10-21. SRC Halts a Node AU546.0

Notes:

How SRC halt works


The SRC looks for an entry in the /etc/objrepos/SRCnotify ODM file if a subsystem
is killed or crashes. HACMP provides an entry for the clstrmgr. This entry causes
clexit.rc to run, which performs a halt -q by default.

Avoiding SRC halts


The most likely cause is an untrained administrator with root privilege killing the cluster
manager. One way to change the default behavior is to modify the
/etc/cluster/hacmp.term file. The clexit.rc script calls this script, which
allows you to do something other than halt -q.
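As an illustration, a replacement /etc/cluster/hacmp.term body might log and notify instead of halting. This is a hypothetical sketch, not IBM-supplied content; think carefully before suppressing the halt, since a node whose cluster manager has died is in an undefined state.

```shell
#!/bin/ksh
# Hypothetical /etc/cluster/hacmp.term body (sketch only). clexit.rc calls
# this script when clstrmgr terminates abnormally, in place of "halt -q".
msg="clstrmgr terminated abnormally on $(hostname); custom policy: notify, do not halt"

logger -p daemon.crit "$msg" 2>/dev/null || :   # best-effort syslog record
echo "$msg"

# A real site might also page an operator here (hypothetical):
# echo "$msg" | mail -s "HACMP alert" root
```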

Partitioned Clusters and Node Isolation
When
Heartbeats are received from a node that was marked as
failed
HACMP ODM configuration is not the same on a joining
node as nodes already active in the cluster
Two clusters with the same ID appear in the same logical
network
The rogue recovering or joining node is halted

What happens
Group Services and clstrmgr exit on some node(s)

Proving that Node Isolation caused the problem:


/tmp/clstrmgr.debug file
AIX error log entry GS_DOM_MERGE_ER


Figure 10-22. Partitioned Clusters and Node Isolation AU546.0

Notes:

Node isolation
When you have a partitioned cluster, the node(s) on each side of the partition detect this
and run a node_down for the node(s) on the opposite side of the partition. If, while
running this or after communication is restored, the two sides of the partition do not
agree on which nodes are still members of the cluster, a decision is made as to which
partition should remain up, and the other partition is shutdown by a Group Services
(GS) merge from nodes in the other partition or by a node sending a GS merge to itself.
In clusters consisting of more than two nodes, the decision is based on which partition
has the most nodes left in it, and that partition stays up. With an equal number of nodes
in each partition (as is always the case in a two-node cluster) the node(s) that remain(s)
up is determined by the node number (lowest node number in cluster remains) which is
also generally the first in alphabetical order.


Role of Group Services


Group Services domain merge messages indicate that a node isolation problem was
handled to keep the resources as highly available as possible, giving you time to later
investigate the problem and its cause. When a domain merge occurs, Group Services
and the Cluster Manager exit. The clstrmgr.debug file will contain the following error:
"announcementCb: GRPSVCS announcement code=n; exiting"
"CHECK FOR FAILURE OF RSCT SUBSYSTEMS (topsvcs or grpsvcs)"
There is also an entry in the AIX error log GS_DOM_MERGE_ER.

Avoiding Partitioned Clusters
Have a non-IP (serial) network

Have a second non-IP network

Check your non-IP networks before going live

Watch for non-IP network failures in HACMP log files

Do not segment your cluster's IP networks


Avoid multiple switches
Except in carefully designed highly available network configurations
Avoid bridges


Figure 10-23. Avoiding Partitioned Clusters AU546.0

Notes:

What can go wrong


A partitioned cluster can result in data divergence (each of two cluster nodes gains access
to half of the disk mirrors and proceeds to perform updates on its half). This is a
scenario which can be extremely difficult to recover from completely, as the changes
made by the two nodes may be fundamentally incompatible and impossible to
reconcile.

Avoiding the problem


The best way to avoid a partitioned cluster is to install and configure one or more non-IP
networks.
Test by disabling each non-IP network and making sure the failure is detected by HACMP,
then re-enable each non-IP network and ensure the recovery is also detected.


Please Check Event Status Message


Config too long message

Cluster <clustername> has been running event <eventname>


for # seconds. Please check event status.

It means that an event script has failed, hung or is taking too long.

HACMP stops processing events until you resolve this issue


Figure 10-24. Please Check Event Status Message AU546.0

Notes:

The config_too_long event


For each cluster event that does not complete within the specified event duration time,
config_too_long messages are logged in the hacmp.out file and sent to the console
according to the following pattern:
- The first five config_too_long messages appear in the hacmp.out file at 30-second
intervals.
- Each subsequent set of five messages appears at an interval that is double the
previous interval, until the interval reaches one hour.
- These messages are then logged every hour until the event completes or is
terminated on that node.
This error can occur if an event script fails or does not complete within a customizable
time period, which by default is 360 seconds.
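The cadence described above (five messages per interval, with the interval doubling up to a one-hour cap) can be sketched as a small simulation; the numbers printed are illustrative, not taken from a real log.

```shell
# Simulate the first 20 config_too_long messages: five at 30s intervals,
# then five at each doubled interval, with the interval capped at one hour.
interval=30; t=0; count=0
while [ "$count" -lt 20 ]; do
    i=0
    while [ "$i" -lt 5 ]; do
        t=$(( t + interval ))
        count=$(( count + 1 ))
        i=$(( i + 1 ))
    done
    echo "messages $(( count - 4 ))-$count: every ${interval}s (last at ${t}s)"
    if [ "$interval" -lt 3600 ]; then
        interval=$(( interval * 2 ))
        if [ "$interval" -gt 3600 ]; then interval=3600; fi
    fi
done
```

The first line printed is "messages 1-5: every 30s (last at 150s)", which matches the pattern in the notes.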

Why does it happen?


There are two major reasons this may happen:
a. The event script fails to complete, in which case the message is sent indefinitely.
b. An event may simply take a long time (for example, varying on a large number of disks
or processing dependent resource groups), in which case this error message eventually
stops being generated when the HACMP event script that was running finally
completes.


Changing the Timeouts


Change/Show Time Until Warning
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Max. Event-only Duration (in seconds) [180] #
Max. Resource Group Processing Time (in seconds) [180] #
Total time to process a Resource Group event 6 minutes and 0 secon>
before a warning is displayed

NOTE: Changes made to this panel must be


propagated to the other nodes by
Verifying and Synchronizing the cluster

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure 10-25. Changing the Time-outs AU546.0

Notes:

smit menu
smit hacmp -> Extended Configuration -> Extended Event Configuration ->
Change/Show Time Until Warning

How to set the values


Note that the time-outs are specified as two values: one for fast events that do not
involve resource group movements, and a second value for slow events.
Max. Event-only Duration (in seconds) - This is the amount of time that a fast event is
allowed to take.
Max. Resource Group Processing Time (in seconds) - This is the additional amount
of time allowed for slow events. Therefore, the amount of time allowed for resource group
processing is the sum of the Max. Event-only Duration and the Max. Resource Group
Processing Time.

Recovering From an Event Script Failure

1. /tmp/hacmp.out file go to time of 1st too long msg


Use /usr/es/adm/cluster.log to find time of 1st msg

2. Go backwards to find the AIX error messages

3. Manually correct the problem and complete failed event

4. Perform "Recover from Script Failure"

5. Verify config too long message stops

6. Verify that the cluster is now working properly


Figure 10-26. Recovering From an Event Script Failure AU546.0

Notes:

Why recover from script failure is necessary


If an event script fails or takes too long, the Please check event status message starts
to appear as described on the previous visual. HACMP stops processing cluster events
until the situation is resolved. If the problem is that an event took too long, then the
problem may soon solve itself. If an HACMP event script has actually failed, then manual
intervention is required.

The procedure
The procedure is outlined in the visual above. Using the /usr/es/adm/cluster.log file
with the command grep EVENT /usr/es/adm/cluster.log | more makes it easier to
find when the config_too_long event first occurred. Be sure to find the earliest AIX error
message, not just the first AIX error message. You must manually complete what the
event would have done before performing recover from script failure, which is described
on the next visual. You can also use the cluster.log in combination with hacmp.out.
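That grep-based search can be shown end to end. The log lines below are fabricated samples in the general shape of cluster.log EVENT entries (real timestamps and wording will differ); on a live node you would run the same pipeline against /usr/es/adm/cluster.log.

```shell
# Stand-in for /usr/es/adm/cluster.log, with illustrative EVENT lines.
sample_log() {
cat <<'EOF'
Oct 12 10:01:05 bondar HACMP for AIX: EVENT START: node_up bondar
Oct 12 10:01:44 bondar HACMP for AIX: EVENT COMPLETED: node_up bondar
Oct 12 10:08:03 bondar HACMP for AIX: EVENT START: rg_move bondar 1
Oct 12 10:14:10 bondar HACMP for AIX: EVENT START: config_too_long 360 rg_move
EOF
}

# Earliest config_too_long occurrence: this timestamp is where you start
# reading hacmp.out backwards for the underlying AIX error messages.
first_too_long=$(sample_log | grep EVENT | grep config_too_long | head -n 1)
echo "$first_too_long"
```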


Recovering From an Event Failure


Problem Determination Tools
Move cursor to desired item and press Enter.
HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Trace Facility
+--------------------------------------------------------------------------+
Select a Node

Move cursor to desired item and press Enter.

bondar
hudson

F1=Help F2=Refresh F3=Cancel
F8=Image F10=Exit Enter=Do
F1 /=Find n=Find Next
F9+--------------------------------------------------------------------------+


Figure 10-27. Recovering from an Event Failure AU546.0

Notes:

What this procedure does


This SMIT menu entry can be used to recover from a script failure. This does not mean
that HACMP fixes problems in event scripts, but this menu is used to allow the cluster
manager to continue to the next event following an event script failure that you have
identified and manually corrected. Select the node experiencing the problem and press
Enter.

A Troubleshooting Methodology
Save the log files from all available nodes as soon as possible
Attempt to duplicate the problem
Approach the problem methodically
Distinguish between what you know and what you assume
Keep an open mind
Isolate the problem
Go from the simple to the complex
Make one change at a time
Stick to a few simple troubleshooting tools
Do not neglect the obvious
Watch for what the cluster is not doing
Keep a record of the tests you have completed


Figure 10-28. A Troubleshooting Methodology AU546.0

Notes:

Trouble shooting suggestions


Save the log files from every available cluster node while they are still available -
Things might get much worse than they already are. Having access to all relevant
cluster log files and application log files could prove very important. These log files
might be overwritten while you are investigating the problem or they might be lost
entirely if more hardware failures occur. Save copies of them very early in the
troubleshooting exercise to ensure that they are not lost.
Attempt to duplicate the problem -
While keeping in mind the importance of not making a bad situation worse by causing
even more problems, it is often useful to try to duplicate the circumstances which are
believed to have been in effect when the problem occurred as this can lead to a greater
understanding of exactly what went wrong.


Approach the problem methodically -


Jumping around from idea to idea and just trying whatever comes to mind may be an
entertaining use of your time but it is unlikely to yield a fast solution to the problem at
hand.
Distinguish between what you know and what you assume -
It is far too easy to spend quite a while chasing down a path of inquiry which is based on
a faulty assumption. It is frequently necessary to proceed on the basis of an assumption
but be sure that you understand when you are working based on an assumption. Once
you have spent twenty minutes to half an hour working on the basis of an assumption
with no apparent progress, it is probably time to start to wonder about the validity of the
assumption. If you spend more than about three quarters of an hour based on an
assumption with still little or no apparent progress then it is probably time to figure out a
way to determine if the assumption is true or not (devise a test which will indicate if the
assumption is valid and then perform the test).
Keep an open mind -
Although related to the issue of knowing if you are working on the basis of an
assumption or a fact, keeping an open mind is much more than that. It means being
careful to not make assumptions which are based on flimsy or non-existent evidence
and it means to be on the lookout for clues that are not compatible with your current
assumptions so that you are able to drop faulty assumptions more rapidly.
Isolate the problem -
Consider temporarily simplifying the cluster in order to remove elements which may be
confusing the issue at hand. Keep in mind that your simplifications may change the
situation enough that the problem vanishes. This does not necessarily mean that the
elements which you removed were part of the problem's cause, as their removal may
simply have changed the relative timing of key events such that the bad sequence of
events no longer occurs.
Go from the simple to the complex -
Most problems are actually simple problems. Do not start to develop elaborate theories
of what went wrong until you have demonstrated that the simpler possibilities did not
cause the problem to occur.
Make one change at a time -
Once you believe that you understand the problem, make small changes to the cluster
which are each intended to eliminate some aspect of the problem and then verify that
they had the intended effect. If the small changes are not having the intended effect
then your diagnosis of what is at fault may be wrong. Also, it is far easier to back out a
few simple changes than to back out a long series of changes if it should turn out that
your diagnosis is wrong.


Stick to a few simple troubleshooting tools -


While it is true that sophisticated tools are often useful and sometimes even essential,
trying to use tools which you are not extremely comfortable with is likely to increase the
time that it takes to resolve the problem. Stick to the tools that you are comfortable with
but be prepared to learn new tools if it should become necessary to do so (just make
sure that it is truly necessary and not just a chance to try out a new toy).
Do not neglect the obvious -
Pay attention to the most obvious indications that you have a problem and, at least
initially, focus on what they seem to suggest as obvious places to start. For example, an
error message about a disk I/O problem or the inability to access a data file is unlikely to
have anything to do with a networking problem. On the other hand, it is possible that
disk I/O problems have caused your non-IP target mode SSA network to fail (in other
words, the problem is usually obvious but not necessarily obvious).
Watch for what the cluster is not doing -
Also known as watching out for the dog that didn't bark (a reference to Arthur Conan
Doyle's Sherlock Holmes story Silver Blaze, in which a key clue involves a dog which did
not bark during the commission of the crime but would normally have been expected to
do so in the situation at hand). Watch for messages that should appear given your
current assumptions. If they do not appear (in other words, if the dog does not bark),
then your assumptions may be faulty.
Keep a record of the tests you have completed -
If the problem is truly simple, then you may be able to find it within a few minutes. If the
search takes longer than about fifteen minutes, then it is probably time to start taking
notes of what you are doing (also include a list of your assumptions so that you can
review them later to see which ones are starting to look doubtful). If finding and fixing
the problem should happen to turn into a major adventure then the ability to look back
on what you did (as opposed to what you vaguely remember doing) could prove
extremely useful.
Finally, keep in mind that many cluster problems are the result of poor cluster design,
untrained cluster administrators or the lack of a proper change control methodology.
Without a doubt, the easiest and fastest way to deal with a problem is to ensure that it
cannot happen in the first place.


Contacting IBM for Support


Before contacting IBM about a support issue, collect the
following information:

Item Checked
EXACT error messages that appear in HACMP logs such as
hacmp.out or on the console
Your cluster diagram / Planning Worksheets (updated)
A snapshot of your current cluster configuration (not a photo)
Details of any customization performed to HACMP events
Details of current AIX, HACMP and application software levels
Details of any PTFs applied to HACMP or AIX in the cluster
The adapter microcode levels (especially for SSA adapters)
Cluster planning worksheets, with all components clearly labeled
A network topology diagram for the network as far as the users
Copies of all HACMP log files (snap -e command)


Figure 10-29. Contacting IBM for Support AU546.0

Notes:

What to do when contacting IBM


The visual above summarizes the steps. It is a very good idea to collect as much of this
information as possible in advance of having a problem, especially snapshots and the
cluster diagram. If you have not already assembled this information at your office for
your existing clusters, you are strongly recommended to do so as soon as you get back.

Updating your planning worksheets


To update your planning worksheets, if you are using the Online Planning Worksheets,
you can export the HACMP ODM (or a snapshot, with HACMP 5.3) to the planning
worksheets using the smit path Extended Configuration -> Export Definition File for Online
Planning Worksheets (or the path Extended Configuration -> Snapshot
Configuration -> Convert Existing Snapshot For Online Planning Worksheets).
The file should have a name of the form name.haw. The default location is
/var/hacmp/log.

Checkpoint
1. What is the most common cause of cluster failure?
a. Bugs in AIX or HACMP
b. Cluster administrator error
c. Marauding space aliens from another galaxy
d. Cosmic rays
e. Poor/inadequate cluster design
2. True or False?
Event emulation can emulate all cluster events.
3. If the cluster manager process should die, what will happen to the
cluster node?
a. It continues running but without HACMP to monitor and protect it.
b. It continues running AIX but any resource groups will fallover.
c. Nobody knows because this has never happened before.
d. The System Resource Controller sends an e-mail to root and issues a "halt -q".
e. The System Resource Controller sends an e-mail to root and issues a
"shutdown -F".
4. True or False?
A non-IP network is strongly recommended. Failure to include a non-IP
network can cause the cluster to fail or malfunction in rather ugly ways.
5. (bonus question) My favorite graphic in the lower right hand corner
of a foil was: ____________________________________


Figure 10-30. Checkpoint AU546.0

Notes:


Unit Summary

Having completed this unit, you should be able to:


Understand why HACMP can fail
Identify configuration and administration errors
Understand why the Dead Man's Switch invokes
Know when the System Resource Controller will kill a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support
Explain the importance of cluster planning
Describe the key cluster planning deliverables
- Requirements document
- Design document
- Test plan
- Documented operational procedures
Explain how the requirements, design and test plan documents
should be linked together
Use the export to planning worksheets feature of HACMP 5.2

Copyright IBM Corporation 2005

Figure 10-31. Unit Summary AU546.0

Notes:

Unit 11. HACMP Migration

What This Unit Is About


This unit describes how to do updates to HACMP systems.

What You Should Be Able to Do


After completing this unit, you should be able to:
List the methods to update a cluster to HACMP 5.3
Describe the migration steps for each migration method
List the new features of HACMP 5.3

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4861-06 HACMP for AIX, Version 5.3 Planning and Installation
Guide
/usr/es/sbin/cluster/release_notes
www.ibm.com/servers/eserver/pseries/ha


Unit Objectives
After completing this unit, you should be able to:
List the methods to update a cluster to HACMP 5.3
Describe the migration steps for each migration method
List the new features of HACMP 5.3


Figure 11-1. Unit Objectives AU546.0

Notes:

Okay, So You Want to Upgrade
Various versions of HACMP work with different release levels of AIX.
As of September 2005 only HACMP 5.x releases are supported.
HACMP 5.3 is not supported on AIX 5L V5.1.

               AIX       AIX 5L    AIX 5L     AIX 5L
               V4.3.3    V5.1      V5.2 ML4   V5.3 ML2
HACMP 4.4.1    yes       yes
HACMP 4.5                yes       yes        yes
HACMP/ES 4.5             yes       yes        yes
HACMP 5.1                yes       yes        yes
HACMP 5.2                yes       yes        yes
HACMP 5.3                          yes        yes


Figure 11-2. Okay, So You Want to Upgrade AU546.0

Notes:

HACMP supported levels and AIX


HACMP migration to HACMP 5.3 is supported from HACMP 4.5 (classic and ES),
HACMP 5.1, and HACMP 5.2.
It is always wise to check the migration guide of the level of HACMP that you are
migrating to for detailed steps. Your local procedure should be written with this as a
reference.
If you are upgrading from an older release, you must get to a supported level using the
methods defined by that level.
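The supported starting points can be captured in a small helper; this is an illustrative sketch only, based on the paths described in this unit (the release keys are assumptions: "es4.5" stands for HACMP/ES 4.5, "4.5" for classic HACMP 4.5):

```shell
# Illustrative helper: which upgrade methods reach HACMP 5.3 from a
# given starting release, per this unit's migration paths.
methods_to_53() {
  case "$1" in
    5.2|5.1|es4.5) echo "rolling snapshot offline" ;;
    4.5)           echo "node-by-node snapshot" ;;
    *)             echo "none: first upgrade to a supported level" ;;
  esac
}

methods_to_53 5.2     # prints: rolling snapshot offline
methods_to_53 4.5     # prints: node-by-node snapshot
```

A release older than those listed falls through to the catch-all, matching the rule above: get to a supported level first.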

End of service dates


- HACMP 4.4.1: 12/31/2003
- HACMP 4.5: 9/1/2005
- HACMP/ES 4.5: 9/1/2005
- HACMP 5.1: 9/1/2006
- HACMP 5.2: 9/30/2007


Upgrade Cold or Warm?


Type:
Cold - cluster services stopped on all nodes; must uninstall HACMP on all nodes
Warm - cluster services available on some node(s); install update
Offline - cluster services stopped on all nodes but not uninstalled
Method:
Rolling (warm): mixed releases during migration
Node by Node (warm): same release during migration
Snapshot (cold)
Install (offline)
Start all over (cold)
Migration Paths:
- Warm (Rolling) or Install (offline): HACMP 5.X, HACMP 4.5/ES -> HACMP 5.3
- Warm (Node-by-Node): HACMP 4.5 -> HACMP 5.3
- Cold (Snapshot): HACMP 4.5 -> HACMP 5.3


Figure 11-3. Upgrade - Cold or Warm? AU546.0

Notes:

Terminology
HACMP defines 4 migration methods:
i. Rolling: A type of upgrade from one HACMP version to another during which
cluster services are not stopped on all nodes in the cluster. Rolling migration lets
you update a cluster that has an ES (Enhanced Scalability) option to a cluster
with a higher version of the ES option. HACMP is not uninstalled and data is
saved.
ii. Snapshot: A type of upgrade from one HACMP version to another during which
you take a snapshot of the current cluster configuration, stop cluster services on
all nodes, install the next version of HACMP and then convert the snapshot by
running the clconvert_snapshot utility. HACMP is uninstalled and no data is
saved.

iii. Node-by-Node: A type of rolling migration that enables you to upgrade the
cluster from one version to another while also moving the cluster from HACMP
(HAS) to HACMP/ES (HACMP 5.x). HACMP is not uninstalled and data is saved.
iv. Offline: Here the cluster services are stopped. HACMP is not uninstalled and
data is saved.
Note that migration of a node while the applications are running is not an acceptable
method for upgrading HACMP.

Migration Paths Characteristics


The figure above summarizes the various migration paths to HACMP 5.3
- Two paths can be used while cluster services are running on the node that you are
not currently upgrading:
i. By using Rolling Migration. This is supported to upgrade from HACMP 5.2, 5.1,
and HACMP/ES 4.5
ii. By using Node-by-Node. This is supported for HACMP 4.5
- Two paths can be used with cluster services not running on any node:
i. Using a snapshot and deinstalling the current version. This is supported for all
migration supported releases.
ii. Just install HACMP 5.3 on top. This is only supported for the same HACMP
levels as Rolling Migration

Recommended reading
Chapter 10 HACMP Planning and Installation Guide


What Are We Going to Achieve?


Rolling from HACMP/ES 4.5, AIX 5L V5.1 to HACMP 5.3, AIX 5L
V5.3
Snapshot from HACMP 4.5, AIX 5L V5.1 to HACMP 5.3, AIX 5L
V5.3
Node by Node migration from HACMP 4.5 to HACMP 5.3

[Diagram: two-node cluster, Halifax and Vancouver, joined by an Ethernet network plus
TMSSA and RS232 non-IP networks; both nodes run AIX 5.1 with HACMP/ES 4.5 and
host resource groups res_grp_a and res_grp_b]


Figure 11-4. What Are We Going to Achieve? AU546.0

Notes:
We will be upgrading a running cluster with an application. The goal is to keep the
application running while we upgrade HACMP/ES 4.5 on AIX 5L V5.1 to HACMP 5.3 on
AIX 5L V5.3.

Upgrade Steps
The steps are generic
Step  Description                                     Resources available  Resources available
                                                      on Halifax           on Vancouver
 1    Backup and snapshots.                           Yes                  Yes
 2    Planning.                                       Yes                  Yes
 3    Migrate resource groups off of node Halifax.    No                   Yes, both groups
 4    Stop HACMP on the Halifax node.                 No                   Yes, both groups
 5    Upgrade AIX on the Halifax node.                No                   Yes, both groups
 6    Upgrade HACMP on the Halifax node.              No                   Yes, both groups
 7    Convert HACMP ODM classes if required.          No                   Yes, both groups
 8    Verify the configuration.                       No                   Yes, both groups
 9    Restart HACMP on Halifax node.                  Maybe                Yes, it depends
10    Migrate resource groups back to Halifax.        Yes                  Yes
11    Migrate resource groups off of node Vancouver.  Yes, both groups     No
12    Stop HACMP on the Vancouver node.               Yes, both groups     No
14    Upgrade AIX on the Vancouver node.              Yes, both groups     No
15    Upgrade HACMP on the Vancouver node.            Yes, both groups     No
16    Convert HACMP ODM classes if required.          Yes, both groups     No
17    Verify the configuration.                       Yes, both groups     No
18    Restart HACMP on Vancouver node.                Yes, it depends      Maybe
19    Migrate resource groups back to Vancouver.      Yes                  Yes
20    Backup and snapshots.                           Yes                  Yes


Figure 11-5. Upgrade Steps AU546.0

Notes:

Rolling migration steps


These are generic steps to accomplish a rolling migration/upgrade (that is, the
application and resource groups remain available). This does not eliminate the need for
thorough planning of each step during a migration/upgrade process or the need of
testing after completion of these steps.
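The generic per-node sequence (steps 3 through 10, repeated for each node) can be written down as a checklist script. This is purely a planning sketch; the step texts are reminders rather than commands, and the node names come from the example cluster:

```shell
# Planning sketch only: the per-node portion of a rolling upgrade
# (steps 3-10 above) emitted as a checklist. Node names are from the
# example cluster; the step texts are reminders, not commands.
rolling_steps_for() {
  node=$1
  for step in \
    "Move resource groups off $node" \
    "Stop cluster services on $node" \
    "Upgrade AIX on $node" \
    "Upgrade HACMP on $node" \
    "Convert HACMP ODM classes if required" \
    "Verify the configuration on $node" \
    "Restart cluster services on $node" \
    "Move resource groups back to $node"
  do
    echo "TODO: $step"
  done
}

rolling_steps_for Halifax
rolling_steps_for Vancouver
```

Printing the list for each node in turn makes it harder to skip a step, or to run the steps out of order, during a long maintenance window.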


Risks?
To assume the world is perfect and software always functions as
designed would put most of us out of a job
If changing the level of AIX, will the applications still function?
Is the combination of AIX and HACMP supported?
Upgraded systems may not use "new features" until the rest of the cluster is
aware of the "features".
Nodes may run at different versions of HACMP, but ONLY during migration
and only ES to ES or classic to classic.
Keep the time that mismatched versions are running to a minimum.
Adapter and Storage microcode may require upgrading; don't forget to check.
Make a plan. Make sure you can go back.
Old code at latest level


Figure 11-6. Risks? AU546.0

Notes:

Risks and restrictions


Assessing the risks associated with any proposed migration/upgrade is a must. As
part of the early planning stages, all applications, software, and hardware should be
reviewed for feature support with the proposed changes.
DARE cannot be used when the HACMP software levels are not identical in the cluster.
Nodes may only run ES to ES or classic to classic. With node by node migration both
nodes actually run classic until all nodes have ES installed.

Step 1/Roll: Backup and Snapshots
Make an extra backup and mksysb before starting
Test that backups are useable
Back up the shared data separate from the non-shared storage
Verify the cluster and take a snapshot; move the files to a safe place

[Diagram: starting point; both nodes (Halifax, Vancouver) at AIX 5.1 with
HACMP/ES 4.5, res_grp_a and res_grp_b active]


Figure 11-7. Step 1/Roll: Backup and Snapshots AU546.0

Notes:

Get ready
The cluster should be working! Save the configuration and make a mksysb. The
HACMP software on the cluster nodes should be at the current level. If patches are
applied, make sure you reboot and repeat saving the configuration and mksysb. The
saved snapshot should be placed in a directory not used by HACMP.
The importance of the backup steps can't be stressed enough. This is the best insurance
policy should things go drastically wrong during the migration/upgrade process.
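Moving the snapshot files to a safe place can be scripted; here is a minimal sketch. The default directory and the SNAPSHOTPATH override are the ones this course describes, while the function name and example destination are made up:

```shell
# Sketch: copy a cluster snapshot (<name>.odm and <name>.info) out of
# the snapshot directory to a safe location outside the HACMP tree.
save_snapshot() {
  name=$1; dest=$2
  src=${SNAPSHOTPATH:-/usr/es/sbin/cluster/snapshots}
  mkdir -p "$dest" || return 1
  for ext in odm info; do
    if [ -f "$src/$name.$ext" ]; then
      cp "$src/$name.$ext" "$dest/"
    fi
  done
}

# Example: save_snapshot premigration /home/root/hacmp-backups
```

Keeping the copy outside /usr/es/sbin/cluster matters because a later deinstall or reinstall of HACMP can remove the original directory.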


Step 2/Roll: Planning


The second most important plan is how to perform the upgrade
The most important plan is how to undo the upgrade if it fails
Verify with the application vendors the proposed changes will not cause other
problems
Verify all of the prerequisites are met, hardware and software
Acquire necessary patches
Verify the cluster is functional and stable
Involve all that will be affected by the changes in the planning
[Diagram: unchanged cluster state; both nodes at AIX 5.1 with HACMP/ES 4.5]


Figure 11-8. Step 2/Roll: Planning AU546.0

Notes:

Things to think about


Planning, planning, planning. The need for planning a migration/upgrade can't be
stressed enough. Taking the time to review and plan thoroughly will save many 'I forgot
to do that' problems during and after the migration/upgrade process.
Coordinate with application vendors and network folks to make sure the new
environment is workable. Acquire patches for the new release of HACMP.

Pre-Planning for all upgrades


- Resource group types become policies
- Distribution policy of network converted to node
- Software in committed state at latest level
- clsmuxpd not installed
- cluster manager daemon will always run

- Check Types of Networks


Make sure that the types of networks that you plan to use are supported by
HACMP 5.3. Remove or change unsupported types before you upgrade the
HACMP software. The following network types are not supported in HACMP 5.3:
IP (Generic IP)
SOCC
SLIP
FCS (Fiber Channel Switch)
802_eth (Ethernet Protocol 802.3)
- HACMP Configuration Database Security Changes May Affect Migration
The HACMP Configuration Database (ODM) has the following security
enhancements:
Ownership. All HACMP ODM files are owned by the root user and the hacmp
group. In addition, all HACMP binaries that are intended for use by non-root
users are also owned by root user and the hacmp group.
Permissions. The hacmpdisksubsystem file is set with 600 permissions. Most of
the other HACMP ODM files are set with 640 permissions (the root user can read
and write, while the hacmp group can only read). All HACMP binaries that are
intended for use by non-root users are installed with 2555 permissions (readable
and executable by all users, with the setgid bit turned on so that the program
runs as hacmp group).

During the installation, HACMP creates the hacmp group on all nodes. By
default, the hacmp group has permission to read the HACMP ODMs, but does
not have any other special authority. For security reasons, do not expand the
authority of the hacmp group.
If you use programs that access the HACMP ODMs directly, you may need to
rewrite them if they are intended to be run by non-root users:
All access to the ODM data by non-root users should be handled via the
provided HACMP utilities.
In addition, if you are using the PSSP File Collections facility to maintain the
consistency of /etc/group, the new hacmp group that is created at installation
time on the individual cluster nodes may be lost when the next file
synchronization occurs.
To prevent overwriting your hacmp group, before installing HACMP 5.3,
either:
- Turn off the PSSP File Collections synchronization of the /etc/group file
OR
- Include the hacmp group in the master /etc/group file and propagate this
change to all cluster nodes.


Pre-planning for HACMP 5.1 upgrades


- Convert to local mount points for NFS
- Enhanced security mode remains but name changed to Kerberos security
- Dynamic Node Priority may be changed on migration

Pre-planning for HACMP(/ES) 4.5 upgrades


- HACMP 5.3 processes resource groups in parallel by default but will respect serial
order on migration
- Priority override on migration determined by sticky attribute
- /.rhosts replaced by clcomd. Remove /.rhosts after migration unless needed by
other programs
- From HACMP 4.5 the rsct filesets will be new

Step 3/Roll: Migrate Resources
The resource groups are to be moved to the other nodes while the
originating node is worked on.
This will cause a short outage to move the resource group.
Shutdown Cluster Services with Takeover.
Verify the resource group is taken over successfully.

[Diagram: both resource groups now running on Vancouver; both nodes still at
AIX 5.1 with HACMP/ES 4.5]


Figure 11-9. Step 3/Roll: Migrate Resources AU546.0

Notes:

Keeping the application running


One of the benefits of HACMP is the ability to move the resources, providing continued
application availability and still perform an upgrade or migration on another cluster
node.


Step 4/Roll: Stop HACMP


Verify HACMP has stopped on the node to be upgraded.
If stop cluster services with graceful takeover was used, then clstrmgr should stop.
If not, stop cluster services now.
If not stop cluster services now.

[Diagram: both resource groups on Vancouver; cluster services stopped on Halifax]


Figure 11-10. Step 4/Roll: Stop HACMP AU546.0

Notes:

Stopping HACMP
If an rg_move command (C-SPOC) was used to move the resource groups (as
opposed to a shutdown with fallover), it is now time to stop HACMP on the node to be
upgraded.

Step 5/Roll: Upgrade AIX
If AIX is to be upgraded, do it now.
To keep HACMP ODM entries intact:
Use "Migration Install" if installing a new version/release of AIX.
Use "update all" if installing a new modification level of AIX.
Apply the necessary AIX PTFs.

[Diagram: Halifax now at AIX 5.3 ML 2 with HACMP/ES 4.5; Vancouver at AIX 5.1 with
HACMP/ES 4.5, holding both resource groups]


Figure 11-11. Step 5/Roll: Upgrade AIX and RSCT AU546.0

Notes:

Upgrade or Migrate AIX (and RSCT)


If updating the AIX level, it may be worthwhile to export the shared VG definitions after
the cluster processing has been stopped on the affected node. After updating the AIX
level, import the shared VG definitions prior to restarting HACMP. This will ensure that
no VG corruption will arise during the migration process.
Most changes within a major release are simple updates to the operating system, not
migrations. However, if you are starting with AIX Version 4, you can use an AIX
migration install to get to AIX Version 5.
Note that HACMP 5.3 is not supported with AIX 5L V5.1.
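The exportvg/importvg bracketing suggested above can be sketched as a dry run that only echoes the AIX commands; the volume group name, major number, and hdisk below are placeholders for your own configuration:

```shell
# Dry-run sketch of exporting shared VG definitions before the AIX
# upgrade and reimporting them afterwards. 'run' echoes each AIX
# command instead of executing it; sharedvg, 90, and hdisk2 are
# placeholders.
run() { echo "+ $*"; }    # on a real node: run() { "$@"; }

pre_aix_upgrade() {
  run varyoffvg sharedvg       # a VG must be varied off before export
  run exportvg sharedvg
}

post_aix_upgrade() {
  run importvg -y sharedvg -V 90 hdisk2   # reimport with its major number
  run varyoffvg sharedvg                  # leave it for HACMP to vary on
}

pre_aix_upgrade
post_aix_upgrade
```

Reimporting with the same major number on every node (importvg -V) keeps the shared VG definitions consistent across the cluster.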


Step 6/Roll: Upgrade HACMP


Upgrade HACMP on the first node.
smitty update_all will upgrade to HACMP 5.3 in a single pass.
Additional filesets may be added after the upgrade; documentation, man pages,
and HATivoli are a few of the new filesets.

[Diagram: Halifax at AIX 5.3 ML 2 with HACMP 5.3; Vancouver at AIX 5.1 with
HACMP/ES 4.5, holding both resource groups]


Figure 11-12. Step 6/Roll: Upgrade HACMP AU546.0

Notes:

Watch the messages


During the update process, messages will scroll by. Be careful to notice them, or check
the output after the update is complete. There may be messages regarding the
configuration update that require manual intervention. Just because the installation is
complete and the message is OK, it does not mean that the install log is free of very
important messages. Scroll through it and verify that there are no warnings from the
conversion routine.
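A small helper along these lines can scan the install or conversion output for suspect lines; the example log path is an assumption, so point it at your own smit.log or install log as appropriate:

```shell
# Sketch: scan an install/conversion log for suspect lines. The example
# path below is an assumption; substitute your smit.log or install log.
check_log() {
  log=$1
  if grep -i -E 'fail|warn|error' "$log"; then
    echo "REVIEW: possible problems found in $log"
    return 1
  fi
  echo "OK: no obvious problems in $log"
}

# Example: check_log /tmp/clconvert.log
```

This only narrows down what to read; it does not replace scrolling through the log yourself, since not every important message contains one of those keywords.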

Step 7/Roll: Convert ODM Classes
If an ODM class upgrade failure occurred in the previous step, it
may be necessary to manually update the HACMP ODM classes.
HACMP 5.3 will automatically attempt to run clconvert. Check the output
after the install for positive or negative messages.
The clconvert utility can be found in /usr/es/sbin/cluster/conversion if it must be
run manually.

[Diagram: unchanged state; Halifax at AIX 5.3 ML2 with HACMP 5.3, Vancouver at
AIX 5.1 with HACMP/ES 4.5]


Figure 11-13. Step 7/Roll: Convert ODM Classes AU546.0

Notes:
This step is only necessary if the conversion routine did not complete successfully.
The /usr/es/sbin/cluster/conversion/clconvert command is documented in the HACMP
Administration Guide Appendix C.


Step 8/Roll: Verify Configuration


Before proceeding, make sure the configuration looks appropriate.
Check networks.
Check external disk subsystems.
Check HACMP topology and resources.
Reboot the node; it is a requirement for HACMP install and a good idea on
upgrades.

[Diagram: Halifax at AIX 5.3 ML2 with HACMP 5.3, Vancouver at AIX 5.1 with
HACMP/ES 4.5; both resource groups still on Vancouver]


Figure 11-14. Step 8/Roll: Verify Configuration AU546.0

Notes:

Don't forget to reboot, then check things out.


After the reboot check things out. Make sure that everything looks as expected.
Remember that if something is wrong with the configuration, HACMP isn't going to fix it
when it is started.
Commands like lslpp and cltopinfo confirm that there is indeed an HACMP
configuration on the machine. If the conversion fails, there may not be any configuration
data for the commands to find. This is a manual method since verify cannot be run in a
mixed environment.

Step 9/Roll: Restart HACMP
Restart HACMP with smitty clstart.
Verify clstrmgr starts
Depending on the resource groups configuration and options, a takeover
may occur. Be prepared for an outage on the application
Use clstat or cldisp to verify the cluster becomes stable

[Diagram: cluster services restarted on Halifax (AIX 5L V5.3 ML2, HACMP 5.3);
Vancouver still at AIX 5L V5.1 with HACMP/ES 4.5]


Figure 11-15. Step 9/Roll: Restart HACMP AU546.0

Notes:

Start HACMP on the updated node


After clstat reports that both nodes and the cluster are stable, check things out again.
Make sure that all expected resources have started and are functional.
Remember, you are now in compatibility mode with different levels of HACMP on
different nodes. You want to minimize the time you are in this state. You cannot make
any configuration changes while in this state; synchronization will fail.


Step 10/Roll: Migrate Resource Groups


In some cases the resource groups may not migrate back
automatically.
Migrate the resource groups as desired
Use clstat or clRGinfo to verify the cluster becomes stable

[Diagram: resource groups being migrated back; Halifax at AIX 5L V5.3 ML2 with
HACMP 5.3, Vancouver at AIX 5L V5.1 with HACMP/ES 4.5]


Figure 11-16. Step 10/Roll: Migrate Resource Groups AU546.0

Notes:

Ensure the resource group(s) are in their home locations


Keep in mind that resource groups that have been configured as never fallback
(cascading without fallback) or with a distribution policy (rotating) will not fall back to the
updated node automatically. This should have been recognized during the planning.

What About Failures?
In this type of mixed environment, HACMP will function but:
Applications may not appreciate differences in HACMP or AIX levels
Care must be taken not to deploy options or enhancements that the older
version of HACMP or AIX cannot deal with
There may be compatibility issues in the event of a failover so don't rest too
long
HACMP will not be able to synchronize
[Diagram: mixed-version cluster; Halifax at AIX 5.3 ML2 with HACMP 5.3, Vancouver
at AIX 5.1 with HACMP/ES 4.5]


Figure 11-17. What About Failures? AU546.0

Notes:

Handle failures as rapidly as possible


Again, the need for planning is apparent. Any one of the conditions mentioned could
cause adverse effects to the entire migration/upgrade.
In this environment, it will be difficult to support the cluster. Keep the time the cluster is
in a mixed version configuration to a minimum.


Steps 11 to 19/Roll
Repeat steps 3 through 10, this time on the other node.
Move the resources to the Halifax node
Stop HACMP
Upgrade AIX and HACMP; convert ODM if necessary
Reboot, verify and restart HACMP
The process is exactly the same, it is just the other node.

[Diagram: both nodes now at AIX 5L V5.3 ML2 with HACMP 5.3]


Figure 11-18. Steps 11 to 19/Roll AU546.0

Notes:

Now it's time for the next node


Repeat steps 3 through 10 on the rest of the nodes in the cluster, one at a time. Use
the chart of steps as a checklist to verify that the steps were performed in the correct
order.

Step 20/Roll: Backup and Testing
Okay, all the upgrades are done. The resource groups are back
where they belong.
Verify the cluster will synchronize both topology and resources
Verify the cluster is stable
Create a new mksysb for both nodes
Create new cluster snapshot
Fully test the cluster
[Diagram: finished cluster; both nodes at AIX 5L V5.3 ML2 with HACMP 5.3,
res_grp_a and res_grp_b back in place]


Figure 11-19. Step 20/Roll: Backup and Testing AU546.0

Notes:

Time for backup and testing


Now that all this work is done, get your backups completed. Imagine doing all this work,
then having an outage which required backups to restore!
You might want to use the test facility of HACMP 5.3 and save your configuration to an
XML file for the Online Planning Worksheets.


What About the Other Upgrade Methods?


There are three other methods of upgrading to HACMP 5.3.
Snapshot (Cold) Convert
The classic method: stop HACMP everywhere, uninstall it, install HACMP 5.3,
convert the snapshot, and apply it to the newly installed software.
Node by Node Migration
Node by node upgrade of a running cluster. Cluster services remain up and each
node is upgraded in turn. When finished the cluster switches to HACMP 5.3.
Offline
Cluster services stopped everywhere but not uninstalled. Install HACMP 5.3.
clconvert runs automatically (not covered here).

Summary for our example case:
cold convert: HACMP 5.X, HACMP/ES 4.5 -> HACMP 5.3
node-by-node migration: HACMP 4.5 -> HACMP 5.3



Figure 11-20. What About the Other Upgrade Methods? AU546.0

Notes:

Other methods for updating the cluster


There are pros and cons to each method. The next series of slides describes the
snapshot method. This is sometimes referred to as cold, since the cluster is completely
stopped.
The snapshot convert methodology is the traditional method for upgrading HACMP.

Snapshot/Cold Convert
The cold convert method has its advantages.
Conversion can be accomplished from HACMP version 4.1 upwards
The entire cluster is converted at once
A snapshot may be used or you can just start all over
The configuration may be reentered in a different configuration

The disadvantages:
The entire cluster must be down and all the nodes will require a reboot


Figure 11-21. Snapshot/Cold Convert AU546.0

Notes:

Advantages and disadvantages


The ability to perform the snapshot update depends on the individual
service level agreements in place. Does this agreement provide for an extended
outage?
This is the fastest method. Some of the steps can be done in parallel. This method can
convert from HACMP 4.2.2 to HACMP 5.3 in one go.
The disadvantage is that there is a longer period of time during which the application is
not available.


Steps for a Snapshot Convert


The following table outlines the steps required for a snapshot
conversion from HACMP/ES 4.5 to HACMP 5.3

Step  Description                            Resources available  Resources available
                                             on Halifax           on Vancouver
 1    Backup and snapshots.                  Yes                  Yes
 2    Planning.                              Yes                  Yes
 3    Stop HACMP on all nodes.               No                   No
 4    Uninstall HACMP on all nodes.          No                   No
 5    Install HACMP/ES on all nodes.         No                   No
 6    Run clconvert_snapshot.                No                   No
 7    Apply snapshot.                        No                   No
 8    Reinstall customized event scripts.    No                   No
 9    Synchronize and restart the cluster.   No                   No
10    Verify cluster operation.              Yes                  Yes


Figure 11-22. Steps for a Snapshot Convert AU546.0

Notes:

Review the steps


Considerable planning is required. Complete documentation for event customization is
a must. The biggest thing to consider is the downtime required.
Notice that the resource groups will not be available.
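The clconvert_snapshot step (step 6) can be sketched as follows. This only assembles the command line; the -v (version the snapshot was taken on) and -s (snapshot name) options reflect documented clconvert_snapshot usage, but verify them against your release's Administration Guide before running the real command:

```shell
# Sketch of step 6: build the clconvert_snapshot invocation. The -v/-s
# options and the path are from the HACMP documentation; the version
# and snapshot name below are example values.
CONVERT=/usr/es/sbin/cluster/conversion/clconvert_snapshot

convert_cmd() {
  oldver=$1    # HACMP release the snapshot came from
  snap=$2      # snapshot name, without the .odm extension
  echo "$CONVERT -v $oldver -s $snap"
}

convert_cmd 4.5 premigration
```

Run the printed command on a node that already has HACMP 5.3 installed; the converted snapshot is then applied in step 7.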

Step 1/Snap: Backup and Snapshots
The backups and snapshots are very important. All of the previous
reasons apply, plus a few extra considerations this time.
Copy the snapshot out of the default directory
/usr/es/sbin/cluster/snapshots/<name>.odm and .info to another directory
Move all customized events and configuration information out of
/usr/es/sbin/cluster and /usr/lpp/cluster

[Diagram: snapshot path starting point; both nodes at AIX 5L V5.1 with
HACMP/ES 4.5, res_grp_a and res_grp_b active]


Figure 11-23. Step 1/Snap: Backup and Snapshots AU546.0

Notes:

Save customizations and set the SNAPSHOTPATH variable


Any customizations that have been set up in the /usr/es/sbin/cluster (or
/usr/sbin/cluster) directory structure should be copied to a safe place to ensure they
exist after the upgrade. Snapshot files should be copied to a safe place as a safeguard
against losing them during the deinstallation process.
Remember SNAPSHOTPATH variable:
For the Snapshot utility, the default directory path for storage and retrieval of a snapshot
is /usr/es/sbin/cluster/snapshots. However, if you have specified an alternate path using
the SNAPSHOTPATH environment variable, check this location before upgrading.


Step 2/Snap: Planning


The snapshot convert will require down time. Make sure you have
enough.
Once the process is started, restore of a mksysb is the easiest fix if things go
wrong
Verify all of the prerequisites are met, hardware and software
Verify all local customization is documented
Involve all that will be affected by the changes in the planning
Verify the version of the current HACMP software

[Figure: Halifax and Vancouver each run AIX 5L V5.1 and HACMP/ES 4.5, connected by Ethernet, TMSSA, and RS232 networks; res_grp_a on Halifax, res_grp_b on Vancouver]

Figure 11-24. Step 2/Snap: Planning AU546.0

Notes:

Must plan to fit window of down time


Planning and more planning: there really is no substitute for having a good plan to
ensure the upgrade works successfully. You may have to plan to go back to the old
environment to fit within the window of down time.

Step 3/Snap: Stop HACMP
Stop HACMP
No point in doing a failover; both cluster nodes have to be down

[Figure: cluster services stopped on both Halifax and Vancouver (AIX 5L V5.1, HACMP/ES 4.5); Ethernet, TMSSA, and RS232 networks shown]

Figure 11-25. Step 3/Snap: Stop HACMP AU546.0

Notes:

Stop the cluster and the applications


This shutdown can be accomplished via the C-SPOC option for stopping all nodes in
the cluster.
HACMP will require a reboot of the machine so the applications should be stopped too.
A graceful shutdown on all of the nodes will do the job.


Step 4/Snap: Uninstall HACMP


Remove HACMP
Remove all of the HACMP software from all of the nodes

[Figure: Halifax and Vancouver running AIX 5L V5.1 with the HACMP software removed; the Ethernet, TMSSA, and RS232 networks remain]

Figure 11-26. Step 4/Snap: Uninstall HACMP AU546.0

Notes:

Methods to remove the software


See the installp man page for the options available to remove software, or use the SMIT
option to remove installed software; installp -u cluster* generally works.
You may also want to remove /usr/es/ (or /usr/sbin/cluster).
The smit fast path is smitty remove. Select all the cluster filesets.
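The removal can be rehearsed before anything is deleted. The sketch below is not from the course; removal_plan is a hypothetical helper that only prints the commands for review, relying on installp's preview flag (-p) so the first real invocation is checked by hand on each node.

```shell
#!/bin/sh
# Illustrative sketch (assumption, not course material): print the removal
# commands for review before running them by hand on each node. installp -u
# removes the cluster filesets; adding -p makes installp preview only.
removal_plan() {
    echo 'lslpp -l "cluster.*"'       # inventory the installed cluster filesets
    echo 'installp -up "cluster.*"'   # preview pass: nothing is removed
    echo 'installp -u "cluster.*"'    # real removal once the preview is clean
}
removal_plan
```

Printing the plan first keeps a record in the terminal log of exactly what was run on each node during the deinstallation.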

Step 5/Snap: Install HACMP 5.3
Migrate/update AIX 5L V5.1 to AIX 5L V 5.3
Install the appropriate AIX prerequisite file sets
Install the HACMP 5.3 filesets
Use smitty install_all
Only install the required file sets
Reboot the node. It is a requirement of any HACMP installation
[Figure: Halifax and Vancouver upgraded to AIX 5L V5.3 ML2 with HACMP 5.3 installed; Ethernet, TMSSA, and RS232 networks shown]

Figure 11-27. Step 5/Snap: Install HACMP 5.3 AU546.0

Notes:

Installation notes
- As in any installation process performed using SMIT, the preview option can be
selected. If any required filesets are missing, this will list them. HACMP 5.3 has a
couple of additional prerequisites that HACMP 4.5 (classic) did not have.
- Any prerequisites should have been discovered in the planning stages.
- All of the nodes in the cluster can be installed at the same time.
- Reboot each node in the cluster with a shutdown -Fr command. When you reboot
cluster nodes, the first node up runs a network_down event for each of its non-IP
networks, even if the network is healthy:
Error: EVENT START: network_down -1 SERIAL
This event appears in the /tmp/hacmp.out file whether or not the network is
functional. Ignore the message and let cluster services continue to function. You
should see this error message corrected in a healthy cluster as functional network
communication is eventually established between other nodes in the cluster.


Step 6/Snap: Apply Snapshot


On one of the nodes, apply the snapshot
The command to convert the snapshot is:
/usr/es/sbin/cluster/conversion/clconvert_snapshot -C -v <ver> -s <file>
Where <ver> is the version number of the snapshot file
And <file> is the file name of the snapshot taken on the previous release
Once the snapshot is converted it may be applied
/usr/es/sbin/cluster/utilities/clsnapshot or via smit

[Figure: Halifax and Vancouver at AIX 5L V5.3 ML2 with HACMP 5.3; Ethernet, TMSSA, and RS232 networks shown]

Figure 11-28. Step 6/Snap: Apply Snapshot AU546.0

Notes:

Snapshot considerations
Copy the saved snapshot into the /usr/es/sbin/cluster/snapshots directory.
Applying the snapshot on one system will cause synchronization of all the nodes in the
cluster.
For additional info on the snapshot utility consult the HACMP Administration guide.
You may need to force apply the snapshot. The following messages can be ignored:
WARNING: The NFS mount/Filesystem specified for resource group rg1 is using
incorrect syntax for specifying an NFS cross mount: /mnt/fs1.
ERROR: Disk Heartbeat Networks have been defined, but no Disk Heartbeat devices.
You must configure one device for each node in order for a Disk Heartbeat network to
function.
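The convert-then-apply sequence above can be wrapped in a small guard that first confirms the saved snapshot is back in place. This is a sketch, not course material; convert_cmd is a hypothetical helper that only prints the conversion command for review, and the version and snapshot name are examples.

```shell
#!/bin/sh
# Illustrative sketch: verify the saved snapshot has been copied back into the
# snapshot directory, then print the conversion command described above.
# convert_cmd is a hypothetical helper; <ver> is the HACMP version the
# snapshot was taken on (4.5 in this unit's example).
convert_cmd() {
    ver=$1
    name=$2
    snapdir="${SNAPSHOTPATH:-/usr/es/sbin/cluster/snapshots}"
    if [ ! -f "$snapdir/$name.odm" ]; then
        echo "ERROR: $snapdir/$name.odm not found; copy the saved snapshot back first" >&2
        return 1
    fi
    echo "/usr/es/sbin/cluster/conversion/clconvert_snapshot -C -v $ver -s $name"
}
```

Checking for the .odm file first avoids running the conversion against a snapshot name that was never restored after the deinstallation.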

Step 7/Snap: Reinstall Event Customization
If there has been primary or secondary event customization, it has
been removed and may require reinstallation
This assumes the event customization is documented and stored in a location
that is still accessible
Manual intervention will be required to check the compatibility of the previous
event customization and the new event scripts

[Figure: Halifax and Vancouver at AIX 5L V5.3 ML2 with HACMP 5.3; Ethernet, TMSSA, and RS232 networks shown]

Figure 11-29. Step 7/Snap: Reinstall Event Customization AU546.0

Notes:

Evaluate your customization


Your customization must be evaluated for serial versus parallel processing. You may
also be able to eliminate some customization by using resource group dependencies, if
part of your customization controlled the order in which resource groups were started.
Having good documentation of customized events/scripts is important at this point, or a
great deal of time may be spent trying to determine what was previously customized
and trying to redo the customization of the cluster environment.
Changes made to the ODM captured by the snapshot will be propagated to the new
installation. Check to see whether any customized HACMP classic events are substituted
into the configuration by the snapshot.


Step 8/Snap: Synchronize and Restart


The converted snapshot is applied to all of the nodes; make sure all
the ODM constructs are correct
Synchronize topology services
Synchronize resource groups
Start HACMP on all the nodes in the cluster
smitty clstart
[Figure: Topology and Resources synchronized across the cluster; res_grp_a and res_grp_b active on Halifax and Vancouver (AIX 5L V5.3 ML2, HACMP 5.3)]

Figure 11-30. Step 8/Snap: Synchronize and Restart AU546.0

Notes:

Synchronize and start cluster services


Make sure that the synchronization of Topology and Resources completes
successfully. Remember, there is no 'sort of synchronized': if synchronization fails, the
problems need to be addressed and resolved.
After synchronization (which can now be run with automatic correction), it is time to start
cluster services.

Step 9/Snap: Verify Cluster Operation
Use the original planning information to verify the cluster operation
Perform fallover tests per the test plan
Verify customization is working as planned
Check for errors in the various logs
Verify the cluster becomes stable
Take a new snapshot and mksysb
[Figure: res_grp_a on Halifax and res_grp_b on Vancouver, both nodes at AIX 5L V5.3 ML2 with HACMP 5.3]

Figure 11-31. Step 9/Snap: Verify Cluster Operation AU546.0

Notes:

Test
A complete test of cluster functionality should be performed. Don't assume that because it
worked fine before the upgrade, it will work fine now. Build enough time into the plan
to fully test the cluster.
Perform the steps in the test plan. You may want to also use the Test facility of HACMP.


But I Can't Get the Down Time ...


There is another option for upgrading to HACMP 5.3: node by node
migration. This is a type of warm conversion. The node by
node migration does not require the entire cluster to be stopped at
any time during the migration process. There are some
prerequisites that must be met to enable this type of migration.
All nodes must be running HACMP 4.5
The software must be in committed state
The cluster must be stable
There must be enough disk and memory to run both HACMP 4.5 and
HACMP 5.3
Cluster services must not be set to start automatically in the inittab
All changes in topology and configuration must be finished prior to starting
the node by node migration

[Figure: node by node migration from HACMP 4.5 to HACMP 5.3; summary for our example case]

Figure 11-32. But I Can't Get the Down Time... AU546.0

Notes:

Node-by-node migration
Now that we've seen the snapshot method we'll take a look at the steps required to
perform a node-by-node conversion.
Node by node migration first appeared in HACMP 4.3.1. Since migration to
HACMP 5.3 is supported only from HACMP 4.5, it is suggested to upgrade
HACMP to 4.5 first, then perform the node by node migration.
Node-by-Node migration is supported in HACMP 5.3 only from HACMP 4.5
(classic).
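One of the prerequisites listed above, that the software be in the committed state, lends itself to a quick check. The sketch below is not from the course; parse_states is a hypothetical helper, and on a real node its input would come from lslpp output for the cluster filesets.

```shell
#!/bin/sh
# Illustrative sketch: flag any fileset whose state column is not COMMITTED.
# parse_states is a hypothetical helper; on a real node the input would be
# the fileset lines of "lslpp -l cluster.*" (fileset, level, state, ...).
parse_states() {
    awk '$3 != "" && $3 != "COMMITTED" { print $1 " is in state " $3 }'
}
# e.g. lslpp -l "cluster.*" | parse_states   (header lines print nothing useful)
```

An empty result from the check means every cluster fileset is committed, satisfying that prerequisite for the node by node migration.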

Node by Node Migration Steps
The steps are for a node by node migration from HACMP 4.5 to
HACMP 5.3

Step  Description                                Resources available   Resources available
                                                 on Halifax            on Vancouver
 1    Backup and snapshots                       Yes                   Yes
 2    Planning                                   Yes                   Yes
 3    Stop HACMP on the Halifax node             No                    Yes, both groups
 4    Install HACMP/ES on the Halifax node       No                    Yes, both groups
 5    Restart HACMP on Halifax node              Maybe                 Yes, it depends
 6    Stop HACMP on the Vancouver node           Yes, both groups      No
 7    Install HACMP/ES on the Vancouver node     Yes, both groups      No
 8    Restart HACMP on Vancouver node            Yes, it depends       Maybe
 9    Backup and snapshots                       Yes                   Yes


Figure 11-33. Node by Node Migration Steps AU546.0

Notes:

How node-by-node works


These are the steps required to perform a node-by-node upgrade. Each HACMP cluster
has its unique customizations that require thorough planning. Remember a little
planning can avert major headaches!
Before the last node is installed with HACMP 5.3, all the nodes continue to run HACMP
4.5.
When you have installed HACMP 5.3 on the last node, starting Cluster Services
automatically triggers the transfer of control from HACMP 4.5 to HACMP 5.3 which
proceeds as follows:
- Installing HACMP 5.3 installs a recovery file called firstboot in a holding directory on
the cluster node, and creates a migration file (.mig) to be used as a flag during the
migration process


- The HACMP 5.3 recovery driver sends a message to the HACMP clstrmgr telling it
to run the waiting and waiting_complete events
- HACMP 5.3 uses the RSCT Group Services to verify cluster stability and
membership
- The firstboot file on each cluster node is moved to an active directory (/etc)
- The migration flag (.mig file) created during installation is transferred from the
HACMP 5.3 directory to the HACMP 4.5 directory on all nodes
When the firstboot file is moved to the active directory and the .mig file transfer is
complete on all nodes, transfer of control to HACMP 5.3 continues with the HACMP 5.3
migrate event.
- HACMP 5.3 recovery driver issues the migrate event
- HACMP 5.3 stops the HACMP daemons using the forced option
- HACMP 5.3 clinfoES daemons are all activated, reusing the ports previously used
by the HACMP versions of those daemons
- HACMP 5.3 recovery driver runs the migrate_complete event
- HACMP is deinstalled; configuration files common to both products are left
untouched
- Base directories are relinked
- /etc/firstboot files are removed
- The migration flag (.mig file) in the HACMP /usr/sbin/cluster directory is removed
- Migration is now complete
You should verify and test the cluster's proper fallover and recovery functionality.

Step 1/Node. Backup and Snapshots
Node by Node migration has a "step of no return"; backups are
critical in this environment
Copy the snapshot out of the default directory
/usr/sbin/cluster/snapshots/<name>.odm and .info to another directory

[Figure: Halifax and Vancouver at AIX 5L V5.3 ML2 still running HACMP 4.5; res_grp_a on Halifax, res_grp_b on Vancouver]

Figure 11-34. Step 1/Node. Backup and Snapshots AU546.0

Notes:
As we saw during the cold upgrade, it is important to make a copy of the snapshot files in a
location which won't be affected by the upgrade. If this isn't done and the snapshot is
converted to HACMP/ES format, it can't be unconverted.
The deinstallation of HACMP and implementation of HACMP/ES will happen under the
control of the migrate event script. It is critical that a good mksysb is created before the
process is started.


Step 2/Node. Planning


Node by node migration
Once the process is started, restore of a mksysb is the easiest fix if things go
wrong
Verify all of the prerequisites are met, hardware and software. HACMP 5.3
may require additional filesets to be added to AIX
Verify the configuration is acceptable to HACMP 5.3

[Figure: Halifax and Vancouver at AIX 5L V5.3 ML2 still running HACMP 4.5; res_grp_a on Halifax, res_grp_b on Vancouver]

Figure 11-35. Step 2/Node. Planning AU546.0

Notes:

Points to consider
Having two copies of the mksysb image is probably not a bad idea before you start this
process.
Verifying prerequisites can be accomplished using the preview option from the SMIT
install software menu.
The cluster will require a complete test for functionality once the migrations have been
completed. Testing is a must; don't assume everything will work fine.
The clsmuxpd daemon is not used in HACMP 5.3; during the node-by-node migration,
it will not be installed on the node. The cl_registerwithclsmuxpd() API routine is
removed in HACMP 5.3; application monitoring effectively supersedes this function.
Before upgrading, ensure that any applications previously monitored by clsmuxpd now
use application monitoring.

Step 3/Node. Stop HACMP
Stop HACMP 4.5 on the node to be upgraded
Stop cluster services with takeover to have the resource groups migrate to
the next node in the resource group
Check the resources have successfully transferred to another node
Verify there are no resource groups on the node to be upgraded

[Figure: HACMP stopped on Halifax; res_grp_a and res_grp_b both active on Vancouver (AIX 5L V5.3 ML2, HACMP 4.5)]

Figure 11-36. Step 3/Node. Stop HACMP AU546.0

Notes:
Use clstat on the surviving node to verify the cluster becomes stable after the node is
stopped. If the cluster does not become stable, stop the process here and troubleshoot the
problem.
Verify all the cluster resources are active on the second node.


Step 4/Node. Install HACMP 5.3


Do not uninstall HACMP 4.5
Install the appropriate HACMP 5.3 file sets
Do not just install all the filesets. Only install the required file sets
Verify the filesets installed correctly
Reboot the node

[Figure: HACMP 5.3 installed alongside HACMP 4.5 on Halifax; res_grp_a and res_grp_b remain on Vancouver (HACMP 4.5); both nodes at AIX 5L V5.3 ML2]

Figure 11-37. Step 4/Node. Install HACMP 5.3 AU546.0

Notes:
Review the smit.out log file after completing the install process. Make sure each fileset
installation completed successfully.
The reboot is not optional.
Note: The clsmuxpd is not used in HACMP 5.3. During the node-by-node migration, it
will not be installed on the node.
Note: The cl_registerwithclsmuxpd() API routine is removed; application monitoring
effectively supersedes this function. Ensure your applications previously monitored by
clsmuxpd now use application monitoring.

Step 5/Node. Restart HACMP
Restart HACMP services on the hybrid node
Both HACMP 4.5 and HACMP 5.3 clstrmgr will start
HACMP 4.5 will be in control as the node rejoins the cluster
The node will reacquire resources according to their configuration parameters
Ensure the cluster becomes stable
[Figure: Halifax rejoins in hybrid mode (HACMP 4.5 and HACMP 5.3 installed) and reacquires res_grp_a; res_grp_b stays on Vancouver (HACMP 4.5)]

Figure 11-38. Step 5/Node. Restart HACMP AU546.0

Notes:
Take a look at all the processes that have started as the hybrid node joins the cluster.
Commands like lssrc -g cluster will show the node is in hybrid mode.
Use the clstat utility to verify cluster stability.
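The hybrid-mode check mentioned above can be expressed as a small filter. This sketch is not from the course; hybrid_check is a hypothetical helper, and on a real node it would read the output of lssrc -g cluster, where both cluster managers appear while the node is hybrid.

```shell
#!/bin/sh
# Illustrative sketch: in hybrid mode both cluster managers (the HACMP 4.5
# clstrmgr and the HACMP 5.3 clstrmgrES) appear in the subsystem listing.
# hybrid_check is a hypothetical helper that counts them on stdin.
hybrid_check() {
    n=$(grep -c clstrmgr)
    if [ "$n" -ge 2 ]; then
        echo "hybrid: $n cluster managers running"
    else
        echo "not hybrid: $n cluster manager(s) running"
    fi
}
# e.g. lssrc -g cluster | hybrid_check
```

Seeing two cluster managers confirms the node rejoined in hybrid mode; a single entry would mean only one version's daemon came up.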


Step 6/Node. Stop HACMP


Stop HACMP 4.5 on the other node. Do not uninstall it
Use shutdown with takeover to transfer the resources to the other node.
Check the resources have successfully transferred to the other node
Verify there are no resource groups on the node to be upgraded
[Figure: HACMP stopped on Vancouver; res_grp_a and res_grp_b both active on Halifax (HACMP 4.5 and HACMP 5.3 installed)]

Figure 11-39. Step 6/Node. Stop HACMP AU546.0

Notes:
Now repeat the steps previously performed to the upgraded node.

Step 7/Node. Install HACMP 5.3
Install the appropriate HACMP 5.3 file sets
Only install the required file sets
Verify the filesets installed correctly
Reboot the node

[Figure: HACMP 5.3 installed alongside HACMP 4.5 on Vancouver as well; res_grp_a and res_grp_b remain on Halifax; both nodes at AIX 5L V5.3 ML2]

Figure 11-40. Step 7/Node. Install HACMP 5.3 AU546.0

Notes:
Perform the same step as on the other node.


Step 8/Node. Restart HACMP


This is the point of no return during the upgrade
Restart HACMP services on the hybrid node
Both HACMP 4.5 and HACMP 5.3 clstrmgr will start
HACMP 4.5 will be in control as the node rejoins the cluster
The node will reacquire resources according to their configuration parameters
If this is the last node to be upgraded from HACMP 4.5 to HACMP 5.3, the
conversion continues as soon as the cluster becomes stable
HACMP 5.3 will assume control of all nodes
HACMP 4.5 will be DEINSTALLED !
Ensure the cluster completes migration and becomes stable
[Figure: migration complete; Halifax and Vancouver run HACMP 5.3 only on AIX 5L V5.3 ML2, with res_grp_a on Halifax and res_grp_b on Vancouver]

Figure 11-41. Step 8/Node. Restart HACMP AU546.0

Notes:
Review the log files carefully.
The migration will start automatically when the last node joins the cluster in the hybrid
mode.
This is interesting to observe with tail -f /tmp/hacmp.out. The TCP/IP processes may
appear to hang while the migration is in progress.
Make sure the conversion process completed successfully. Also verify that the HACMP
software was uninstalled correctly.

Step 9/Node. Backup and Snapshots
The cluster will be running HACMP 5.3 only
The migration is complete
It is a good time to make a new snapshot and mksysb
Verify correct operation of the cluster
Check for errors during conversion in /tmp/hacmp.out and /tmp/cm.log

[Figure: Halifax and Vancouver running HACMP 5.3 on AIX 5L V5.3 ML2; res_grp_a on Halifax, res_grp_b on Vancouver]

Figure 11-42. Step 9/Node. Backup and Snapshots AU546.0

Notes:
Make new backups. This is important; the complete configuration has been changed from
HACMP to HACMP/ES.
Fully test the cluster functionality. Make sure everything still performs as expected.
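The log check suggested above can be scripted. This sketch is not from the course; scan_log is a hypothetical helper, and the patterns shown are the common failure strings from the migration logs, not an exhaustive list.

```shell
#!/bin/sh
# Illustrative sketch: scan a migration log such as /tmp/hacmp.out or
# /tmp/cm.log for failure markers. scan_log is a hypothetical helper.
scan_log() {
    if grep -E 'EVENT FAILED|ERROR' "$1"; then
        echo "check the lines above in $1"
    else
        echo "no errors found in $1"
    fi
}
# e.g. scan_log /tmp/hacmp.out; scan_log /tmp/cm.log
```

Running it against both logs after the conversion gives a quick first pass before the full functional test of the cluster.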


What's New in HACMP 5.3


Packaging
Base, Smart Assist, XD
clsmuxpd removed
Problem Determination moved to a separate manual
Ease of Use
XML files
Veritas and OEM volume groups
SMS paging
Client clhosts file created (clhosts.client)
Performance and Security
Websmit security enhanced
Cluster manager starts from /etc/inittab
hacmp group in /etc/groups
Parallel processing of resource groups is now the default
Configuration changes
Configure location rules for resource groups and alias addresses
All resources must have unique names
Distribution policy changed to node only
Support for power 5 virtual I/O and Dynamic LPAR
Verification changes
Verification/synchronization at Cluster Services startup
Automated automatic error notification
cldiag moved to samples
Additional warning, error and autocorrection is done


Figure 11-43. What's New in HACMP 5.3 AU546.0

Notes:

Packaging
HACMP comes as a base CD with additional fee based feature CDs -- one for Smart
Assists (DB2, WebSphere, and Oracle) and the other for HACMP/XD. The clsmuxpd
daemon has been removed. The function is now in the clstrmgrES subsystem. The
HACMP for AIX Administration and Problem Determination Manual has been broken up
and problem determination is now in a separate manual. Don't forget the
RELEASE_NOTES file (see the copy in the Appendix to this course).

Ease of use
XML files can be created to keep configuration data. HACMP can be configured from
them via the online planning work sheets or the utilities command cl_opsconfig.
Snapshots can also be converted to the XML-formatted file. HACMP allows
administrators to use Veritas volume groups in an HACMP cluster and opens the
interface to other OEM vendors. Paging notifications to cell phones is now supported.

Finally, during synchronization, HACMP will propagate a clhosts file for HACMP client
nodes. The file created is /usr/es/sbin/cluster/etc/clhosts.client

Performance and Security


Websmit now ties into AIX login security. The clstrmgrd daemon starting from inittab is a
big change to get used to. The hacmp group in /etc/groups is new to HACMP 5.3. For
migration, if serial processing of resource groups is needed, it must be specifically
configured.

Configuration changes
You can now have choices about how to distribute alias addresses among the
interfaces, as well as how to place dependent resource groups; for example, you can
ensure that the parent and child resource groups do not run on the same node. A
resource group distribution policy of network is no longer supported in HACMP 5.3.
HACMP 5.3 now supports the use of virtual I/O for network adapters (you must use the
netmon file) as well as DLPAR for POWER5.

Verification changes
HACMP 5.3 will ensure all nodes are configured/synchronized the same way as the first
node up (see additional rules in the Planning and Installation Guide). It is no longer
necessary to manually invoke the Automated Error Notification. The cldiag command is
now deprecated and moved to the samples directory. Additional warnings, errors and
auto corrections have been added:
- Report Error if HACMP network type does not match CuAt interface type
- Auto correct ERROR: Network option: tcp_pmtu_discover has different settings
between nodes: nodeA and nodeB. Please make sure that the command no -o
tcp_pmtu_discover provides the same output on all nodes (also for
udp_pmtu_discover and ipignoreredirects)
- Auto correct WARNING: Network option: routerevalidate is set to 0 on node
nodeA. Please be aware that this setting will be changed to 1 during HACMP
startup.
- Auto correct WARNING: Network option: nonlcsrcroute is set to 0 on node
nodeA. Please be aware that this setting will be changed to 1 during HACMP
startup (also for ipsrcroutesend, ipsrcrouterecv, and ipsrcrouteforward)
- Report ERROR: The MTU sizes do not match for communication interface:
ip_label_mtu1500 and ip_label_mtu2000. The NIC en1 on node nodeA has an MTU
size of 1500, and the NIC on node nodeB has an MTU size of 2000. To correct this
error, make sure that the MTU size is consistent across all NICs on the same
HACMP network.
- Report WARNING: The RSCT level is different on nodes: nodeA and nodeB. Both
nodes have AIX level 5.3.0.1 installed, and RSCT software is at 2.3.0.0 on node


nodeA and 2.2.0.0 on nodeB. To ensure HACMP is working properly, it is
recommended that the same level of RSCT software be installed on nodes with the
same level of AIX. The filesets used are rsct.basic.hacmp, rsct.basic.rte,
rsct.core.utils, and rsct.core.sec
- Volume Group Consistency:
Auto correct error pvid list not identical
Auto correct concurrent capable not consistent
Report warning when cluster services not running for disk availability
(cl_querypv command)
- Auto corrections for previous verification of RSCT instance number and filesystem
mount set to auto, vg set to auto varyon.
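The network-option consistency checks in the list above amount to comparing "no -o" output across nodes. The sketch below is not from the course; same_option is a hypothetical helper, and the nodeB/ssh invocation in the comment is an example only.

```shell
#!/bin/sh
# Illustrative sketch: verification expects network options such as
# tcp_pmtu_discover to produce the same "no -o" output on every node.
# same_option is a hypothetical helper comparing two nodes' outputs.
same_option() {
    if [ "$1" = "$2" ]; then
        echo "consistent"
    else
        echo "mismatch: '$1' vs '$2'"
    fi
}
# e.g. same_option "$(no -o tcp_pmtu_discover)" \
#                  "$(ssh nodeB no -o tcp_pmtu_discover)"
```

Running such a comparison before synchronization catches the mismatches that verification would otherwise report as errors.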

Checkpoint
1. True or False?
AIX 5.1 is compatible with HACMP 5.3.
2. To be able to do a node-by-node migration from HACMP 4.4.1
to HACMP 5.3 when AIX is at level 4.3.3 you should:
a. You can do node-by-node migration to HACMP 5.3
b. First upgrade to HACMP 4.5 and then upgrade to HACMP 5.3.
c. Nobody knows because this has never happened before.
d. None of the above
3. True or False?
When upgrading levels of HACMP, it is recommended that you upgrade one
node and then run the cluster for a week or two to see if any problems occur
before upgrading the other nodes.
4. List 3 features of HACMP 5.3
5. True or False?
Always make a new mksysb of the system and create a
snapshot before upgrading to a new level of HACMP
6. If the nodes of an HACMP 4.5 cluster running AIX 5.1 need to
be upgraded to HACMP 5.3 then which should be done first?
a. HACMP 4.5 to HACMP 5.1
b. AIX 5.1 to AIX 5.2
c. HACMP 4.5 to HACMP 5.3


Figure 11-44. Checkpoint AU546.0

Notes:
Write down your answers here:

1.
2.
3.
4.
5.
6.


Exercise

Exercise:

Migration


Figure 11-45. Exercise AU546.0

Notes:

Unit Summary

Having completed this unit, you should be able to:


List the methods to update a cluster to HACMP 5.3.
Describe the migration steps for each migration method.
List the new features introduced in HACMP 5.3.


Figure 11-46. Unit Summary AU546.0

Notes:


Appendix A. Checkpoint Solutions


Unit 1 - Introduction to HACMP for AIX 5L

Let's Review Solutions


1. Which of the following is a characteristic of high availability?
a. High availability always requires specially designed hardware
components.
b. High availability solutions always require manual intervention to
ensure recovery following failover.
c. High availability solutions never require customization.
d. High availability solutions offer excellent price performance when
compared with Fault Tolerant solutions.
2. True or False?
High availability solutions never fail.
3. True or False?
A thorough design and detailed planning is required for all high availability
solutions.
4. True or False? (the local area network is a SPOF)
The cluster shown on the foil titled "What We Plan to Achieve This Week"
has no obvious single points of failure.
5. A proposed cluster with a two year life (for planning purposes) has a
vulnerability which is likely to occur twice per year at a cost of
$10,000 per occurrence. It costs $25,000 in additional hardware
costs to eliminate the vulnerability. Should the vulnerability be
eliminated?
a. Yes ($25,000 is less than $10,000 times four)
b. No


Unit 1 - Introduction to HACMP for AIX 5L

Let's Review Solutions


1. Which of the following are examples of topology components in
HACMP (select all that apply)?
a. Node
b. Network
c. Service IP label
d. Hard disk drive
2. True or False?
All clusters require shared disk for storage of HACMP log files.
3. True or False?
All nodes in an HACMP cluster must have roughly equivalent performance
characteristics.


Unit 1 - Introduction to HACMP for AIX 5L

Checkpoint Solutions
1. True or False?
Resource Groups may be moved from node to node
2. True or False?
HACMP XD is a complete solution for building
geographically distributed clusters.
3. Which of the following capabilities does HACMP not
provide (select all that apply)?
a. Time synchronization.
b. Automatic recovery from node and network adapter failure.
c. System Administration tasks unique to each node. Backup and
restoration.
d. Fallover of just a single resource group.
4. True or False?
Resource Groups may be moved from node to node.
5. True or False?
All nodes in a resource group must have equivalent
performance characteristics.


Unit 2 - Shared Storage Considerations

Let's Review Topic 1 Solutions


1. Which of the following statements are true (select all that
apply)?
a. Static application data should always reside on private
storage.
b. Dynamic application data should always reside on shared
storage.
c. Shared storage must always be simultaneously accessible
to all cluster nodes.
d. Regardless of the size of the cluster, all shared storage
must always be accessible (subject to access control) by all
cluster nodes.
2. True or False?
Using RSCT-based shared disk protection results in slower
fallovers.
3. True or False?
Ghost disks must be checked for and eliminated
immediately after every cluster fallover or fallback.
4. True or False?
Lazy Update is no longer needed with enhanced concurrent
volume groups.


Unit 2 - Shared Storage Considerations

Let's Review Topic 2 Solutions


1. Which of the following disk technologies are
supported by HACMP?
a. SCSI.
b. SSA.
c. FC.
d. All of the above.
2. True or False?
SSA disk subsystems can support RAID5 (cache-enabled) with HACMP.
3. True or False?
Compatibility must be checked when using different SSA adapters in the
same loop.
4. True or False?
No special considerations are required when using SAN-based storage
units (ESS, EMC, HDS, and so forth).
(Check for supported FC adapters and firmware levels, multi-pathing
software, and so forth.)
5. True or False?
hdisk numbers must map to the same PVIDs across an entire HACMP
cluster.


Unit 2 - Shared Storage Considerations

Checkpoint Solutions
1. True or False?
Lazy update keeps VGDA constructs in sync between cluster nodes.
(reserve/release-based shared storage protection)
2. Which of the following commands will bring a volume group
online?
a. getvtg <vgname>
b. mountvg <vgname>
c. attachvg <vgname>
d. varyonvg <vgname>
3. True or False?
Quorum should always be disabled on shared volume groups.
4. True or False?
Filesystem and logical volume attributes cannot be changed while the
cluster is operational.
5. True or False?
An enhanced concurrent volume group is required for the heartbeat over
disk feature.


Unit 3 - Networking Considerations

Let's Review Topic 1 Solutions


1. How does HACMP use networks (select all that apply)?
a. Provide client systems with highly available access to
the cluster's applications
b. Detect failures
c. Diagnose failures
d. Communicate between cluster nodes
e. Monitor network performance
2. Using information from RSCT, HACMP only directly handles
three types of failures: network interface card (NIC)
failures, node failures, and network failures.
3. True or False?
Heartbeat packets must be acknowledged or a failure is assumed to have
occurred.
4. True or False?
Clusters should include a non-IP network.
5. True or False?
Each NIC on each physical IP network on each node is required to have an
IP address on a different logical subnet.


Unit 3 - Networking Considerations

Let's Review Topic 2 Solutions


1. True or False?
Clusters must always be configured with a private IP network for HACMP
communication.
2. Which of the following are true statements about communication
interfaces (select all that apply)?
a. Has an IP address assigned to it using the AIX TCP/IP SMIT screens
b. Might have more than one IP address associated with it
c. Sometimes but not always used to communicate with clients
d. Always used to communicate with clients
(Communication interfaces on private IP networks are not intended to be used
by clients.)
3. True or False?
Persistent node IP labels are not supported for IPAT via IP replacement.
4. True or False?
There are no exceptions to the rule that, on each node, each NIC on the same
LAN must have an IP address in a different subnet.
(The HACMP 5.1 heartbeat over IP aliases feature is the exception to this rule.)


Unit 3 - Networking Considerations

Let's Review Topic 3 Solutions


1. True or False?
A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.
2. True or False?
All networking technologies supported by HACMP support IPAT via IP aliasing.
3. True or False?
All networking technologies supported by HACMP support IPAT via IP
replacement.
4. If the left-hand node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and
the right-hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2
then which of the following are valid service IP addresses if IPAT via IP aliasing is
being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
5. If the left-hand node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and
the right-hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2
then which of the following are valid service IP addresses if IPAT via IP replacement
is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3

Unit 3 - Networking Considerations

Checkpoint Solutions
1. True or False?
Clients are required to exit and restart their application
after a fallover.
2. True or False?
All client systems are potentially directly affected by the
ARP cache issue.
3. True or False?
clinfo must not be run both on the cluster nodes and
on the client systems.
4. If clinfo is run by cluster nodes to address ARP cache
issues, you must add the list of clients to ping to either the
/etc/cluster/ping_client_list or the
/usr/es/sbin/cluster/etc/clinfo.rc file
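As an illustration of question 4, the client list is a space-separated set of IP labels or addresses. The hostnames below are hypothetical, and the variable shown is the one conventionally read by clinfo.rc; verify the exact syntax against the comments in your own clinfo.rc before relying on it:

```shell
# Hypothetical entry for /usr/es/sbin/cluster/etc/clinfo.rc (or the
# /etc/cluster/ping_client_list file): after a fallover, clinfo.rc pings
# each listed client so that its ARP cache is refreshed with the new
# hardware address of the service IP.
PING_CLIENT_LIST="client1 client2 192.168.10.50"
```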


Unit 4 - Planning for Application and Resource Groups

Checkpoint Solutions
1. True or False?
Applications are defined to HACMP in a configuration file
that lists what binary to use.
2. What policies would be best to use for a two-node mutual
takeover cluster using IPAT, to minimize the chance of both
applications running on the same node?
a. home, next, never
b. first, next, higher
c. distribution, next, never
d. all, error, never
e. home, next, higher
3. Which type of data should not be placed in private data
storage?
a. Log data
b. License file
c. Configuration files
d. Application binaries
4. Which policy is not a Run-time policy?
a. Settling
b. Delayed Fallback Timer
c. Dynamic Node Priority

Unit 5 - HACMP Installation

Let's Review Solutions


1. What is the first step in implementing a cluster?*
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap
2. True or False?
HACMP 5.3 is compatible with any version of AIX 5L V5.x.
3. True or False?
Each cluster node must be rebooted after the HACMP software is installed.
4. True or False?
You should take careful notes while you install and configure HACMP so
that you know what to test when you are done.

*There is some dispute about whether the correct answer is b or e, although a
disconcerting number of clusters are implemented in the order a, b, c, d, e (how can
you possibly order the hardware if you do not yet know what you are going to
build?) or even just a, c, d (cluster implementers who skip step b rarely have time
for long naps).


Unit 5 - HACMP Installation

Checkpoint Solutions
1. What component detects an adapter failure?
a. Cluster manager
b. RSCT
c. clcomd
d. clinfo
2. What component provides SNMP information?
a. Cluster manager
b. RSCT
c. clsmuxpd
d. clinfo
3. What component is required for clstat to work?
a. Cluster manager
b. RSCT
c. clcomd
d. clinfo

4. What component removes the requirement for the /.rhosts file?
a. Cluster manager
b. RSCT
c. clcomd
d. clinfo


Unit 6 - Initial Cluster Configuration

Checkpoint Solutions
1. True or False?
It is possible to configure a recommended simple two-node cluster
environment using just the standard configuration path.
2. In which of the top level HACMP menu choices is the menu for starting and
stopping cluster nodes?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools
3. In which of the top level HACMP menu choices is the menu for defining a
non-IP heartbeat network?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools

4. True or False?
It is possible to configure HACMP faster by having someone help you on
the other node.
5. True or False?
You must specify exactly which filesystems you want mounted when you
put resources into a resource group.


Unit 7 - Basic HACMP Administration

Let's Review: Topic 1 Solutions


1. True or False?
It is impossible to add a node while HACMP is running.
2. Which of the following are not supported by HACMP 5.1? (select all that apply)
a. Cascading resource group with IPAT via IP aliasing.
b. Custom resource group with IPAT via IP replacement.
c. HWAT in a resource group which uses IPAT via IP aliasing.
d. HWAT in a custom resource group.
e. More than three custom resource groups in a two node cluster.
3. Which of the following sequences of steps implement HWAT in a cluster
currently using custom resource groups?
a. Delete custom RGs, define cascading RGs, place resources in new RGs,
disable IPAT via IP aliasing on network, delete old service IP labels, define
new service IP labels, synchronize
b. Delete custom RGs, define cascading RGs, place resources in new RGs,
delete old service IP labels, disable IPAT via IP aliasing on network, define
new service IP labels, synchronize
c. Delete custom RGs, disable IPAT via IP aliasing on network, delete old
service IP labels, define new service IP labels, define cascading RGs,
place resources in new RGs, synchronize
d. Delete custom RGs, delete old service IP labels, disable IPAT via IP
aliasing on network, define new service IP labels, define cascading RGs,
place resources in new RGs, synchronize
*Old service IP labels must be deleted before disabling IPAT via IP aliasing, and new service IP
labels must exist before they can be placed into the resource groups.

Unit 7 - Basic HACMP Administration

Let's Review: Topic 2 Solutions


1. True or False?
Using C-SPOC reduces the likelihood of an outage by reducing the
likelihood that you will make a mistake.
2. True or False?
C-SPOC reduces the need for a change management process.
3. C-SPOC cannot do which of the following administration tasks?
a. Add a user to the cluster.
b. Change the size of a filesystem.
c. Add physical disks to the cluster.
d. Add shared volume groups to the cluster.
e. Synchronize existing passwords.
f. None of the above. (e was correct for previous versions)
4. True or False?
It does not matter which node in the cluster is used to initiate a
C-SPOC operation.
5. Which log file provides detailed output on HACMP event script
execution?
a. /tmp/clstrmgr.debug
b. /tmp/hacmp.out
c. /var/adm/cluster.log


Unit 7 - Basic HACMP Administration

Let's Review: Topic 3 Solutions


1. True or False?
DARE operations can be performed while the cluster is running.
2. Which operations can DARE not perform (select all that apply)?
a. Changing the name of the cluster.
b. Removing a node from the cluster.
c. Changing a resource in a resource group.
d. Changing whether a network uses IPAT via IP aliasing or via IP
replacement.
3. True or False?
It is possible to roll back from a successful DARE operation using an
automatically generated snapshot.
4. True or False?
Running a DARE operation requires three separate copies of the
HACMP ODM.
5. True or False?
Cluster snapshots can be applied while the cluster is running.
6. What is the purpose of the dynamic reconfiguration lock?
a. To prevent unauthorized access to DARE functions.
b. To prevent further changes being made until a DARE operation
has completed.
c. To keep a copy of the previous configuration for easy rollback.


Unit 7 - Basic HACMP Administration

Checkpoint Solutions
1. True or False?
A star configuration is a good choice for your non-IP networks.
2. True or False?
Using DARE, you can change from IPAT via aliasing to IPAT via
replacement without stopping the cluster.
3. The clpasswd utility allows users to change their passwords across
all nodes
4. True or False?
RSCT will automatically update /etc/filesystems when using enhanced
concurrent mode volume groups
5. True or False?
A resource group's priority override location can be cancelled by
selecting a destination node of Restore_Node_Priority_Order.
6. The basic steps to configure WebSMIT are:
a. Install a web server and edit the httpd.conf file to configure it to
serve WebSMIT pages
b. Link the WebSMIT cgi-bin and htdocs directories to the web
server's directory
c. Edit the wsm_smit.conf file to configure WebSMIT security
d. Set 4511 permissions on the wsm_cmd_exec file.
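The linking and permission steps (b and d) can be sketched as shell commands. The paths below are placeholder stand-ins under /tmp, not the real locations: on a live system the WebSMIT files ship under the HACMP installation directory, and the link targets depend on your web server's layout. Steps a and c are manual, site-specific edits to httpd.conf and wsm_smit.conf and are not shown:

```shell
# Hedged sketch of WebSMIT setup steps b and d using placeholder paths.
rm -rf /tmp/wsm_demo
WSM=/tmp/wsm_demo/wsm        # stand-in for the WebSMIT install directory
DOCROOT=/tmp/wsm_demo/www    # stand-in for the web server's document root
mkdir -p "$WSM/cgi-bin" "$WSM/htdocs" "$DOCROOT"

# Step b: link the WebSMIT cgi-bin and htdocs directories into the server tree
ln -s "$WSM/cgi-bin" "$DOCROOT/cgi-bin"
ln -s "$WSM/htdocs"  "$DOCROOT/htdocs"

# Step d: mode 4511 (setuid, execute-only for group and other) on the
# command wrapper so it can run privileged SMIT operations
touch "$WSM/cgi-bin/wsm_cmd_exec"
chmod 4511 "$WSM/cgi-bin/wsm_cmd_exec"
```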


Unit 8 - Events

Let's Review Solutions


1. True or False?
HACMP 5.x supports a maximum of five pre- and five post-events per HACMP
event.
2. Which of the following are examples of primary HACMP events (select all that
apply)?
a. node_up
b. node_up_local
c. node_up_complete
d. start_server
e. Rg_up
3. When a node joins an existing cluster, what is the correct sequence for these
events?
a. node_up on new node, node_up on existing node, node_up_complete on new
node, node_up_complete on existing node
b. node_up on existing node, node_up on new node, node_up_complete on new
node, node_up_complete on existing node
c. node_up on new node, node_up on existing node, node_up_complete on existing
node, node_up_complete on new node
d. node_up on existing node, node_up on new node, node_up_complete on existing
node, node_up_complete on new node


Unit 8 - Events

Checkpoint Solutions
1. True or False?
HACMP event scripts are binary executables and cannot be
easily modified.
2. Which of the following runs if an HACMP event script fails? (select
all that apply)
a. Pre-event scripts.
b. Post-event scripts.
c. Error notification methods.
d. Recovery commands.
e. Notify methods.
3. How does an event script get started?
a. Manually by an administrator
b. Called by cluster manager
c. Called by a recovery program
d. Called by the topology services daemon
4. True or False?
Pre-event scripts are automatically synchronized.
5. True or False?
Writing error notification methods is a normal part of configuring a
cluster.


Unit 9 - Integrating NFS into HACMP

Checkpoint Solutions
1. True or False?*
HACMP supports all NFS export configuration options.
2. Which of the following are special considerations when using HACMP to NFS
export filesystems? (select all that apply)
a. NFS exports must be read-write.
b. Secure RPC must be used at all times.
c. A cluster may not use NFS Cross-mounts if there are client systems
accessing the NFS exported filesystems.
d. A volume group which contains filesystems which are NFS exported must
have the same major device number on all cluster nodes in the resource
group.
3. What does [/abc;/xyz] mean when specifying a directory to cross-mount?
a. /abc is the name of the filesystem which is exported and /xyz is where it
should be mounted
b. /abc is where the filesystem should be mounted and /xyz is the name of
the filesystem which is exported
4. True or False?**
HACMP's NFS exporting feature only supports clusters of two nodes.
5. True or False?
IPAT is required in resource groups which export NFS filesystems.
*/usr/es/sbin/cluster/exports must be used to specify NFS export options if
the default of "read-write to the world" is not acceptable.
**Resource groups larger than two nodes which export NFS filesystems do not
provide full NFS functionality (for example, NFS file locks are not preserved
across a fallover).
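Whichever way round the two fields of question 3 are read, the cross-mount specification is simply two paths separated by a semicolon, and can be split as in this small sketch (the field meanings are as described in the question's answer options):

```shell
# Split an HACMP cross-mount specification of the form "/abc;/xyz"
# into its two path fields using POSIX parameter expansion.
spec="/abc;/xyz"
first=${spec%%;*}    # text before the semicolon
second=${spec#*;}    # text after the semicolon
echo "first field:  $first"
echo "second field: $second"
```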

Unit 10 - Problem Determination and Recovery

Checkpoint Solutions
1. What is the most common cause of cluster failure?*
a. Bugs in AIX or HACMP
b. Cluster administrator error
c. Marauding space aliens from another galaxy
d. Cosmic rays
e. Poor/inadequate cluster design
2. True or False?
Event emulation can emulate all cluster events.
3. If the cluster manager process should die, what will happen to the
cluster node?
a. It continues running but without HACMP to monitor and protect it.
b. It continues running AIX but any resource groups will fallover.
c. Nobody knows because this has never happened before.
d. The System Resource Controller sends an e-mail to root and issues a "halt -q".
e. The System Resource Controller sends an e-mail to root and issues a
"shutdown -F".
4. True or False?
A non-IP network is strongly recommended. Failure to include a non-IP
network can cause the cluster to fail or malfunction in rather ugly ways.
*The correct answer is almost certainly "cluster administrator error" although
"poor/inadequate cluster design" would be a very close second.


Unit 11 - HACMP Migration

Checkpoint Solutions
1. True or False?
AIX 5.1 is compatible with HACMP 5.3.
2. To be able to do a node-by-node migration from HACMP 4.4.1
to HACMP 5.3 when AIX is at level 4.3.3, you should:
a. You can do node-by-node migration to HACMP 5.3
b. First upgrade to HACMP 4.5 and then upgrade to HACMP 5.3.
c. Nobody knows because this has never happened before.
d. None of the above
3. True or False?
When upgrading levels of HACMP, it is recommended that you upgrade one
node and then run the cluster for a week or two to see if any problems occur
before upgrading the other nodes.
4. List 3 features of HACMP 5.3
5. True or False?
Always make a new mksysb of the system and create a
snapshot before upgrading to a new level of HACMP
6. If the nodes of an HACMP 4.5 cluster running AIX 5.1 need to
be upgraded to HACMP 5.3 then which should be done first?
a. HACMP 4.5 to HACMP 5.1
b. AIX 5.1 to AIX 5.2
c. HACMP 4.5 to HACMP 5.3


Appendix C - IPAT via IP Replacement

Checkpoint Solutions
1. For IPAT via replacement (select all that apply)
a. Each service IP address must be in the same subnet as one
of the non-service addresses
b. Each service IP address must be in the same subnet
c. Each service IP address cannot be in any non-service address
subnet
2. True or False?
If the takeover node is not the home node for the resource group
and the resource group does not have a Startup policy of Online
Using Distribution Policy, the service IP address replaces the IP
address of a NIC with an IP address in the same subnet as the
subnet of the service IP address
3. True or False?
In order to use HWAT, you must enable and complete the
ALTERNATE ETHERNET address field in the SMIT devices
menu
4. True or False?
You must stop the cluster in order to change from IPAT via
aliasing to IPAT via replacement


Appendix B. Release Notes for HACMP 5.3


====================================================================
Release Notes for HACMP for AIX 5L version 5.3, 07/07/2005
====================================================================

These release notes contain the latest information about version 5.3 of HACMP for AIX 5L.

NEW/ENHANCED FUNCTIONALITY IN HACMP 5.3


- New Features that Enhance Ease of Use
- New Features that Enhance Performance and Security
- New Features that Enhance Geographic Distance Capability
- Discontinued Features

OTHER CHANGES OR ENHANCEMENTS


- Official Name of the Product: HACMP for AIX 5L
- The Cluster Manager Starts at Boot Time
- Mapping of Cascading, Rotating and Concurrent Resource Groups
- All Resources Defined to HACMP Must Have Unique Names
- Network-based Distribution Policy Replaced by Node-Based Distribution
- The clRGinfo Utility Displays Information about Secondary Instances of Resource
Groups
- Site-specific Priority Override Location (POL) for Resource Groups
- Customizing Inter-Site Resource Group Recovery
- Online Planning Worksheets Application Can be Run from the HACMP Installation
CD-ROM
- The rc.cluster Script Requires Changes to Enable Clinfo to Start
- HACMP Extended Distance (HACMP/XD)
- XD_data Network Requirements
- HACMP Smart Assist software for DB2, WebSphere, and Oracle


PRODUCT CONSTRAINTS
- Required BOS level
- APARs Required

INSTALLATION AND MIGRATION NOTES


- Installation and Migration Information Included in the Planning and Installation Guide
- The cluster.rpv.* filesets renamed
- Ensuring that clcomdES and clstrmgrES Get Started when the AIX 5L Installation
Assistant Runs
- The NFS Mount Format Required in HACMP 5.3
- Network types not supported in HACMP 5.3 flagged during migration from HACMP
4.5
- HACMP Configuration Database (ODM) Security Changes May Affect Migration
- Force Apply the Snapshot and Use Autocorrective Action of the HACMP cluster
Verification During an Upgrade to HACMP 5.3
- Inter-Site Selective Fallover Disabled During an Upgrade to HACMP 5.3
- The Same Version of SNMP Should be Used During Upgrades

POTENTIAL AIX 5L ISSUES


- Distributed SMIT/6000 (DSMIT)
- HACMP does not support VIPA
- Aggregate IP Interface is Not Supported with the SP Switch2

NOTES ON OTHER UTILITIES AND FACILITIES


- Online Planning Worksheets Requirements
- Duplicate IP Address Entry AIX 5L error can be ignored
- Fileset Updates in HACMP 5.3 and HACMP/XD for GLVM 5.3
- SNMP v3 Agents are Used with HACMP 5.2 and 5.3
- Oracle 9i Continues to Use the Event Management Daemon

USER DOCUMENTATION LOCATION AND TITLES


PRODUCT DIRECTORIES LOADED

PRODUCT MAN PAGES

=======================================
NEW/ENHANCED FUNCTIONALITY IN HACMP 5.3
=======================================

This section lists only the titles of the features added in this release.
For brief descriptions of each feature, see Chapter 8 HACMP 5.3: Summary of Changes
in the Concepts and Facilities Guide.
You can access the publications directly from the /pubs directory of the HACMP Installation
CD without having to first install the filesets.
The documentation is available in HTML and PDF formats.

New Features that Enhance Ease of Use


*************************************
- Cluster Verification has more checks and more automatic corrections
- Cluster Information Program (Clinfo) enhancements: No need to recompile clients
- SMS pager capability for text or cell phone notifications
- Smart Assist for WebSphere updated and improved for HACMP 5.3.
- Smart Assist for DB2 extends an existing HACMP configuration to include
monitoring and recovery support for DB2 Universal Database (UDB) Enterprise
Server Edition.
- Smart Assist for Oracle provides assistance to those involved with the installation of
Oracle Application Server 10g (9.0.4) (AS10g) Cold Failover Cluster (CFC) solution
on IBM AIX 5L v. 5.2 operating system.
- Online Planning Worksheets cluster definition saved in XML format for easy edits.
Cluster snapshot can also be saved to this format.
- Automatic accommodation of volume groups and filesystems created with the
Veritas Volume Manager (VxVM) and VxFS using Veritas Foundation Suite v. 4.0.


New Features that Enhance Performance and Security


**************************************************
- You can configure cluster-wide resource group location dependencies between
resource groups. You can keep specified resource groups online on the same node,
at the same site, or on different nodes.
- You can configure a distribution preference for the IP service aliases. This enables
the placement of IP labels on the same interface as the persistent IP label, or on
other interfaces. (This function enables easier configuration for network firewall
requirements).
- New architecture for communication between Clinfo and the Cluster Manager
improves both performance and usability.
- Improved WebSMIT Security.

New Features that Enhance Geographic Distance Capability


********************************************************
- Greater control of resource groups with replicated resources (resources replicated
across sites using HACMP/XD for GLVM, HAGEO or ERCMF PPRC).
HACMP automatically manages both primary (active) instances of the resources
and the secondary (backup) instances. You have greater control of site
management policies and can move resource groups with replicated resources
across sites.

Discontinued Features and Utilities


***********************************
- The clsmuxpd daemon no longer exists as a standalone daemon in HACMP 5.3.
The SNMP functions, previously performed by clsmuxpd, are integrated into the
Cluster Manager. The Cluster Information Manager, Clinfo, now only obtains data
from SNMP when requested; it no longer obtains the entire cluster configuration at
startup. Some API routines have been eliminated or changed. This may require you
to recompile any applications you have built using the previously available Clinfo
API. For more information, see the Clients Guide.
- The command line utilities cldiag and clverify are removed. All functionality is
available from SMIT in HACMP 5.3.
- If you are upgrading from a pre 5.2 release, manual reconfiguration of user-defined
events is required. HACMP 5.2 and 5.3 interact with the RSCT Resource Monitoring
and Control (RMC) subsystem instead of with the Event Management subsystem.


- If you are upgrading from a pre 5.2 release, the cllockd or cllockdES (the Cluster
Lock Manager) is no longer supported as of HACMP 5.2.
During node-by-node migration, it is uninstalled. Installing HACMP 5.2 or 5.3
removes the Lock Manager binaries and definitions.
- The network-based distribution policy for resource groups is removed. See also the
section: Network-based Distribution Policy Replaced by Node-Based Distribution.

=============================
OTHER CHANGES OR ENHANCEMENTS
=============================

Official Name of the Product: HACMP for AIX 5L
**********************************************
HACMP 5.3 has the following characteristics:
- Includes all the features of HACMP 5.2 and ESCRM 4.5, in addition to the new
features in HACMP 5.3.

NOTE: Prior to HACMP version 5.1, HACMP for AIX included four features: HAS and CRM
(with core filesets named cluster.base*); and ES and ESCRM (with core filesets named
cluster.es*). Starting with HACMP 5.1, the HAS, CRM and ES features are no longer
shipped, and the ESCRM feature is now called HACMP. That is, HACMP 5.3 refers to the
ESCRM feature of HACMP.

The Cluster Manager Starts at Boot Time
***************************************
In HACMP 5.3, the init process starts the Cluster Manager daemon automatically at boot
time. The Cluster Manager must be running before any cluster services can start on a
node.
This enables SNMP clients, such as Clinfo, to obtain information from nodes at all times
(even if they are currently down).

Since the Cluster Manager daemon is now a long running process, you cannot use lssrc -s
clstrmgrES to determine the state of the cluster.
Instead, we recommend the following alternative methods:
1. Use /usr/es/sbin/cluster/utilities/clcheck_server grpsvcs.

Copyright IBM Corp. 1998, 2005 Appendix B. Release Notes for HACMP 5.3 B-5
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

If the grpsvcs daemon is running, then the Cluster Manager has begun to process
events.
2. Use the SMIT path Problem Determination Tools > View Current State to view the state
of the cluster.

Note that the Cluster Manager is never permanently stopped: even if it is stopped (for
example, during a forced down event), the Cluster Manager process is automatically
restarted. It can be in one of several states, as displayed by executing the command:
lssrc -ls clstrmgrES
ST_INIT (HACMP is configured but no events have run)
ST_NOTCONFIGURED (HACMP has yet to be configured)
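A small sketch of method 2's command-line equivalent (illustrative only — the sample output below stands in for real daemon output, and assumes the daemon reports a "Current state:" line as shown above):

```shell
# Illustrative: extract the state token from "lssrc -ls clstrmgrES" output,
# assuming the daemon reports a line of the form "Current state: ST_...".
# A sample line stands in for real output here.
sample_output='Current state: ST_STABLE'
state=$(printf '%s\n' "$sample_output" | sed -n 's/^Current state: //p')
echo "$state"
```

On a live node, you would pipe the real `lssrc -ls clstrmgrES` output through the same `sed` filter.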

Mapping of Cascading, Rotating and Concurrent Resource Groups
*************************************************************
Upon upgrading to HACMP 5.3 from 4.5 or 5.1, your predefined cascading, rotating and
concurrent resource groups continue to function in the same manner as in previous
releases. However, the following terms are no longer used:
- custom resource groups
- cascading and rotating resource groups
- cascading without fallback
- inactive takeover.
Upon upgrading, HACMP automatically maps the predefined resource groups
characteristics to the startup, fallover and fallback policies that describe their behavior in
HACMP 5.3.

For a complete table of how pre-5.2 resource group characteristics are mapped to the
combinations of startup, fallover and fallback policies available in HACMP 5.3, see chapter
10: Upgrading an HACMP Cluster in the Planning and Installation Guide.
NOTE: If you are upgrading from HACMP 5.2 to HACMP 5.3, then there is no conversion of
resource groups (they already use startup, fallover and fallback policies).

All Resources Defined to HACMP Must Have Unique Names
*****************************************************
All resources defined to HACMP must have unique names.


This is now enforced by SMIT. It is suggested that the names of resources relate to the
application they serve, as well as any corresponding device, such as:
DB2_Filesystem1 or Websphere_service_address.

Network-Based Distribution Policy Replaced by Node-Based Distribution
*********************************************************************
In releases prior to HACMP 5.3, you could specify the Startup Policy for resource groups to
be Startup Using Distribution Policy, and in addition to that, the distribution policy could be
set to be either network-based or node-based.
Starting with HACMP 5.3, when the upgrade to HACMP 5.3 is completed on all nodes, all
resource groups that used network-based distribution begin to use node-based
distribution by default. That is, upon node startup, HACMP activates only one resource
group with this policy per node.

NOTE: The SMIT screen for specifying the distribution policy is removed, since the only
distribution policy that HACMP 5.3 utilizes is node-based distribution. To achieve the
behavior similar to network-based distribution, use the Configure Service IP
Labels/Addresses Distribution Preferences panel in SMIT.

The clRGinfo Utility Displays Information about Secondary Instances of Resource Groups
**************************************************************************************

For those resource groups that are configured to use nodes at more than one site in their
nodelist, the clRGinfo utility now displays information about the secondary (backup)
instance of the resource group, in addition to information about primary instances.

Use either clRGinfo -v or clRGinfo -p to view this information.

Site-specific Priority Override Location (POL) for Resource Groups
******************************************************************
In addition to the node-specific Priority Override Location (POL), you can now specify a
site-specific POL for a resource group.


You configure this setting when you use the SMIT path HACMP Resource Group and
Application Management > Move a Resource Group to Another Node/Site > Move
Resource Groups to Another Site, and select another site.

The site-specific Priority Override Location (POL) specifies the site where you want to
move the resource group.
This site POL setting overrides the settings for the nodelist for the resource group, that is,
HACMP attempts to move the resource group to the available nodes at another site, if
these nodes are included in the nodelist for this resource group. Site-specific POL
overrides node POL, if node POL is specified for the node at the local site.

Customizing Inter-Site Resource Group Recovery
**********************************************
In addition to the customizing of recovery for specific resources (available in HACMP prior
to version 5.3), you can now customize the resource group recovery in clusters with sites.

In SMIT, this function is located in Extended Configuration > Extended Resource
Configuration > Customize Resource Group and Resource Recovery > Customize
Inter-site Resource Group Recovery.

The default setting is Fallover, that is, for a specified resource group, HACMP will
automatically use Selective Fallover to move the resource group to another node, at either
site. You can change this option and then HACMP will not automatically use selective
fallover for a particular resource group in case of failure. It notifies you when the resource
group goes into the error state.
Note that even if the Cluster Manager is not enabled to initiate a selective fallover across
sites, it will still move the resource group within a site if a node_down or node_up event
occurs. You can also manually move a resource group between sites.

Also, see the section "Inter-Site Selective Fallover After an Upgrade to HACMP 5.3".

Online Planning Worksheets Application Can be Run from the HACMP CD-ROM
***********************************************************************
You can run the Online Planning Worksheets Application directly from the HACMP
CD-ROM.


RUNNING THE APPLICATION FROM AN AIX SYSTEM

1. Make sure that the path to the Java 2 Runtime Environment is in your PATH
environment variable.
On AIX 5.1 by default this is /usr/java130/bin.
On AIX 5.2 by default this is /usr/java131/bin.
On AIX 5.3 by default this is /usr/java140/bin.

2. Mount the installation media by using the following command:

mount -vcdrfs -p -r cd_location mount_directory

Where cd_location is the location of the CD, and mount_directory is the name of the
directory to be mounted.
For example:
mount -vcdrfs -p -r /dev/cd0 /mnt

3. Run the application, by executing the following command:

java -jar <mount_directory>/olpw/worksheets.jar

where <mount_directory> is the directory specified in step 2.

RUNNING THE APPLICATION FROM A WINDOWS SYSTEM

1. Make sure that the Java2 Runtime Environment version 1.3.0 or greater is installed on
your system.

2. Run the application by executing the worksheets.bat file:

From a command prompt, execute the worksheets.bat file.

or


On your desktop or in Windows Explorer, double-click the worksheets.bat file.

On systems that have an association configured for .jar files, you can double-click the
worksheets.jar file.

NOTE: Do not close the command window used to launch the application.
Closing this window closes the application.

The rc.cluster Script Requires Changes to Enable Clinfo to Start
****************************************************************

If you use the /usr/es/sbin/cluster/etc/rc.cluster script to start the Clinfo utility on client
nodes, HACMP issues an error (/usr/es/sbin/cluster/etc/ha_odm_constants: not found) and
Clinfo is not started. To enable Clinfo to start, comment out the line
. /usr/es/sbin/cluster/etc/ha_odm_constants in the rc.cluster script, or start the Clinfo utility
manually.
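The workaround above can be scripted; the following is a hedged sketch (test it on a copy first — the rc.cluster path is the one quoted in this note, and the exact line format in your installed script may differ):

```shell
# Sketch of the suggested workaround: comment out the
# ". .../ha_odm_constants" line so rc.cluster can start Clinfo on clients.
RC=/usr/es/sbin/cluster/etc/rc.cluster   # path quoted in the notes
cp "$RC" "$RC.orig"                      # keep the original for rollback
# "&" in the replacement re-inserts the matched line after the "# " marker.
sed 's|^\. /usr/es/sbin/cluster/etc/ha_odm_constants|# &|' "$RC.orig" > "$RC"
```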

HACMP Extended Distance (HACMP/XD)
**********************************
HACMP/XD is a function of HACMP that requires separate installation.
It provides HACMP/XD for Geographic Logical Volume Manager (GLVM), HACMP/XD for
ESS PPRC, and HACMP/XD for HAGEO.
The HACMP/XD features are an extension to HACMP and require that you have HACMP
installed first.
For more information, see:
HACMP/XD for Geographic Logical Volume Manager: Planning and Administration
Guide
HACMP/XD for ESS PPRC: Planning and Administration Guide
HACMP/XD for HAGEO Technology: Concepts and Facilities Guide
HACMP/XD for HAGEO Technology: Planning and Administration Guide

XD_data Network Requirements
****************************


An XD_data mirroring network in HACMP is required for an HACMP/XD for GLVM
configuration. You configure an XD_data network in HACMP SMIT just as any other IPAT
via IP Replacement type network. That is, it is recommended that you configure standby
(backup) IP labels for this network.

However, XD_data does not support any type of IPAT (IP Address Takeover), including
IPAT via IP replacement. In other words, it does not take over the IP address of ANOTHER
node. It does support local IP swapping.

Also, the XD_data network requires that you configure a node-bound service IP label for it.

HACMP Smart Assist Software for WebSphere, DB2, and Oracle
**********************************************************
HACMP Smart Assist features for WebSphere, DB2, and Oracle are three separately
installable features. HACMP Smart Assists help you to quickly configure either a two-node
hot standby or mutual takeover cluster to make your DB2 instances or Oracle or
WebSphere applications highly available with HACMP.
If Smart Assist filesets are installed, they also provide a native Smart Assist SMIT interface
that enables you to easily configure a desired cluster configuration with WebSphere, DB2
or Oracle components.

One release_notes file covers all three Smart Assists.

Location of the Release Notes for HACMP Smart Assist software:
/usr/es/sbin/cluster/release_notes_assist

===================
PRODUCT CONSTRAINTS
===================

The following product constraints apply:

Required BOS Level
******************

For detailed information on software requirements, see the migration and upgrade chapters
in the Planning and Installation Guide.

The specific HACMP 5.3 requirements for AIX 5.2 and AIX 5.3 are:

AIX 5L V5.2 with the 5200-04 Recommended Maintenance package (or later
maintenance levels) with RSCT version 2.3.6 or higher.
AIX 5L V5.3 with the 5300-02 Recommended Maintenance package (or later
maintenance levels) with RSCT version 2.4.2 or higher.
The RSCT filesets delivered with AIX 5L must be installed.

NOTE: HACMP v.5.3 is not supported on AIX 5L 5.1.

APARs Required
**************
The following APARs are required for proper operation of HACMP version 5.3.
This list represents the most current information available at this writing.
Refer to the Announcement materials for a list of the latest known fixes.

For all HACMP 5.3 customers:

APAR IY71500
*************
This is an APAR for AIX 5L 5.3. The fix enables conversion of a volume group that contains
no logical volumes from non-concurrent capable to concurrent capable using the chvg -C
command.

APAR IY72928
************


Apply this APAR to enable the DARE (Dynamic Reconfiguration) functions for resource
groups with dependencies, or resource groups that have replicated resources and are
configured in clusters with sites.

Apply the following APARs to ensure RSCT works as required:
IY72649 for AIX 5.2
IY72650 for AIX 5.3

================================
INSTALLATION AND MIGRATION NOTES
================================

Installation and Migration Information Included in the Planning and Installation Guide
***************************************************************

For updated installation and migration information, see:
Chapter 9 Installing HACMP Software (for new installations)
Chapter 10 Upgrading an HACMP Cluster (for upgrading)
in the Planning and Installation Guide.

The cluster.rpv.* filesets renamed
**********************************

The HACMP filesets cluster.rpv and cluster.rpv.msg.en_US (needed if you install
HACMP/XD for GLVM) are renamed as follows:

cluster.rpv is renamed to glvm.rpv
cluster.rpv.msg.en_US is renamed to glvm.rpv.msg.en_US

The cluster.xd.glvm filesets remain as they are in the previous HACMP/XD version.

glvm.rpv.client Remote Physical Volume Client
glvm.rpv.server Remote Physical Volume Server


glvm.rpv.util Geographic LVM Utilities

glvm.rpv.msg.en_US RPV Messages - U.S. English

Ensuring that clcomdES and clstrmgrES Get Started when the AIX 5L Installation Assistant Runs
*********************************************************************************************
Upon installation, AIX 5L configures the Installation Assistant (IA) to run from /etc/inittab.
Once you reboot the system after installation, the IA runs, prompting you for information
input at the system console, such as time and date. Until the Installation Assistant receives
input, no other processes are run from /etc/inittab, including the HACMP processes, such
as the cluster communications daemon (clcomdES), and the cluster manager daemon
(clstrmgrES), both of which start from /etc/inittab.

When you do not respond to the IA prompt (for instance, when the installation or reboot is
performed remotely), no other processes are started from /etc/inittab, and you may notice
that certain HACMP daemons have not been started. HACMP cannot control this situation.

To ensure that these and other daemons are started from /etc/inittab on nodes that you
rebooted after installing AIX 5L, respond to the AIX 5L Installation Assistant prompt and
manually enter the information at the system console. Alternatively, you can remove the IA
entry from /etc/inittab. This will enable the process of reading the /etc/inittab to proceed and
will start the clcomdES and the clstrmgrES daemons.

The NFS Mount Format Required in HACMP 5.3
******************************************
NOTE: This note is important if you are upgrading from HACMP 4.5. In HACMP 5.2 and
5.3, only the following format for cross-mounting filesystems is supported:
NFS_Mount_Point;Local_Mount_Point
Example:
/nfsmount1/fs1;/mnt/fs1 /nfsmount2/fs2/;/mnt/fs2 ...
where /mnt/fs1 is the filesystem mount point, and


/nfsmount1/fs1 is the directory where the NFS share will be mounted.
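The two halves of each cross-mount entry can be separated on the semicolon with standard shell parameter expansion; a minimal illustration, using the example pair from this note:

```shell
# Each cross-mount entry is "NFS_Mount_Point;Local_Mount_Point";
# split the two halves on the semicolon.
pair='/nfsmount1/fs1;/mnt/fs1'
nfs_mp=${pair%%;*}     # where the NFS share will be mounted
local_mp=${pair##*;}   # the filesystem mount point
echo "$nfs_mp $local_mp"
```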

This format was introduced in HACMP 4.3.1 and was used along with the old format (where
the NFS mount point was not specified).
Starting with HACMP 5.2, the following changes apply:
If you configure new filesystems or change existing resource groups, only the new
format of cross-mounting is allowed.
If you upgrade from previous releases to HACMP 5.3, the old format for configuring
cross-mounted filesystems is still allowed. However, HACMP issues an error and
requires changing the format if you attempt to change the resource group, or create a
new filesystem after an upgrade.
A warning message notifies you about this change during the installation process, after
HACMP runs the cluster verification process, and during the NFS mount process.

Network types not supported in HACMP 5.3 flagged during migration from HACMP 4.5
********************************************************************************
Some network types are supported in HACMP (HAS) 4.5 but not in HACMP/ES 4.5 or
HACMP 5.1 and higher. If any of the following network types are detected during a
migration from HACMP (HAS) 4.5 to HACMP 5.3, the migration will fail during the
pre-install checks of cluster.es.server.rte:

* SOCC
* SLIP
* 802_ether (Ethernet Protocol 802.3)
* IP (Generic IP)
* FCS (Fiber Channel Switch)

An error message will alert you that the unsupported network types are detected in the
configuration and must be removed or redefined.

The only supported network types are Ether, Token, FDDI, HPS, ATM, RS232, TMSCSI,
TMSSA, DISKHB.


HACMP Configuration Database (ODM) Security Changes May Affect Migration
************************************************************************
NOTE: If you are upgrading from a release prior to HACMP 5.2

To further improve security, the HACMP Configuration Database (ODM) has the following
enhancements:
Ownership. All HACMP ODM files are owned by user root and group hacmp.
In addition, all HACMP binaries that are intended for use by non-root users are also
owned by user root, group hacmp.
Permissions. All HACMP ODM files, except for the hacmpdisksubsystem file with 600
permissions, are set with 640 permissions (that is, readable by user root and group
hacmp, writable by user root).
All HACMP binaries that are intended for use by non-root users are installed with 2555
permissions (that is, readable and executable by all users, with the setgid bit turned on
so that the program runs as group hacmp).
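A quick way to sanity-check the described permissions is to read the `ls -l` mode string; a minimal sketch (the demo file path is hypothetical — on a cluster node you would point it at the HACMP ODM files instead):

```shell
# Illustrative check: a file with 640 permissions shows an "ls -l"
# mode string of -rw-r----- (readable by owner and group, writable
# by owner only), matching the ODM permissions described above.
f=/tmp/odm_perm_demo      # hypothetical demo file
touch "$f" && chmod 640 "$f"
mode=$(ls -l "$f" | cut -c1-10)
[ "$mode" = '-rw-r-----' ] && echo "640 permissions as expected"
```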

During the installation, HACMP creates the group hacmp on all nodes if it does not
already exist. By default, group hacmp has permission to read the HACMP ODMs, but
does not have any other special authority. For security reasons, it is recommended not to
expand the authority of group hacmp.

If you use programs that access the HACMP ODMs directly, you may need to rewrite them
if they are intended to be run by non-root users: All access to the ODM data by non-root
users should be handled via the provided HACMP utilities.

In addition, if you are using the PSSP File Collections facility to maintain the consistency of
/etc/group, the new group hacmp that is created at installation time on the individual
cluster nodes may be lost when the next file synchronization occurs.

There are two possible solutions to this problem. Before HACMP 5.3 is installed:
1) Turn off PSSP File Collections synchronization of /etc/group, or
2) Ensure that group hacmp is included in the master /etc/group file and ensure that the
change is propagated to all cluster nodes.
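For solution 2, a hedged check that the hacmp entry is present in a group file (the parameterized path is for illustration; on a node you would check /etc/group and the PSSP master copy):

```shell
# Confirm the hacmp group entry survives in a group file after a
# file-collection sync; grep exit status 0 means the entry exists.
GROUP_FILE=${1:-/etc/group}
if grep -q '^hacmp:' "$GROUP_FILE"; then
    echo "group hacmp present"
else
    echo "group hacmp missing - recreate it before synchronizing" >&2
fi
```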

Force Apply the Snapshot and Use Autocorrective Action of HACMP Cluster
Verification Utility During an Upgrade to HACMP 5.3
***********************************************************************
If you upgrade to HACMP 5.3 using the previously created snapshot, the migration to the
new version may get stalled if the cluster configuration in the snapshot is not 100%
accurate according to the verification check. (Also, dynamic cluster reconfiguration does
not work if verification finds any errors).

This is due to the fact that the cluster verification utility in HACMP 5.3 checks for a wider
variety of issues than in previous releases. The issues that HACMP now identifies as
incorrect could have been misconfigured in the cluster configuration in the previous
releases without necessarily breaking the cluster.

If you apply a snapshot and see an error, review the log files to check whether it can be
automatically corrected in HACMP 5.3. A list of errors for which HACMP's verification utility
takes corrective actions is included in Chapter 6 of the Administration Guide.

If the error is included in the list, to continue an upgrade process, Force Apply the snapshot
and run the cluster synchronization and verification process, with the option Automatically
Correct Errors during the Cluster Verification set to Interactively.
NOTE: Be careful when Force Applying the snapshot: only use this option if you know that
the error you encountered can be automatically corrected.

Also, you may see some warnings and errors that will cause the upgrade process via a
snapshot to fail (there is no automatic corrective action for them, and they do not break the
cluster, so Force Applying the snapshot is safe). In these cases, Force Apply the snapshot
to continue an upgrade process to HACMP 5.3.

Inter-Site Selective Fallover After an Upgrade to HACMP 5.3
***********************************************************

NOTE: While you are upgrading to HACMP 5.3, selective fallover of resource groups
between sites is disabled. This is the pre-5.3 release behavior for resource groups with a
non-IGNORE site management policy.
A particular instance of a resource group can fall over within one site, but cannot move
between sites.
During migration, if no nodes are available on the site where the affected instance resides,
that instance goes into ERROR or ERROR_SECONDARY state. It does not stay on the
node where it failed.


This behavior applies to both primary and secondary instances in releases prior to HACMP
5.3.

For a new install of HACMP 5.3, inter-site resource group recovery is instead enabled by
default. HACMP will automatically attempt to move the resource group to another node
(first at the local site and then at the remote site), in case of a resource failure.

However, if you migrated from HACMP 5.2, the inter-site resource group recovery is
disabled. After the migration to HACMP 5.3 is complete, you can change the default
behavior and enable HACMP to move a resource group between sites.
Use the HACMP SMIT path Extended Configuration > Extended Resource Configuration >
Customize Resource Group and Resource Recovery > Customize Inter-Site Resource
Group Recovery.

The Same Version of SNMP Should be Used During Upgrades
*******************************************************

We recommend that when you upgrade to HACMP 5.3, you do so from/to the same
version of SNMP; otherwise your SNMP-based applications may not function correctly.
Once the upgrade has completed, you can switch to a different version of SNMP, if needed.

For example, if you are migrating from an environment using SNMPv1, and you are
upgrading to AIX 5L 5.3, then before upgrading to HACMP 5.3, run the following command:

stopsrc -s snmpd

See also the note "SNMP version 3 Agents are Used with HACMP 5.3" later in this
document.

====================
POTENTIAL AIX ISSUES
====================

Distributed SMIT/6000 (DSMIT)
*****************************


HACMP offers a suite of SMIT commands to manage the operation of HACMP cluster
environments. As part of this function, the necessary logic is included to control and
synchronize the execution of the commands on all HACMP cluster nodes.

DSMIT provides a similar distribution of SMIT function for networked RS/6000
processors. Because of the potential conflict where these two products coexist on the
same systems, do not use DSMIT for HACMP cluster operations.

HACMP does not support VIPA
***************************
HACMP does not support the Virtual IP Address (VIPA) facility of AIX 5L.

The pseudo IP address provided by VIPA cannot be reliably monitored by RSCT or
HACMP.
The failure of the underlying devices that are used to service the pseudo device cannot be
coordinated with HACMP recovery processing.
VIPA can be configured and used outside of HACMP, but when using these facilities on an
HACMP cluster node ensure that they are configured on the subnets that are completely
different from the subnets used by HACMP. If any VIPA addresses are configured on the
same subnet that is used for an HACMP network, HACMP may not be able to properly
detect failures and manage recovery.

Aggregate IP Interface is Not Supported With the SP Switch2
***********************************************************
With the SP Switch2, you have css0 and css1; PSSP allows you to configure an Aggregate
IP interface. This is an ml0 interface with its own IP address. This ml0 interface is not
supported by HACMP.

=======================================
NOTES ON OTHER UTILITIES AND FACILITIES
=======================================
The following sections describe notes for product utilities:


Online Planning Worksheets Requirements
***************************************
The online planning worksheet program requires version 1.3.0 of the J2RE (Java 2
Runtime Environment). It is supported on AIX 5L v.5.1 or higher, and MS Windows 95/98,
NT, and 2000. If you run the program on MS Windows, check to see that the minimum level
of J2RE is installed.
You may need to install a Java virtual machine in order to use the online worksheet
program.

In earlier releases, the OLPW files (Readme.txt, worksheets.html, worksheets, and
worksheets.bat) were shipped in /usr/es/sbin/cluster/samples/worksheets/ in the
cluster.adt.es.client.samples.demos fileset.

In HACMP 5.3, these files are shipped in the /usr/es/sbin/cluster/worksheets directory in
the cluster.es.worksheets fileset.

Note: Importing the HACMP definition is supported only from the cluster nodes (and not, for
instance, from a Windows 2000 machine).

In addition, you can also create a cluster definition file from an active HACMP cluster.

To create a cluster definition file from an active HACMP cluster, export the definition file
from SMIT (Extended Configuration > Export Definition File for Online Planning
Worksheets), and open the file from the Online Planning Worksheets application.

Also, if you choose to create a cluster definition as XML, ensure that constructions of the
type <tag/> are written as follows: <tag></tag>. This affects such entities as
<VALIDATED/>.
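The rewrite described above can be done mechanically; a minimal sketch (assuming simple self-closing tags with no attributes, as in the <VALIDATED/> example):

```shell
# Expand self-closing XML tags such as <VALIDATED/> into the
# <tag></tag> form the application expects. The backreference \1
# reuses the captured tag name for the closing tag.
printf '%s\n' '<VALIDATED/>' |
    sed 's|<\([A-Za-z_][A-Za-z0-9_]*\)/>|<\1></\1>|g'
```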

Duplicate IP Address Entry AIX 5L error can be ignored
******************************************************

On networks configured to use IP Aliasing for IP Address Takeover (also known as IPAT via
Aliasing), HACMP will briefly define the Service IP address on both the old (failed) network
interface and the new (takeover) network interface in order to properly preserve network
routes. The Service IP address is then removed from the old network interface. During the
brief time that the service IP address exists on both network interfaces, AIX 5L may detect
this situation and add an entry in the error log similar to:

FE2DEE00 0510092505 P S SYSXAIXIF DUPLICATE IP ADDRESS DETECTED IN THE NET

This condition is only temporary (during the IPAT operation) and the error log entry can
safely be ignored.
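Such entries can be matched by their error identifier; an illustrative filter (the sample line reproduces the entry shown above — on a live node you would read real errpt output instead):

```shell
# Count error-log lines that start with the identifier of the benign
# duplicate-address entry; a nonzero count confirms the entry was logged.
sample='FE2DEE00 0510092505 P S SYSXAIXIF DUPLICATE IP ADDRESS DETECTED IN THE NET'
printf '%s\n' "$sample" | grep -c '^FE2DEE00'
```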

Fileset Updates in HACMP and HACMP/XD 5.3
*****************************************

NOTE: The filesets cluster.rpv and cluster.rpv.msg.en_US are renamed. For details, see
the Installation and Migration Notes section earlier in this file.

SNMP version 3 Agents are Used with HACMP 5.3
*********************************************
HACMP 5.3 uses SNMP version 3 agents by default. This makes it compatible with the
SNMP agents version used in AIX 5.2 and 5.3. In general, as of AIX 5L 5.2 and 5.3,
SNMPv3 is the default.

We recommend that when you upgrade to HACMP 5.3, you do so from/to the same
version of SNMP.

For details, see the note "The Same Version of SNMP Should be Used During Upgrades"
earlier in this document.

Oracle 9i Continues to Use the Event Management Daemon
******************************************************
Oracle 9i continues to use the Event Management daemon to access network status
through the EMAPI.

======================================
USER DOCUMENTATION LOCATION AND TITLES


======================================

HACMP 5.3 documentation in HTML and PDF formats
***********************************************
Documentation for HACMP for AIX 5L, version 5.3 is supplied in HTML and PDF formats.
You may want to install the documentation before doing the full install of the product, in
order to read the chapters on installation procedures, or the description of new features in
HACMP 5.3.
You can read the publications directly from the /pubs directory of the HACMP Installation
CD without having to first install the filesets.

Viewing and installing the documentation files
**********************************************
You can view the HTML and PDF documents before installing.

Insert the HACMP installation CD and find the documents under the top-level /pubs
directory. See step 7 below for an explanation of the files contained in the HTML directory.

Take the following steps to install the documentation:

1. At the command line, enter: smit install_selectable_all
SMIT asks for the input device/directory for software.

2. Select the CD ROM drive from the picklist and press Enter.

3. On the next SMIT screen with the cursor on Software to Install, press the F4 key.

4. SMIT lists the image cluster.doc.en_US fileset with its subdirectories:

5.3.0.0 HACMP PDF Documentation


5.3.0.0 HACMP Web-based HTML Documentation

NOTE: Installing all of the documentation requires about 27 MB of space in the /usr
filesystem. (PDF files = 15 MB, HTML files = 12 MB.)


5. The documentation is installed in the following directory:

/usr/share/man/info/en_US/cluster/HAES

6. For the PDF files, each book directory contains the book's PDF file.

7. For the HTML files, each book directory contains a number of files: separate .html files
for each chapter, .gifs, and a .css file for each book. The MAIN FILE (the one you
should click on) for each book is an .htm file with a filename that begins with ha and is
related to the name of the book (for example, ha_concepts.htm).
When you click on this main file, the book opens in the browser window with a Contents
frame that allows you to navigate easily to any chapter.

See the Document Titles and Filenames list below for exact filenames for the book
titles (both .htm and .pdf).

Document Titles and Filenames


*****************************
The titles in the HACMP for AIX 5L, Version 5.3, documentation set are:

HACMP Version 5.3: Concepts and Facilities Guide (filename = ha_concepts)


HACMP Version 5.3: Planning and Installation Guide (filename = ha_plan_install)
HACMP Version 5.3: Administration Guide (filename = ha_admin)
HACMP Version 5.3: Troubleshooting Guide (filename = ha_troubleshoot)
HACMP Version 5.3: Programming Client Applications (filename = ha_clients)
HACMP Version 5.3: Glossary (filename = ha_glossary.pdf and glossary.html)
HACMP/XD for ESS PPRC: Planning and Administration Guide
(filename = ha_xd_pprc)
HACMP/XD for GLVM: Planning and Administration Guide (filename = ha_xd_glvm)


==========================
PRODUCT DIRECTORIES LOADED
==========================
Use the following command to determine the exact files loaded into product directories
when installing HACMP for AIX 5L, version 5.3:

lslpp -f cluster*

The LPP modifies the following files:

/etc/inetd.conf
/etc/inittab
/etc/group
/etc/objrepos/SRCnotify
/etc/objrepos/SRCsubsvr
/etc/objrepos/SRCsubsys
/etc/rc.net
/etc/services
/etc/snmpd.conf
/etc/snmpd.peers
/etc/syslog.conf
/var/spool/cron/crontabs/root

=================
PRODUCT MAN PAGES
=================

Man pages for HACMP commands and utilities are installed in the following directory:

/usr/share/man/cat1

No man pages were removed for HACMP 5.3.


The following man pages are new in HACMP 5.3:

/usr/share/man/cat1/clmixver.1
/usr/share/man/cat1/clwahs_import.1
/usr/share/man/cat1/cl_opsconfig.1

Execute man [command-name] to read the information.



Appendix C. IPAT via IP Replacement

What This Unit Is About


This unit describes the HACMP IP Address Takeover via IP
replacement function.

What You Should Be Able to Do


After completing this unit, you should be able to:
Explain and configure IP Address Takeover (IPAT) via IP
replacement

How You Will Check Your Progress


Accountability:
Checkpoint
Machine exercises

References
SC23-4867-05 HACMP for AIX: HACMP Master Glossary
SC23-4864-06 HACMP for AIX: Concepts and Facilities Guide
SC23-4861-06 HACMP for AIX: Planning and Installation Guide
SC23-4862-06 HACMP for AIX: Administration Guide
SC23-5177-00 HACMP for AIX: Troubleshooting Guide


Unit Objectives
After completing this unit, you should be able to:
Explain and set up IP Address Takeover (IPAT) via IP
replacement

Copyright IBM Corporation 2005

Figure C-1. Unit Objectives AU546.0

Notes:

IPAT via IP Replacement Configuration
Define each network's boot IP addresses in the AIX ODM.
Each interface IP address on a given node must be in a different logical IP subnet*
and there must be a common subnet among the nodes
Define these addresses in the /etc/hosts file and configure them in HACMP topology
Define service IP addresses in /etc/hosts and HACMP resources
The address must be in the SAME subnet as a common interface subnet
HACMP configures them to AIX as required

Before starting the application resource group

9.47.10.1 (ODM) 9.47.11.1 (ODM) 9.47.10.2 (ODM) 9.47.11.2 (ODM)

* See earlier discussion of heartbeating and failure diagnosis for explanation of why


Figure C-2. IPAT via IP Replacement Configuration AU546.0

Notes:

Requirements
Keep the following items in mind when you configure a network for IPAT via IP
replacement:
- There must be at least one logical IP subnet which has a communication interface
(NIC) on each node. (In HACMP 4.5 terminology, these were called boot adapters.)
- Each service IP address must be in the same logical IP subnet as one of the
non-service addresses. Contrast with IPAT via IP aliasing, where service addresses
are required to NOT be in a boot subnet.
- If you have more than one service IP address, they must all be in the same subnet.
The reason for this will become clear when we discuss what happens during a
takeover; see IPAT via IP Replacement After a Node Fails later in this appendix.
- None of the other non-service addresses may be in the same subnet as the service
IP address (this is true regardless of whether IPAT via IP replacement is being used


as the NICs on each node are required to be on different IP subnets in order to
support heartbeating).
- All network interfaces must have the same subnet mask.

IPAT via IP replacement subnet rules example


Each service IP address must be in one, and only one, of the non-service subnets.
All service IP addresses must be in the same subnet.
Each non-service IP address on each node must be in a separate subnet.
For example, in a cluster with one network using IPAT via replacement, where each
node has two communication interfaces and two service IP labels, the network will
require two subnets:
Node name        NIC   IP Label   IP Address
node1            en0   n1-if1     192.168.10.1
node1            en1   n1-if2     192.168.11.1
node2            en0   n2-if1     192.168.10.2
node2            en1   n2-if2     192.168.11.2
Service address  -     appA-svc   192.168.10.22
Service address  -     appB-svc   192.168.10.25

subnet           IP labels
192.168.10/24    n1-if1, n2-if1, appA-svc, appB-svc
192.168.11/24    n1-if2, n2-if2
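These subnet rules can be checked mechanically. The following sketch (plain shell, using the example addresses above; the subnet24 helper is ours, not an HACMP tool, and it assumes a /24 mask as in this example) verifies that both service labels fall in one interface subnet that is common to both nodes:

```shell
# Helper (ours, not part of HACMP): the /24 subnet of a dotted-quad address.
subnet24() { echo "$1" | cut -d. -f1-3; }

# Boot (interface) addresses from the example table
n1_if1=192.168.10.1 ; n1_if2=192.168.11.1
n2_if1=192.168.10.2 ; n2_if2=192.168.11.2
# Service addresses
appA_svc=192.168.10.22 ; appB_svc=192.168.10.25

# Rule: all service IP addresses must be in the same subnet
[ "$(subnet24 $appA_svc)" = "$(subnet24 $appB_svc)" ] \
  && echo "service labels share subnet $(subnet24 $appA_svc)"

# Rule: that subnet must contain one interface on each node
[ "$(subnet24 $appA_svc)" = "$(subnet24 $n1_if1)" ] && echo "node1: n1-if1 matches"
[ "$(subnet24 $appA_svc)" = "$(subnet24 $n2_if1)" ] && echo "node2: n2-if1 matches"

# Rule: the interfaces on a given node must be in different subnets
[ "$(subnet24 $n1_if1)" != "$(subnet24 $n1_if2)" ] && echo "node1 interfaces differ"
```

On a real cluster the addresses would of course come from your planning worksheets rather than being hard-coded.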

IPAT via IP Replacement in Operation
When the resource group comes up on a node, HACMP replaces
a boot (ODM) IP label with the service IP label
It replaces the boot IP label on the same subnet if the resource
group is on its startup node or if the distribution startup
policy is used.
It replaces a boot IP label on a different subnet otherwise

After starting the application resource group

9.47.10.22 (service) 9.47.11.1 (ODM) 9.47.10.2 (ODM) 9.47.11.2 (ODM)


Figure C-3. IPAT via IP Replacement in Operation AU546.0

Notes:

Operation
When the resource group comes up on its home node, the resource group's service IP
address replaces the interface IP address of the NIC (AIX ODM) which is in the same
subnet as the service IP label (that is, the boot adapter in HACMP 4.x terminology).
Note that this approach implies that there cannot be two resource groups in the cluster
which both use IPAT via IP replacement and use the same node as their home node
unless their respective service IP addresses are in different subnets (in other words,
associated with different physical networks).
Also, since the service IP address replaces the existing IP address on the NIC, it is not
possible to have two or more service IP addresses in the same resource group which
are in the same IP subnet (as there will not be an adapter to assign the second service
IP address to).
When the resource group comes up on any node other than its home node, the
resource group's service IP address replaces the interface IP address of one of the


NICs which is not in the same subnet as the service IP address (this is primarily to allow
some other resource group to use the node as its home node).

IPAT via IP Replacement After an I/F Fails
If the communication interface being used for the service IP label
fails, HACMP swaps the service IP label with a boot (ODM) IP label
on one of the node's remaining available (that is, currently
functional) communication interfaces
The IP labels remain swapped when the failed interface recovers

NIC A NIC B
9.47.11.1 (ODM) 9.47.10.22 (service) 9.47.10.2 (ODM) 9.47.11.2 (ODM)


Figure C-4. IPAT via IP Replacement after an I/F Fails AU546.0

Notes:

Interface failure
If a communications interface (NIC A) which is currently assigned an IPAT via IP
replacement service IP address fails, then HACMP moves the service IP address to
one of the other communication interfaces (NIC B) on the same node (to one of the
standby adapters using HACMP 4.x terminology).
If there are no available (that is, functional) NICs left on the relevant network, then
HACMP initiates a fallover.

Interface swap
The failed communication interface (NIC A) is then reconfigured with the boot address
previously held by NIC B, as this allows the heartbeat mechanism to watch for when
the failed communication interface (NIC A) recovers.


IPAT via IP Replacement After a Node Fails


If the resource group's node fails, HACMP moves the resource
group to a new node and replaces an interface IP label with the
service IP label:
If the resource group is on its startup node or if the Startup policy is
distribution, it replaces the interface (ODM) IP label in the same subnet
Else it replaces an interface (ODM) IP label in a different subnet
Or fails if there isn't an available interface

9.47.10.2 (ODM) 9.47.10.22 (service)


Figure C-5. IPAT via IP Replacement after a Node Fails AU546.0

Notes:

Node failure
If the node currently responsible for a resource group that uses IPAT via IP replacement
fails, then HACMP initiates a fallover. When the resource group comes up on the
takeover node, the service IP addresses are assigned to NICs on the fallover node:
- Home node or Startup policy of Online Using Distribution Policy (rotate in
HACMP 4.x terminology)
If the takeover node is the home node for the resource group, or the resource group
has a Startup policy of Online Using Distribution Policy (rotate in HACMP 4.x
terminology), the service IP address replaces the IP address of a communication
interface (NIC) whose IP address is in the same subnet as the service IP address.
- Not the home node and not Online Using Distribution Policy
If the takeover node is not the home node for the resource group and the resource
group does not have a Startup policy of Online Using Distribution Policy, the


service IP addresses replace the IP addresses of a communications interface (NIC)


with an IP address in a different subnet than the subnet of the service IP address (a
standby adapter in HACMP 4.x terminology). This is primarily to allow some other
resource group to use the node as its home node.
Note: This explains why all service IP addresses must be in the same subnet when
using IPAT via replacement.

Home node and Startup policy


The home node (or the highest priority node for this resource group) is the first node
that is listed in the participating nodelist for a non-concurrent resource group. The home
node is a node that normally owns the resource group. Note that the takeover node
might actually be the home node since a resource group can be configured to not
always run on the highest priority available node.
Resource groups have three policies that HACMP uses to determine which nodes will
start which resource groups. A Startup policy of Online Using Distribution
Policy (also called a distributed policy) specifies that only one resource group can be
active on a given node. If the first node in the resource group's list of nodes already has
another resource group started on it, then the next node in the list of nodes is tried.
These concepts will be discussed in detail in the unit on resource groups.
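The takeover-time choice described above reduces to a small decision rule. This sketch (our shell pseudologic, not HACMP code; the function name and yes/no inputs are illustrative) captures which kind of boot label is replaced:

```shell
# Our sketch of the decision described above; not taken from HACMP itself.
# $1 = is the takeover node the resource group's home node?        (yes/no)
# $2 = is the Startup policy Online Using Distribution Policy?     (yes/no)
choose_boot_label() {
  if [ "$1" = yes ] || [ "$2" = yes ]; then
    echo "replace the boot label in the SAME subnet as the service address"
  else
    echo "replace a boot label in a DIFFERENT subnet"
  fi
}

choose_boot_label yes no   # home node: same-subnet boot label is replaced
choose_boot_label no  no   # other node: a different-subnet boot label is replaced
```

The second case leaves the same-subnet boot label free, which is what allows another resource group to use that node as its home node.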


IPAT via IP Replacement Summary


Configure each node with up to eight communication interfaces
(each on a different subnet)
Assign service IP labels to resource groups as appropriate
Each node can be the most preferred node for at most one
resource group
No limit on number of service IP labels per resource group but
each service IP label must be on a different physical network
HACMP replaces non-service IP labels with service IP labels on
the same subnet as the service IP label when the resource group
is running on its most preferred node or if the Startup Policy
is distributed
HACMP replaces non-service IP labels with service IP labels on a
different subnet from the service IP label when the resource
group is moved to any other node
IPAT via IP replacement supports hardware address
takeover


Figure C-6. IPAT via IP Replacement Summary AU546.0

Notes:

Advantages
Probably the most significant advantage of IPAT via IP replacement is that it supports
hardware address takeover (HWAT), which will be discussed in a few pages.
Another advantage is that it requires fewer subnets. If you are limited in the number of
subnets available for your cluster, this may be important.
Note: Another alternative, if you are limited on the number of subnets you have
available, is to use heartbeating via IP aliases. See Heartbeating Over IP Aliases in the
HACMP for AIX 5L Planning and Installation Guide.

Disadvantages
Probably the most significant disadvantage is that IPAT via IP replacement limits a
resource group to one service IP label per subnet on a given communications interface,
which makes it rather expensive (and complex) to support many


resource groups in a small cluster. In other words, you need more network adapters to
support more applications.
Also, IPAT via replacement usually takes more time than IPAT via aliasing.
Note that HACMP tries to keep the service IP Labels available by swapping IP
addresses with other communication interfaces (standby adapters in HACMP 4.x
terminology) even if there are no resource groups currently on the node which uses
IPAT via IP replacement.


Gratuitous ARP Support Issues


Gratuitous ARP is supported by AIX on the following network
technologies:
Ethernet (all types and speeds)
Token-Ring
FDDI
SP Switch 1 and SP Switch 2
Gratuitous ARP is not supported on ATM
Operating systems are not required to support gratuitous ARP
packets
Practically every operating system does support gratuitous ARP
Some systems (for example, certain routers) can be configured
to respect or ignore gratuitous ARP packets


Figure C-7. Gratuitous ARP Support Issues AU546.0

Notes:

Review
When using IPAT via aliasing, you can use AIX's gratuitous ARP features to update
client and router ARP caches after a takeover. However, there may be issues.

Gratuitous ARP issues


Not all network technologies provide the appropriate capabilities to implement
gratuitous ARP. In addition, operating systems which implement TCP/IP are not
required to respect gratuitous ARP packets (although practically all modern operating
systems do).
Finally, support issues aside, an extremely overloaded network or a network which is
suffering intermittent failures might result in gratuitous ARP packets being lost (a
network which is sufficiently overloaded to be losing gratuitous ARP packets or which is
suffering intermittent failures which result in gratuitous ARP packets being lost is likely


to be causing the cluster and the cluster administrator far more serious problems than
the ARP cache issue involves).


What if Gratuitous ARP is Not Supported?


If the local network technology doesn't support gratuitous ARP or
there is a client system or router on the local physical network
which must communicate with the cluster and which does not
support gratuitous ARP packets:
clinfo can be used on the client to receive updates of changes.
clinfo can be used on the servers to ping a list of clients, forcing an
update to their ARP caches.
HACMP can be configured to perform Hardware Address Takeover
(HWAT).

Suggestion:
Do not get involved with using either clinfo or HWAT to deal with
ARP cache issues until you've verified that there actually are ARP
issues which need to be dealt with.


Figure C-8. What if Gratuitous ARP is Not Supported? AU546.0

Notes:

If gratuitous ARP is not supported


HACMP supports three alternatives to gratuitous ARP. The first two are discussed in
Unit 3. We will discuss the third option here.

Don't add unnecessary complexity


Cluster configurators should probably not simply assume that gratuitous ARP won't
provide a satisfactory solution, as each of the alternatives introduces additional,
possibly unnecessary complexity into the cluster.
If the cluster administrator or configurator decides that the probability of a gratuitous
ARP update packet being lost is high enough to be relevant, then they should proceed
as though their context does not support gratuitous ARP.

Option 3: Hardware Address Takeover
HACMP can be configured to swap a service IP label's hardware
address between network adapters.
HWAT is incompatible with IPAT via IP aliasing because each
service IP address must have its own hardware address and a NIC
can support only one hardware address at any given time.
Cluster implementer designates a Locally Administered Address
(LAA) which HACMP assigns to the NIC which has the service IP
label


Figure C-9. Option 3: Hardware Address Takeover AU546.0

Notes:

Hardware address takeover (HWAT)


Hardware Address Takeover (HWAT) is the most robust method of dealing with the ARP
cache issue as it ensures that the hardware address associated with the service IP
address does not change (which avoids the whole issue of whether the client system's
ARP cache is out-of-date).
The essence of HWAT is that the cluster configurator designates a hardware address
which is to be associated with a particular service IP address. HACMP then ensures
that whichever NIC the service IP address is on also has the designated hardware
address.

HWAT considerations
There are a few points which must be kept in mind when contemplating HWAT:


- The hardware address which is associated with the service IP address must be
unique within the physical network that the service IP address is configured for.
- HWAT is not supported by IPAT via IP aliasing because each NIC can have more
than one IP address but each NIC can only have one hardware address.
- HWAT is only supported for Ethernet, token ring and FDDI networks (MCA FDDI
network cards do not support HWAT). ATM networks do not support HWAT.
- HWAT increases the takeover time (usually by just a few seconds).
- HWAT is an optional capability which must be configured into the HACMP cluster
(we will see how to do that in a few minutes).
- Cluster nodes using HWAT on token ring networks must be configured to reboot
after a system crash as the token ring card will continue to intercept packets for its
hardware address until the node starts to reboot.
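One common convention for forming an LAA is to take the NIC's burned-in address and set the locally administered bit in the first octet, which is how 40:04:ac:... style addresses like those in the figures that follow are built from 00:04:ac:... burned-in addresses. A minimal sketch (the make_laa helper is ours, not an HACMP command; on a real node the burned-in address would be read from netstat -v output):

```shell
# Our helper (not an HACMP command): form a token-ring style LAA by setting
# the locally administered bit (0x40) in the first octet of a burned-in
# address, e.g. 00:04:ac:62:72:49 -> 40:04:ac:62:72:49.
make_laa() {
  first=$(echo "$1" | cut -d: -f1)
  rest=$(echo "$1" | cut -d: -f2-)
  printf '%02x:%s\n' $(( 0x$first | 0x40 )) "$rest"
}

make_laa 00:04:ac:62:72:49    # -> 40:04:ac:62:72:49
```

Whatever convention you use, remember the first consideration above: the resulting address must be unique on the physical network.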

Hardware Address Takeover (1 of 3)
[Diagram: a two-node token-ring cluster, nodes Bondar and Hudson. On each node,
tr1 carries the boot labels bondar-if1 (9.47.9.1) and hudson-if1 (9.47.9.2), and tr0
carries bondar-if2 (9.47.5.3) and hudson-if2 (9.47.5.2); all interfaces use netmask
255.255.255.0 and burned-in hardware addresses of the form 00:04:ac:xx:xx:xx.
Before the resource group is started, every NIC has its boot IP label and factory
hardware address. After the resource group is started, the service label xweb
(9.47.5.1) and the LAA 40:04:ac:62:72:49 replace the boot IP label and burned-in
hardware address on the NIC in the 9.47.5.0 subnet.]


Figure C-10. Hardware Address Takeover (1 of 3) AU546.0

Notes:

Hardware Address Takeover (HWAT): boot time


At boot time, the interfaces are assigned their normal hardware addresses.

HWAT: resource group started


When HACMP starts the resource group, the service IP address replaces the
non-service IP address of the interface and the alternate hardware address replaces
the normal hardware address for that NIC.
The alternate hardware address is usually referred to as a Locally Administered
Address or LAA.


Hardware Address Takeover (2 of 3)


LAA is moved along with the service IP label
[Diagram, top (interface failure): the service label xweb (9.47.5.1) and its LAA
40:04:ac:62:72:49 are swapped together from the failed NIC to another available
NIC on the same node. Bottom (node failure): xweb and the LAA move together to
a NIC on the takeover node (Hudson).]


Figure C-11. Hardware Address Takeover (2 of 3) AU546.0

Notes:

HWAT: interface or node failure


If a NIC (with a service IP address that has an LAA) fails, HACMP moves the IP
address to another NIC on the same node. It also moves the LAA (alternate hardware
address) to the same NIC.
If a node fails, the service IP address, and its associated LAA, are moved to another
node.
The result, in both of these cases, is that the local clients' ARP caches are still up to
date since the hardware address associated with the IP address has not changed.

Hardware Address Takeover (3 of 3)
[Diagram, top: when a failed node comes back to life, the burned-in ROM address is
used on the service network adapter, and each of its NICs again has its boot IP label
and factory hardware address. Bottom: after HACMP is started, the node reintegrates
according to its resource group parameters, and the service label xweb (9.47.5.1)
with LAA 40:04:ac:62:72:49 moves back as those parameters dictate.]


Figure C-12. Hardware Address Takeover (3 of 3) AU546.0

Notes:

HWAT: node recovery


When the failed node reboots, AIX must be configured to leave the network card's
factory-defined hardware address in place. If AIX is configured to set the network card's
hardware address to the alternate hardware address at boot time, then two NICs on the
same network have the same hardware address (weird things happen when you do this).

HWAT: resource moved back to home node


If HACMP ultimately moves the resource group back to the now-recovered node, then
the hardware address of the NIC on the backup node is restored to its factory setting,
and the LAA associated with the service IP address lands on the same NIC on the
recovered node as the service IP address itself.


Implementing Hardware Address Takeover


Someone just got a great deal on a dozen used FOOL-97x
computers for the summer students to use
They run some strange proprietary operating system which refuses
to update its ARP cache in response to either ping or gratuitous
ARP packets

[Diagram: the two-node cluster (bondar and hudson) with the FOOL-97x client
systems attached to the local network.]


Figure C-13. Implementing Hardware Address Takeover AU546.0

Notes:

Hardware Address Takeover (HWAT)


In this scenario, we will implement HWAT to support the new computers discussed in
the visual.
Just imagine how much money they have saved once they realize that these new
computers don't do what the summer students need done!
In the meantime, it looks like we need to implement hardware address takeover in order
to support these FOOL-97Xs.

Reality check
A side note is probably in order: although most TCP/IP-capable systems respect
gratuitous ARP, there are strange devices out there that do not. This scenario is phoney
but it presents a real if rather unlikely problem. For example, the ATM network does not
support gratuitous ARP and so could be a candidate for the use of HWAT.

Our Plan for Implementing HWAT
1. Stop cluster services on both cluster nodes
Use the graceful shutdown option to bring down the resource groups and
their applications
2. Remove the alias service labels from the Resources
They are in the wrong subnet for replacement
They are automatically removed from the RG
3. Convert the net_ether_01 Ethernet network to use IPAT via IP
replacement:
Disable IPAT via IP aliasing on the Ethernet network.
Update /etc/hosts on both cluster nodes to describe service IP labels and
addresses on the 192.168.15.0 subnet
Use the procedure described in the networking unit to select the Locally
Administered Addresses (LAAs)
Configure new service IP labels with these LAA addresses in the HACMP
SMIT screens
4. Define resource groups to use the new service IP labels.
5. Synchronize the changes
6. Restart HACMP on the two nodes.


Figure C-14. Our Plan for Implementing HWAT AU546.0

Notes:

Implementing HWAT
In order to use HWAT, we must use IPAT via replacement.

Stop cluster services


Changing from IPAT via aliasing to IPAT via replacement cannot be done dynamically;
we must stop the cluster.

Remove existing service IP labels


The service IP labels used for IPAT via aliasing cannot be used for IPAT via
replacement. They are on the wrong subnet. We will either need to change our service
addresses or change our non-service addresses. In this scenario, we choose to change
the service addresses.


Convert the network to use IPAT via replacement


In addition to the obvious step of disabling IPAT via aliasing, we also need to update our
name resolution for the new service IP labels and we need to create an alternate
hardware address or Locally Administered Address (LAA) for each service IP label.

Name resolution changes


One slight problem with the above procedure is that it requires the users (or the DNS
administrator) to change the service IP address that they are using. It would arguably
be better if we preserved the service IP address. However, this would require more
network reconfiguration work and it isn't totally clear that the difference is significant in
the grand scheme of things. Note that either approach requires the cooperation of the
network administrators as we will require IP addresses and probably DNS changes.
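As a sketch of what the name resolution update might look like, the entries added to /etc/hosts on both nodes could resemble the following (the 192.168.15.x addresses and the reuse of the xweb and yweb labels are illustrative assumptions, not values from the course cluster):

```text
# Added to /etc/hosts on both cluster nodes: service labels placed in
# the 192.168.15.0 boot subnet, as IPAT via replacement requires.
192.168.15.10   xweb
192.168.15.11   yweb
```

The actual addresses would come from the network administrators, along with any matching DNS changes.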

Stopping HACMP
# smit clstop
Stop Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [bondar,hudson] +
BROADCAST cluster shutdown? true +
* Shutdown mode graceful +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure C-15. Stopping HACMP AU546.0

Notes:

Stop HACMP
Make sure that HACMP is shut down gracefully, as we can't have the application
running while we are changing service IP addresses.


Removing a Service IP Label


Press Enter here and you will be prompted to confirm the removal.

Configure HACMP Service IP Labels/Addresses


Move cursor to desired item and press Enter.
Add a Service IP Label/Address
Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

+--------------------------------------------------------------------------+
Select Service IP Label(s)/Address(es) to Remove

Move cursor to desired item and press F7.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

xweb
yweb
zweb
F1=Help F2=Refresh F3=Cancel
F7=Select F8=Image F10=Exit
F1 Enter=Do /=Find n=Find Next
F9+--------------------------------------------------------------------------+

Repeat for both service IP labels.



Figure C-16. Removing a Service IP Label AU546.0

Notes:

Remove any service labels configured for IPAT via aliasing


An attempt to convert the network to IPAT via IP replacement fails if there are any
service IP labels that don't conform to the IPAT via IP replacement rules.

Disable IPAT via Aliases
Set the "Enable IP Address Takeover via IP Aliases" setting to "No"
and press Enter.

Change/Show an IP-Based Network in the HACMP Cluster


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Network Name net_ether_01
New Network Name []
* Network Type [ether] +
* Netmask [255.255.255.0] +
* Enable IP Address Takeover via IP Aliases [No] +
IP Address Offset for Heartbeating over IP Aliases []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure C-17. Disable IPAT via Aliases AU546.0

Notes:

Introduction
Here we change the net_ether_01 network to disable IPAT via aliasing.


The Updated /etc/hosts


Here's the key portion of the /etc/hosts file with the service IP
labels moved to the 192.168.15.0 subnet:
192.168.5.29 bondar # persistent node IP label on bondar
192.168.15.29 bondar-if1 # bondar's first boot IP label
192.168.16.29 bondar-if2 # bondar's second boot IP label
192.168.5.31 hudson # persistent node IP label on hudson
192.168.15.31 hudson-if1 # hudson's first boot IP label
192.168.16.31 hudson-if2 # hudson's second boot IP label
192.168.15.92 xweb # the IP label for the application normally
# resident on bondar
192.168.15.70 yweb # the IP label for the application normally
# resident on hudson

Note that neither bondar's nor hudson's network configuration (as
defined with the AIX TCP/IP SMIT screens) needs to be changed
Note that we are not renaming the interface IP labels to
something like bondar_boot and bondar_standby, as changing IP
labels in an HACMP cluster can be quite a bit of work (it is often
easier to delete the cluster definition and start over)


Figure C-18. The Updated /etc/hosts AU546.0

Notes:

IPAT via replacement rules


Remember the rules for IP addresses for IPAT via IP replacement networks (slightly
reworded):
1) The service IP labels must all be on the same subnet
2) There must be one NIC on each host which has an IP address on the same
subnet as the service IP labels (in HACMP 4.x terminology, these NICs are
boot adapters)
3) The other NICs on each node must each be in a different subnet than the
service IP labels (in HACMP 4.x terminology, these NICs are standby
adapters)
In a cluster with only two NICs per node, NIC IP addresses which conform to the IPAT
via IP aliasing rules also conform to the IPAT via IP replacement rules, so only the
service IP labels need to be changed.
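
The three rules above can be checked mechanically before touching the cluster. Here is a minimal sketch in Python (not part of HACMP; the addresses and netmask are the example cluster from this appendix):

```python
import ipaddress

# Example cluster from this appendix; netmask 255.255.255.0 throughout.
NETMASK = "255.255.255.0"

def subnet(ip):
    """Return the subnet an address belongs to under the cluster netmask."""
    return ipaddress.ip_network(f"{ip}/{NETMASK}", strict=False)

service_labels = ["192.168.15.92", "192.168.15.70"]      # xweb, yweb
node_nics = {
    "bondar": ["192.168.15.29", "192.168.16.29"],        # bondar-if1, bondar-if2
    "hudson": ["192.168.15.31", "192.168.16.31"],        # hudson-if1, hudson-if2
}

# Rule 1: all service IP labels must be on the same subnet.
service_subnets = {subnet(ip) for ip in service_labels}
assert len(service_subnets) == 1
svc_net = service_subnets.pop()

for node, nics in node_nics.items():
    # Rule 2: exactly one NIC per node on the service subnet (the "boot adapter").
    boots = [ip for ip in nics if subnet(ip) == svc_net]
    assert len(boots) == 1, node
    # Rule 3: each remaining NIC on a different subnet (the "standby adapters").
    standbys = [ip for ip in nics if subnet(ip) != svc_net]
    assert len({subnet(ip) for ip in standbys}) == len(standbys), node

print("IPAT via replacement rules satisfied")
```

Running this against the addresses in Figure C-18 confirms that only the service labels had to move to the 192.168.15.0 subnet.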

Creating a
Locally Administered Address (LAA)
Each service IP label using HWAT will need an LAA
The LAA must be unique on the cluster's physical network
The MAC address based technologies (Ethernet, Token ring and
FDDI) use six byte hardware addresses of the form:
xx.xx.xx.xx.xx.xx
The factory-set MAC address of the NIC will start with 0, 1, 2 or 3
A MAC address that starts with 0, 1, 2 or 3 is called a Globally
Administered Address (GAA) because it is assigned to the
NIC's vendor by a central authority
Incrementing this first digit by 4 transforms the GAA into a Locally
Administered Address (LAA) which will be unique worldwide
(unless someone has already used the same GAA to create an
LAA which isn't likely since GAAs are unique worldwide)


Figure C-19. Creating a Locally Administered Address (LAA) AU546.0

Notes:

Hardware addresses
Hardware addresses must be unique, at a minimum, on the local network to which they
are connected. The factory set hardware address for each network interface card (NIC)
is administered by a central authority and should be unique in the world. These
addresses are called Globally Administered Addresses (GAAs).

Locally Administered Addresses (LAAs)


Incrementing the first nibble of the GAA by 4 transforms it into an LAA.
Using this method to create an alternate address should provide you with an address
that is also globally unique, as noted in the visual.
Note: According to the IEEE 802 standard for LAN MAC addresses, the second bit
transmitted on the LAN medium (the "4" bit) is the local/global bit. If this bit is zero, the
address is a GAA. Setting this bit to one indicates that this address is locally
administered.

Creating Two LAAs for Our Cluster
Here are two Globally Administered Addresses (GAAs)
taken from Ethernet adapters in the cluster:
0.4.ac.17.19.64
0.6.29.ac.46.8
First we make sure that each number is two digits long by
adding leading zeros as necessary:
00.04.ac.17.19.64
00.06.29.ac.46.08
Verify that the first digit is 0, 1, 2 or 3:
Yep!
Add 4 to the first digit of each GAA:
40.04.ac.17.19.64
40.06.29.ac.46.08
Done! These two addresses are now LAAs


Figure C-20. Creating Two LAAs for Our Cluster AU546.0

Notes:
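The conversion shown in the visual is easy to automate. Here is a minimal sketch in Python (the function name gaa_to_laa is our own, not an HACMP utility; the input addresses are the example GAAs from the visual, and the result is returned with the periods already stripped, the form the HACMP SMIT field expects):

```python
def gaa_to_laa(gaa: str) -> str:
    """Turn a factory MAC address (GAA) such as '0.4.ac.17.19.64' into a
    Locally Administered Address, returned without periods so it can be
    entered directly into the Alternate HW Address SMIT field."""
    # Step 1: make sure each byte is two digits long (add leading zeros).
    octets = [f"{int(o, 16):02x}" for o in gaa.split(".")]
    first = int(octets[0], 16)
    # Step 2: verify that the first digit is 0, 1, 2 or 3 (i.e., a GAA).
    if first >> 4 > 3:
        raise ValueError("first digit is not 0-3; address is not a GAA")
    # Step 3: add 4 to the first digit, turning the GAA into an LAA.
    octets[0] = f"{first + 0x40:02x}"
    return "".join(octets)

print(gaa_to_laa("0.4.ac.17.19.64"))  # 4004ac171964
print(gaa_to_laa("0.6.29.ac.46.8"))   # 400629ac4608
```

The two printed values match the LAAs derived by hand in the visual, 40.04.ac.17.19.64 and 40.06.29.ac.46.08, with the punctuation removed.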


Hardware Address Takeover Issues


Do not enable the ALTERNATE hardware address field in the
SMIT devices menu
Causes the adapter to boot on your chosen LAA rather than the
burned-in ROM address.
Causes serious communications problems and puts the cluster into an
unstable state.
Correct method is to enter your chosen LAA into the SMIT HACMP
menus (remove the periods or colons before entering it into the field).
The Token-Ring documentation states that the LAA must start
with 42
The FDDI documentation states that the first nibble (digit) of the first
byte of the LAA must be 4, 5, 6 or 7 (which is compatible with the
method for creating LAAs described earlier)
Token-Ring adapters do not release the LAA if AIX crashes.
AIX must be set to reboot automatically after a system crash
(see smitty chgsys)


Figure C-21. Hardware Address Takeover Issues AU546.0

Notes:

Issues
The main thing to remember is that you do NOT configure the ALTERNATE hardware
address field in the SMIT devices panel.
You MUST leave that blank and configure this using the SMIT HACMP menus.
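Because each service IP label needs its own LAA and each LAA must be unique on the cluster's physical network, a quick sanity check before entering the values is cheap. A small sketch (the MAC strings are the example addresses from this appendix, periods stripped as HACMP expects):

```python
# Factory MACs (GAAs) present on the cluster's network, periods stripped.
factory_macs = {"0004ac171964", "000629ac4608"}
# The LAAs we intend to assign, one per service IP label.
chosen_laas = {"4004ac171964", "400629ac4608"}

# Each LAA must not collide with any factory address on the network...
assert not (chosen_laas & factory_macs)
# ...and no two service IP labels may share an LAA (a set would collapse them).
assert len(chosen_laas) == 2
print("LAAs look safe to use")
```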

Redefining the Service IP Labels for HWAT
Redefine the two service IP labels. Note that the periods are stripped
out before the LAA is entered into the HW Address field.
Add a Service IP Label/Address configurable on Multiple Nodes (extended)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* IP Label/Address [xweb] +
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address [4004ac171964]

You probably shouldn't use the particular LAAs
shown on these foils in your cluster. Select your
own LAAs using the procedure described
earlier.

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Don't forget to specify the second LAA for the second service IP label.

Figure C-22. Redefining the Service IP Labels for HWAT AU546.0

Notes:

Redefining the service IP labels


Define each of the service IP labels making sure to specify a different LAA address for
each one.
The Alternate HW Address to accompany IP Label/Address is specified as a series
of hexadecimal digits without intervening periods or any other punctuation.
If IPAT via IP replacement is specified for the network, which it is in this case, you get an
error or a warning from this screen if you try to define service IP labels which do not
conform to the rules for service IP labels on IPAT via IP replacement networks.


Synchronize Your Changes


Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization


Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do


Figure C-23. Synchronize Your Changes AU546.0

Notes:

Synchronize
Don't forget to synchronize.

Checkpoint
1. For IPAT via replacement (select all that apply)
a. Each service IP address must be in the same subnet as one of
the non-service addresses
b. Each service IP address must be in the same subnet
c. Each service IP address cannot be in any non-service address
subnet
2. True or False?
If the takeover node is not the home node for the resource group
and the resource group does not have a Startup policy of Online
Using Distribution Policy, the service IP address replaces the IP
address of a NIC with an IP address in the same subnet as the
subnet of the service IP address
3. True or False?
In order to use HWAT, you must enable and complete the
ALTERNATE ETHERNET address field in the SMIT devices
menu
4. True or False?
You must stop the cluster in order to change from IPAT via
aliasing to IPAT via replacement


Figure C-24. Checkpoint AU546.0

Notes:


Unit Summary
IPAT via IP replacement:
May require fewer subnets than IPAT via aliasing
May require more NICs than IPAT via aliasing
Supports hardware address takeover
HACMP replaces non-service IP labels with service IP labels on the
same subnet as the service IP label when the resource group is
started on its home node or if the Startup Policy is distributed
HACMP replaces non-service IP labels with service IP labels on a
different subnet from the service IP label when the resource group is
moved to any other node
IPAT via IP replacement configuration issues
Service IP addresses must be in the same subnet as one of the non-service subnets
All service IP addresses must be in the same subnet
You must have at least as many NICs on each node as service IP addresses
Hardware Address Takeover (HWAT) issues
Alternate hardware address (Locally Administered Address or LAA) must be
configured in HACMP. Do NOT use standard SMIT field.
Alternate hardware address must be unique.


Figure C-25. Unit Summary AU546.0

Notes:

