
Front cover

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Install and customize Productivity Center for Disk
Install and customize Productivity Center for Replication
Use Productivity Center to manage your storage

Mary Lovelace
Jason Bamford
Dariusz Ferenc
Madhav Vaze

ibm.com/redbooks

International Technical Support Organization

Managing Disk Subsystems using IBM TotalStorage Productivity Center

September 2005

SG24-7097-01

Note: Before using this information and the product it supports, read the information in "Notices" on page ix.

Second Edition (September 2005)
This edition applies to Version 2 Release 1 of IBM TotalStorage Productivity Center (product numbers 5608-TC1, 5608-TC4, 5608-TC5).
© Copyright International Business Machines Corporation 2004, 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . ix
Trademarks . . . . . x

Preface . . . . . xi
The team that wrote this redbook . . . . . xi
Become a published author . . . . . xiii
Comments welcome . . . . . xiii

Chapter 1. IBM TotalStorage Productivity Center overview . . . . . 1
1.1 Introduction to IBM TotalStorage Productivity Center . . . . . 2
1.1.1 Standards organizations and standards . . . . . 2
1.2 IBM TotalStorage Open Software family . . . . . 3
1.3 IBM TotalStorage Productivity Center . . . . . 4
1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data . . . . . 5
1.3.2 Fabric subject matter expert: Productivity Center for Fabric . . . . . 7
1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk . . . . . 10
1.3.4 Replication subject matter expert: Productivity Center for Replication . . . . . 12
1.4 IBM TotalStorage Productivity Center . . . . . 14
1.4.1 Productivity Center for Disk and Productivity Center for Replication . . . . . 15
1.4.2 Event services . . . . . 21
1.5 Taking steps toward an On Demand environment . . . . . 22

Chapter 2. Key concepts . . . . . 25
2.1 Standards organizations and standards . . . . . 26
2.1.1 CIM/WBEM management model . . . . . 26
2.2 Storage Networking Industry Association . . . . . 27
2.2.1 The SNIA Shared Storage Model . . . . . 27
2.2.2 SMI Specification . . . . . 28
2.2.3 Integrating existing devices into the CIM model . . . . . 29
2.2.4 CIM Agent implementation . . . . . 30
2.2.5 CIM Object Manager . . . . . 30
2.3 Common Information Model (CIM) . . . . . 31
2.3.1 How the CIM Agent works . . . . . 32
2.4 Service Location Protocol (SLP) . . . . . 32
2.4.1 SLP architecture . . . . . 33
2.4.2 SLP service agent . . . . . 33
2.4.3 SLP user agent . . . . . 34
2.4.4 SLP directory agent . . . . . 35
2.4.5 Why use an SLP DA? . . . . . 38
2.4.6 When to use DAs . . . . . 38
2.4.7 SLP configuration recommendation . . . . . 39
2.4.8 Setting up the Service Location Protocol Directory Agent . . . . . 40
2.4.9 Configuring SLP Directory Agent addresses . . . . . 41
2.5 Productivity Center for Disk and Replication architecture . . . . . 42

Chapter 3. TotalStorage Productivity Center suite installation . . . . . 43
3.1 Installing the IBM TotalStorage Productivity Center . . . . . 44
3.1.1 Configurations . . . . . 44
3.1.2 Installation prerequisites . . . . . 45


3.1.3 TCP/IP ports used by TotalStorage Productivity Center . . . . . 45
3.1.4 Default databases created during install . . . . . 48
3.2 Pre-installation check list . . . . . 48
3.2.1 User IDs and security . . . . . 48
3.2.2 Certificates and key files . . . . . 51
3.3 Services and service accounts . . . . . 52
3.3.1 Starting and stopping the managers . . . . . 52
3.3.2 Uninstall Internet Information Services . . . . . 52
3.3.3 SNMP install . . . . . 53
3.4 IBM TotalStorage Productivity Center for Fabric . . . . . 54
3.4.1 The computer name . . . . . 54
3.4.2 Database considerations . . . . . 55
3.4.3 Windows Terminal Services . . . . . 55
3.4.4 Tivoli NetView . . . . . 55
3.4.5 Personal firewall . . . . . 56
3.4.6 Change the HOSTS file . . . . . 57
3.5 Installation process . . . . . 58
3.5.1 Prerequisite product install: DB2 and WebSphere . . . . . 62
3.5.2 Installing IBM Director . . . . . 67
3.5.3 Tivoli Agent Manager . . . . . 77
3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base . . . . . 86
3.5.5 IBM TotalStorage Productivity Center for Disk . . . . . 95
3.5.6 IBM TotalStorage Productivity Center for Replication . . . . . 100
3.5.7 IBM TotalStorage Productivity Center for Fabric . . . . . 107

Chapter 4. CIMOM installation and configuration . . . . . 119
4.1 Introduction . . . . . 120
4.2 Planning considerations for Service Location Protocol . . . . . 120
4.2.1 Considerations for using SLP DAs . . . . . 120
4.2.2 SLP configuration recommendation . . . . . 121
4.3 General performance guidelines . . . . . 122
4.4 Planning considerations for CIMOM . . . . . 123
4.4.1 CIMOM configuration recommendations . . . . . 123
4.5 Installing CIM agent for ESS . . . . . 124
4.5.1 ESS CLI install . . . . . 124
4.5.2 ESS CIM Agent install . . . . . 128
4.5.3 Post Installation tasks . . . . . 137
4.6 Configuring the ESS CIM Agent for Windows . . . . . 139
4.6.1 Registering ESS Devices . . . . . 139
4.6.2 Register ESS server for Copy services . . . . . 141
4.6.3 Restart the CIMOM . . . . . 142
4.6.4 CIMOM User Authentication . . . . . 143
4.7 Verifying connection to the ESS . . . . . 144
4.7.1 Problem determination . . . . . 147
4.7.2 Confirming the ESS CIMOM is available . . . . . 148
4.7.3 Setting up the Service Location Protocol Directory Agent . . . . . 150
4.7.4 Configuring IBM Director for SLP discovery . . . . . 152
4.7.5 Registering the ESS CIM Agent to SLP . . . . . 153
4.7.6 Verifying and managing CIMOMs availability . . . . . 154
4.8 Installing CIM agent for IBM DS4000 family . . . . . 155
4.8.1 Verifying and Managing CIMOM availability . . . . . 164
4.9 Configuring CIMOM for SAN Volume Controller . . . . . 166
4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account . . . . . 167


4.9.2 Registering the SAN Volume Controller host in SLP . . . . . 173
4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary . . . . . 173
4.10.1 SLP registration and slptool . . . . . 174
4.10.2 Persistency of SLP registration . . . . . 175
4.10.3 Configuring slp.reg file . . . . . 175

Chapter 5. TotalStorage Productivity Center common base use . . . . . 177
5.1 Productivity Center common base: Introduction . . . . . 178
5.2 Launching TotalStorage Productivity Center . . . . . 178
5.3 Exploiting Productivity Center common base . . . . . 179
5.3.1 Configure MDM . . . . . 180
5.3.2 Launch Device Manager . . . . . 181
5.3.3 Discovering new storage devices . . . . . 181
5.3.4 Manage CIMOMs . . . . . 187
5.3.5 Manually removing old CIMOM entries . . . . . 189
5.4 Performing volume inventory . . . . . 194
5.5 Working with ESS . . . . . 197
5.5.1 Changing the display name of an ESS . . . . . 198
5.5.2 ESS Volume inventory . . . . . 198
5.5.3 Assigning and unassigning ESS volumes . . . . . 200
5.5.4 Creating new ESS volumes . . . . . 201
5.5.5 Launch device manager for an ESS device . . . . . 202
5.6 Working with SAN Volume Controller . . . . . 203
5.6.1 Changing the display name of a SAN Volume Controller . . . . . 204
5.6.2 Working with SAN Volume Controller mdisks . . . . . 204
5.6.3 Creating new Mdisks on supported storage devices . . . . . 206
5.6.4 Create and view SAN Volume Controller Vdisks . . . . . 207
5.7 Working with DS4000 family or FAStT storage . . . . . 209
5.7.1 Changing the display name of a DS4000 or FAStT . . . . . 210
5.7.2 Working with DS4000 or FAStT volumes . . . . . 211
5.7.3 Creating DS4000 or FAStT volumes . . . . . 212
5.7.4 Assigning hosts to DS4000 and FAStT volumes . . . . . 213
5.7.5 Unassigning hosts from DS4000 or FAStT volumes . . . . . 214
5.8 Event Action Plan Builder . . . . . 215
5.8.1 Applying an Event Action Plan to a managed system or group . . . . . 219
5.8.2 Exporting and importing Event Action Plans . . . . . 221

Chapter 6. TotalStorage Productivity Center for Disk use . . . . . 227
6.1 Performance Manager GUI . . . . . 228
6.2 Exploiting Performance Manager . . . . . 228
6.2.1 Performance Manager data collection . . . . . 229
6.2.2 Using IBM Director Scheduler function . . . . . 235
6.2.3 Reviewing Data collection task status . . . . . 236
6.2.4 Managing Performance Manager Database . . . . . 239
6.2.5 Performance Manager gauges . . . . . 242
6.2.6 ESS thresholds . . . . . 257
6.2.7 Data collection for SAN Volume Controller . . . . . 260
6.2.8 SAN Volume Controller thresholds . . . . . 261
6.3 Exploiting gauges . . . . . 263
6.3.1 Before you begin . . . . . 263
6.3.2 Creating gauges example . . . . . 263
6.3.3 Zooming in on the specific time period . . . . . 265
6.3.4 Modify gauge to view array level metrics . . . . . 266


6.3.5 Modify gauge to review multiple metrics in same chart . . . . . 268
6.4 Performance Manager command line interface . . . . . 269
6.4.1 Performance Manager CLI commands . . . . . 270
6.4.2 Sample command outputs . . . . . 271
6.5 Volume Performance Advisor (VPA) . . . . . 272
6.5.1 VPA introduction . . . . . 272
6.5.2 The provisioning challenge . . . . . 273
6.5.3 Workload characterization and workload profiles . . . . . 273
6.5.4 Workload profile values . . . . . 274
6.5.5 How the Volume Performance Advisor makes decisions . . . . . 275
6.5.6 Enabling the Trace Logging for Director GUI Interface . . . . . 276
6.6 Getting started . . . . . 277
6.6.1 Workload profiles . . . . . 277
6.6.2 Using VPA with predefined Workload profile . . . . . 278
6.6.3 Launching VPA tool . . . . . 278
6.6.4 ESS User Validation . . . . . 280
6.6.5 Configuring VPA settings for the ESS diskspace request . . . . . 283
6.6.6 Choosing Workload Profile . . . . . 287
6.6.7 Choosing candidate locations . . . . . 290
6.6.8 Verify settings for VPA . . . . . 292
6.6.9 Approve recommendations . . . . . 292
6.6.10 VPA loopback after Implement Recommendations selected . . . . . 294
6.7 Creating and managing Workload Profiles . . . . . 303
6.7.1 Choosing Workload Profiles . . . . . 304
6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager . . . . . 314
6.8.1 Installing IBM Director Console . . . . . 315
6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console . . . . . 319
6.8.3 Installing Remote Console for Performance Manager function . . . . . 323
6.8.4 Launching Remote Console for TotalStorage Productivity Center . . . . . 328

Chapter 7. TotalStorage Productivity Center for Fabric use . . . . . 331
7.1 TotalStorage Productivity Center for Fabric overview . . . . . 332
7.1.1 Zoning overview . . . . . 332
7.1.2 Supported switches for zoning . . . . . 333
7.1.3 Deployment . . . . . 335
7.1.4 Enabling zone control . . . . . 336
7.1.5 TotalStorage Productivity Center for Disk eFix . . . . . 338
7.1.6 Installing the eFix . . . . . 338
7.2 Installing Fabric remote console . . . . . 340
7.3 TotalStorage Productivity Center for Disk integration . . . . . 346
7.4 Launching TotalStorage Productivity Center for Fabric . . . . . 352

Chapter 8. TotalStorage Productivity Center for Replication use . . . . . 355
8.1 TotalStorage Productivity Center for Replication overview . . . . . 356
8.1.1 Supported Copy Services . . . . . 356
8.1.2 Replication session . . . . . 358
8.1.3 Storage group . . . . . 359
8.1.4 Storage pools . . . . . 359
8.1.5 Relationship of group, pool, and session . . . . . 360
8.1.6 Copyset and sequence concepts . . . . . 361
8.2 Exploiting TotalStorage Productivity Center for Replication . . . . . 361
8.2.1 Before you start . . . . . 362


8.2.2 Creating a storage group . . . . . 362
8.2.3 Modifying a storage group . . . . . 366
8.2.4 Viewing storage group properties . . . . . 367
8.2.5 Deleting a storage group . . . . . 368
8.2.6 Creating a storage pool . . . . . 369
8.2.7 Modifying a storage pool . . . . . 372
8.2.8 Deleting a storage pool . . . . . 373
8.2.9 Viewing storage pool properties . . . . . 374
8.2.10 Storage paths . . . . . 375
8.2.11 Point-in-Time Copy: Creating a session . . . . . 375
8.2.12 Creating a session: Verifying source-target relationship . . . . . 379
8.2.13 Continuous Synchronous Remote Copy: Creating a session . . . . . 385
8.2.14 Managing a Point-in-Time copy . . . . . 389
8.2.15 Managing a Continuous Synchronous Remote Copy . . . . . 395
8.3 Using Command Line Interface (CLI) for replication . . . . . 407
8.3.1 Session details . . . . . 409
8.3.2 Starting a session . . . . . 411
8.3.3 Suspending a session . . . . . 414
8.3.4 Terminating a session . . . . . 415

Chapter 9. Problem determination . . . . . 421
9.1 Troubleshooting tips: Host configuration . . . . . 422
9.1.1 IBM Director logfiles . . . . . 422
9.1.2 Using Event Action Plans . . . . . 422
9.1.3 Restricting discovery scope in TotalStorage Productivity Center . . . . . 423
9.1.4 Following discovery using Windows raswatch utility . . . . . 423
9.1.5 DB2 database checking . . . . . 423
9.1.6 IBM WebSphere tracing and logfile browsing . . . . . 428
9.1.7 SLP and CIM Agent problem determination . . . . . 429
9.1.8 Enabling SLP tracing . . . . . 430
9.1.9 ESS registration . . . . . 431
9.1.10 Viewing Event entries . . . . . 431
9.2 Replication Manager problem determination . . . . . 434
9.2.1 Diagnosing an indications problem . . . . . 435
9.2.2 Restarting the replication environment . . . . . 435
9.3 Enabling trace logging . . . . . 435
9.3.1 Enabling WebSphere Application Server trace . . . . . 435
9.4 Enabling trace logging . . . . . 444
9.4.1 ESS user authentication problem . . . . . 444
9.4.2 SVC Data collection task failure due to previous running task . . . . . 445

Chapter 10. Database management and reporting . . . . . 449
10.1 DB2 database overview . . . . . 450
10.2 Database purging in TotalStorage Productivity Center . . . . . 450
10.2.1 Performance Manager database panel . . . . . 451
10.3 IBM DB2 tool suite . . . . . 453
10.3.1 Command Line Tools . . . . . 454
10.3.2 Development Tools . . . . . 455
10.3.3 General Administration Tools . . . . . 456
10.3.4 Monitoring Tools . . . . . 457
10.4 DB2 Command Center overview . . . . . 457
10.4.1 Command Center navigation example . . . . . 458
10.5 DB2 Command Center custom report example . . . . . 462


10.5.1 Extracting LUN data report . . . . . 462
10.5.2 Command Center report . . . . . 465
10.6 Exporting collected performance data to a file . . . . . 481
10.6.1 Control Center . . . . . 482
10.6.2 Data extraction tools, tips and reporting methods . . . . . 485
10.7 Database backup and recovery overview . . . . . 490
10.8 Backup example . . . . . 494

Appendix A. TotalStorage Productivity Center DB2 table formats . . . . . 497
A.1 Performance Manager tables . . . . . 498
A.1.1 VPVPD table . . . . . 498
A.1.2 VPCFG table . . . . . 498
A.1.3 VPVOL table . . . . . 499
A.1.4 VPCCH table . . . . . 500

Appendix B. Worksheets . . . . . 505
B.1 User IDs and passwords . . . . . 506
B.1.1 Server information . . . . . 506
B.1.2 User IDs and passwords to lock the key files . . . . . 506
B.2 Storage device information . . . . . 508
B.2.1 IBM Enterprise Storage Server . . . . . 508
B.2.2 IBM FAStT . . . . . 509
B.2.3 IBM SAN Volume Controller . . . . . 510

Appendix C. Event management . . . . . 511
C.1 Event management introduction . . . . . 512
C.1.1 Understanding events and event actions . . . . . 512
C.1.2 Understanding event filters . . . . . 513
C.1.3 Event Actions . . . . . 519
C.1.4 Event Data Substitution . . . . . 521
C.1.5 Updating Event Plans, Filters, and Actions . . . . . 523

Related publications . . . . . 527
IBM Redbooks . . . . . 527
Other Publications . . . . . 527
Online resources . . . . . 528
How to get IBM Redbooks . . . . . 528
Help from IBM . . . . . 528

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529


Managing Disk Subsystems using IBM TotalStorage Productivity Center

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

Copyright IBM Corp. 2004, 2005. All rights reserved.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Eserver
e-business on demand
iSeries
z/OS
AIX
Cloudscape
Cube Views
CICS
DataJoiner
DB2 Universal Database
DB2
Enterprise Storage Server
ESCON
FlashCopy
Informix
Intelligent Miner
IBM
Lotus
MVS
NetView
OS/390
QMF
Redbooks
Redbooks (logo)
S/390
Tivoli Enterprise
Tivoli Enterprise Console
Tivoli
TotalStorage
WebSphere

The following terms are trademarks of other companies: Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. Excel, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. EJB, Java, JDBC, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.


Preface
IBM TotalStorage Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server, and FAStT. TotalStorage Productivity Center includes the IBM Tivoli Bonus Pack for SAN Management, bringing together device management with fabric management, to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator has the ability to configure storage devices, manage the devices, and view the Storage Area Network from a single point.

This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities.

This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation and configuration of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and how to use them. It is intended for anyone wanting to learn about TotalStorage Productivity Center and how it complements an on demand environment, and for those planning to install and use the product.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Mary Lovelace is a Consulting IT Specialist in the International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and systems support.

Jason Bamford is a Certified IT Specialist in the IBM Software Business, United Kingdom. He has 21 years of customer experience in finance, commercial, and public sector accounts, deploying mid-range systems in AIX, Windows, and other UNIX variants. An IBM employee for the past eight years, Jason specializes in IBM software storage products and is a subject matter expert in the UK for Tivoli Storage Manager.

Dariusz Ferenc is a Technical Support Specialist with the Storage Systems Group at IBM Poland. He has been with IBM for four years and has nearly 10 years of experience in storage systems. He provides technical support in the CEMA region and is an IBM Certified Specialist in various storage products. His responsibilities include providing technical support and designing storage solutions. Darek holds a degree in Computer Science from the Poznan University of Technology, Poland.

Madhav Vaze is an Accredited Senior IT Specialist and ITS Storage Engagement Lead in Singapore, specializing in storage solutions for Open Systems. Madhav has 19 years of experience in the IT services industry and five years of experience in IBM storage hardware and software. He has acquired the Brocade BFCP and SNIA professional certifications.


The team: Dariusz, Jason, Mary, Madhav

Thanks to the following people for their contributions to this project:

Sangam Racherla
International Technical Support Organization, San Jose Center

Bob Haimowitz
ITSO Raleigh Center

Diana Duan
Michael Liu
Richard Kirchofer
Paul Lee
Thiha Than
Bill Warren
Martine Wedlake
IBM San Jose, California

Mike Griese
Technical Support Marketing Lead

Scott Drummond
Program Director, Storage Networking

Curtis Neal
Scott Venuti
Open Systems Demo Center, San Jose

Russ Smith
Storage Software Project Management

Jeff Ottman
Systems Group TotalStorage Education Architect

Doug Dunham
Tivoli Swat Team


Ramani Routray
Almaden Research Center

The original authors of this book are:
Ivan Aliprandi
William Andrews
John A. Cooper
Daniel Demer
Werner Eggli
Tom Smythe
Peter Zerbini

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an Internet note to:


redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099



Chapter 1. IBM TotalStorage Productivity Center overview


IBM TotalStorage Productivity Center, part of the IBM TotalStorage open software family, is designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), and IBM TotalStorage DS4000 series. TotalStorage Productivity Center is a solution for customers who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface.

While the focus of this book is the IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication components of the IBM TotalStorage Productivity Center, this chapter provides an overview of the entire IBM TotalStorage Open Software Family.


1.1 Introduction to IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center consists of software components which enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the Storage Management Initiative Specification (SMI-S) issued by the Storage Networking Industry Association (SNIA), a standard that addresses the interoperability of storage hardware and software within a SAN.

1.1.1 Standards organizations and standards


Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN management standards bodies

Key standards for storage management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage; at the time of writing, the CIM schema was at Version 2.7.2.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).


1.2 IBM TotalStorage Open Software family


The IBM TotalStorage Open Software Family is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management. The On Demand storage environment is shown in Figure 1-2.

The hardware infrastructure is a complete range of IBM storage hardware and devices providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. Storage virtualization is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services.

The next layer is composed of:
- Storage infrastructure management, to help enterprises understand and proactively manage their storage infrastructure in the on demand world
- Hierarchical storage management, to help control growth
- Archive management, to manage the cost of storing huge quantities of data
- Recovery management, to ensure recoverability of data

The top layer is storage orchestration, which automates workflows to help eliminate human error.

Figure 1-2 Enabling customer to move toward On Demand

Previously we discussed the next steps, or entry points, into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3 on page 4.


Figure 1-3 IBM TotalStorage open software family

1.3 IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs.

The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager) and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager).

Taking a closer look at storage infrastructure management (see Figure 1-4 on page 5), we focus on four subject matter experts to empower storage administrators to do their work effectively:
- Data subject matter expert
- SAN Fabric subject matter expert
- Disk subject matter expert
- Replication subject matter expert


Figure 1-4 Centralized, automated storage infrastructure management

1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
The Data subject matter expert has intimate knowledge of how storage is used, for example whether the data is used by a file system or a database application. Figure 1-5 on page 6 shows the role of the Data subject matter expert which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).


Figure 1-5 Monitor and Configure the Storage Infrastructure Data area

Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and to utilize their storage resources more efficiently. TotalStorage Productivity Center for Data gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. It allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control.

Features of the TotalStorage Productivity Center for Data are:
- Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used.
- File-system and file-level evaluation, which uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up, and managed.
- Automated control through policies that are customizable with actions that can include centralized alerting, distributed responsibility, and fully automated response.
- Prediction of future growth and future at-risk conditions with historical information.

Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently and more accurately predict future storage growth.
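As a rough illustration of the file-level evaluation described above, the sketch below scans a directory tree for files that are both large and stale, and totals the space that could be reclaimed by archiving or deleting them. This is a minimal stand-in written for this discussion, not TotalStorage Productivity Center code; the function name and thresholds are invented for the example.

```python
import os
import time

def find_archive_candidates(root, min_size_bytes, min_age_days, now=None):
    """Walk a directory tree and flag files that are both large and stale.

    Returns (candidates, reclaimable_bytes). This mirrors the kind of
    file-level evaluation described above, not the product's own logic.
    """
    now = now if now is not None else time.time()
    cutoff = now - min_age_days * 86400
    candidates = []
    reclaimable = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
            if st.st_size >= min_size_bytes and st.st_mtime <= cutoff:
                candidates.append(path)
                reclaimable += st.st_size
    return candidates, reclaimable
```

A report produced by a scan like this is the raw material for the kind of archive and deletion policies the product automates centrally.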


TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:
- Storage from a host perspective: manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files
- Storage from an application perspective: monitor and manage the storage activity inside different database entities, including instance, tablespace, and table
- Storage utilization: monitor utilization and provide chargeback information
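To make the host-perspective monitoring idea concrete, here is a small sketch of how agent-reported capacity figures might be rolled up on a server and checked against a utilization alert level. It is illustrative only; the data layout and function name are assumptions for this example, not the product's repository schema.

```python
def summarize_agent_reports(reports, alert_pct=90.0):
    """Aggregate capacity reports from agents into repository-style rows.

    `reports` maps host -> list of (filesystem, capacity_bytes, used_bytes).
    Returns (rows, alerts): one row per filesystem with a utilization
    percentage, plus the (host, filesystem) pairs at or above `alert_pct`.
    """
    rows, alerts = [], []
    for host, filesystems in sorted(reports.items()):
        for fs, capacity, used in filesystems:
            pct = 100.0 * used / capacity if capacity else 0.0
            rows.append({"host": host, "filesystem": fs,
                         "capacity": capacity, "used": used,
                         "pct_used": round(pct, 1)})
            if pct >= alert_pct:
                alerts.append((host, fs))
    return rows, alerts
```

The same per-filesystem rows, attributed to users or departments, are also the basis for the chargeback reporting mentioned above.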

Architecture
The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts.

With TotalStorage Productivity Center for Data, you can:
- Monitor virtually any host
- Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network

For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

1.3.2 Fabric subject matter expert: Productivity Center for Fabric


The storage infrastructure management for Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool: one that provides a single point of operation and can perform all the SAN management tasks. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center.

The Fabric subject matter expert is the expert in the SAN. Its role is:
- Discovery of fabric information
- Provide the ability to specify fabric policies:
  - Which HBAs to use for each host and for what purpose
  - Objectives for zone configuration (for example, shielding host HBAs from one another, and performance)
- Automatically modify the zone configuration

TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in:
- Improved application availability:
  - Predicting storage network failures before they happen, enabling preventive maintenance
  - Accelerating problem isolation when failures do happen


- Optimized storage resource utilization, by reporting on storage network performance
- Enhanced storage personnel productivity: Tivoli SAN Manager creates a single point of control, administration, and security for the management of heterogeneous storage networks

Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert.

Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area
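One common zoning objective mentioned above, shielding host HBAs from one another, is typically met with single-initiator zoning: each zone contains exactly one host HBA plus the storage ports it needs. The sketch below is a conceptual illustration only; the member names and the zone-naming scheme are hypothetical, and real zone changes go through the switch vendor's interfaces.

```python
def single_initiator_zones(host_hbas, storage_ports):
    """Build one zone per host HBA, each containing that HBA plus every
    storage port, so host HBAs never share a zone with one another."""
    return {
        "zone_%s" % hba: sorted([hba] + list(storage_ports))
        for hba in host_hbas
    }
```

With this layout every host HBA can reach the storage, but no zone ever places two host HBAs in the same zone, which is the shielding objective described above.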

TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. It can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric:
- Manages fabric devices (switches) through outband management
- Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host)
- Monitors the network and collects events and traps
- Launches vendor-provided SAN element management applications from the TotalStorage Productivity Center for Fabric Console
- Discovers and manages iSCSI devices
- Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor)

TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.


TotalStorage Productivity Center for Fabric components


The major components of the TotalStorage Productivity Center for Fabric include:
- A manager or server, running on a SAN managing server
- Agents, running on one or more managed hosts
- A management console, which is by default on the Manager system, plus optional additional remote consoles
- Outband agents, consisting of vendor-supplied MIBs for SNMP

There are two additional components which are not included in the TotalStorage Productivity Center:
- IBM Tivoli Enterprise Console (TEC), which is used to receive TotalStorage Productivity Center for Fabric generated events. Once forwarded to TEC, these can be consolidated with events from other applications and acted on according to enterprise policy.
- IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for analysis, giving management the ability to access and analyze information about its business.

The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent.

TotalStorage Productivity Center for Fabric Server


- Performs initial discovery of the environment:
  - Gathers and correlates data from agents on managed hosts
  - Gathers data from SNMP (outband) agents
- Graphically displays SAN topology and attributes
- Provides customized monitoring and reporting through NetView
- Reacts to operational events by changing its display
- (Optionally) forwards events to Tivoli Enterprise Console or SNMP managers

TotalStorage Productivity Center for Fabric Agent


- Gathers information about:
  - SANs, by querying switches and devices for attribute and topology information
  - Host-level storage, such as file systems and LUNs
  - Events and other information detected by HBAs
- Forwards topology and event information to the Manager

Discover SAN components and devices


TotalStorage Productivity Center for Fabric uses two methods to discover information about the SAN: outband discovery and inband discovery. Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries invoked over the IP network. Outband management and discovery are normally used to manage devices such as switches and hubs which support SNMP.


In outband discovery, all communications occur over the IP network:
- TotalStorage Productivity Center for Fabric requests information over the IP network from a switch, using SNMP queries on the device.
- The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used:
- TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host.
- That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network.
- The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network.

TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.
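The two discovery paths ultimately feed one topology view. The sketch below shows one simple way records from both paths could be correlated by worldwide name (WWN), with inband (agent-gathered) attributes layered over outband (SNMP-gathered) ones. This is a conceptual illustration of the correlation idea, not how TotalStorage Productivity Center for Fabric is actually implemented.

```python
def merge_discovery(outband, inband):
    """Correlate device records discovered outband (SNMP over IP) and
    inband (agent queries over Fibre Channel) into one view keyed by WWN.
    Attributes from both paths are combined; when both report the same
    attribute, the inband value wins, reflecting the richer host-side data.
    """
    merged = {}
    for wwn, attrs in outband.items():
        merged[wwn] = dict(attrs)
    for wwn, attrs in inband.items():
        merged.setdefault(wwn, {}).update(attrs)
    return merged
```

If either path goes quiet (for example, the Fibre Channel network is unavailable), the merged view simply carries whatever the remaining path reports, which matches the monitoring behavior described above.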

TotalStorage Productivity Center for Fabric benefits


TotalStorage Productivity Center for Fabric discovers the SAN infrastructure and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can report on faults on components (either individually or in groups, or smartsets, of components). This helps increase data availability for applications, so the company can either be more efficient or maximize the opportunity to produce revenue.

TotalStorage Productivity Center for Fabric helps the storage administrator:
- Prevent faults in the SAN infrastructure through reporting and proactive maintenance
- Identify and resolve problems in the storage infrastructure quickly when a problem occurs
- Provide fault isolation of SAN links

For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848.

1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
The Disk subject matter expert's job is to manage the disk systems. It discovers and classifies all disk systems that exist and draws a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor and configure disk systems, create disks, and perform LUN masking. It also provides performance trending and performance threshold I/O analysis for both real disks and virtual disks, as well as automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component).

The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 11. The disk systems monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk.


Figure 1-7 Monitor and configure the Storage Infrastructure Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
TotalStorage Productivity Center for Disk provides the following functions:
- Collect data from devices

  The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
- Configure performance thresholds

  You can use the Productivity Center for Disk to set performance thresholds for each device type. Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs.


  You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
- Monitor performance metrics across storage subsystems from a single console
- Receive timely alerts to enable event action based on customer policies
- View performance data from the Productivity Center for Disk database

  You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed.

  You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be "started", which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
- Focus on storage optimization through identification of the best LUN

  The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several user-controlled variables, such as the required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added to the ESS, for example, the Performance Manager can seamlessly select the best possible LUN.

For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk, refer to Chapter 6, "TotalStorage Productivity Center for Disk use" on page 227.

1.3.4 Replication subject matter expert: Productivity Center for Replication


The Replication subject matter expert's job is to provide a single point of control for all replication activities. This role is filled by the TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, the Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure that the source and target volume relationships are set up. Given a set of source volumes that represent an application, the Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application. Productivity Center for Replication will start up all replication pairs and monitor them to completion. If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, resync them, and resume the replication. The Productivity Center for Replication provides complete management of the replication process.

The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication.

Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area

Functions
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy) and Point-in-Time Copy (also known as FlashCopy). At this time, TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:


- Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
- Set up a group for replication.
- Create, save, and name a replication task.
- Schedule a replication session with the user interface:
  - Create Session Wizard
  - Select Source Group
  - Select Copy Type
  - Select Target Pool
  - Save Session

- Start a replication session.

A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions, refer to Chapter 8, "TotalStorage Productivity Center for Replication use" on page 355.

1.4 IBM TotalStorage Productivity Center


All the subject matter experts, for Data, Fabric, Disk, and Replication, are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family. It is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs.

The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using the existing storage management products (Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk, and Productivity Center for Replication) from one physical place. The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad, as shown in Figure 1-9 on page 15.


Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM's e-business On Demand technology. In an On Demand environment, we need the ability to provide IT resources on demand, when the resources are needed by an application to support the customer's business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality.

In the future, more automation will be required to handle the huge amount of work in the provisioning area, automation like that provided by the IBM TotalStorage Productivity Center launch pad. Automation means workflow, and workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today, IBM uses the IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager to satisfy resource requests in the e-business On Demand environment in the server arena; these products provide the provisioning in the e-business On Demand environment.

1.4.1 Productivity Center for Disk and Productivity Center for Replication
The Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console. This software solution is designed specifically for managing networked storage components based on SMI-S, including:

- IBM TotalStorage SAN Volume Controller
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Fibre Array Storage Technology (FAStT)
- IBM TotalStorage DS4000 series
- SMI-S enabled devices


Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and with SAN Management products from other vendors.

In a SAN environment, multiple devices work together to create a storage solution. The Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided.

As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 17 provides an overview of Productivity Center for Disk and Productivity Center for Replication.


[Figure 1-11 components: Performance Manager, Replication Manager, and Device Manager, layered on IBM Director and IBM TotalStorage Productivity Center for Fabric, with WebSphere Application Server and DB2 underneath.]

Figure 1-11 Productivity Center overview

The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:

- Device Manager: a common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
- Performance Manager: provided by Productivity Center for Disk
- Replication Manager: provided by Productivity Center for Replication

Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset, configuration, and availability data from the supported devices; and providing a limited topology view of the storage usage relationships between those devices. The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console, as shown in Figure 1-12 on page 18.
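Conceptually, SLP discovery amounts to asking for all registered services of a given service type. The sketch below is not real SLP wire-protocol code; the registry dictionary stands in for the advertisements an SLP agent would hold, and the service URLs are invented, assuming the `service:wbem` type used to advertise CIM Agents.

```python
# Conceptual sketch of SLP-style lookup: return every advertised service
# of a requested type.  A real implementation would send SLP service
# requests on the network; here a dict stands in for the advertisements.

def find_services(registry, service_type):
    """Return the service URLs registered under the given service type."""
    return [url for url, stype in registry.items() if stype == service_type]

registry = {
    "service:wbem:https://9.1.38.25:5989": "service:wbem",      # a CIM Agent
    "service:printer:lpr://host/queue":    "service:printer",   # unrelated
}
print(find_services(registry, "service:wbem"))
# -> ['service:wbem:https://9.1.38.25:5989']
```

Device Manager would then create a managed object for each discovered service URL.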


Figure 1-12 IBM Director Console

Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts with IBM Tivoli SAN Manager, when it is installed, to provide some SAN management functionality.

The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.

SAN Management
When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that will be exploited are:

- The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
- The ability to retrieve and to modify the zoning configuration on the SAN
- The ability to register for event notification, to ensure that Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when hosts' LUN configurations change


Performance Manager function


The Performance Manager function provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the Performance Manager is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The Performance Manager enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
- Collect data from devices

  The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.

- Configure performance thresholds

  You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.

  The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to the checking being performed by active performance collection tasks. Examples of threshold metrics include:

  - Disk utilization value
  - Average cache hold time
  - Percent of sequential I/Os
  - I/O rate
  - NVS full value
  - Virtual disk I/O rate
  - Managed disk I/O rate

  A user interface supports threshold settings, enabling a user to:

  - Modify a threshold property for a set of devices of like type.
  - Modify a threshold property for a single device.
  - Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices.

Chapter 1. IBM TotalStorage Productivity Center overview

19

  - Reset a threshold property to the IBM-recommended value (if defined) for a single device.
  - Show a summary of threshold properties for all of the devices of like type.
  - View performance data from the Performance Manager database.
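The settings operations listed above can be sketched roughly as follows. The device names, metric name, and recommended value are invented; this is a conceptual illustration, not the product's actual interface.

```python
# Rough sketch of threshold-settings management: modify a property for a
# single device or for all devices of like type, and reset it to an
# IBM-recommended default when one is defined.  All data is hypothetical.

RECOMMENDED = {"ESS": {"nvs_full_pct": 75.0}}   # assumed recommended values

def set_threshold(settings, device_type, prop, value, device=None):
    """Set prop for one named device, or for every device of device_type."""
    for name, dev in settings.items():
        if dev["type"] == device_type and (device is None or name == device):
            dev["thresholds"][prop] = value

def reset_threshold(settings, device_type, prop, device=None):
    """Reset prop to the recommended value for the type, if one is defined."""
    value = RECOMMENDED.get(device_type, {}).get(prop)
    if value is not None:
        set_threshold(settings, device_type, prop, value, device)

settings = {
    "ess-01": {"type": "ESS", "thresholds": {"nvs_full_pct": 90.0}},
    "ess-02": {"type": "ESS", "thresholds": {"nvs_full_pct": 85.0}},
}
reset_threshold(settings, "ESS", "nvs_full_pct")   # both devices -> 75.0
```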

Gauges
The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data.

The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties.

Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.
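When a gauge is started or refreshed, a query along these lines might be built against the sample tables. The table and column names (`perf_samples`, `component`, `sample_time`) are hypothetical; this only sketches the idea of turning gauge properties into a parameterized query.

```python
# Sketch: turn a gauge's properties (metrics, component, date/time range)
# into a parameterized SQL query.  Table and column names are invented.

def gauge_query(metrics, component, start, end):
    cols = ", ".join(["sample_time"] + list(metrics))
    sql = (f"SELECT {cols} FROM perf_samples "
           "WHERE component = ? AND sample_time BETWEEN ? AND ? "
           "ORDER BY sample_time")
    return sql, (component, start, end)

sql, params = gauge_query(["io_rate", "cache_hit_pct"], "ess-01",
                          "2005-09-01-00.00.00", "2005-09-02-00.00.00")
print(sql)
```

The returned rows would then be formatted for the gauge's graphical or tabular display.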

Database services for managing the collected performance data


The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potential volumes of data.

Database purge function

A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function, and it enables you to specify the data to purge, allowing important data to be maintained for trend purposes:

- You can specify to purge all of the sample data from all types of devices older than a specified number of days.
- You can specify to purge the data associated with a particular type of device.
- If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged.
- You can specify the number of days that data is to remain in the database before being purged. Sample data and, optionally, exception data older than the specified number of days will be purged.

A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.
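The purge options above might translate into a DELETE statement along these lines. The table and column names are invented, and a real implementation would use parameter markers rather than string interpolation; this is only a sketch of how the options combine, assuming DB2-style date arithmetic.

```python
# Sketch of the purge options: retention in days, optional device-type
# filter, and optional exclusion of samples that exceeded a threshold.
# Table/column names are hypothetical; illustration only (no SQL escaping).

def purge_sql(retention_days, device_type=None, keep_exceptions=False):
    clauses = [f"sample_time < CURRENT TIMESTAMP - {retention_days} DAYS"]
    if device_type is not None:
        clauses.append(f"device_type = '{device_type}'")
    if keep_exceptions:
        clauses.append("threshold_exceeded = 0")   # keep exception rows
    return "DELETE FROM perf_samples WHERE " + " AND ".join(clauses)

print(purge_sql(14, device_type="ESS", keep_exceptions=True))
```

A table reorganization would follow the delete, as described above.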


Database information function

Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the database percent full. This function can be invoked from either the Web user interface or the CLI.

Volume Performance Advisor


The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It also uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables that are user-controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN.
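The core idea of the advisor can be reduced to a toy selection rule: among candidate locations with enough free capacity, prefer the one whose historical utilization over the relevant access window is lowest. The candidate data and the single-number utilization score below are simplifications invented for illustration; the real advisor weighs many more factors.

```python
# Toy sketch of performance-aware placement: filter candidates by free
# capacity, then pick the least-utilized one.  All names and numbers
# here are invented for illustration.

def best_placement(candidates, required_gb):
    """Return the name of the best candidate, or None if none fits."""
    eligible = [c for c in candidates if c["free_gb"] >= required_gb]
    if not eligible:
        return None
    return min(eligible, key=lambda c: c["est_utilization"])["name"]

candidates = [
    {"name": "rank-A", "free_gb": 120, "est_utilization": 0.62},
    {"name": "rank-B", "free_gb": 200, "est_utilization": 0.35},
    {"name": "rank-C", "free_gb": 40,  "est_utilization": 0.10},
]
print(best_placement(candidates, 100))   # -> rank-B (fits, least busy)
```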

Replication Manager function


Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy) and Point-in-Time Copy (also known as FlashCopy). Currently, replication functions are provided for the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.
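The Group-to-Pool copy behavior can be sketched as a matching step: each source volume takes the first free target in the pool that is large enough, and used targets leave the pool. The volume names and sizes are invented; the product's actual mapping rules are more involved.

```python
# Sketch of mapping a replication Group onto a target Pool: match each
# source to a sufficiently large free target and consume it from the
# pool.  Names and sizes are hypothetical; illustration only.

def map_group_to_pool(sources, pool):
    """Return (source, target) name pairs; matched targets leave the pool."""
    pairs = []
    for src_name, src_size in sources:
        target = next(((n, s) for n, s in pool if s >= src_size), None)
        if target is None:
            raise ValueError(f"no eligible target for {src_name}")
        pool.remove(target)               # target is now in use
        pairs.append((src_name, target[0]))
    return pairs

pool = [("tgt1", 10), ("tgt2", 20)]
print(map_group_to_pool([("srcA", 10), ("srcB", 16)], pool))
# -> [('srcA', 'tgt1'), ('srcB', 'tgt2')]; the pool is now empty
```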

1.4.2 Event services


At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from actions that might be taken. An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. The notification of that change can be
generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director Event Management encompasses the following concepts:

- Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
- Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
- Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
- Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.

The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly which action plans are invoked for each specific event.

Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized, and action plans can be created based on filter criteria for these events.
The default action plan is to log all events in the event log. Productivity Center creates additional event families, and event types within those families, that are listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans are provided. An example is the action to indicate the amount of historical data to be kept.
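An event filter of the kind built with the Event Action Plan Builder boils down to a predicate over event attributes. The field names and severity scale below are invented for illustration; a real filter can also match on day/time of occurrence and event category.

```python
# Conceptual sketch of event filtering: an event triggers a plan's
# actions only if its type and severity match the filter.  The fields
# and severity ordering are invented, not IBM Director's actual model.

SEVERITY = {"harmless": 0, "warning": 1, "critical": 2}

def matches(event, flt):
    """True if the event's type is filtered-for at sufficient severity."""
    return (event["type"] in flt["types"]
            and SEVERITY[event["severity"]] >= SEVERITY[flt["min_severity"]])

flt = {"types": {"storage.threshold"}, "min_severity": "warning"}
event = {"type": "storage.threshold", "severity": "critical"}
print(matches(event, flt))   # -> True: the plan's actions would run
```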

1.5 Taking steps toward an On Demand environment


So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand. An On Demand operating environment must be:

- Flexible
- Self-managing
- Scalable
- Economical
- Resilient
- Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several steps that you can take to move toward the On Demand environment:

- Address constant changes to the storage infrastructure (upgrading or changing hardware, for example) with virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures, and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- Ultimately, eliminate human error by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, there will be results: improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.


Chapter 2. Key concepts

This chapter gives you an understanding of the basic concepts that you must know in order to use TotalStorage Productivity Center. These concepts include standards for storage management, Service Location Protocol (SLP), Common Information Model (CIM) agent, and Common Information Model Object Manager (CIMOM).

Copyright IBM Corp. 2004, 2005. All rights reserved.


2.1 Standards organizations and standards


Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 2-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 2-1 SAN standards bodies

Key standards for storage management are:

- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage which, at the time of writing, was at Version 2.7.2 of the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).

2.1.1 CIM/WEB management model


CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as storage subsystems, Fibre Channel switches, and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.


CIM/WBEM technology uses a powerful human and machine readable language called the managed object format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.

2.2 Storage Networking Industry Association


The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well established storage component vendors as well as emerging storage technology companies. The SNIA mission is to ensure that storage networks become efficient, complete, and trusted solutions across the IT community. The SNIA vision is to provide a point of cohesion for developers of storage and networking products in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market.

2.2.1 The SNIA Shared Storage Model


IBM is an active member of SNIA and fully supports SNIA's goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 2-2.

Figure 2-2 The SNIA Storage Model

IBM is committed to deliver best-of-breed products in all aspects of the SNIA storage model, including:

Chapter 2. Key concepts

27

Block aggregation

The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy. Block aggregation, or block-level virtualization, is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Space management, through combining or splitting native storage into new, aggregated block storage
- Striping, through spreading the aggregated block storage across several native storage devices
- Redundancy, through point-in-time copy and both local and remote mirroring

File aggregation

The file/record layer in the SNIA model is responsible for packing items such as files and databases into larger entities such as block-level volumes and storage devices. File aggregation, or file-level virtualization, is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers

In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions adhere to open industry standards.
For more information about SMI-S, CIM, and WBEM, see the SNIA and DMTF Web sites:
http://www.snia.org
http://www.dmtf.org

2.2.2 SMI Specification


SNIA has fully adopted and enhanced the CIM standard for storage management in its SMI Specification (SMI-S). SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMI-S is to standardize the management interfaces so that management applications can use them to provide cross-device management. This means that a newly introduced device can be managed immediately, because it conforms to the standards. SMI-S extends CIM/WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S.
- A complete, unified, and rigidly specified object model: SMI-S defines profiles and recipes within CIM that enable a management client to reliably use a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.


- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMI-S compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents.
- Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. SNIA will also provide interoperability tests that help vendors verify that their applications and devices conform to the standard.

2.2.3 Integrating existing devices into the CIM model


Because these standards are still evolving, we cannot expect all devices to support the native CIM interface. For this reason, SMI-S introduces CIM Agents and CIM Object Managers, which bridge proprietary device management interfaces to the device management models and protocols used by SMI-S. An agent serves one device, and an object manager serves a set of devices. This type of operation is also called the proxy model and is shown in Figure 2-3. The CIM Agent or CIM Object Manager (CIMOM) translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage Enterprise Storage Server includes a CIMOM inside it.

Figure 2-3 CIM Agent / Object Manager

In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the Embedded Model in Figure 2-3 on page 29. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.

2.2.4 CIM Agent implementation


When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables TotalStorage Productivity Center for Data, TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-4 is an overview of the CIM agent.

Figure 2-4 CIM agent overview

The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface.
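To make the XML-over-HTTP transaction style concrete, the sketch below builds a simplified CIM-XML EnumerateInstances payload in Python. Real CIM-XML messages carry more attributes than shown, and the host, port, endpoint path, and namespace are placeholders, not values from this book's environment.

```python
def build_cimxml_enumerate(classname: str, namespace: str = "root/ibm") -> str:
    """Build a (simplified) CIM-XML EnumerateInstances request body.

    This only illustrates the shape of the XML-over-HTTP transactions
    that SMI-S mandates; a production client would use a full CIM-XML
    library rather than hand-built strings.
    """
    ns_parts = "".join(
        f'<NAMESPACE NAME="{p}"/>' for p in namespace.split("/")
    )
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<CIM CIMVERSION="2.0" DTDVERSION="2.0">'
        '<MESSAGE ID="1" PROTOCOLVERSION="1.0"><SIMPLEREQ>'
        '<IMETHODCALL NAME="EnumerateInstances">'
        f'<LOCALNAMESPACEPATH>{ns_parts}</LOCALNAMESPACEPATH>'
        '<IPARAMVALUE NAME="ClassName">'
        f'<CLASSNAME NAME="{classname}"/></IPARAMVALUE>'
        '</IMETHODCALL></SIMPLEREQ></MESSAGE></CIM>'
    )

payload = build_cimxml_enumerate("CIM_StorageVolume")

# Sending it is an ordinary HTTP POST to the CIM agent, for example with
# http.client (placeholder host and endpoint; not executed here):
#   conn = http.client.HTTPConnection("cimom.example.com", 5988)
#   conn.request("POST", "/cimom", payload,
#                {"Content-Type": "application/xml; charset=utf-8"})

print(payload[:60])
```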

2.2.5 CIM Object Manager


The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type, and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between catcher and sender use the language and models defined by the SMI-S standard.


This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions. Figure 2-5 shows the CIM enablement model.

Figure 2-5 CIM enablement model
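The proxy pattern described in this section can be reduced to a small sketch: one CIMOM process hosting several device providers and routing each request to the provider registered for the target device type. All class and method names below are invented for illustration and are not the actual CIMOM or provider APIs.

```python
class ESSProvider:
    """Stand-in for an ESS device provider plug-in."""
    device_type = "ESS"

    def handle(self, request):
        # Would translate the CIM request into ESS-specific API calls.
        return f"ESS handled {request}"

class SVCProvider:
    """Stand-in for a SAN Volume Controller provider plug-in."""
    device_type = "SVC"

    def handle(self, request):
        return f"SVC handled {request}"

class CIMOM:
    """The 'catcher' for management requests: routes each request to
    the provider registered for the target device type."""

    def __init__(self):
        self._providers = {}

    def register(self, provider):
        self._providers[provider.device_type] = provider

    def dispatch(self, device_type, request):
        provider = self._providers.get(device_type)
        if provider is None:
            raise KeyError(f"no provider for device type {device_type}")
        return provider.handle(request)

# One CIMOM installation serving more than one device type:
cimom = CIMOM()
cimom.register(ESSProvider())
cimom.register(SVCProvider())
print(cimom.dispatch("SVC", "EnumerateInstances"))  # SVC handled EnumerateInstances
```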

2.3 Common Information Model (CIM)


The Common Information Model (CIM) Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the common information model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors. A CIM agent typically involves the following components:
- Agent code: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device.
- CIM Object Manager (CIMOM): The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or device provider.
- Client application: A storage management program, like TotalStorage Productivity Center, that initiates CIM requests to the CIM agent for the device.
- Device: The storage server that processes and hosts the client application requests.
- Device provider: A device-specific handler that serves as a plug-in for the CIM. That is, the CIMOM uses the handler to interface with the device.

- Service Location Protocol (SLP): A directory service that the client application calls to locate the CIMOM.

2.3.1 How the CIM Agent works


The CIM Agent typically works in the following way; the step numbers correspond to the callouts in Figure 2-6:
1. The client application locates the CIMOM by calling an SLP directory service.
2.-3. When the CIMOM is first invoked, it registers itself with SLP and supplies its location, IP address, port number, and the type of service it provides.
4. With this information, the client application starts to communicate directly with the CIMOM.
5. The client application then sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request.
6. The CIMOM directs the requests to the appropriate functional component of the CIMOM or to a device provider.
7.-10. The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy the client application requests and return the results.

Figure 2-6 CIM Agent work flow
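The register-then-locate part of this flow can be walked through with in-process stand-ins for the real network protocols. The directory class, the service type string, and the CIMOM address below are illustrative assumptions, not the actual SLP wire protocol.

```python
class SLPDirectory:
    """Toy stand-in for an SLP directory service."""

    def __init__(self):
        self._services = {}

    def register(self, service_type, url):
        # CIMOM start-up: register location and service type (steps 2-3).
        self._services.setdefault(service_type, []).append(url)

    def lookup(self, service_type):
        # Client: locate the CIMOM by service type (step 1).
        return self._services.get(service_type, [])

slp = SLPDirectory()

# The CIMOM registers itself, supplying its location (placeholder address).
slp.register("service:wbem", "http://9.1.38.42:5988")

# The client application locates the CIMOM ...
urls = slp.lookup("service:wbem")
assert urls, "no CIMOM found"

# ... and from here on communicates with it directly (steps 4-5);
# the actual POST of CIM-XML requests is elided in this sketch.
print(urls[0])  # http://9.1.38.42:5988
```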

2.4 Service Location Protocol (SLP)


The Service Location Protocol (SLP) is an Internet Engineering Task Force (IETF) standard, documented in Request for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services.


SLP enables the discovery and selection of generic services, which could range in function from hardware services, such as those for printers or fax machines, to software services, such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end-user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user could specify a search for all available printers that support PostScript. Based on the given service type (printers) and the given attributes (PostScript), SLP searches the user's network for any matching services and returns the discovered list to the user.
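The style of query SLP enables, selecting services by type plus optional attributes instead of by host name, can be sketched as a simple matching function. The registry contents and attribute names below are invented examples, not real SLP registrations.

```python
# Invented example registrations: (service type, URL, attributes).
registry = [
    {"type": "service:printer", "url": "service:printer://prt1",
     "attrs": {"languages": {"PostScript", "PCL"}}},
    {"type": "service:printer", "url": "service:printer://prt2",
     "attrs": {"languages": {"PCL"}}},
    {"type": "service:wbem", "url": "service:wbem://cimom1",
     "attrs": {}},
]

def find_services(service_type, **required):
    """Return URLs of registered services matching the given service
    type and every required attribute value."""
    matches = []
    for svc in registry:
        if svc["type"] != service_type:
            continue
        if all(value in svc["attrs"].get(key, set())
               for key, value in required.items()):
            matches.append(svc["url"])
    return matches

# "All available printers that support PostScript":
print(find_services("service:printer", languages="PostScript"))
# ['service:printer://prt1']
```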

2.4.1 SLP architecture


The Service Location Protocol (SLP) architecture includes three major components: a service agent, a user agent, and a directory agent. The service agent and user agent are required components in an SLP environment, whereas the SLP directory agent is optional. Following is a description of these components:
- Service agent (SA): A process working on behalf of one or more network services to broadcast the services.
- User agent (UA): A process working on behalf of the user to establish contact with some network service. The UA retrieves network service information from the service agents or directory agents.
- Directory agent (DA): A process that collects network service broadcasts.

Note: The SLP directory agent is completely different and separate from the IBM Director Agent, which occupies the lowest tier in the IBM Director architecture.

2.4.2 SLP service agent


The Service Location Protocol (SLP) service agent (SA) is a component of the SLP architecture that works on behalf of one or more network services to broadcast the availability of those services. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available. The SA can run in the same process or in a different process as the service itself. But in either case, the SA supports registration and de-registration requests for the service. The service registers itself with the SA during startup, and removes the registration for itself during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration will be active. A service is required to reregister itself periodically, before the life-span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if
a service becomes inactive without removing the registration for itself, that old registration will be removed automatically when its life-span expires. The maximum life-span of a registration is 65,535 seconds (about 18 hours).
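The life-span mechanism can be sketched as a registration table in which every entry carries an expiry time and expired entries are purged. The class and method names are illustrative; only the 65,535-second maximum comes from the text above.

```python
import time

MAX_LIFESPAN = 65535  # seconds; the SLP maximum (about 18 hours)

class ServiceAgent:
    """Sketch of an SA registration table: every entry has a life-span,
    and entries are dropped once it expires, which is why services must
    re-register periodically."""

    def __init__(self):
        self._registrations = {}  # service URL -> expiry timestamp

    def register(self, url, lifespan=MAX_LIFESPAN, now=None):
        now = time.time() if now is None else now
        self._registrations[url] = now + min(lifespan, MAX_LIFESPAN)

    def active_services(self, now=None):
        now = time.time() if now is None else now
        # Purge expired entries, as happens when a life-span lapses
        # without the service re-registering.
        self._registrations = {u: t for u, t in self._registrations.items()
                               if t > now}
        return sorted(self._registrations)

sa = ServiceAgent()
sa.register("service:wbem://cimom1", lifespan=300, now=0)
sa.register("service:wbem://cimom2", lifespan=60, now=0)

# At t=120, cimom2's registration has expired and is purged:
print(sa.active_services(now=120))  # ['service:wbem://cimom1']
```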

2.4.3 SLP user agent


The Service Location Protocol (SLP) user agent (UA) is a process working on behalf of the user to establish contact with some network service. The UA retrieves service information from the service agents or directory agents. The UA is a component of SLP that is closely associated with a client application or a user who is searching for the location of one or more services on the network.

You can use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locator (URL) and any service attributes. You can then use the service's URL to connect to the service. The SLP UA locates the registered services based on a general description of the services that the user or client application has specified. This description usually consists of a service type and any service attributes, which are matched against the service URLs registered in the SLP service agents. The SLP UA usually runs in the same process as the client application, although it is not required to do so.

The SLP UA processes find requests by sending out multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA is, therefore, able to discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA using a unicast reply message. The SLP UA follows the multicast convergence algorithm, and sends out repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URL and any service attributes, is returned to the client application or user.
The client application or user is then responsible for contacting the individual services, as needed, using the service's URL (see Figure 2-7 on page 35).


Figure 2-7 Service Location Protocol user agent

An SLP UA is not required to discover all matching services that exist on the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets, which could be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs are able to recognize truncated service replies and establish TCP connections to retrieve all of the information about the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which could cut short the multicast convergence mechanism. The SLP administrator can mitigate this exposure by setting up one or more SLP DAs.
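The multicast convergence algorithm can be sketched as a loop that repeats the multicast request, each round excluding responders already heard from, until a round yields no new replies. The transport function below is a stub standing in for the real UDP I/O; the names and the two-replies-per-round behavior are invented for illustration.

```python
def discover(send_multicast_request, max_rounds=10):
    """Multicast convergence sketch: repeat until no new replies."""
    known = set()
    for _ in range(max_rounds):
        # In real SLP, the previous-responder list travels inside the
        # request so SAs that already answered stay silent.
        replies = send_multicast_request(exclude=known)
        new = set(replies) - known
        if not new:            # convergence: nothing new this round
            break
        known |= new
    return sorted(known)

# Stub transport: pretend only two SAs manage to answer per round,
# as might happen with UDP reply loss or truncation.
all_sas = {"sa1", "sa2", "sa3", "sa4"}

def fake_transport(exclude):
    return sorted(all_sas - exclude)[:2]

print(discover(fake_transport))  # ['sa1', 'sa2', 'sa3', 'sa4']
```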

2.4.4 SLP directory agent


The Service Location Protocol (SLP) directory agent (DA) is an optional component of SLP that collects network service broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP performance. The SLP DA can be thought of as an intermediate tier in the SLP architecture, placed between the user agents (UAs) and the service agents (SAs), such that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic on the network, and it protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment. Figure 2-8 on page 36 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.


Figure 2-8 SLP UA, SA and DA interaction

When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request and specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery, and it is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts up. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service Uniform Resource Locators (URLs) and attributes. Figure 2-9 on page 37 shows the interactions of UAs and SAs with DAs, during active DA discovery.


Figure 2-9 Service Location Protocol DA functions

The SLP DA functions very similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences, however, where DAs provide more functionality than SAs. One area, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery on DAs. One other difference is that when a DA first initializes, it sends out a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that might already be active on the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When the SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-10 on page 38 shows the interactions of DAs with SAs and UAs, during passive DA discovery.


Figure 2-10 Service Location Protocol passive DA discovery

2.4.5 Why use an SLP DA?


The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs must unicast to DAs for services, and SAs must register with DAs using unicast. The only SLP multicast traffic in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes; consequently, DAs within the UAs' scopes reduce multicast traffic. By eliminating multicast for normal UA requests, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. In networks where multicast routing is not enabled, you can configure SLP to use broadcast. However, broadcast is very inefficient, because it requires each host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets.

2.4.6 When to use DAs


Use DAs in your enterprise if any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or timeouts during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.


2.4.7 SLP configuration recommendation


Some configuration recommendations are provided for enabling TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration, SLP directory agent configuration, and environment configuration.

Router configuration
Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address 239.255.255.253 and port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.

SLP directory agent configuration


Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no services outside it. To allow TotalStorage Productivity Center to discover all of the devices, it must be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center Discovery Preference panel, as discussed in Configuring SLP Directory Agent addresses on page 41. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed. Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.

Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.

Environment configuration
It might be advantageous to configure SLP DAs in the following environments:
- In environments where there are other non-TotalStorage Productivity Center SLP UAs that frequently perform discovery on the available services, an SLP DA should be configured. This ensures that the existing SAs are not overwhelmed by too many service requests.
- In environments where there are many SLP SAs, a DA helps decrease network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.


2.4.8 Setting up the Service Location Protocol Directory Agent


You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center resides. Perform the following steps to set up the SLP DAs:
1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center to discover.
2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Find the slp.conf file in the CIM Agent installation directory and perform the following steps to edit the file:
   a. Make a backup copy of this file and name it slp.conf.bak.
   b. Open the slp.conf file and scroll down until you find (or search for) the line:
      ;net.slp.isDA = true
   c. Remove the semicolon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false.
   d. Save the file.
   e. Copy this file (or replace it if the file already exists) to the main Windows directory on Windows machines (for example, c:\winnt), or to the /etc directory on UNIX machines.
4. Restart the daemon process and the CIMOM process for the CIM Agent. Refer to the CIM Agent documentation for your operating system and Chapter 4, CIMOM installation and configuration on page 119 for more details. Note: The CIMOM process might start automatically when you restart the SLP daemon.
5. You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. In addition, the DA will automatically discover all other services registered with other SLP SAs in that subnet.
6. Go to the TotalStorage Productivity Center Discovery Preference settings panel (Figure 2-11 on page 41), and enter the host names or IP addresses of each of the machines that are running the SLP DAs that were set up in the prior steps. Note: Enter only a simple host name or IP address; do not enter a protocol or port number.

Result
When a discovery task is started (either manually or scheduled), TotalStorage Productivity Center will discover all devices on the subnet on which TotalStorage Productivity Center resides, and it will discover all devices with affinity to the SLP DAs that were configured.
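The slp.conf edit described above (back up the file, uncomment net.slp.isDA, force it to true) can be scripted. The sketch below demonstrates the logic on a scratch copy; in practice you would point it at the slp.conf in your CIM Agent installation directory, and the file contents shown are invented examples.

```python
import os
import shutil
import tempfile

def enable_directory_agent(conf_path):
    """Uncomment the net.slp.isDA property and set it to true."""
    shutil.copyfile(conf_path, conf_path + ".bak")  # keep a backup
    out = []
    with open(conf_path) as f:
        for line in f:
            # Match the (possibly commented-out) property line, e.g.
            #   ;net.slp.isDA = true
            if line.strip().lstrip(";").replace(" ", "").startswith("net.slp.isDA"):
                line = "net.slp.isDA = true\n"
            out.append(line)
    with open(conf_path, "w") as f:
        f.writelines(out)

# Demonstration on a scratch file (use your real slp.conf path instead):
conf = os.path.join(tempfile.mkdtemp(), "slp.conf")
with open(conf, "w") as f:
    f.write("net.slp.useScopes = DEFAULT\n;net.slp.isDA = true\n")

enable_directory_agent(conf)
with open(conf) as f:
    print(f.read(), end="")
```

After running, the scratch file contains the property uncommented and set to true, and a .bak copy preserves the original.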


2.4.9 Configuring SLP Directory Agent addresses


Perform this task to configure the addresses of the Service Location Protocol (SLP) Directory Agents (DAs) for TotalStorage Productivity Center. TotalStorage Productivity Center uses the DA addresses during device discovery. When configured with DAs, the TotalStorage Productivity Center SLP User Agent (UA) sends service requests to each of the configured DA addresses in turn to discover the registered services for each. The UA also continues discovery of registered services by performing multicast service discovery. This additional action ensures that registered services are discovered when going from an environment without DAs to one with DAs.

Note: If you have set up an SLP DA in the subnet that the TotalStorage Productivity Center server is in, you can register specific devices outside that subnet to be discovered and managed by TotalStorage Productivity Center. You do this by registering the CIM Agent to SLP. Refer to Chapter 4, CIMOM installation and configuration on page 119 for details.

Perform the following steps to configure the addresses for the SLP directory agents:
1. From the IBM Director menu bar, click Options. The Options menu is displayed.
2. From the TotalStorage Productivity Center selections, click Discovery Preferences. The Discovery Preferences menu is displayed.
3. Select the MDM SLP Configuration tab (see Figure 2-11).

Figure 2-11 MDM SLP Configuration panel

In the SLP Directory Agent Configuration section, type a valid Internet host name or an IP address (in dotted decimal format), and click Add. The host and scope information that you entered is displayed in the SLP Directory Agents Table.
- Click Change to change the host name or IP address for a selected item in the SLP Directory Agents Table.


- Click Remove to delete a selected item from the SLP Directory Agents Table.
- Click OK to add or change the directory agent information.
- Click Cancel to cancel adding or changing the directory agent information.

2.5 Productivity Center for Disk and Replication architecture


Figure 2-12 provides an overview of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication architecture. All of the components of TotalStorage Productivity Center are shown: Device Manager, TotalStorage Productivity Center for Disk, and TotalStorage Productivity Center for Replication. Keep in mind that TotalStorage Productivity Center for Replication and TotalStorage Productivity Center for Disk are separately orderable features of TotalStorage Productivity Center. The communication protocols and flow between supported devices, the TotalStorage Productivity Center Server, and the Console are also shown.

Figure 2-12 TotalStorage Productivity Center architecture overview

42

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Chapter 3.

TotalStorage Productivity Center suite installation


The components of the IBM TotalStorage Productivity Center can be installed individually using the component install as shipped, or they can be installed using the Suite Installer shipped with the package. In this chapter we document the use of the Suite Installer. Hints and tips based on our experience are included.

Copyright IBM Corp. 2004, 2005. All rights reserved.

43

3.1 Installing the IBM TotalStorage Productivity Center


IBM TotalStorage Productivity Center provides a suite installer that helps guide you through the installation process. You can also use the suite installer to install the components standalone. One advantage of the suite installer is that it will interrogate your system and install required prerequisites. The suite installer will install the following prerequisite products or components in this order:
- DB2 (required by all the managers)
- IBM Director (required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication)
- Tivoli Agent Manager (required by Fabric Manager and Data Manager)
- WebSphere Application Server (required by all the managers except for TotalStorage Productivity Center for Data)

The suite installer will then guide you through the installation of the IBM TotalStorage Productivity Center components. You can select more than one installation option at a time, but in this book we focus on the Productivity Center for Disk and Productivity Center for Replication install. The types of installation tasks are:
- IBM TotalStorage Productivity Center Manager Installations
- IBM TotalStorage Productivity Center Agent Installations
- IBM TotalStorage Productivity Center GUI/Client Installations
- Language Pack Installations
- Uninstall IBM TotalStorage Productivity Center Products

Considerations
If you want the ESS, SAN Volume Controller, or FAStT storage subsystems to be managed using IBM TotalStorage Productivity Center for Disk, you must install the prerequisite I/O Subsystem Licensed Internal Code and CIM Agent for the devices. See Chapter 4, CIMOM installation and configuration on page 119 for more information. If you are installing the CIM agent for the ESS, you must install it on a separate machine from the Productivity Center for Disk and Productivity Center for Replication code. Note that IBM TotalStorage Productivity Center does not support zLinux on S/390 and does not support Windows domains.

3.1.1 Configurations
The storage management components of IBM TotalStorage Productivity Center can be installed on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
- Windows 2000 Server with Service Pack 4
- Windows 2000 Advanced Server
- Windows 2003 Enterprise Server Edition

Note: Refer to the following Web sites for updated support summaries, including specific software, hardware, and firmware levels supported.
http://www.storage.ibm.com/software/index.html
http://www.ibm.com/software/support/


If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend you install IBM Tivoli Provisioning Manager on a separate Windows machine.

3.1.2 Installation prerequisites


This section lists the minimum prerequisites for installing TotalStorage Productivity Center.

Hardware
- Dual Pentium 4 or Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
- 80 GB available disk space

Database
The installation of DB2 Version 8.2 is part of the suite installer and is required by all the managers.

3.1.3 TCP/IP ports used by TotalStorage Productivity Center


This section provides an overview of the TCP/IP ports used by TotalStorage Productivity Center.

Productivity Center for Disk and Productivity Center for Replication


The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication installation program will pre-configure the TCP/IP ports used by WebSphere.
Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication Base

Base Port value   WebSphere port
2809              Bootstrap port
9080              HTTP Transport port
9443              HTTPS Transport port
9090              Administrative Console port
9043              Administrative Console Secure Server port
5559              JMS Server Direct Address port
5557              JMS Server Security port
5558              JMS Server Queued Address port
8980              SOAP Connector Address port
7873              DRS Client Address port
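Before installing, it can be useful to confirm that none of the pre-configured ports is already taken by another application. The short check below is an illustrative sketch, not part of the product; it simply attempts to bind each port and reports any that are busy.

```python
import socket

# WebSphere ports pre-configured by the Disk/Replication installer (Table 3-1).
WEBSPHERE_PORTS = [2809, 9080, 9443, 9090, 9043, 5559, 5557, 5558, 8980, 7873]

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the TCP port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Any ports in this list must be freed (or reconfigured) before the install.
busy = [p for p in WEBSPHERE_PORTS if not port_is_free(p)]
```

Run this on the intended TotalStorage Productivity Center server before starting the suite installer; an empty `busy` list means no port conflicts.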

TCP/IP ports used by agent manager


The Agent Manager uses these TCP/IP ports.
Chapter 3. TotalStorage Productivity Center suite installation

45

Table 3-2 TCP/IP ports for agent manager

Port value   Usage
9511         Registering agents and resource managers; providing configuration updates; renewing and revoking certificates
9512         Querying the registry for agent information; requesting ID resets; requesting updates to the certificate revocation list
9513         Requesting agent manager information; downloading the truststore file
80           Agent recovery service

TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric


The Fabric Manager uses these default TCP/IP ports.

Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric

Port value   Usage
8080         NetView Remote Web console
9550         HTTP port
9551         Reserved
9552         Reserved
9553         Cloudscape server port
9554         NVDAEMON port
9555         NVREQUESTER port
9556         SNMPTrapPort (port on which to get events forwarded from Tivoli NetView)
9557         Reserved
9558         Reserved
9559         Tivoli NetView Pager daemon
9560         Tivoli NetView Object Database daemon
9561         Tivoli NetView Topology Manager daemon
9562         Tivoli NetView Topology Manager socket
9563         Tivoli General Topology Manager
9564         Tivoli NetView OVs_PMD request services
9565         Tivoli NetView OVs_PMD management services
9566         Tivoli NetView trapd socket
9567         Tivoli NetView PMD service
9568         Tivoli NetView General Topology map service
9569         Tivoli NetView Object Database event socket
9570         Tivoli NetView Object Collection facility socket
9571         Tivoli NetView Web server socket
9572         Tivoli NetView SnmpServer

Fabric Manager remote console TCP/IP default ports


Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console

Port value   Usage
9560         HTTP port
9561         Reserved
9562         Reserved
9563         Tomcat's Local Server port
9564         Tomcat's warp port
9565         NVDAEMON port
9566         NVREQUESTER port
9569         Tivoli NetView Pager daemon
9570         Tivoli NetView Object Database daemon
9571         Tivoli NetView Topology Manager daemon
9572         Tivoli NetView Topology Manager socket
9573         Tivoli General Topology Manager
9574         Tivoli NetView OVs_PMD request services
9575         Tivoli NetView OVs_PMD management services
9576         Tivoli NetView trapd socket
9577         Tivoli NetView PMD service
9578         Tivoli NetView General Topology map service
9579         Tivoli NetView Object Database event socket
9580         Tivoli NetView Object Collection facility socket
9581         Tivoli NetView Web server socket
9582         Tivoli NetView SnmpServer

Fabric agents TCP/IP ports


Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric agents

Port value   Usage
9510         Common agent
9514         Used to restart the agent
9515         Used to restart the agent


3.1.4 Default databases created during install


During the installation of IBM TotalStorage Productivity Center we recommend that you use DB2 as the preferred database type. Table 3-6 lists the default databases that the installer will create during the installation.
Table 3-6 Default DB2 databases

Application                                                                  Default Database Name (DB2)
IBM Director                                                                 No default (we created database: DIRECTOR)
Tivoli Agent Manager                                                         IBMCDB
IBM TotalStorage Productivity Center for Disk and Replication Base           DMCOSERV
IBM TotalStorage Productivity Center for Disk                                PMDATA
IBM TotalStorage Productivity Center for Replication hardware subcomponent   ESSHWL
IBM TotalStorage Productivity Center for Replication element catalog         ELEMCAT
IBM TotalStorage Productivity Center for Replication, Replication Manager    REPMGR
IBM TotalStorage Productivity Center for Fabric                              ITSANMDB
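After installation you can verify that these databases were created with the DB2 CLP command `db2 list database directory`. The snippet below sketches parsing that command's output; the sample text is an illustrative excerpt, and the exact output layout can vary by DB2 version.

```python
# Hypothetical excerpt of "db2 list database directory" output.
SAMPLE_OUTPUT = """
Database 1 entry:
 Database alias   = DMCOSERV
 Database name    = DMCOSERV
Database 2 entry:
 Database alias   = PMDATA
 Database name    = PMDATA
"""

def cataloged_databases(listing):
    """Extract database names from DB2 'list database directory' output."""
    names = []
    for line in listing.splitlines():
        line = line.strip()
        if line.startswith("Database name"):
            names.append(line.split("=", 1)[1].strip())
    return names

found = cataloged_databases(SAMPLE_OUTPUT)
# Compare against the defaults expected for the installed components.
missing = {"DMCOSERV", "PMDATA"} - set(found)
```

In practice you would feed the function the real command output; a non-empty `missing` set indicates a component whose database was not created.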

3.2 Pre-installation check list


The following is a list of the tasks you need to complete in preparation for the install of the IBM TotalStorage Productivity Center. You should print the tables in Appendix B, Worksheets on page 505 to keep track of the information you will need during the install (for example, user names, ports, IP addresses, and locations of servers and managed devices).
1. Determine which elements of the TotalStorage Productivity Center you will be installing.
2. Uninstall Internet Information Services.
3. Grant the user account that will be used to install the TotalStorage Productivity Center the following privileges:
   - Act as part of the operating system
   - Create a token object
   - Increase quotas
   - Replace a process-level token
   - Log on as a service
4. Install and configure SNMP (Fabric requirement).
5. Identify any firewalls and obtain required authorization.
6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.

3.2.1 User IDs and security


This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment during the installation, and also those that are later used to manage and work with TotalStorage Productivity Center. For some of the IDs, Table 3-8 on page 49 includes a link to further information that is available in the manuals.

Suite Installer user


We recommend you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights shown in Table 3-7.
Table 3-7 Requirements for the Suite Installer user

User rights/Policy                    Used for
Act as part of the operating system   DB2, Productivity Center for Disk, Fabric Manager
Create a token object                 DB2, Productivity Center for Disk
Increase quotas                       DB2, Productivity Center for Disk
Replace a process-level token         DB2, Productivity Center for Disk
Log on as a service                   DB2, Productivity Center for Disk
Debug programs                        DB2, Productivity Center for Disk

Table 3-8 shows the user IDs used in our TotalStorage Productivity Center environment.
Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment

Element: Suite Installer
  User ID: Administrator; New user: no; Type: Windows

Element: DB2
  User ID: db2admin (a); New user: yes, will be created; Type: Windows
  Usage: DB2 management and Windows Service Account

Element: IBM Director (see also below)
  User ID: Administrator (a); New user: no; Type: Windows; Group(s): DirAdmin or DirSuper
  Usage: Windows Service Account

Element: Resource Manager
  User ID: manager (b); New user: no, default user; Type: Tivoli Agent Manager; Group(s): n/a - internal user
  Usage: used during the registration of a Resource Manager to the Agent Manager

Element: Common Agent (see also below)
  User ID: AgentMgr (b); New user: no; Type: Tivoli Agent Manager; Group(s): n/a - internal user
  Usage: used to authenticate agents and lock the certificate key files

Element: Common Agent
  User ID: itcauser (b); New user: yes, will be created; Type: Windows
  Usage: Windows Service Account

Element: TotalStorage Productivity Center universal user
  User ID: TPCSUID (a); New user: yes, will be created; Type: Windows; Group(s): DirAdmin
  Usage: This ID is used to accomplish connectivity with the managed devices, i.e. this ID has to be set up on the CIM agents

Element: Tivoli NetView
  Type: Windows; Usage: see Fabric Manager User IDs on page 51

Element: IBM WebSphere
  Type: Windows; Usage: see Fabric Manager User IDs on page 51

Element: Host Authentication
  User ID: (c); Type: Windows; Usage: see Fabric Manager User IDs on page 51

a. This account can have whatever name you like.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here; see Fabric Manager User IDs on page 51.

Granting privileges
Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM TotalStorage Productivity Center for Replication. It is recommended that this user ID be the superuser ID. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They might not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can, optionally, set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, follow this path and select the appropriate user:
1. Click Start -> Settings -> Control Panel.
2. Double-click Administrative Tools.
3. Double-click Local Security Policy; the Local Security Settings window opens.
4. Expand Local Policies.
5. Double-click User Rights Assignments to see the policies in effect on your system.

For each policy added to the user, perform the following steps:
1. Highlight the policy to be checked.
2. Double-click the policy and look for the user's name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are checked.
3. If the user name does not appear in the list for the policy, you must add the policy to the user. Perform the following steps to add the user to the list:
   a) Click Add on the Local Security Policy Setting window.
   b) In the Select Users or Groups window, highlight the user or group under the Name column.
   c) Click Add to put the name in the lower window.
   d) Click OK to add the policy to the user or group.
After these user rights are set (either by the installation program or manually), log off the system, and then log on again in order for the user rights to take effect. You can then restart the installation program to continue with the install of the IBM TotalStorage Productivity Center for Disk and Replication Base.
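Most of these user rights can be spot-checked from a command prompt with `whoami /priv`, which lists privileges by their internal Windows names (for example, Act as part of the operating system is SeTcbPrivilege). The parsing below is a sketch against sample output; note that logon rights such as Log on as a service do not appear in `whoami /priv` and must still be verified in the Local Security Policy console.

```python
# Mapping of the local security policies above to Windows privilege constants.
REQUIRED = {
    "Act as part of the operating system": "SeTcbPrivilege",
    "Create a token object": "SeCreateTokenPrivilege",
    "Replace a process-level token": "SeAssignPrimaryTokenPrivilege",
    "Increase quotas": "SeIncreaseQuotaPrivilege",
    "Debug programs": "SeDebugPrivilege",
}

# Hypothetical excerpt of "whoami /priv" output for the install user.
SAMPLE = """
Privilege Name                Description                          State
============================= ==================================== ========
SeTcbPrivilege                Act as part of the operating system  Enabled
SeDebugPrivilege              Debug programs                       Enabled
SeIncreaseQuotaPrivilege      Adjust memory quotas for a process   Disabled
"""

def granted_privileges(output):
    """Return the set of privilege constants present in whoami /priv output."""
    return {line.split()[0] for line in output.splitlines()
            if line.strip().startswith("Se")}

have = granted_privileges(SAMPLE)
missing = [p for p in REQUIRED.values() if p not in have]
```

A privilege listed as Disabled is still assigned to the user; a name absent from the output must be granted through the Local Security Policy steps above, followed by a log off and log on.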

IBM Director
With Version 4.1, you no longer need to create internal user accounts. All user IDs must be operating system accounts and members of one of the following:
- DirAdmin or DirSuper groups (Windows), diradmin or dirsuper groups (Linux)
- Administrator or Domain Administrator groups (Windows), root (Linux)

In addition to the above there is a host authentication password that is used to allow managed hosts and remote consoles to communicate with IBM Director.

TotalStorage Productivity Center superuser ID


The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. Do not be confused by the name; it is really only a communication user ID.

Fabric Manager User IDs


During the installation of IBM TotalStorage Productivity Center for Fabric you can select whether you want to use individual passwords for the subcomponents such as DB2, IBM WebSphere, NetView, and for the Host Authentication. You can also choose to use the DB2 administrator's user ID and password to make the configuration much simpler. Figure 3-97 on page 113 shows the window where you can choose the options.

3.2.2 Certificates and key files


Within a TotalStorage Productivity Center environment several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager.

Productivity Center for Disk and Replication certificates


The WebSphere Application Server that is part of Productivity Center for Disk and Productivity Center for Replication uses certificates for SSL communication. During the installation, key files can be generated as self-signed certificates, but you will have to enter a password for each file to lock it. The default file names are:
MDMServerKeyFile.jks
MDMServerTrustFile.jks
The default directory for these key files is:
C:\Program Files\IBM\mdm\dm\keys

Tivoli Agent Manager certificates


The Agent Manager comes with demonstration certificates that you can use, but you can also create new certificates during the installation of the agent manager (see Figure 3-49 on page 83). If you choose to create new files, the password that you entered as the Agent registration password on the panel shown in Figure 3-50 on page 84 will be used to lock the key file:
agentTrust.jks
The default directory for that key file on the agent manager is:
C:\Program Files\IBM\AgentManager\certs
There are more key files in that directory, but during the installation and first steps the agentTrust.jks file is the most important one. This is only relevant if you let the installer create new keys.


3.3 Services and service accounts


The managers and components that belong to the TotalStorage Productivity Center are started as Windows Services. Table 3-9 provides an overview of the most important services. Note that we did not include all the DB2 services in the table, to keep it simple.
Table 3-9 Services and Service Accounts

Element: DB2
  Service name: (the DB2 services); Service account: db2admin
  Comment: The account needs to be part of: Administrators and DB2ADMNS

Element: IBM Director
  Service name: IBM Director Server; Service account: Administrator
  Comment: You need to modify the account to be part of one of the groups: DirAdmin or DirSuper

Element: Agent Manager
  Service name: IBM WebSphere Application Server V5 - Tivoli Agent Manager; Service account: LocalSystem

Element: Common Agent
  Service name: IBM Tivoli Common Agent - 'C:\Program Files\tivoli\ep'; Service account: itcauser
  Comment: You need to set this service to start automatically after the installation

Element: Productivity Center for Fabric
  Service name: IBM WebSphere Application Server V5 - Fabric Manager; Service account: LocalSystem

Element: Tivoli NetView Service
  Service name: Tivoli NetView Service; Service account: NetView

3.3.1 Starting and stopping the managers


To start, stop, or restart one of the managers or components, you simply use the Windows Control Panel services. Table 3-10 lists the services.

Table 3-10 Services used for TotalStorage Productivity Center

Element                          Service name                                                  Service account
DB2                              (the DB2 services)                                            db2admin
IBM Director                     IBM Director Server                                           Administrator
Agent Manager                    IBM WebSphere Application Server V5 - Tivoli Agent Manager    LocalSystem
Common Agent                     IBM Tivoli Common Agent - 'C:\Program Files\tivoli\ep'        itcauser
Productivity Center for Fabric   IBM WebSphere Application Server V5 - Fabric Manager          LocalSystem
Tivoli NetView Service           Tivoli NetView Service                                        NetView
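If you prefer the command line to the Control Panel, Windows services can also be started and stopped with the `net start` and `net stop` commands, quoting the service names from Table 3-10. The helper below only composes the command strings; the DB2 instance service name is an example, and starting DB2 first (and stopping it last) reflects that every manager depends on its database.

```python
# Start the services in dependency order: DB2 first, since every manager
# needs its database. The DB2 service name varies by instance; the one
# below is an example. The remaining names follow Table 3-10.
START_ORDER = [
    "DB2 - DB2-0",  # example DB2 instance service name; adjust to yours
    "IBM Director Server",
    "IBM WebSphere Application Server V5 - Tivoli Agent Manager",
    "IBM Tivoli Common Agent - 'C:\\Program Files\\tivoli\\ep'",
    "IBM WebSphere Application Server V5 - Fabric Manager",
    "Tivoli NetView Service",
]

def net_commands(action, services):
    """Compose 'net start'/'net stop' command lines; stop in reverse order."""
    order = services if action == "start" else list(reversed(services))
    return ['net {} "{}"'.format(action, name) for name in order]

start_cmds = net_commands("start", START_ORDER)
stop_cmds = net_commands("stop", START_ORDER)
```

Each composed line can be run from a Windows command prompt with administrative rights.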

3.3.2 Uninstall Internet Information Services


Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure:
- Click Start -> Settings -> Control Panel.
- Click Add/Remove Programs.


- Click Add/Remove Windows Components.
- Clear the check box for Internet Information Services (IIS).

3.3.3 SNMP install


Before installing the components of the TotalStorage Productivity Center you should install and configure Simple Network Management Protocol (SNMP):
- Click Start -> Settings -> Control Panel.
- Click Add/Remove Programs.
- Click Add/Remove Windows Components.
- Double-click Management and Monitoring Tools.
- Select Simple Network Management Protocol and click OK.
- Close the panels and accept the installation of the components; the Windows installation CD or installation files will be required.

Make sure that the SNMP service is configured. It can be reached as follows:
- Right-click My Computer.
- Click Manage.
- Click Services.

An alternative method follows:
- Click Start -> Run..., type MMC (Microsoft Management Console), and click OK.
- Click Console -> Add/Remove Snap-in...
- Click Add and add Services.
- Select the services and scroll down to SNMP Service as shown in Figure 3-1 on page 54.
- Double-click SNMP Service.
- Click the Traps panel tab. Make sure that the public community name is available; if not, add it.
- Make sure that on the Security tab, Accept SNMP packets from any host is checked.


Figure 3-1 SNMP Security

After setting the public community name, restart the SNMP service.

3.4 IBM TotalStorage Productivity Center for Fabric


The primary focus of this book is the install and use of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. We have included the IBM TotalStorage Productivity Center for Fabric for completeness since it is used with the Productivity Center for Disk. There are planning considerations and prerequisite tasks that need to be completed.

3.4.1 The computer name


IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow the procedure below:
- Right-click the My Computer icon on your desktop and click Properties. The System Properties panel is displayed.
- Click the Network Identification tab, then click Properties. The Identification Changes panel is displayed.
- Verify that your computer name is entered correctly. This is the name that the computer will be identified as in the network. Also verify that the Full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name.
- Click More.


The DNS Suffix and NetBIOS Computer Name panel is displayed. Verify that the Primary DNS suffix field displays a domain name. The fully qualified host name must match the HOSTS file name (including case-sensitive characters).
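The fully-qualified requirement can be sanity-checked with a small helper that flags names carrying no domain suffix. This is an illustrative sketch; the authoritative check remains the System Properties panels described above.

```python
def is_fully_qualified(hostname):
    """True if the name carries a domain suffix, e.g. user1.sanjose.ibm.com."""
    labels = hostname.rstrip(".").split(".")
    # A fully qualified name has at least a host label and a domain label,
    # with no empty labels from stray dots.
    return len(labels) >= 2 and all(labels)

ok = is_fully_qualified("user1.sanjose.ibm.com")
bad = is_fully_qualified("user1")
```

A short name such as `user1` fails the check and means the Primary DNS suffix still needs to be set.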

3.4.2 Database considerations


When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created (if you specified the DB2 database). The default database name is TSANMDB. If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before reinstalling the manager, you must use DB2 commands to back up the database. The database name for Cloudscape is also TSANMDB; you cannot change the database name for Cloudscape. If you are installing the manager on more than one machine in a Windows domain, the managers on different machines might end up sharing the same DB2 database. To avoid this situation, you must use either different database names or different DB2 user names when installing the manager on different machines.
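The backup mentioned above is taken with the DB2 `BACKUP DATABASE` command from a DB2 command window. The sketch below only composes the command lines; the target directory is an example, and the connection reset ensures the database is free of connections for an offline backup.

```python
def db2_backup_commands(dbname, target_dir):
    """Compose the DB2 CLP commands for an offline backup of one database.

    The target directory is illustrative; run the resulting commands from
    a DB2 command window on the manager server.
    """
    return [
        "db2 connect reset",
        'db2 backup database {} to "{}"'.format(dbname, target_dir),
    ]

cmds = db2_backup_commands("TSANMDB", r"D:\db2backup")
```

For the Fabric Manager database this yields a backup of TSANMDB that can be restored with the corresponding `RESTORE DATABASE` command after reinstalling.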

3.4.3 Windows Terminal Services


You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any IBM TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView will appear on the manager or remote console machine only. The dialogs will not appear in the Windows Terminal Services session.

3.4.4 Tivoli NetView


IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to Version 7.1.3. If you have a Tivoli NetView release below Version 7.1.1, IBM TotalStorage Productivity Center for Fabric will prompt you to uninstall Tivoli NetView before installing this product. If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop.
- Web Console
- Web Console Security
- MIB Loader
- MIB Browser
- Netmon Seed Editor
- Tivoli Event Console Adaptor

Important: Also ensure that you do not have the Windows 2000 Terminal Services running. Go to the Services panel and check for Terminal Services.


User IDs and password considerations


IBM TotalStorage Productivity Center for Fabric only supports local user IDs and groups. IBM TotalStorage Productivity Center for Fabric does not support domain user IDs and groups.

Cloudscape database
If you install IBM TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you will need the following user IDs and passwords:
- Agent manager name or IP address and password
- Common agent password to register with the agent manager
- Resource manager user ID and password to register with the agent manager
- WebSphere administrative user ID and password
- Host authentication password only
- Tivoli NetView password only

DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you will need the user IDs and passwords listed below:
- Agent manager name or IP address and password
- Common agent password to register with the agent manager
- Resource manager user ID and password to register with the agent manager
- DB2 administrator user ID and password
- DB2 user ID and password
- WebSphere administrative user ID and password
- Host authentication password only
- Tivoli NetView password only

Note: If you are running under Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the Act as part of the operating system user privilege.

WebSphere
To change the WebSphere user ID and password, follow this procedure:
1. Open the file: <install_location>\apps\was\properties\soap.client.props
2. Modify the following entries:
   com.ibm.SOAP.loginUserid=<user_ID> (enter a value for user_ID)
   com.ibm.SOAP.loginPassword=<password> (enter a value for password)
3. Save the file.
4. Run the following script:
   ChangeWASAdminPass.bat <user_ID> <password> <install_dir>
   Where <user_ID> is the WebSphere user ID and <password> is the password. <install_dir> is the directory where the manager is installed and is optional. For example, <install_dir> is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.
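The manual edit of soap.client.props can also be scripted. The sketch below rewrites the two login properties in place; the property names follow the file shown above, while the script itself and any file path you pass it are illustrative.

```python
def set_was_credentials(props_path, user_id, password):
    """Rewrite com.ibm.SOAP.loginUserid/loginPassword in soap.client.props."""
    updated = []
    with open(props_path) as f:
        for line in f:
            if line.startswith("com.ibm.SOAP.loginUserid="):
                line = "com.ibm.SOAP.loginUserid={}\n".format(user_id)
            elif line.startswith("com.ibm.SOAP.loginPassword="):
                line = "com.ibm.SOAP.loginPassword={}\n".format(password)
            updated.append(line)
    # Write the file back with only the two credential lines changed.
    with open(props_path, "w") as f:
        f.writelines(updated)
```

After running it against the real soap.client.props you would still run ChangeWASAdminPass.bat as described above so WebSphere itself picks up the new password.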

3.4.5 Personal firewall


If you have a software firewall on your system, you should disable the firewall while installing the Fabric Manager; the firewall causes the Tivoli NetView installation to fail. You can enable the firewall again after you install the Fabric Manager.

Security Considerations
Whether security is set up using the demonstration certificates or newly generated certificates was specified when you installed the agent manager, as shown in Figure 3-49 on page 83. If you used the demonstration certificates, carry on with the installation. If you generated new certificates, follow this procedure:
1. Copy the manager CD image to your computer.
2. Copy the agentTrust.jks file from the agent manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This will overwrite the existing agentTrust.jks file.
3. You can write a new CD image with the new file, or keep this image on your computer and point the suite installer to the directory when requested.

3.4.6 Change the HOSTS file


When you install Service Pack 3 for Windows 2000 on your computers, you must follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by name resolution returning the short name rather than the fully qualified host name. This problem can be avoided by changing the entries in the corresponding host tables on the DNS server and on the local computer. The fully qualified host name must be listed before the short name, as shown in Example 3-1. See The computer name on page 54 for details on determining the host name. To correct this problem you will have to edit the HOSTS file, which is in the following directory:
%SystemRoot%\system32\drivers\etc\
Example 3-1 Sample HOSTS file

# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should be
# placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space. Additionally, comments (such as these) may be inserted on
# individual lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#      38.25.63.10      x.acme.com              # x client host

127.0.0.1        localhost
192.168.123.146  jason.groupa.mycompany.com  jason

Note: Host names are case-sensitive. This is a WebSphere limitation. Check your host name.
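The ordering requirement, the fully qualified name appearing before the short name on the mapping line, can be checked mechanically. A small illustrative sketch:

```python
def fqdn_listed_first(hosts_line):
    """True if the first name after the IP address is fully qualified."""
    fields = hosts_line.split("#", 1)[0].split()
    if len(fields) < 2:
        return False          # not an address-to-name mapping line
    first_name = fields[1]
    return "." in first_name  # fully qualified names carry a domain suffix

good = fqdn_listed_first("192.168.123.146 jason.groupa.mycompany.com jason")
bad = fqdn_listed_first("192.168.123.146 jason jason.groupa.mycompany.com")
```

Applied to each mapping line of the HOSTS file, a False result flags an entry that would make name resolution return the short name first.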


3.5 Installation process


Depending on which managers you plan to install, these are the prerequisite programs that are installed first. The suite installer will install these prerequisite programs in this order:
1. DB2
2. WebSphere Application Server
3. IBM Director
4. Tivoli Agent Manager

The suite installer then launches the installation wizard for each manager you have chosen to install. If you are running the Fabric Manager install under Windows 2000, the Fabric Manager installation requires that the user ID have the Act as part of the operating system and Log on as a service user rights.

Insert the IBM TotalStorage Productivity Center suite installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the IBM TotalStorage Productivity Center CD-ROM drive, and double-click setup.exe.

Note: It may take a few moments for the installer program to initialize. Be patient until the language selection panel in Figure 3-2 appears.

The language panel is displayed. Select a language from the drop-down list. This is the language that is used for installing this product. Click OK as shown in Figure 3-2.

Figure 3-2 Installer Wizard

The Welcome to the InstallShield Wizard for The IBM TotalStorage Productivity Center panel is displayed. Click Next as shown in Figure 3-3 on page 59.


Figure 3-3 Welcome to IBM TotalStorage Productivity Center panel

The Software License Agreement panel is displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement, select the I accept the terms of the license agreement radio button. Click Next to continue as shown in Figure 3-4. If you do not accept the terms of the license agreement, the installation program will end without installing IBM TotalStorage Productivity Center.

Figure 3-4 License agreement


The Select Type of Installation panel is displayed. Select Manager installations of Data, Disk, Fabric, and Replication and click Next to continue as shown in Figure 3-5.

Figure 3-5 IBM TotalStorage Productivity Center options panel

The Select the Components panel is displayed. Select the components you want to install. Click Next to continue as shown in Figure 3-6.

Figure 3-6 IBM TotalStorage Productivity Center components

WinMgmt is a service of Windows that needs to be stopped before proceeding with the install. If the service is running you will see the panel in Figure 3-7 on page 61. Click Next to stop the services.


Figure 3-7 WinMgmt information window

The window in Figure 3-8 will open. Click Next once again to stop WinMgmt.

Note: You should stop this service prior to beginning the install of TotalStorage Productivity Center to prevent these windows from appearing.

Figure 3-8 Services information

The Prerequisite Software panel is displayed. The products will be installed in the order listed. Click Next to continue as shown in Figure 3-9 on page 62. In this example, the first prerequisites to be installed are DB2 and WebSphere.


Note: The installer will interrogate the server to determine what prerequisites are installed on the server and list what remains to be installed.

Figure 3-9 Prerequisite installation

3.5.1 Prerequisite product install: DB2 and WebSphere


The DB2 Installation Information panel is displayed. The products will be installed in the order shown in Figure 3-10 on page 63. From the DB2 installation information panel, click Next to continue.

Note: If DB2 is already installed on the server, the installer will skip the DB2 install.


Figure 3-10 Products to be installed

The DB2 User ID and Password panel is displayed. Accept the default user name or enter a new user ID and password. Click Next to continue as shown in Figure 3-11.

Figure 3-11 DB2 User configuration

The Confirm Target Directories for DB2 panel is displayed. Accept the default directory or enter a target directory. Click Next to continue as shown in Figure 3-12 on page 64.


Figure 3-12 DB2 Target Directory

You will be prompted for the location of the DB2 installation image. Browse to the installation image or installer CD, select the required information, and click Install as shown in Figure 3-13.

Figure 3-13 Installation source

Note: If you use the DB2 CD for this step, the Welcome to DB2 panel is displayed. Click Exit to exit the DB2 installation wizard. The suite installer will guide you through the DB2 installation.

The Installing Prerequisites (DB2) panel is displayed with the word Installing on the right side of the panel. When a component is installed, a green arrow appears next to the component name (see Figure 3-14 on page 65). Wait for all the prerequisite programs to install, then click Next.

Note: Depending on the speed of your machine, this can take 30 to 40 minutes.


Figure 3-14 Installing Prerequisites window - DB2 installing

After DB2 has installed a green check mark will appear next to the text DB2 Universal Database Enterprise Server Edition. The installer will start the install of WebSphere as shown in Figure 3-15.

Figure 3-15 Installing Prerequisites window - WebSphere installing

After WebSphere has installed a green check mark will appear next to the text WebSphere Application Server. The installer will start the install of WebSphere Fixpack as shown in Figure 3-16 on page 66.


Figure 3-16 Installing Prerequisites window - WebSphere Fixpack installing

After the WebSphere Fixpack has installed, a green check mark will appear next to the text WebSphere Application Server Fixpack as shown in Figure 3-17.

Figure 3-17 Installing Prerequisites window - WebSphere Fixpack installed

After DB2, WebSphere, and the WebSphere Fixpack are installed, the DB2 Server installation was successful window opens (see Figure 3-18 on page 67). Click Next to continue.


Figure 3-18 DB2 installation successful

The WebSphere Application Server installation was successful window opens (see Figure 3-19). Click Next to continue.

Figure 3-19 WebSphere Application Server installation was successful

3.5.2 Installing IBM Director


The suite installer will present you with the panel showing the remaining products to be installed. The next prerequisite product to be installed is the IBM Director (see Figure 3-20 on page 68).


Figure 3-20 Installer prerequisite products panel

The location of the IBM Director install package panel is displayed. Enter the installation source or insert the CD-ROM and enter the CD drive location. Click Next as shown in Figure 3-21.

Figure 3-21 IBM Director Installation source

The next panel provides information about the IBM Director post-install reboot option. Note that you should choose the option to reboot later when prompted (see Figure 3-22 on page 69). Click Next to continue.


Figure 3-22 IBM Director information

The IBM Director Server - InstallShield Wizard panel is displayed indicating that the IBM Director installation wizard will be launched. Click Next to continue (see Figure 3-23).

Figure 3-23 IBM Director InstallShield Wizard

The License Agreement window opens next. Read the license agreement. Click I accept the terms in the license agreement radio button as shown in Figure 3-24 on page 70. Click Next to continue.


Figure 3-24 IBM Director licence agreement

The next window is the advertisement for Enhance IBM Director with the new Server Plus Pack window (see Figure 3-25). Click Next to continue.

Figure 3-25 IBM Director information

The Feature and installation directory window opens (see Figure 3-26 on page 71). Accept the default settings and click Next to continue.


Figure 3-26 IBM Director feature and installation directory window

The IBM Director service account information window opens (see Figure 3-27). Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, then type the local host name (this is the recommended setup). Type a user name and password for IBM Director. The IBM Director will run under this user name and you will log on to the IBM Director console using this user name. Click Next to continue.

Figure 3-27 Account information

The Encryption settings window opens as shown in Figure 3-28 on page 72. Accept the default settings in the Encryption settings window. Click Next to continue.


Figure 3-28 Encryption settings

In the Software Distribution settings window, accept the default values and click Next as shown in Figure 3-29. Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director.

Figure 3-29 Install target directory

The Ready to Install the Program window opens (see Figure 3-30 on page 73). Click Install to continue.


Figure 3-30 Installation ready

The Installing IBM Director server window reports the status of the installation as shown in Figure 3-31.

Figure 3-31 Installation progress

The Network driver configuration window opens. Accept the default settings and click OK to continue.


Figure 3-32 Network driver configuration

The secondary window closes and the installation wizard performs additional actions which are tracked in the status window. The Select the database to be configured window opens (see Figure 3-33). Select IBM DB2 Universal Database in the Select the database to be configured window. Click Next to continue.

Figure 3-33 Database selection

The IBM Director DB2 Universal Database configuration window will open (see Figure 3-34). It might be behind the status window, and you must click it to bring it to the foreground.


In the Database name field, type a new database name for the IBM Director database table or type an existing database name. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation. Click Next to continue.

Figure 3-34 Database selection configuration

Accept the default DB2 node name LOCAL - DB2 in the IBM Director DB2 Universal Database configuration secondary window as shown in Figure 3-35. Click OK to continue.

Figure 3-35 Database node name selection

The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close. Click Finish as shown in Figure 3-36 on page 76 when the Install Shield Wizard Completed window opens.


Figure 3-36 Completed installation

Important: Do not reboot the machine at the end of the IBM Director installation. The suite installer will reboot the machine. Click No as shown in Figure 3-37.

Figure 3-37 IBM Director reboot option

Click Next to reboot the machine as shown in Figure 3-38 on page 77.

Important: If the server does not reboot at this point, cancel the installer and reboot the server.


Figure 3-38 Install wizard completion

After rebooting the machine, the installer will initialize. The Select the installation language to be used for this wizard window opens. Select the language and click OK to continue (see Figure 3-39).

Figure 3-39 IBM TotalStorage Productivity Center installation wizard language selection

The installation confirmation panel is displayed. Click Next as shown in Figure 3-40 on page 78.

3.5.3 Tivoli Agent Manager


The next product to be installed is the Tivoli Agent Manager (see Figure 3-40 on page 78). The Tivoli Agent Manager is required if you are installing the Productivity Center for Fabric or the Productivity Center for Data. It is not required for the Productivity Center for Disk or the Productivity Center for Replication. Click Next to continue.


Figure 3-40 IBM TotalStorage Productivity Center installation information

The Package Location panel is displayed (see Figure 3-41). Select the installation source or CD-ROM drive and click Next. Note: If you specify the path for the installation source you must specify the path at the \win directory level.

Figure 3-41 Tivoli Agent Manager installation source

The Tivoli Agent Manager Installer window opens (see Figure 3-42 on page 79). Click Next to continue.


Figure 3-42 Tivoli Agent Manager installer launch window

The Install Shield wizard will start. Then you see the language installation option window in Figure 3-43. Select the required language and click OK.

Figure 3-43 Tivoli Agent Manager installation wizard

The Software License Agreement window opens. Click I accept the terms of the license agreement to continue.


Figure 3-44 Tivoli Agent Manager License agreement

The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-45.

Figure 3-45 Tivoli Agent Manager prerequisite source directory


The DB2 information panel is displayed (see Figure 3-46). Accept the defaults, or enter the DB2 user name, DB2 port, and DB2 password. Click Next to continue.

Figure 3-46 DB2 User information

The WebSphere Application Server Information panel is displayed. This panel lets you specify the host name or IP address, and the cell and node names on which to install the agent manager. If you specify a host name, use the fully qualified host name, for example, x330f03.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all agent manager services. Typically the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the agent manager installation wizard, you can look up the cell and node name values in the %WAS_INSTALL_ROOT%\bin\SetupCmdLine.bat file.

You can also specify the ports used by the agent manager:
Registration (the default is 9511, for server-side SSL)
Secure communications (the default is 9512, for client authentication, two-way SSL)
Public communication (the default is 9513)

If you are using WebSphere Network Deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. Click Next as shown in Figure 3-47 on page 82.
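If WebSphere is already installed, the cell and node names can also be read programmatically from SetupCmdLine.bat rather than by eye. The sketch below is a hedged illustration only: the WAS_CELL and WAS_NODE variable names and the sample file excerpt are assumptions based on typical WebSphere setup scripts, not values taken from this installation.

```python
import re

def parse_was_ids(setup_cmdline_text):
    """Pull cell and node names from WebSphere's SetupCmdLine.bat text.

    Assumes the file contains lines such as 'SET WAS_CELL=...' and
    'SET WAS_NODE=...'; adjust the variable names if your release differs.
    """
    ids = {}
    for key in ("WAS_CELL", "WAS_NODE"):
        m = re.search(r"^\s*SET\s+%s=(\S+)" % key, setup_cmdline_text,
                      re.IGNORECASE | re.MULTILINE)
        if m:
            ids[key] = m.group(1)
    return ids

# Hypothetical excerpt of a SetupCmdLine.bat, for illustration only.
sample = """@REM excerpt from a hypothetical SetupCmdLine.bat
SET WAS_HOME=C:\\WebSphere\\AppServer
SET WAS_CELL=x330f03
SET WAS_NODE=x330f03
"""
print(parse_was_ids(sample))
```

On a real system you would read the file contents from disk and pass them to parse_was_ids, then type the resulting cell and node values into the panel.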


Figure 3-47 WebSphere Application Server information

Figure 3-48 WebSphere Application Server information

The Security Certificates panel is displayed in Figure 3-49 on page 83. Specify whether to create new certificates or to use the demonstration certificates. In a typical production


environment, create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next to continue.

Figure 3-49 Tivoli Agent Manager security certificates

The security certificate settings panel is displayed. Specify the certificate authority name, security domain, and agent registration password. The agent registration password is the password used to register the agents. You must provide this password when you install the agents. This password also sets the agent manager key store and trust store files.

The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the agent manager. It is the name of the security domain defined by the agent manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple agent managers installed, this value must be different on each agent manager. The default agent registration password is changeMe. Click Next as shown in Figure 3-50 on page 84.
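To illustrate how the default security-domain value relates to a host name, here is a small sketch. The "last two labels" rule is an assumption for illustration only: it matches the myserver.ibm.com example above, but registrable domains such as example.co.uk would need a public-suffix lookup instead.

```python
def default_security_domain(fqdn):
    """Derive a candidate security-domain value from a fully qualified
    host name (e.g. myserver.ibm.com -> ibm.com).

    Simple 'last two labels' heuristic, for illustration only.
    """
    labels = fqdn.rstrip(".").split(".")
    if len(labels) < 2:
        raise ValueError("expected a fully qualified name: %r" % fqdn)
    return ".".join(labels[-2:])

print(default_security_domain("myserver.ibm.com"))
```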


Figure 3-50 Security certificate settings

The Preview Prerequisite Software Information panel is displayed. Click Next as shown in Figure 3-51.

Figure 3-51 Prerequisite reuse information

The Summary Information for Agent Manager panel is displayed. Click Next as shown in Figure 3-52 on page 85.

Figure 3-52 Installation summary

The Installation of Agent Manager Completed panel is displayed. Click Finish as shown in Figure 3-53.

Figure 3-53 Completion summary

The Installation of Agent Manager Successful panel is displayed. Click Next to continue.


Important: There are three configuration tasks left to complete:
Start the Agent Manager service.
Set the service to start automatically.
Add a DNS entry for the Agent Recovery Service with the unqualified host name TivoliAgentRecovery and port 80.
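Once the DNS entry has been added, it can be verified from any agent machine. The sketch below is a hedged illustration using Python's standard resolver; the host name TivoliAgentRecovery is the alias described above, and this helper is not part of the product.

```python
import socket

def check_recovery_dns(host="TivoliAgentRecovery"):
    """Verify that the agent-recovery DNS alias resolves.

    Returns the resolved IP address, or None if the entry is missing.
    A None result means the DNS entry still needs to be added.
    """
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

print(check_recovery_dns("localhost"))  # sanity check against a known name
```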

Tip: The Database created for the IBM Agent Manager is IBMCDB.

3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
There are three separate installs:
Install the IBM TotalStorage Productivity Center for Disk and Replication Base code
Install the IBM TotalStorage Productivity Center for Disk
Install the IBM TotalStorage Productivity Center for Replication

IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation as described in User IDs and security on page 48:
Act as part of the operating system
Create a token object
Increase quotas
Replace a process level token
Debug programs

Figure 3-54 IBM TotalStorage Productivity Center installation information


The Package Location for Disk and Replication Manager window is displayed (see Figure 3-55). Enter the appropriate information and click Next to continue.

Figure 3-55 Package location for Productivity Center Disk and Replication

The Information for Disk and Replication Manager panel is displayed. Click Next to continue as shown in Figure 3-56.

Figure 3-56 Installer information

The Launch Disk and Replication Manager Base panel is displayed indicating that the Disk and Replication Manager installation wizard will be launched. Click Next to continue as shown in Figure 3-57 on page 88.


Figure 3-57 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information

The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-58.

Figure 3-58 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory


The IBM WebSphere selection panel will be displayed. Click Next to continue as shown in Figure 3-59.

Figure 3-59 WebSphere Application Server information

If the installation user ID privileges were not set, an information panel stating that the privileges need to be set is displayed; click Yes to continue. At this point the installation will terminate. Close the installer, log off, log back on, and restart the installer. Select the Typical radio button. Click Next to continue as shown in Figure 3-60 on page 90.


Figure 3-60 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation

If the IBM Director Support Program and IBM Director Server services are still running, an information panel is displayed stating that the services will be stopped. Click Next to stop the running services as shown in Figure 3-61.

Figure 3-61 Server checks


You must enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base super user ID in the IBM TotalStorage Productivity Center for Disk and Replication Base installation window. This user name must be defined to the operating system. Click Next to continue as shown in Figure 3-62.

Figure 3-62 IBM TotalStorage Productivity Center for Disk and Replication Base Superuser information

You need to enter the user name and password for the IBM DB2 Universal Database server. Click Next to continue as shown in Figure 3-63 on page 92.


Figure 3-63 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information

If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, then in the SSL Configuration window you must enter the fully qualified names of the two server key files that were generated previously, or that must be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information you enter will be used later.

Generate a self-signed certificate: Select this option if you want the installer to automatically generate these certificate files (used for this installation).

Defer the generation of the certificate as a manual post-installation task: Select this option if you want to manually generate these certificate files after the installation, using the WebSphere Application Server ikeyman utility. In this case the next step, Generate Self-Signed Certificate, is skipped.

Fill in the Key file and Trust file passwords.


Figure 3-64 Key and Trust file options

If you chose to have the installation program generate the certificate for you, the Generate Self-Signed Certificate window opens. After completing all the fields, click Next as shown in Figure 3-65.

Figure 3-65 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information


You are presented with the Create Local Database window. Enter the database name and click Next to continue as shown in Figure 3-66.

Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.

Figure 3-66 IBM TotalStorage Productivity Center for Disk and Replication Base Database name

The Preview window displays a summary of all of the choices that were made during the customizing phase of the installation. Click Install to complete the installation as shown in Figure 3-67 on page 95.


Figure 3-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information

The Finish window opens. You can view the log file for any possible error messages. The log file is located in (installed directory)\logs\dmlog.txt; the dmlog.txt file contains a trace of the installation actions. Click Finish to complete the installation. The post-install tasks information opens in Notepad. You should read the information and complete any required tasks.

3.5.5 IBM TotalStorage Productivity Center for Disk


The next product to be installed is the Productivity Center for Disk as indicated in Figure 3-68 on page 96. Click Next to continue.


Figure 3-68 IBM TotalStorage Productivity Center installer information

The Package Location for IBM TotalStorage Productivity Center for Disk panel is displayed. Enter the appropriate information and click Next to continue as shown in Figure 3-69.

Figure 3-69 Productivity Center for Disk install package location

The Launch IBM TotalStorage Productivity Center for Disk panel is displayed indicating that the IBM TotalStorage Productivity Center for Disk installation wizard will be launched (see Figure 3-70 on page 97). Click Next to continue.


Figure 3-70 IBM TotalStorage Productivity Center for Disk installer

The Productivity Center for Disk Installer - Welcome panel is displayed (see Figure 3-71). Click Next to continue.

Figure 3-71 IBM TotalStorage Productivity Center for Disk Installer Welcome

The confirm target directories panel is displayed. Enter the directory path or accept the default directory (see Figure 3-72 on page 98) and click Next to continue.

Figure 3-72 Productivity Center for Disk Installer - Destination Directory

The IBM TotalStorage Productivity Center for Disk Installer - Installation Type panel opens (see Figure 3-73). Select the Typical install radio button and click Next to continue.

Figure 3-73 Productivity Center for Disk Installation Type

The database configuration panel opens. Accept the database name or enter a new database name, and click Next to continue as shown in Figure 3-74 on page 99.


Figure 3-74 IBM TotalStorage Productivity Center for Disk database name

Review the information about the IBM TotalStorage Productivity Center for Disk preview panel and click Install as shown in Figure 3-75.

Figure 3-75 IBM TotalStorage Productivity Center for Disk installation preview


The installer will create the required database (see Figure 3-76) and install the product. You will see a progress bar for the Productivity Center for Disk install status.

Figure 3-76 Productivity Center for Disk DB2 database creation

When the install is complete you will see the panel in Figure 3-77. You should review the post installation tasks. Click Finish to continue.

Figure 3-77 Productivity Center for Disk Installer - Finish

3.5.6 IBM TotalStorage Productivity Center for Replication


The InstallShield wizard will be displayed. Read the information and click Next to continue as shown in Figure 3-78 on page 101.


Figure 3-78 IBM TotalStorage Productivity Center installation overview

The Package Location for Replication Manager panel is displayed. Enter the appropriate information and click Next to continue. The Welcome window opens with suggestions about what documentation to review prior to installation. Click Next to continue as shown in Figure 3-79, or click Cancel to exit the installation.

Figure 3-79 IBM TotalStorage Productivity Center for Replication installation


The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-80.

Figure 3-80 IBM TotalStorage Productivity Center for Replication installation directory

The next panel (see Figure 3-81) asks you to select the install type. Select the Typical radio button and click Next to continue.

Figure 3-81 Productivity Center for Replication Install type selection


Enter the name for the new DB2 Hardware subcomponent database in the database name field, or accept the default. We recommend you accept the default. Click Next to continue as shown in Figure 3-82.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 3-82 IBM TotalStorage Productivity Center for Replication hardware database name

Enter the name for the new Element Catalog subcomponent database in the database name field, or accept the default. Click Next to continue as shown in Figure 3-83 on page 104.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.


Figure 3-83 IBM TotalStorage Productivity Center for Replication element catalog database name

Enter the name for the new Replication Manager subcomponent database in the database name field, or accept the default. Click Next to continue as shown in Figure 3-84 on page 105.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.


Figure 3-84 IBM TotalStorage Productivity Center for Replication, Replication Manager database name

Select the required database tuning cycle in hours and click Next to continue as shown in Figure 3-85.

Figure 3-85 IBM TotalStorage Productivity Center for Replication database tuning cycle


Review the information about the IBM TotalStorage Productivity Center for Replication preview panel and click Install as shown in Figure 3-86.

Figure 3-86 IBM TotalStorage Productivity Center for Replication installation information

The Productivity Center for Replication Installer - Finish panel in Figure 3-87 will be displayed upon successful installation. Read the post installation tasks. Click Finish to complete the installation.

Figure 3-87 Productivity Center for Replication installation successful


3.5.7 IBM TotalStorage Productivity Center for Fabric


We have included the installation for the Productivity Center for Fabric here. Refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for more information on using the Productivity Center for Fabric with the Productivity Center for Disk. Prior to installing IBM TotalStorage Productivity Center for Fabric, there are prerequisite tasks that need to be completed. These tasks are described in detail in 3.4, IBM TotalStorage Productivity Center for Fabric on page 54. They include:
The computer name on page 54
SNMP install on page 53
Database considerations on page 55
Windows Terminal Services on page 55
User IDs and password considerations on page 56
Personal firewall on page 56
Tivoli NetView on page 55
Security considerations on page 57

Installing the manager


After the successful installation of the Productivity Center for Replication, the suite installer will begin the Productivity Center for Fabric install (see Figure 3-88). Click Next to continue.

Figure 3-88 IBM TotalStorage Productivity Center installation information

The InstallShield wizard will be displayed. Read the information and click Next to continue. The Package Location for Productivity Center for Fabric Manager panel is displayed (see Figure 3-89 on page 108). Enter the appropriate information and click Next to continue.

Important: The package location at this point is very important. If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file.


Figure 3-89 Productivity Center for Fabric install package location

The language installation option panel is displayed. Select the required language and click OK as shown in Figure 3-90.

Figure 3-90 IBM TotalStorage Productivity Center for Fabric install wizard

The Welcome panel is displayed. Click Next to continue as shown in Figure 3-91 on page 109.


Figure 3-91 IBM TotalStorage Productivity Center for Fabric welcome information

Select the type of installation you want to perform (see Figure 3-92 on page 110). In this case we are installing the IBM TotalStorage Productivity Center for Fabric code. You can also use the suite installer to perform a remote deployment of the Fabric agent. This operation can be performed only if you have previously installed the common agent on a machine. For example, you might have installed the Data agent on the machines and want to add the Fabric agent to the same machines. You must have installed the Fabric Manager before you can deploy the Fabric agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time; you can only select one option. Click Next to continue.


Figure 3-92 Fabric Manager installation type selection

The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-93.

Figure 3-93 IBM TotalStorage Productivity Center for Fabric installation directory

The Port Number panel is displayed. This is a range of eight port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port number you specify is considered the primary port number, and it is the only one you need to enter; the primary port number and the next seven numbers will be reserved for use by IBM TotalStorage Productivity Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric will use port numbers 9550-9557. Ensure that the port numbers you use are not used by other applications at the same time. To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt (we recommend the first):

netstat -a
netstat -an

The port numbers in use on the system are listed in the Local Address column of the output. This field has the format host:port. Enter the primary port number as shown in Figure 3-94 and click Next to continue.
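The reserved range can be checked before you start the wizard. A minimal sketch, assuming a Unix-style shell and that netstat is on the PATH; the primary port 9550 is the example value from the text:

```shell
# Compute the eight ports Fabric reserves from the primary port,
# then warn about any that netstat already reports as in use.
PRIMARY=9550
PORTS=$(seq "$PRIMARY" $((PRIMARY + 7)))
echo "Reserved ports:" $PORTS
for p in $PORTS; do
  if netstat -an | grep -q "[:.]$p "; then
    echo "WARNING: port $p appears to be in use"
  fi
done
```

If any warning is printed, choose a different primary port so the whole eight-port range is free.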

Figure 3-94 IBM TotalStorage Productivity Center for Fabric port number

The Database choice panel is displayed. You can select DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server; DB2 is the recommended option. Click Next to continue as shown in Figure 3-95 on page 112.


Figure 3-95 IBM TotalStorage Productivity Center for Fabric database selection type

The next panel allows you to select the WebSphere Application Server to use in the install. In this installation we used the Embedded WebSphere Application Server. Click Next to continue as shown in Figure 3-96.

Figure 3-96 Productivity Center for Fabric WebSphere Application Server type selection

The Single or Multiple User ID and Password panel (using DB2) is displayed (see Figure 3-97 on page 113). If you selected DB2 as your database, you will see this panel. This panel allows you to use the DB2 administrative user ID and password for the DB2 user and WebSphere user. You can also use the DB2 administrative password for the host authentication and NetView password.


For example, if you select all the choices in the panel, you will use the DB2 administrative user ID and password for the DB2 and WebSphere user IDs and passwords, and the DB2 administrative password for the host authentication and NetView passwords. For each choice you select, you will not be prompted separately for that user ID or password.

Note: If you selected Cloudscape as your database, this panel is not displayed.

Click Next to continue.

Figure 3-97 IBM TotalStorage Productivity Center for Fabric user and password options

The User ID and Password panel (using DB2) is displayed. If you selected DB2 as your database, you will see this panel, which prompts for the DB2 administrative user ID and password. Enter the required user ID and password, then click Next to continue as shown in Figure 3-98 on page 114.


Figure 3-98 IBM TotalStorage Productivity Center for Fabric database user information

Enter a name for the new database or accept the default, then click Next to continue as shown in Figure 3-99.

Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications.

Figure 3-99 IBM TotalStorage Productivity Center for Fabric database name

Enter the drive on which to create the database, then click Next to continue as shown in Figure 3-100 on page 115.


Figure 3-100 IBM TotalStorage Productivity Center for Fabric database drive information

The Agent Manager Information panel is displayed. You must provide the following information:

- Agent manager name or IP address: the name or IP address of your agent manager.
- Agent manager registration port: the port number of your agent manager.
- Agent registration password (twice): the password used to register the common agent with the agent manager, as shown in Figure 3-50 on page 84. If the password was not set and the default was accepted, the password is changeMe.
- Resource manager registration user ID: the user ID used to register the resource manager with the agent manager (default is manager).
- Resource manager registration password (twice): the password used to register the resource manager with the agent manager (default is password).

Fill in the information and click Next to continue as shown in Figure 3-101 on page 116.


Figure 3-101 IBM TotalStorage Productivity Center for Fabric agent manager information

The IBM TotalStorage Productivity Center for Fabric Install panel is displayed. This panel provides information about the location and size of the Fabric Manager. Click Next to continue as shown in Figure 3-102.

Figure 3-102 IBM TotalStorage Productivity Center for Fabric installation information

The Status panel is displayed. The installation can take about 15 to 20 minutes to complete. When the installation has completed, the Successfully Installed panel is displayed. Click Next to continue as shown in Figure 3-103 on page 117.


Figure 3-103 IBM TotalStorage Productivity Center for Fabric installation status

The install wizard Complete Installation panel is displayed. Do not restart your computer; select No, I will restart my computer later, then click Finish to complete the installation as shown in Figure 3-104.

Figure 3-104 IBM TotalStorage Productivity Center for Fabric restart options

The Install Status panel will be displayed indicating the Productivity Center for Fabric installation was successful. Click Next to continue as shown in Figure 3-105 on page 118.


Figure 3-105 IBM TotalStorage Productivity Center installation information


Chapter 4. CIMOM installation and configuration

This chapter provides a step-by-step guide to configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) that are required to use the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication.

Copyright IBM Corp. 2004, 2005. All rights reserved.


4.1 Introduction
After you have completed the installation of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, you need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents.

Note: For the remainder of this chapter, we refer to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication simply as TotalStorage Productivity Center.

TotalStorage Productivity Center for Disk uses SLP as the method for CIM clients to locate managed objects. The managed devices may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter we describe the steps for:

- Planning considerations for Service Location Protocol (SLP)
- SLP configuration recommendations
- General performance guidelines
- Planning considerations for CIMOM
- Installing and configuring the CIM agent for the Enterprise Storage Server
- Verifying the connection to the ESS
- Setting up the Service Location Protocol Directory Agent (SLP DA)
- Installing and configuring the CIM agent for the DS4000 family
- Configuring the CIM agent for the SAN Volume Controller

4.2 Planning considerations for Service Location Protocol


The Service Location Protocol (SLP) has three major components: the Service Agent (SA), the User Agent (UA), and the Directory Agent (DA). The SA and UA are required components; the DA is optional. Whether to use an SLP DA in your environment depends on the considerations described below.

4.2.1 Considerations for using SLP DAs


The main reason to use a DA is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. When one or more DAs are deployed, UAs send unicast requests to DAs for services, and SAs register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery.


SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast traffic. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity; deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity.

Consider using DAs in your enterprise if any of the following conditions are true:

- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or time-outs during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.

4.2.2 SLP configuration recommendation


Some configuration recommendations are provided for enabling TotalStorage Productivity Center for Disk to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration and SLP directory agent configuration.

Router configuration
Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation.

Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets; routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets passed between subnets, which can result in the need to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets; this value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).


SLP directory agent configuration


Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center for Disk. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no services outside its own subnet.

To allow TotalStorage Productivity Center for Disk to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center for Disk Discovery Preference panel as discussed in Configuring IBM Director for SLP discovery on page 152. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center for Disk is installed.

Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.

Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.
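The SA-to-DA switch is made in the SLP configuration file. A minimal sketch, assuming an OpenSLP-style slp.conf (the file name, location, and exact properties vary by CIM Agent installation; the property names below follow OpenSLP conventions and are not taken from this product's documentation):

```
;; slp.conf -- run this agent as a Directory Agent instead of a Service Agent
net.slp.isDA = true
;; optionally restrict the scopes this DA serves
net.slp.useScopes = DEFAULT
```

After changing the file, restart the SLP service so the agent re-reads its configuration.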

4.3 General performance guidelines


Here are some general performance considerations for configuring the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication environment:

- Do not overpopulate the SLP discovery panel with SLP agent hosts. Remember that TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that receives information about SLP Service Agents and Directory Agents (DAs) that reside in the same subnet as the TotalStorage Productivity Center for Disk installation. You should have no more than one DA per subnet.
- Misconfiguring the IBM Director discovery preferences may impact the performance of auto discovery or of device presence checking. It may also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available.
- Consider it mandatory to run the ESS CLI and ESS CIM agent software on another host of comparable size to the main TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication server. Attempting to run a full TotalStorage Productivity Center implementation (Device Manager, Performance Manager, Replication Manager, DB2, IBM Director, and the WebSphere Application Server) on the same host as the ESS CIM agent will result in dramatically increased wait times for data retrieval.

Based on our ITSO lab experience, we suggest separate servers for TotalStorage Productivity Center for Disk (along with TotalStorage Productivity Center for Replication), the ESS CIMOM, and the DS4000 family CIMOM. Otherwise, you may have port conflicts, increased wait times for data retrieval, and resource contention.


4.4 Planning considerations for CIMOM


The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. Figure 4-1 on page 123 shows an overview of the CIM agent.

Figure 4-1 CIM Agent Overview

You may plan to install the CIM agent code on the same server that hosts the device management interface, or you may install it on a separate server.

Attention: At this time only a few devices come with an integrated CIM agent; most devices need an external CIMOM for CIM-enabled management applications (CIM clients) to be able to communicate with the device. For ease of installation, IBM provides an ICAT (short for Integrated Configuration Agent Technology), a bundle that mainly includes the CIMOM, the device provider, and an SLP SA.

4.4.1 CIMOM configuration recommendations


The following recommendations are based on our experience in the ITSO lab environment: The CIMOM agent code you are planning to use must be supported by the installed version of TotalStorage Productivity Center for Disk. Refer to the link below for the latest updates:
http://www-1.ibm.com/servers/storage/support/software/tpcdisk/

You must have a CIMOM-supported firmware level on the storage devices. If you have an incorrect version of firmware, you may not be able to discover and manage the storage devices. The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection; hence it is recommended to have a dedicated server for the CIMOM agent, although you may configure the same CIMOM agent for multiple devices of the same type. You may also plan to locate this server within the same data center where the storage devices are located, in consideration of firewall port requirements. Typically, it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you may need to open firewall ports only for TotalStorage Productivity Center for Disk communication with the CIMOM.


Co-location of CIM agent instances of differing types on the same server is not recommended because of resource contention. It is strongly recommended to have separate, dedicated servers for the CIMOM agents and the TotalStorage Productivity Center server, because of resource contention, TCP/IP port requirements, and system services co-existence.

4.5 Installing CIM agent for ESS


Before starting Multiple Device Manager discovery, you must first configure the Common Information Model Object Manager (CIMOM) for ESS. The ESS CIM Agent package is made up of the following parts (see Figure 4-2).

Figure 4-2 ESS CIM Agent Package

This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system.

4.5.1 ESS CLI install


The following installation and configuration tasks are listed in the order in which they should be performed. Before you install the ESS CIM Agent you must install the IBM TotalStorage Enterprise Storage Server Command Line Interface (ESS CLI). The ESS CIM Agent installation program checks your system for the existence of the ESS CLI and reports that it cannot continue if the ESS CLI is not installed, as shown in Figure 4-3 on page 125.


Figure 4-3 ESS CLI install requirement for ESS CIM Agent

Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software; the minimum required ESS CLI level is 2.4.0.236.

Perform the following steps to install the ESS CLI for Windows: Insert the ESS CLI CD in the CD-ROM drive, run the setup, and follow the instructions, as shown in Figure 4-4 on page 126 through Figure 4-7 on page 127.

Note: The ESS CLI installation wizard detects whether you have an earlier level of the ESS CLI software installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI.


Figure 4-4 InstallShield Wizard for ESS CLI

Figure 4-5 Choose target system panel


Figure 4-6 ESS CLI Setup Status panel

Figure 4-7 ESS CLI installation complete panel

Reboot your system before proceeding with the ESS CIM Agent installation. This is necessary because the ESS CLI depends on environment variable settings, and because the CIM Agent runs as a service, those settings will not be in effect for the ESS CIM Agent until you reboot.


Verify that the ESS CLI is installed: Click Start > Settings > Control Panel, double-click the Add/Remove Programs icon, and verify that there is an IBM ESS CLI entry.

Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command:

esscli -u itso -p itso13sj -s 9.43.226.43 list server

Where:
- 9.43.226.43 represents the IP address of the Enterprise Storage Server
- itso represents the Enterprise Storage Server Specialist user name
- itso13sj represents the Enterprise Storage Server Specialist password for the user name

Figure 4-8 shows the response from the esscli command.
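This connectivity check can be scripted so it is easy to repeat after network or password changes. A minimal sketch, assuming a Unix-style shell and that esscli is on the PATH; the check_ess function name is our own, and the invocation mirrors the example above:

```shell
# Hypothetical wrapper around the esscli verification step: prints OK when
# "list server" succeeds, FAILED otherwise (command output is discarded,
# only the exit status is checked).
check_ess() {
  ip=$1; user=$2; pass=$3
  if esscli -u "$user" -p "$pass" -s "$ip" list server >/dev/null 2>&1; then
    echo "ESS CLI connection to $ip OK"
  else
    echo "ESS CLI connection to $ip FAILED"
  fi
}

# Example, using the ITSO values from the text:
# check_ess 9.43.226.43 itso itso13sj
```

Run it for every ESS you intend to register; a FAILED result points to network, user name, or password problems that must be fixed before the CIM Agent install.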

Figure 4-8 ESS CLI verification

4.5.2 ESS CIM Agent install


To install the ESS CIM Agent on your Windows system, perform the following steps: Log on to your system as the local administrator and insert the CIM Agent for ESS CD into the CD-ROM drive. If autorun mode is set on your system, the Install Wizard launchpad should start automatically, and you should see a launchpad similar to Figure 4-9 on page 129. You may review the Readme file from the launchpad menu. Then click Installation Wizard. The Installation Wizard starts executing the setup.exe program and shows the Welcome panel in Figure 4-10 on page 130.

Note: The ESS CIM Agent program should start within 15 to 30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps:

Use a Command Prompt or Windows Explorer to change to the Windows directory on the CD. If you are using a Command Prompt window, run setup.exe. If you are using Windows Explorer, double-click the setup.exe file.


Note: If you are using CIMOM code from the IBM download Web site and not from the distribution CD, ensure that you use a short Windows directory path name; executing setup.exe from a longer path name may fail. An example of a short path name is C:\CIMOM\setup.exe.

Figure 4-9 ESS CIMOM launchpad

The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 4-10 on page 130).


Figure 4-10 ESS CIM Agent welcome window

The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 4-11 on page 131).


Figure 4-11 ESS CIM Agent license agreement

The Destination Directory window opens. Accept the default directory and click Next (see Figure 4-12 on page 132).


Figure 4-12 ESS CIM Agent destination directory panel

The Updating CIMOM Port window opens (see Figure 4-13 on page 133). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup we used the default port, 5989.

Note: If the default port is the same as another port already in use, modify the default port and click Next. Use the following command to check which ports are in use: netstat -a


Figure 4-13 ESS CIM Agent port window

The Installation Confirmation window opens (see Figure 4-14 on page 134). Click Install to confirm the installation location and file size.


Figure 4-14 ESS CIM Agent installation confirmation

The Installation Progress window opens (see Figure 4-15 on page 135) indicating how much of the installation has completed.


Figure 4-15 ESS CIM Agent installation progress

When the Installation Progress window closes, the Finish window opens (see Figure 4-16 on page 136). Check the View post installation tasks check box if you want to continue with post installation tasks when the wizard closes. We recommend you review the post installation tasks. Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the ESS CIM Agent for Windows is installed.


Figure 4-16 ESS CIM Agent install complete- starting services

Click Finish to exit the installation wizard (see Figure 4-17 on page 137).


Figure 4-17 ESS CIM Agent install successful

4.5.3 Post Installation tasks


Continue with the following post installation tasks for the ESS CIM Agent.

Verify the installation of the SLP


Verify that the Service Location Protocol is started: Select Start > Settings > Control Panel, double-click the Administrative Tools icon, then double-click the Services icon. Find Service Location Protocol in the Services window list. For this component, the Status column should be marked Started, as shown in Figure 4-18 on page 138.


Figure 4-18 Verify Service Location Protocol started

If SLP is not started, right-click Service Location Protocol and select Start from the pop-up menu. Wait for the Status column to change to Started.

Verify the installation of the ESS CIM Agent


Verify that the CIMOM service is started. If you closed the Services window, select Start > Settings > Control Panel, double-click the Administrative Tools icon, then double-click the Services icon. Find IBM CIM Object Manager - ESS in the Services window list. For this component, the Status column should be marked Started and the Startup Type column should be marked Automatic, as shown in Figure 4-19 on page 139.


Figure 4-19 ESS CIM Object Manager started confirmation

If the IBM CIM Object Manager is not started, right-click the IBM CIM Object Manager - ESS service and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has been successfully installed on your Windows system. Next, perform the configuration tasks.

4.6 Configuring the ESS CIM Agent for Windows


This task configures the ESS CIM Agent after it has been successfully installed.

4.6.1 Registering ESS Devices


Perform the following steps to configure the ESS CIM Agent: Configure the ESS CIM Agent with the information for each Enterprise Storage Server the ESS CIM Agent is to access. Select Start > Programs > IBM TotalStorage CIM Agent for ESS > Enable ESS Communication, as shown in Figure 4-20 on page 140.


Figure 4-20 Configuring the ESS CIM Agent

Type the command addess <ip> <user> <password> for each ESS (as shown in Figure 4-21 on page 141), where:
- <ip> represents the IP address of the cluster of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name


- <password> represents the Enterprise Storage Server Specialist password for the user name

Important: The ESS CIM agent relies on ESS CLI connectivity from the ESS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. It is recommended to verify this by launching the ESS Specialist browser from the ESS CIMOM server; you may log on to both clusters of each ESS to make sure you are authenticated with the correct ESS passwords and IP addresses. If the ESSs are on a different subnet than the ESS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. If you have a bi-directional firewall between the ESS devices and the CIMOM server, verify the connection using the rsTestConnection command of the ESS CLI code. If the ESS CLI connection is not successful, you must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Once you are satisfied that you can authenticate and receive an ESS CLI heartbeat from all the ESSs successfully, you may proceed to enter the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, because it retries the authentication.

Figure 4-21 The addess command example

4.6.2 Register ESS server for Copy services


Type the following command for each ESS server that is configured for Copy Services:

addesserver <ip> <user> <password>

Where:
- <ip> represents the IP address of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name

Repeat the previous step for each additional ESS device that you want to configure.


Close the setdevice interactive session by typing exit. Once you have defined all the ESS servers, you must stop and restart the CIMOM so that it initializes the information for the ESS servers.

Note: Because the CIMOM collects and caches the information from the defined ESS servers at startup time, starting the CIMOM might take longer the next time you start it.

Attention: If the user name and password entered are incorrect, or the ESS CIM agent cannot connect to the ESS, an error occurs and the ESS CIM Agent will not start and stop correctly. Use the following command to remove the ESS entry that is causing the problem, then reboot the server:

rmess <ip>

Whenever you add or remove an ESS from the CIMOM registration, you must restart the CIMOM to pick up the updated ESS device list.

4.6.3 Restart the CIMOM


Perform the following steps to use the Windows Start menu to stop and restart the CIMOM. This is required so that the CIMOM can register new devices or unregister deleted devices. Stop the CIMOM by selecting Start > Programs > IBM TotalStorage CIM Agent for ESS > Stop CIMOM service. A Command Prompt window opens to track the stopping of the CIMOM (as shown in Figure 4-22). If the CIMOM has stopped successfully, the following message is displayed:

Figure 4-22 Stop ESS CIM Agent

Restart the CIMOM by selecting Start > Programs > IBM TotalStorage CIM Agent for ESS > Start CIMOM service. A Command Prompt window opens to track the progress of the starting of the CIMOM. If the CIMOM has started successfully, the message shown in Figure 4-23 on page 143 is displayed:


Figure 4-23 Restart ESS CIM Agent

Note: The restarting of the CIMOM may take a while because it is connecting to the defined ESS servers and is caching that information for future use.
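The Start-menu shortcuts above wrap a Windows service, so the same stop/start cycle can be scripted. A minimal sketch, assuming the service name exactly as it appears in the Services panel (IBM CIM Object Manager - ESS) and that the Windows net command is available; the restart_cimom function name is our own:

```shell
# Hypothetical restart helper: stop the CIMOM service, then start it again.
# Start is only attempted if the stop succeeded.
restart_cimom() {
  net stop "IBM CIM Object Manager - ESS" && \
  net start "IBM CIM Object Manager - ESS"
}
```

Remember that startup can still be slow while the CIMOM re-caches information from the defined ESS servers.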

4.6.4 CIMOM User Authentication


Use the setuser interactive tool to configure the CIMOM for the users who will have the authority to use the CIMOM. The user is the TotalStorage Productivity Center for Disk and Replication superuser.

Important: A TotalStorage Productivity Center for Disk and Replication superuser ID and password must be created and must be the same for all CIMOMs that TotalStorage Productivity Center for Disk is to discover. This user ID should be eight characters or fewer.

Upon installation of the CIM Agent for ESS, the provided default user name is superuser with a default password of passw0rd. The first time you use the setuser tool, you must use this user name and password combination. Once you have defined other user names, you can start the setuser command by specifying other defined CIMOM user names.

Note: The users that you configure to have authority to use the CIMOM are uniquely defined to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names.
Open a Command Prompt window and change to the ESS CIM Agent directory, for example C:\Program Files\IBM\cimagent. Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM. Type the command adduser cimuser cimpass in the setuser interactive session to define new users, where cimuser represents the new user name to access the ESS CIM Agent CIMOM
Chapter 4. CIMOM installation and configuration

143

and cimpass represents the password for the new user name to access the ESS CIM Agent CIMOM.

Close the setuser interactive session by typing exit. For our ITSO lab setup, we used TPCSUID as the superuser and ITSOSJ as the password.
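If you would rather script the user definition than type it interactively, the session commands can be generated and piped into setuser. This is only a sketch: whether setuser accepts commands on standard input is an assumption about the tool, and cimuser/cimpass are the example credentials used in the text, not real accounts.

```shell
#!/bin/sh
# Emit the commands for a setuser session; arguments are the new
# CIMOM user name and password (example values, not defaults).
setuser_commands() {
  printf 'adduser %s %s\n' "$1" "$2"
  printf 'exit\n'
}

# Hypothetical usage, assuming setuser reads commands from stdin
# (run from the ESS CIM Agent directory):
#   setuser_commands cimuser cimpass | setuser -u superuser -p passw0rd
setuser_commands cimuser cimpass
```

If setuser at your code level does not read standard input, type the same commands at the interactive prompt instead.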

4.7 Verifying connection to the ESS


During this task, the ESS CIM Agent software connectivity to the Enterprise Storage Server (ESS) is verified. The connection to the ESS is through the ESS CLI software. If the network connectivity fails, or if the user name and password that you set in the configuration task are incorrect, the ESS CIM Agent cannot connect successfully to the ESS. The installation, verification, and configuration of the ESS CIM Agent must be completed before you verify the connection to the ESS. Verify that you have network connectivity to the ESS from the system where the ESS CIM Agent is installed: issue a ping command to the ESS and check that you see reply statistics from the ESS IP address. Verify that SLP is active by selecting Start → Settings → Control Panel, double-clicking the Administrative Tools icon, and then double-clicking the Services icon. You should see a panel similar to Figure 4-18 on page 138; ensure that the status is Started. Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services, selecting the IBM CIM Object Manager service, and verifying that the status is shown as Started, as shown in Figure 4-24.

Figure 4-24 Verify ESS CIMOM has started


Verify the dependency between the CIMOM and SLP; this is automatically configured when you install the CIM Agent software. Verify it by selecting Start → Settings → Control Panel, double-clicking the Administrative Tools icon, double-clicking the Services icon, and then selecting Properties on Service Location Protocol, as shown in Figure 4-25.

Figure 4-25 SLP properties panel

Click Properties and select the Dependencies tab, as shown in Figure 4-26 on page 146. You must ensure that IBM CIM Object Manager has a dependency on Service Location Protocol (this should be the case by default).


Figure 4-26 SLP dependency on CIMOM

Verify the CIMOM registration with SLP by selecting Start → Programs → TotalStorage CIM Agent for ESS → Check CIMOM Registration. A window opens displaying the wbem services, as shown in Figure 4-27. These services have either registered themselves with SLP or you have explicitly registered them with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be correctly listed here. It may take some time for a CIM Agent to register with SLP.

Figure 4-27 Verify CIM Agent registration with SLP

Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the ESS CIMOM will attempt to contact each ESS registered to it. Therefore, the startup may take some time, especially if it is not able to connect and authenticate to any of the registered ESSs. Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center for Disk superuser name and password in order for TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM. The verifyconfig command checks the registration for the ESS CIM Agent and checks that it can connect to the ESSs. In our ITSO lab we had configured two ESSs (as shown in Figure 4-28).

Figure 4-28 The verifyconfig command

4.7.1 Problem determination


You might run into some errors. If that is the case, check the cimom.log file, which is located in the C:\Program Files\IBM\cimagent directory. Verify that you have entries with your current install timestamp, as shown in Figure 4-29. The entries of specific interest are:
CMMOM0500I Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
CMMOM0409I Server waiting for connections
The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM Agent install time; the second indicates that it has started successfully and is waiting for connections.

Figure 4-29 CIMOM Log Success
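The "waiting for connections" message lends itself to a simple scripted check. The sketch below assumes the CMMOM0409I message ID shown in Figure 4-29; the log path in the usage comment is the default install directory and may differ on your system.

```shell
#!/bin/sh
# Return success (exit 0) only if the CIMOM log contains the
# "Server waiting for connections" message (ID CMMOM0409I).
cimom_ready() {
  grep -q 'CMMOM0409I' "$1"
}

# Hypothetical usage against the default install path:
#   cimom_ready "/c/Program Files/IBM/cimagent/cimom.log" && echo "CIMOM is up"
```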


If you still have problems, refer to the IBM TotalStorage Enterprise Storage Server Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the doc directory at the root of the CIM Agent CD. Figure 4-30 shows the location of the guide in the doc directory of the CD.

Figure 4-30 ESS Application Programming Interface Reference guide

4.7.2 Confirming the ESS CIMOM is available


Before you proceed, you need to be sure that the ESS CIMOM is listening for incoming connections. To do this, run a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on the configured port (indicated by a blank screen with the cursor at the top left) tells you that the ESS CIMOM is active. You selected this port during ESS CIMOM code installation. If the telnet connection fails, you will see a panel like the one shown in Figure 4-31. In that case, investigate the problem until telnet to the port returns a blank screen.

Figure 4-31 Example of failed telnet connection
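If telnet is not convenient, the same check can be scripted. This sketch assumes a bash shell (for the /dev/tcp virtual device) and the coreutils timeout command; the host and port in the usage comment are the ITSO lab values from this chapter and are illustrative.

```shell
#!/bin/bash
# Probe a TCP port without a full telnet session; relies on bash's
# /dev/tcp virtual device and gives up after 5 seconds.
port_open() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Hypothetical usage (9.1.38.48:5989 is the ITSO ESS CIMOM):
#   port_open 9.1.38.48 5989 && echo "ESS CIMOM is listening"
```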

Another method to verify that the ESS CIMOM is up and running is to use the CIM Browser interface. On Windows machines, change the working directory to c:\Program Files\ibm\cimagent and run startcimbrowser. The WBEM browser shown in Figure 4-32 on page 149 appears. The default user name is superuser and the default password is passw0rd. If you have already changed these using the setuser command, the new user ID and


password must be provided. This should be set to the TotalStorage Productivity Center for Disk user ID and password.

Figure 4-32 WBEM Browser

When login is successful, you should see a panel like the one in Figure 4-33.

Figure 4-33 CIMOM Browser window


4.7.3 Setting up the Service Location Protocol Directory Agent


You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. Perform the following steps to set up the SLP DAs:
1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center for Disk to discover.
2. Each device is associated with a CIM Agent, and there might be multiple CIM Agents in each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file:
   a. If the ESS CIM Agent is installed in the default install directory path, go to the C:\Program Files\IBM\cimagent\slp directory and look for the file named slp.conf.
   b. Make a backup copy of this file and name it slp.conf.bak.
   c. Open the slp.conf file and scroll down until you find (or search for) the line ;net.slp.isDA = true
   d. Remove the semicolon (;) at the beginning of the line and ensure that this property is set to true (= true) rather than false.
   e. Save the file.
   f. Copy this file (or replace it if it already exists) to the main Windows directory (for example, c:\winnt) on Windows machines, or to the /etc directory on UNIX machines.
4. It is recommended that you reboot the SLP server at this stage. Alternatively, you may restart the SLP and CIMOM services from the Windows desktop: Start → Settings → Control Panel → Administrative Tools → Services. In the Services GUI, locate Service Location Protocol, right-click it, and select Stop. A panel pops up requesting to stop the IBM CIM Object Manager service as well; click Yes. After SLP has stopped successfully, start the SLP daemon again. Alternatively, you may restart the CIMOM from the command line, as shown in Figure 4-34.

Figure 4-34 Stop and Start CIMOM using commandline
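The slp.conf edit in step 3 can also be automated. The sed sketch below assumes the line appears exactly as shown in the procedure (;net.slp.isDA = true, possibly with different spacing) and keeps the slp.conf.bak backup the procedure calls for.

```shell
#!/bin/sh
# Enable SLP DA mode by uncommenting ";net.slp.isDA = true" in slp.conf,
# after saving the original as slp.conf.bak.
enable_slp_da() {
  cp "$1" "$1.bak"
  sed 's/^;\(net\.slp\.isDA[[:space:]]*=[[:space:]]*true\)/\1/' "$1.bak" > "$1"
}

# Hypothetical usage against the default install path:
#   enable_slp_da "/c/Program Files/IBM/cimagent/slp/slp.conf"
```

Remember to copy the edited file to c:\winnt (Windows) or /etc (UNIX) and restart SLP as described in step 4.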


Note: The CIMOM process might not start automatically when you restart the SLP daemon. After you execute the stopcimom and startcimom commands shown in Figure 4-34, you should get a response that the CIMOM has stopped and started successfully. CIMOM startup takes considerable time if you have configured many ESSs. To ensure that it has started and is listening, check the cimom.log file as shown in Figure 4-29 on page 147. You should see the message CMMOMxxxx Server waiting for connections...

Creating slp.reg file


Important: To avoid having to manually register the CIMOMs outside the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is C:\winnt\. Slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

slp.reg file example


The following is a sample slp.reg file.
Example 4-1   slp.reg file

#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
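Entries in the Example 4-1 format can also be generated by script, which helps keep slp.reg consistent across servers. This is a minimal sketch: it emits only the service URL and a description line (the other attributes are optional), and the path and values in the usage comment are illustrative.

```shell
#!/bin/sh
# Append a minimal static-registration entry for one CIMOM to an
# slp.reg file: URL line in the Example 4-1 format plus a description.
add_slp_reg_entry() {
  file=$1; ip=$2; port=$3; desc=$4
  {
    printf 'service:wbem:https://%s:%s,en,65535\n' "$ip" "$port"
    printf 'description=%s\n' "$desc"
    printf '\n'
  } >> "$file"
}

# Hypothetical usage for the ITSO ESS CIMOM:
#   add_slp_reg_entry /c/WINNT/slp.reg 9.1.38.48 5989 "ESS CIMOM ITSO Lab"
```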

4.7.4 Configuring IBM Director for SLP discovery


You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. The DA will also automatically discover all other services registered with other SLP SAs in that subnet.
Attention: You need to register the IP address of the server running the SLP DA daemon with IBM Director to facilitate MDM SLP discovery. You can do this using the IBM Director Console interface of TotalStorage Productivity Center for Disk.
At this stage, it is assumed that you have already completed the installation of TotalStorage Productivity Center for Disk on a separate, dedicated server. Proceed to that server to perform the following steps and launch the IBM Director Console. Go to the IBM Director Console Options → Discovery Preferences → MDM SLP Configuration settings panel, and enter the host names or IP addresses of each of the machines that are running the SLP DA that was set up in the prior steps. As shown in Figure 4-35 on page 153, enter the IP address of the SLP DA server and click Add → OK.


Figure 4-35 IBM Director Discovery Preference Panel

4.7.5 Registering the ESS CIM Agent to SLP


You need to manually register the ESS CIM Agent with the SLP DA only when both of the following conditions are true:
- There is no ESS CIM Agent in the TotalStorage Productivity Center for Disk server subnet.
- The SLP DA used by Multiple Device Manager is also not running an ESS CIM Agent.
Tip: If either of the preceding conditions is false, you do not need to perform the following steps.
To register the ESS CIM Agent, issue the following commands on the SLP DA server:
C:\>cd C:\Program Files\IBM\cimagent\slp
slptool register service:wbem:https://ipaddress:port
where ipaddress is the ESS CIM Agent IP address and port is its port. For our ITSO setup, the IP address of our ESS CIMOM server was 9.1.38.48 and the default port number 5989. Issue a verifyconfig command as shown in Figure 4-28 on page 147 to confirm that SLP is aware of the registration.
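The registration command can be wrapped in a small script so that the same service URL string is built consistently for registration and later checks. The slptool invocation is guarded because the tool is only present on machines with the CIM Agent's slp directory on the PATH; the IP address and port are the ITSO lab values and are illustrative.

```shell
#!/bin/sh
# Build the service URL string that slptool expects.
wbem_url() {
  printf 'service:wbem:https://%s:%s' "$1" "$2"
}

url=$(wbem_url 9.1.38.48 5989)
# Register only if slptool is available on this machine:
if command -v slptool >/dev/null 2>&1; then
  slptool register "$url"
fi
echo "$url"
```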


Attention: Whenever you update the SLP configuration as shown above, you may have to stop and start the slpd daemon so that SLP can register and listen on the newly configured ports. Also, whenever you restart the SLP daemon, ensure that the IBM ESS CIMOM agent has also restarted; otherwise, issue the startcimom.bat command as shown in the previous steps. Another alternative is to reboot the CIMOM server. Note that ESS CIMOM startup takes a long time.

4.7.6 Verifying and managing CIMOMs availability


You may now verify that TotalStorage Productivity Center for Disk can authenticate to and discover the CIMOM agent services that are registered with the SLP DA. Launch the IBM Director Console and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the tasks panel, as shown in Figure 4-36. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO ESS CIMOM server connection status is indicated in the first line, with IP address 9.1.38.48, port 5996, and status Success.

Figure 4-36 Manage CIMOM panel

Note: The panel shows the connection status of all connections attempted earlier, whether successful or failed. It is possible to delete failed connections and clean up this panel manually. In order to verify and reconfirm a connection, select the respective connection status and click Properties. Figure 4-37 on page 155 shows the properties panel, where you can verify the user name and password information. The namespace, user name, and password are picked up automatically, so they do not need to be entered manually. This is the same user name and password you configured in the earlier steps with the setuser command, and it is used by TotalStorage Productivity Center for Disk to log on to the CIMOM. If you have problems getting a successful connection, you can manually enter the namespace /root/ibm and your CIMOM user name and password.


Figure 4-37 CIMOM Properties panel

You can click the Test Connection button. A panel similar to Figure 4-38 should appear, showing that the connection is successful.

Figure 4-38 Test Connection for CIMOM

At this point TotalStorage Productivity Center for Disk has registered the ESS CIMOM and is ready for device discovery.

4.8 Installing CIM agent for IBM DS4000 family


The latest code for the IBM DS4000 family is available at the IBM support Web site. You need to download the correct and supported level of CIMOM code for TotalStorage Productivity Center for Disk Version 2.1. You can navigate from the following IBM support Web site for TotalStorage Productivity Center for Disk to acquire the correct CIMOM code: http://www-1.ibm.com/servers/storage/support/software/tpcdisk/ You may have to traverse multiple links to get to the download files. At the time of writing, we accessed the Web page shown in Figure 4-39 on page 156.


Figure 4-39 IBM support matrix Web page

While scrolling down the same Web page, we found the link for the DS4000 CIMOM code shown in Figure 4-40 on page 157. This link leads to the Engenio provider Web site. The currently supported code level is 1.0.59, as indicated on the Web page.


Figure 4-40 Web download link for DS Family CIMOM code

From the Web site, select the operating system of the server on which the IBM DS family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on the server on which you will be installing the DS4000 CIM Agent (see Figure 4-41 on page 158).


Figure 4-41 DS CIMOM Install

Launch the setup.exe file to begin the DS 4000 family CIM agent installation. The InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 4-42). Click Next to continue.

Figure 4-42 LSI SMI-S Provider window


The LSI License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 4-43 on page 159).

Figure 4-43 LSI License Agreement

The LSI System Info window opens. The minimum requirements are listed along with the install system disk free space and memory attributes as shown in Figure 4-44. If the install system fails the minimum requirements evaluation, then a notification window will appear and the installation will fail. Click Next to continue.

Figure 4-44 System Info window


The Choose Destination Location window appears. Click Browse to choose another location or click Next to begin the installation of the FAStT CIM agent (see Figure 4-45 on page 160).

Figure 4-45 Choose a destination

The InstallShield Wizard will prepare and copy the files into the destination directory. See Figure 4-46.

Figure 4-46 Install Preparation window


The README will appear after the files have been installed. Read through it to become familiar with the most current information (see Figure 4-47 on page 161). Click Next when ready to continue.

Figure 4-47 README file

In the Enter IPs and/or Hostnames window enter the IP addresses and hostnames of the FAStT devices this FAStT CIM agent will manage as shown in Figure 4-48.

Figure 4-48 FAStT device list


Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time until all the FAStT devices have been entered and click Next (see Figure 4-49 on page 162).

Figure 4-49 Enter hostname or IP address

Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same subnet. This may cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the FAStT devices. If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button which will open the Windows Explorer. Locate and select the file and then click Open to import the file contents. When all the FAStT device hostnames and IP addresses have been entered, click Next to start the SMI-S Provider Service (see Figure 4-50).

Figure 4-50 Provider Service starting

When the Service has started, the installation of the FAStT CIM agent is complete (see Figure 4-51 on page 163).


Figure 4-51 Installation complete

Arrayhosts File
The installer creates a file called %installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt, as shown in Figure 4-52. In this file, the IP addresses of installed DS4000 units can be reviewed, added, or edited.

Figure 4-52 Arrayhosts file
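Edits to arrayhosts.txt can be made idempotent with a small helper, so a setup script can be re-run without duplicating entries. The file path in the usage comment follows the %installroot% layout described above and is illustrative; the IP address is the ITSO DS4000 CIM Agent address from later in this chapter.

```shell
#!/bin/sh
# Add a DS4000 controller IP to arrayhosts.txt only if it is not
# already listed (exact whole-line match).
add_arrayhost() {
  file=$1; ip=$2
  grep -qxF "$ip" "$file" 2>/dev/null || printf '%s\n' "$ip" >> "$file"
}

# Hypothetical usage (restart the LSI Provider service afterwards):
#   add_arrayhost "/c/Program Files/SMI-SProvider/wbemservices/cimom/bin/arrayhosts.txt" 9.1.38.79
```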

Verifying LSI Provider Service availability


You can verify from the Windows Services panel that the LSI Provider service has started, as shown in Figure 4-53 on page 164. If you change the contents of the arrayhosts file to add or delete DS4000 devices, you will need to restart the LSI Provider service using the Windows Services panel.


Figure 4-53 LSI Provider Service

Registering DS4000 CIM agent


The DS4000 CIM Agent needs to be registered with an SLP DA if the CIM Agent is in a different subnet than that of the IBM TotalStorage Productivity Center for Disk and Replication Base environment. The registration is not currently provided automatically by the CIM Agent. You register the DS4000 CIM Agent with the SLP DA from a command prompt using the slptool command; an example is shown below. You must change the IP address to reflect the IP address of the workstation or server where you installed the DS4000 CIM Agent. The IP address of our FAStT CIM Agent is 9.1.38.79 and the port is 5988. You need to execute this command on your SLP DA server; in our ITSO lab, we used the SLP DA on the ESS CIMOM server. Go to the directory C:\Program Files\IBM\cimagent\slp and run:
slptool register service:wbem:http://9.1.38.79:5988

Important: You cannot have the FAStT management password set if you are using IBM TotalStorage Productivity Center.

At this point you may run the following command on the SLP DA server to verify that the DS4000 family CIM Agent is registered with the SLP DA:
slptool findsrvs service:wbem
The response from this command shows the available services, which you may verify.

4.8.1 Verifying and Managing CIMOM availability


You may now verify that TotalStorage Productivity Center for Disk can authenticate to and discover the CIMOM agent services that are registered with the SLP DA. Proceed to your TotalStorage Productivity Center for Disk server, launch the IBM Director Console, and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the tasks panel, as shown in Figure 4-54 on page 165. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO DS4000 CIMOM server connection status is indicated in the first line, with IP address 9.1.38.79, port 5988, and status Success.

Figure 4-54 Manage CIMOM Panel

Note: The panel shows the connection status of all connections attempted earlier, whether successful or failed. It is possible to delete failed connections and clean up this panel manually. In order to verify and reconfirm a connection, select the respective connection status and click Properties. Figure 4-55 shows the properties panel, where you can verify the user name and password information. The namespace, user name, and password are picked up automatically, so they do not need to be entered manually. If you have problems getting a successful connection, you can manually enter the namespace /root/lsissi and your CIMOM user name and password.

Figure 4-55 DS CIMOM Properties Panel

You can click the Test Connection button. A panel similar to Figure 4-56 on page 166 should appear, showing that the connection is successful.


Figure 4-56 Test Connection for CIMOM

At this point TotalStorage Productivity Center for Disk has registered the DS4000 CIMOM and is ready for device discovery.

4.9 Configuring CIMOM for SAN Volume Controller


The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and provides TotalStorage Productivity Center for Disk with access to SAN Volume Controller clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage Productivity Center for Disk user name and password. Figure 4-57 explains the communication between TotalStorage Productivity Center for Disk and SAN Volume Controller Environment.

Figure 4-57 TotalStorage Productivity Center for Disk and SVC communication

For additional details on how to configure the SAN Volume Controller Console, refer to the redbook IBM TotalStorage Introducing the SAN Volume Controller, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage Productivity Center for Disk superuser name and password (the account we specify in the TotalStorage Productivity Center for Disk configuration panel shown in Figure 4-58) match an account defined on the SAN Volume Controller console; in our case we implemented user name TPCSUID and password ITSOSJ. You may want to adopt a similar nomenclature and set up the user name and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center for Disk.

Figure 4-58 Configure MDM Panel

4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
As stated previously, you should implement a unique user ID to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps: 1. Log in to the SAN Volume Controller console with a superuser account. 2. Click Users under My Work on the left side of the panel (see Figure 4-59 on page 168).


Figure 4-59 SAN Volume Controller console

3. Select Add a user in the drop-down under Users panel and click Go (see Figure 4-60).

Figure 4-60 SAN Volume Controller console Add a user


4. An introduction window is opened, click Next (see Figure 4-61).

Figure 4-61 SAN Volume Controller Add a user wizard

5. Enter the User Name and Password and click Next (see Figure 4-62 on page 170).


Figure 4-62 SAN Volume Controller Console Define users panel

6. Select your candidate cluster and move it to the right under Administrator Clusters (see Figure 4-63). Click Next to continue.

Figure 4-63 SAN Volume Controller console Assign administrator roles

7. Click Next after you Assign service roles (see Figure 4-64 on page 171).


Figure 4-64 SAN Volume Controller Console Assign user roles

8. Click Finish after you Verify user roles (see Figure 4-65 on page 172).


Figure 4-65 SAN Volume Controller Console Verify user roles

9. After you click Finish, the Viewing users panel opens (see Figure 4-66).

Figure 4-66 SAN Volume Controller Console Viewing Users


Confirming the SAN Volume Controller CIMOM is available


Before you proceed, you need to be sure that the CIMOM on the SAN Volume Controller console is listening for incoming connections. To do this, issue a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on port 5989 (indicated by a blank screen with the cursor at the top left) tells you that the SAN Volume Controller console CIMOM is active. If the telnet connection fails, you will see a panel like the one in Figure 4-67.

Figure 4-67 Example of telnet fail connection

4.9.2 Registering the SAN Volume Controller host in SLP


The next step in detecting a SAN Volume Controller is to manually register the SAN Volume Controller console with the SLP DA.
Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center for Disk server, SLP registration is automatic, so you do not need to perform the following step.
To register the SAN Volume Controller console, run the following command on the SLP DA server:
slptool register service:wbem:https://ipaddress:5989
where ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SAN Volume Controller console registration.

4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
TotalStorage Productivity Center for Disk discovers both IBM storage devices that comply with the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location Protocol (SLP). The TotalStorage Productivity Center for Disk server software performs SLP discovery on the network. The User Agent looks for all registered services with a service type of service:wbem. TotalStorage Productivity Center for Disk performs the following discovery tasks:

- Locates individual storage devices
- Retrieves vital characteristics for those storage devices
- Populates the TotalStorage Productivity Center for Disk internal databases with the discovered information
TotalStorage Productivity Center for Disk can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services have been discovered through SLP, TotalStorage Productivity Center for Disk contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM. TotalStorage Productivity Center for Disk gathers the vital characteristics of each of these devices. For TotalStorage Productivity Center for Disk to successfully communicate with the CIMOMs, the following conditions must be met:
- A common user name and password must be configured for all the CIM Agent instances that are associated with storage devices that are discoverable by TotalStorage Productivity Center for Disk (use adduser as described in 4.6.4, CIMOM User Authentication on page 143).
- That same user name and password must also be configured for TotalStorage Productivity Center for Disk using the Configure MDM task in the TotalStorage Productivity Center for Disk interface. If a CIMOM is not configured with the matching user name and password, it will be impossible to determine which devices the CIMOM supports. As a result, no devices for that CIMOM will appear in the IBM Director Group Content pane.
- The CIMOM service must be accessible through the IP network.
- The TCP/IP network configuration on the host where TotalStorage Productivity Center for Disk is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by TotalStorage Productivity Center for Disk.
It is important to verify that the CIMOM is up and running.
To do that, use the following command from the TotalStorage Productivity Center for Disk server:

telnet CIMip port

Where CIMip is the IP address where the CIM Agent runs and port is the port used for the communication (5989 for a secure connection, 5988 for an unsecured connection).
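As a lightweight alternative to a manual telnet, the same reachability check can be scripted. The sketch below is illustrative only: the helper names are our own and the host address in the usage note is a placeholder. It simply attempts a TCP connection to the conventional CIM-XML ports.

```python
# Illustrative sketch of the manual "telnet CIMip port" check: probe the
# CIMOM port with a plain TCP connect. Helper names are assumptions, not
# part of TotalStorage Productivity Center.
import socket

SECURE_PORT = 5989    # HTTPS (secure) CIMOM connection
UNSECURE_PORT = 5988  # HTTP (unsecured) CIMOM connection

def cimom_port(secure=True):
    """Return the conventional CIM-XML port for the chosen transport."""
    return SECURE_PORT if secure else UNSECURE_PORT

def cimom_reachable(host, secure=True, timeout=5.0):
    """Return True if a TCP connection to the CIMOM port succeeds in time."""
    try:
        with socket.create_connection((host, cimom_port(secure)), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, calling cimom_reachable("9.43.226.237") before running a discovery would confirm that the secure CIMOM port answers at all.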

4.10.1 SLP registration and slptool


TotalStorage Productivity Center for Disk uses Service Location Protocol (SLP) discovery, which requires that all of the CIMOMs it discovers are registered using SLP. SLP can only discover CIMOMs that are registered in its IP subnet. For CIMOMs outside of the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command:

slptool register service:wbem:https://myhost.com:port

Where myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
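The shape of the service URL passed to slptool matters: it must carry the service:wbem prefix followed by the CIMOM's scheme, host, and port. A minimal sketch that builds the URL and the corresponding command string (the helper names are ours, not part of slptool):

```python
# Build the service:wbem URL and slptool command for a CIMOM registration.
# Helper names are illustrative assumptions; the URL format follows the
# slptool example in the text.

def wbem_service_url(host, port=5989):
    """Build the service:wbem URL that SLP advertises for a CIMOM."""
    return f"service:wbem:https://{host}:{port}"

def slptool_register_command(host, port=5989):
    """Return the slptool invocation that registers the CIMOM manually."""
    return f"slptool register {wbem_service_url(host, port)}"

print(slptool_register_command("myhost.com"))
# -> slptool register service:wbem:https://myhost.com:5989
```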


Managing Disk Subsystems using IBM TotalStorage Productivity Center

4.10.2 Persistency of SLP registration


Although it is acceptable to register services manually into SLP, SLP users can statically register existing services (applications that were not compiled to use the SLP library) using a configuration file, called slp.reg, that SLP reads at startup. All of the registrations are maintained by slpd and will remain registered as long as slpd is alive. The SLP registration is lost if the server where SLP resides is rebooted or when the SLP service is stopped. A manual SLP registration is needed for all the CIMOMs outside the subnet where the SLP DA resides.

Important: To avoid manually re-registering the CIMOMs outside the subnet every time SLP is restarted, create a file named slp.reg. The default location for the registration file is c:\winnt on Windows machines, or the /etc directory on UNIX machines. Slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

4.10.3 Configuring slp.reg file


Here is an example of the slp.reg file:
#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------


# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
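When an slp.reg file grows to many entries, it is easy to lose track of which registrations are active and which are commented out. The following illustrative snippet (our own sketch, not part of OpenSLP) pulls the active service URLs out of slp.reg-style text:

```python
# List the service URLs of uncommented service:wbem entries in slp.reg-style
# text. Commented-out registrations start with '#'; attribute lines such as
# description= or authors= are skipped.

def active_registrations(reg_text):
    """Return the service URLs of uncommented service:wbem entries."""
    urls = []
    for line in reg_text.splitlines():
        line = line.strip()
        if line.startswith("service:wbem:"):
            # strip the trailing ",lang,lifetime" fields, e.g. ",en,65535"
            urls.append(line.split(",")[0])
    return urls

sample = """\
service:wbem:https://9.43.226.237:5989,en,65535
description=SVC CIMOM Open Systems Lab, Cottle Road
#service:wbem:https://9.42.164.175:5989,en,65535
"""
print(active_registrations(sample))
# -> ['service:wbem:https://9.43.226.237:5989']
```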


Chapter 5. TotalStorage Productivity Center common base use


This chapter provides information about the functions of the Productivity Center common base. The components of Productivity Center common base include:

- Configuring MDM
- Launching and logging on to TotalStorage Productivity Center
- Launching device managers
- Performing device discovery
- Performing device inventory collection
- Working with ESS
- Working with SAN Volume Controller
- Working with IBM DS4000 family (formerly FAStT)
- Event management

Copyright IBM Corp. 2004, 2005. All rights reserved.


5.1 Productivity Center common base: Introduction


Before using Productivity Center common base features you need to perform some configuration steps. This will permit you to detect the storage devices to be managed. Version 2.1 of Productivity Center common base permits you to discover and manage:

- ESS 2105-F20, 2105-800, 2105-750
- SAN Volume Controller (SVC)
- DS4000 family (formerly the FAStT product range)

Provided you have discovered a supported IBM storage device, Productivity Center common base storage management functions will be available for drag-and-drop operations. Alternatively, right-click the discovered device to display a drop-down menu with all available functions specific to it. We review the available operations in the sections that follow.

Note: Not all functions of TotalStorage Productivity Center are applicable to all device types. For example, virtual disks cannot be displayed for a DS4000 because the virtual disk concept applies only to the SAN Volume Controller. The sections that follow cover the functions available for each of the supported device types.

5.2 Launching TotalStorage Productivity Center


Productivity Center common base along with TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication are accessed via the TotalStorage Productivity Center Launchpad (Figure 5-1) icon on your desktop. Select Manage Disk Performance and Replication to start the IBM Director console interface.

Figure 5-1 TotalStorage Productivity Center launchpad

Alternatively, access IBM Director from Windows: Start → Programs → IBM Director → IBM Director Console. Log on to IBM Director using the superuser ID and password defined at installation. Please note that passwords are case sensitive. Login values are:

- IBM Director Server: Hostname of the machine where IBM Director is installed
- User ID: The username to log on with. This is the superuser ID. Enter it in the form <hostname>\<username>


- Password: The case sensitive superuser ID password

Figure 5-2 shows the IBM Director Login panel you will see after launching IBM Director.

Figure 5-2 IBM Director Log on

5.3 Exploiting Productivity Center common base


The Productivity Center common base module adds the Multiple Device Manager submenu task on the right-hand Tasks pane of the IBM Director Console as shown in Figure 5-3 on page 180.

Note: The Multiple Device Manager product has been rebranded to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You will still see the name Multiple Device Manager in some panels and messages.

Productivity Center common base will install the following sub-components into the Multiple Device Manager menu:

- Configure MDM
- Launch Device Manager
- Launch Tivoli SAN Manager (now called TotalStorage Productivity Center for Fabric)
- Manage CIMOMs
- Manage Storage Units (menu)
  - Inventory Status
  - Managed Disks
  - Virtual Disks
  - Volumes


Note: The Manage Performance and Manage Replication tasks that you see in Figure 5-3 on page 180 become visible when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication are installed. Although this chapter covers Productivity Center common base you would have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication or both.

Figure 5-3 IBM Director Console with Productivity Center common base

5.3.1 Configure MDM


Multiple Device Manager (MDM) is now known as Productivity Center common base; however, this version of the code still shows the previous MDM name. It should not be necessary to alter any values here unless passwords need to change. This menu option (Figure 5-4 on page 181) allows you to perform the following actions:

- Provide a Productivity Center common base superuser account name and password. The username field will be populated with the value provided at installation. There is no reason to change this value.
- Provide information about the DB2 host. Again, this value will be populated with the information available when the installation was performed and there should be no reason to modify the value in this field.
- Provide location and password information for TotalStorage Productivity Center for Fabric. This version of the software carries the previous name of the fabric product, Tivoli SAN Manager (TSANM).

See Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for more details on using and configuring TotalStorage Productivity Center for Fabric.

Figure 5-4 Configure MDM

5.3.2 Launch Device Manager


The Launch Device Manager task may be dragged onto an available storage device. For ESS this opens the ESS Specialist window for the chosen device. For SAN Volume Controller it launches a browser session to that device. For DS4000 or FAStT devices this function is not available.

5.3.3 Discovering new storage devices


Assuming that you have followed the steps outlined in Chapter 4, CIMOM installation and configuration on page 119, the following prerequisites should be in place in order to discover devices defined to your Productivity Center common base host:

- All CIM agents are running and are registered with the SLP server.
- The SLP agent host is defined in the IBM Director options (Figure 5-5 on page 182) if it resides in a different subnet to that of the TotalStorage Productivity Center server (Options → Discovery Preferences → MDM SLP Configuration tab).

Note: If the Productivity Center common base host server resides in the same subnet as the CIMOM, then it is not a requirement that the SLP DA host IP address be specified in the Discovery Preferences (Figure 5-5). Refer to Chapter 2, Key concepts on page 25 for details.

1. Discovery will happen automatically based on preferences that are defined in the Options → Discovery Preferences → MDM SLP Configuration tab. The default values for Auto discovery interval and Presence check interval are set to 0 (see Figure 5-5 on page 182). These values should be set to more suitable values, for example 1 hour for Auto discovery interval and 15 minutes for Presence check interval. The values you specify will have a performance impact on the CIMOMs and Productivity Center common base servers, so do not set these values too low.

Figure 5-5 Discovery Preferences MDM SLP Configuration

2. Turn off automatic inventory on discovery.

Important: Because of the time and CIMOM resources needed to perform inventory on storage devices, it is undesirable and unnecessary to perform this each time Productivity Center common base performs a device discovery. Turn off automatic inventory by selecting Options → Server Preferences as shown in Figure 5-6 on page 183.


Figure 5-6 Selecting Server Preferences

Now uncheck the Collect On Discovery tick box as shown in Figure 5-7; all other options can remain unchanged. Select OK when done.

Figure 5-7 Server Preferences

3. You can click the Discover All Systems icon in the top left corner of the IBM Director Console to initiate an immediate discovery task (see Figure 5-8 on page 184).


Figure 5-8 Discover All Systems icon

4. You can also use the IBM Director Scheduler to create a scheduled job for new device discovery. Either click the scheduler icon in the IBM Director tool bar or use the menu, Tasks → Scheduler (see Figure 5-9 on page 185).


Figure 5-9 Tasks Scheduler option for Discovery

In the Scheduler click File → New Job (see Figure 5-10).

Figure 5-10 Task Scheduler Discovery job

Establish parameters for the new job under the Date/Time tab. Include the date and time to perform the job, and whether the job is to be repeated (see Figure 5-11 on page 186).


Figure 5-11 Discover job parameters

From the Task tab (see Figure 5-12), select Discover MDM storage devices/SAN Elements, then click Select.

Figure 5-12 Discover job selection task

Click File → Save as, or use the Save as icon. Provide a descriptive job name in the Save Job panel (see Figure 5-13 on page 187) and click OK.


Figure 5-13 Discover task job name

5.3.4 Manage CIMOMs


The Manage CIMOMs menu option as seen in Figure 5-6 on page 183 lets you view the CIMOMs that have been discovered by Productivity Center common base. It should not normally be necessary to alter any information in the panel. The connection status of each CIMOM is displayed. A success state means that Productivity Center common base is able to connect to the CIMOM using the Namespace, User name and Password defined to it. It does not mean that the CIMOM can access a storage device.

Figure 5-14 Discovered CIMOMs list

To view or change the details of a CIMOM or perform a connection test, select the CIMOM as seen in Figure 5-14 and then click the Properties button on the right of the panel. Figure 5-15 on page 188 shows the properties for a DS4000 or FAStT CIMOM.


Figure 5-15 CIMOM details for a DS4000 or FAStT CIMOM

Important: Namespace must be set to \root\lsissi for DS4000 and FAStT CIMOMs. It should be discovered automatically, but if your connection fails, please verify it. Also, DS4000 and FAStT CIMOMs do not need a User name or Password set; entering them has no effect on the success of a Test Connection.

Figure 5-16 CIMOM details for a SAN Volume Controller

Figure 5-16 shows the CIMOM properties for a SAN Volume Controller.

Important: Namespace must be set to \root\ibm for SAN Volume Controller CIMOMs. It should be discovered automatically, but if you experience connection failures, please verify it has been set correctly.

For more detailed information about configuring CIMOMs, refer to Chapter 4, CIMOM installation and configuration on page 119.


Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not automatically updated and entries with a Failure status will be seen, as in Figure 5-14. These invalid entries can slow down discovery performance as TotalStorage Productivity Center tries to contact them each time it performs a discovery. You cannot delete CIMOM entries directly from the Productivity Center common base interface. Delete them using the DB2 Control Center tool as described in 5.3.5, Manually removing old CIMOM entries on page 189.

5.3.5 Manually removing old CIMOM entries


It may be necessary from time to time to remove CIMOM entries from Productivity Center common base. This can happen if you move a CIMOM to another server in your environment, change the CIMOM's IP address, and so on. Productivity Center common base does not allow direct removal of a CIMOM entry using the Director interface. To delete a CIMOM, remove the data rows manually from DB2 using the process that follows.

Process overview:

1. Delete any non-existing storage devices from TotalStorage Productivity Center that are associated with the CIMOM entry to be removed.
2. Launch DB2 Control Center.
3. Navigate to the DMCOSERV database.
4. Locate the DMCIMOM table. Delete the data rows relating to the old CIMOM(s). Commit the changes to the DMCIMOM table.
5. Locate the BASEENTITY table. Filter rows on DISCRIM_BASEENTITY = DMCIMOM. Delete the data rows relating to the old CIMOM(s). Commit the changes to the BASEENTITY table.
6. Locate the DMREFERENCE table. Delete the data rows relating to the old CIMOM(s). Commit the changes to the table.

The following figures illustrate the process. Before deleting a non-existing CIMOM through the DB2 tables, first delete any storage devices that are associated with it in TotalStorage Productivity Center. Right-click the selected device and choose Delete as shown in Figure 5-17 on page 190.
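For readers who prefer scripting to the Control Center clicks, the cleanup boils down to three DELETE statements per stale CIMOM. The sketch below only builds the statements. The table names come from the process above, but the URL column used to match a CIMOM by IP address is an assumption: inspect the actual rows in DB2 Control Center before executing anything like this, and remember to commit.

```python
# Build the DELETE statements behind the manual Control Center steps.
# Table names (DMCIMOM, BASEENTITY, DMREFERENCE) come from the text;
# the "URL" column used to match by IP address is an ASSUMPTION and
# must be verified against the real schema before use.

def cleanup_statements(cimom_ip):
    """Return DELETE statements that remove one stale CIMOM's rows."""
    like = f"'%{cimom_ip}%'"
    return [
        f"DELETE FROM DMCIMOM WHERE URL LIKE {like}",
        f"DELETE FROM BASEENTITY WHERE DISCRIM_BASEENTITY = 'DMCIMOM' "
        f"AND URL LIKE {like}",
        f"DELETE FROM DMREFERENCE WHERE URL LIKE {like}",
    ]

for stmt in cleanup_statements("9.1.39.65"):
    print(stmt)
```

The statements could then be run (and committed) through the db2 command line or any DB2 client, mirroring the Delete Row and Commit buttons described below.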


Figure 5-17 Delete invalid device from TotalStorage Productivity Center

Launch DB2 Control Center (Figure 5-18 on page 191). This is a general administration tool for managing DB2 databases and tables.

Attention: DB2 Control Center is a database administration tool. It gives you direct and complete access to the data stored in all the TotalStorage Productivity Center databases. Altering data through this tool can damage the TotalStorage Productivity Center environment. Be careful not to alter data unnecessarily using this tool.


Figure 5-18 Launching DB2 Control Center


Figure 5-19 Navigate to the DMCOSERV database

Navigate down the structure in the left-hand panel to open the DMCOSERV database, then click the Tables option. A list of tables for this database will appear in the upper right-hand panel as seen in Figure 5-19. Locate the DMCIMOM table as shown and double-click it to open a new window (Figure 5-20 on page 192) showing the data rows.


Figure 5-20 Deleting rows from the DMCIMOM table in DB2

Identify the CIMOM rows to be deleted by their IP address as shown in Figure 5-20. Click once on the row to be deleted to select it. Click the Delete Row button to remove it from the table. When you have made your changes, you must click the Commit button for the table changes to take effect. Now click Close to finish with this table. If you make any mistakes before you have pressed the Commit button, you can click the Roll Back button to undo the changes.

Now locate the BASEENTITY table from the Control Center panel as seen in Figure 5-19 on page 191. Open it with a double-click. This table contains many rows of data. Filter the data to show only entries that relate to CIMOMs: click the Filter button to open the filter panel as seen in Figure 5-22 on page 193.

Figure 5-21 BASEENTITY table


Figure 5-22 Filtering the BASEENTITY table

Enter DMCIMOM in the values field as shown in Figure 5-22 and click OK. The table data is now filtered to show only CIMOM entries as seen in Figure 5-23.

Figure 5-23 BASEENTITY table filtered to CIMOMs

Use a single click to select the entries, by IP address, that relate to the non-existent CIMOMs. Click Delete Row to remove them. Click Commit to make the changes effective, then Close. You can use Roll Back to undo any mistakes before a Commit.


Figure 5-24 DMREFERENCE table

Now locate the DMREFERENCE table from the Control Center panel as seen in Figure 5-19 on page 191. Open it with a double-click.

Note: The DMREFERENCE table may contain more than one entry for each of the non-existent CIMOM(s), or it may not contain any rows at all for them. Delete all relevant rows for the non-existent CIMOM(s) if they exist. If there are no rows in this table for the CIMOM(s) you are deleting, they are not linked to any devices and this is OK.

5.4 Performing volume inventory


This function is used to collect the detailed volume information from a discovered device and place it into the Productivity Center common base databases. You need to do this at least once before Productivity Center common base can start to work with a device. When the Productivity Center common base functions are subsequently used to create or remove LUNs, the volume inventory is automatically kept up to date and it is therefore not necessary to repeatedly run inventory collection from the storage devices.

Version 2.1 of Productivity Center common base does not currently contain the full feature set of functions for the supported storage devices. This makes it necessary to use the storage device's own management tools for some tasks. For instance, you can create new Vdisks with Productivity Center common base on a SAN Volume Controller, but you cannot delete them; you need to use the SAN Volume Controller's own management tools to do this. For these types of changes to be reflected in Productivity Center common base, an inventory collection is necessary to re-synchronize the storage device and the Productivity Center common base inventory.

Attention: The use of volume inventory is common to ALL supported storage devices and must be performed before disk management functions are available.


Figure 5-25 Launch Perform Inventory Collection

To start inventory collection, right-click the chosen device and select Perform Inventory Collection as shown in Figure 5-25. A new panel will appear (Figure 5-26) as a progress indication that the inventory process is running. At this stage Productivity Center common base is talking to the relevant CIMOM to collect volume information from the storage device. After a short while the information panel will indicate that the collection has been successful. You can now close this window.

Figure 5-26 Inventory collection in progress

Attention: When the panel in Figure 5-26 indicates that the collection has been successfully completed, it does not necessarily mean that the volume information has been fully processed by Productivity Center common base at this point. To track the detailed processing status, launch the Inventory Status task seen in Figure 5-27.


Figure 5-27 Launch Inventory Status

To see the processing status of an inventory collection launch the Inventory Status task as seen in Figure 5-27.

Figure 5-28 Inventory Status

The example Inventory Status panel seen in Figure 5-28 shows the progress of the processing for a SAN Volume Controller. Use the refresh button in the bottom left of the panel to update it with the latest progress. You can also launch the Inventory Status panel before starting an inventory collection to watch the process end to end. In our test lab the inventory process time for an SVC took around 2 minutes end to end.


5.5 Working with ESS


This section covers the Productivity Center common base functions that are available when managing ESS devices. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 5-29.

- Tasks access: You will see in the right-hand task panel that there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices.
- Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 5-29 shows the drop-down menu for an ESS.

Figure 5-29 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers the Productivity Center common base functions, you would always have either or both of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication installed.

Figure 5-29 Accessing Productivity Center common base functions


5.5.1 Changing the display name of an ESS


You can change the display name of a discovered ESS device to something more meaningful to your organization. Right-click the chosen ESS (Figure 5-30) and select the Rename option.

Figure 5-30 Changing the display name of an ESS

Enter a more meaningful device name as in Figure 5-31 and click OK.

Figure 5-31 Entering a user defined subsystem name

5.5.2 ESS Volume inventory


To view the status of the volumes available within a given ESS device, perform one of the following:

- Right-click the ESS device and select Volumes as in Figure 5-32 on page 199.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.

Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for an ESS that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection see section 5.4, Performing volume inventory on page 194.


Figure 5-32 Working with ESS volumes

In either case, in the bottom left corner, the status will change from Ready to Starting Task and it will remain this way until the volume inventory appears. Figure 5-33 shows the Volumes panel that will appear for the selected ESS device.

Figure 5-33 ESS volume inventory panel


5.5.3 Assigning and unassigning ESS volumes


From the ESS volume inventory panel (Figure 5-33 on page 199) you can modify existing volume assignments, either by assigning a volume to new host port(s) or by unassigning a host from an existing volume-to-host-port mapping. To assign a volume to a host port, click the Assign host button on the right side of the volume inventory panel (Figure 5-33 on page 199). You will be presented with a panel like the one in Figure 5-34. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN, or more than one by holding down the control <Ctrl> key and selecting multiple host ports. When the desired host ports have been selected for volume assignment, click OK.

Figure 5-34 Assigning ESS LUNs

When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as in Figure 5-35 on page 201. When the volume has been successfully assigned to the selected host port, the Assign host ports panel will disappear and the ESS Volumes panel will be displayed once again, now reflecting the additional host port mapping in the Number of host ports column on the far right side of the panel.

Note: If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.


Figure 5-35 Tivoli SAN Manager warning

5.5.4 Creating new ESS volumes


To create new ESS volumes select the Create button from the Volumes panel as seen in Figure 5-33 on page 199. The Create volume panel will appear (Figure 5-36).

Figure 5-36 ESS create volume

Use the drop-down fields to select the Storage type and choose from the Available arrays on the ESS. Then enter the number of volumes you want to create in the Volume quantity field, along with the Requested size. Finally, select the host ports you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the control key <Ctrl> while clicking hosts. On clicking OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning the new volumes to the host(s). If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is not installed, you will see a message panel as seen in Figure 5-37 on page 202. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for complete details of its operation.


Figure 5-37 Tivoli SAN Manager warning

5.5.5 Launch device manager for an ESS device


This option allows you to link directly to the ESS Specialist of the chosen device:

- Right-click the ESS storage resource, and select Launch Device Manager.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Launch Device Managers onto the storage device you want to query.

Figure 5-38 ESS specialist launched by Productivity Center common base


5.6 Working with SAN Volume Controller


This section covers the Productivity Center common base functions that are available when managing SAN Volume Controller subsystems. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 5-39 on page 204.

- Tasks access: You will see in the right-hand task panel that there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for a specific device, right-click it to see a drop-down menu of options for that device. Figure 5-39 on page 204 shows the drop-down menu for a SAN Volume Controller.

Note: Overall, the SAN Volume Controller functionality offered in Version 2.1 of Productivity Center common base is fairly limited compared to that of the native SAN Volume Controller Web based GUI. There is the ability to add existing unmanaged LUNs to existing Mdisk groups, but there are no tools to remove Mdisks from a group or to create or delete Mdisk groups. The functions available for Vdisks are similar. Productivity Center common base can create new Vdisks in a given Mdisk group, but there is little other control over the placement of these volumes. It is not possible to remove Vdisks or reassign them to other hosts using Productivity Center common base.


5.6.1 Changing the display name of a SAN Volume Controller


You can change the display name of a discovered SAN Volume Controller to something more meaningful in your organization. Right-click the chosen device (Figure 5-39) and select the Rename option.

Figure 5-39 Changing the display name of an SVC

Figure 5-40 Enter a user defined SAN Volume Controller name

Enter a meaningful name for the device and click OK as in Figure 5-40.

5.6.2 Working with SAN Volume Controller Mdisks


To view the properties of SAN Volume Controller managed disks (Mdisks), as shown in Figure 5-41 on page 205, perform one of the following:
- Right-click the SVC storage resource and select Managed Disks.
- On the right-hand side, under the Tasks column, drag Managed Storage Units → Managed Disks onto the storage device you want to query.


Tip: As with other storage devices managed by Productivity Center common base, an initial inventory must be completed before SAN Volume Controller managed disk (Mdisk) properties can be displayed. If you try to use the Managed Disks function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. Refer to 5.4, Performing volume inventory on page 194 for details.

Figure 5-41 The mdisk properties panel for SAN Volume Controller

Figure 5-41 shows candidate (unmanaged) Mdisks that are available for inclusion in an existing Mdisk group. To add one or more unmanaged disks to an existing Mdisk group:
1. Select the Mdisk group from the pull-down.
2. Select one Mdisk from the list of candidate Mdisks, or use the <Ctrl> key to select multiple disks.
3. Click the OK button at the bottom of the window; the selected Mdisks will be added to the Mdisk group.


5.6.3 Creating new Mdisks on supported storage devices


Attention: The Create button seen in Figure 5-41 is not for creating new Mdisk groups. It is for creating new Mdisks on storage devices serving the SAN Volume Controller. It is not possible to create new Mdisk groups using Version 2.1 of Productivity Center common base.

1. Select the Mdisk group from the pull-down (Figure 5-41 on page 205).
2. Click the Create button. A new panel opens to create the storage volume (Figure 5-42).
3. Select a device accessible to the SVC (a device not marked by an asterisk). Devices marked with an asterisk are not acting as storage to the selected SAN Volume Controller. Figure 5-42 shows an ESS with an asterisk next to it, because of the setup of our test environment. Make sure the device you select does not have an asterisk next to it.
4. Specify the number of Mdisks in the Volume quantity field and their size in the Requested volume size field.
5. Select the Defined SVC ports that should be assigned to these new Mdisks.

Note: If TotalStorage Productivity Center for Fabric is installed and configured, extra panels will appear to create appropriate zoning for this operation. See Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details.

6. Click OK to start a process that creates new volumes on the selected storage device and then adds them to the SAN Volume Controller's Mdisk group.

Figure 5-42 Create volumes to be added as Mdisks

Productivity Center common base now requests the specified storage amount from the specified backend storage device.


5.6.4 Create and view SAN Volume Controller Vdisks


To create or view the properties of SAN Volume Controller virtual disks (Vdisks), as shown in Figure 5-43, perform one of the following:
- Right-click the SVC storage resource and select Virtual Disks.
- On the right-hand side, under the Tasks column, drag Managed Storage Units → Virtual Disks onto the storage device you want to query.

In Version 2.1 of Productivity Center common base it is not possible to delete Vdisks, nor to assign or reassign Vdisks to a host after the creation process. Keep this in mind when working with storage on a SAN Volume Controller using Productivity Center common base. These tasks can still be performed using the native SAN Volume Controller Web-based GUI.

Tip: As with other storage devices managed by Productivity Center common base, an initial inventory must be completed before SAN Volume Controller virtual disk (Vdisk) properties can be displayed. If you try to use the Virtual Disks function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection see 5.4, Performing volume inventory on page 194.

Figure 5-43 Launch Virtual Disks

Viewing Vdisks
Figure 5-44 on page 208 shows the Vdisk inventory and volume attributes for the selected SAN Volume Controller.


Figure 5-44 The vdisk properties panel

Creating a Vdisk
To create a new Vdisk, use the Create button shown in Figure 5-44. Provide a suitable Vdisk name and select the Mdisk group from which you want to create the Vdisk. Specify the number of Vdisks to be created and the size, in megabytes or gigabytes, of each Vdisk. Figure 5-45 on page 209 shows example input in these fields.
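The panel asks for a Vdisk quantity and a per-Vdisk size in megabytes or gigabytes. As an illustration only (not part of the product), a small helper like the following could sanity-check that a request fits within an Mdisk group's free capacity before you submit it; the function names and the free-capacity figure are assumptions for the example.

```python
# Hypothetical helper (not a TotalStorage Productivity Center API): check
# that a Vdisk creation request fits in an Mdisk group's free capacity.

def requested_mb(quantity, size, unit="MB"):
    """Total capacity requested, in MB. unit is "MB" or "GB",
    the two units offered by the creation panel."""
    if unit not in ("MB", "GB"):
        raise ValueError("unit must be MB or GB")
    per_vdisk = size * 1024 if unit == "GB" else size
    return quantity * per_vdisk

def fits(quantity, size, unit, free_mb):
    """True if the request can be satisfied from free_mb of free space."""
    return requested_mb(quantity, size, unit) <= free_mb

# Example: five 10 GB Vdisks need 51200 MB of free capacity.
print(requested_mb(5, 10, "GB"))   # 51200
print(fits(5, 10, "GB", 40000))    # False
```

A check like this is useful because the panel itself reports a failure only after the creation attempt.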


Figure 5-45 SAN Volume Controller vdisk creation

The Host ports section of the Vdisk properties panel allows you to use TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager, or TSANM) functionality to perform zoning actions to provide Vdisk access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-46. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.

Figure 5-46 Tivoli SAN Manager warning

5.7 Working with DS4000 family or FAStT storage


This section covers the Productivity Center common base functions that are available when managing DS4000 and FAStT type subsystems. There are two ways to access Productivity Center functions for a given device and these can be seen in Figure 5-47 on page 210.

- Tasks access: The right-hand task panel lists a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for the selected device, right-click it to see a drop-down menu of options for it (Figure 5-47).

Figure 5-47 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter covers only the Productivity Center common base functions, you may also have TotalStorage Productivity Center for Disk and/or TotalStorage Productivity Center for Replication installed.

5.7.1 Changing the display name of a DS4000 or FAStT


You can change the display name of a discovered DS4000 or FAStT subsystem to something more meaningful to your organization. Right-click the selected DS4000 or FAStT and click the Rename option (Figure 5-47).

Figure 5-47 Changing the display name of a DS4000 or FAStT

Figure 5-48 Entering a user defined display name for DS4000 or FAStT name


Enter a meaningful name for the device and click OK, as in Figure 5-48 on page 210.

5.7.2 Working with DS4000 or FAStT volumes


To view the status of the volumes available within a selected DS4000 or FAStT device, perform one of the following:
- Right-click the DS4000 or FAStT storage resource and select Volumes.
- On the right-hand side, under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.

In either case, the status in the bottom left corner will change from Ready to Starting Task and remain this way until the volume inventory is completed (see Figure 5-50 on page 212).

Note: As with other storage devices managed by Productivity Center common base, an initial inventory must be completed before DS4000 or FAStT volume properties can be displayed. Refer to 5.4, Performing volume inventory on page 194 for details.

Figure 5-49 Working with DS4000 and FAStT volumes


Figure 5-50 DS4000 and FAStT volumes panel

Figure 5-50 shows the volume inventory for the selected device. From this panel you can Create and Delete volumes or assign and unassign volumes to hosts.

5.7.3 Creating DS4000 or FAStT volumes


To create new storage volumes on a DS4000 or FAStT select the Create button from the right side of the Volumes panel (Figure 5-50). You will be presented with the Create volume panel as in Figure 5-51 below.

Figure 5-51 DS4000 or FAStT create volumes

Select the desired Storage Type and array from Available arrays using the drop-downs. Then enter the Volume quantity and Requested volume size for the new volumes. Finally, select the host ports you want to assign to the new volumes from the Defined host ports scroll box, holding the <Ctrl> key to select multiple ports. The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager, or TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-52 on page 213. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.


Figure 5-52 Tivoli SAN Manager warning

If TotalStorage Productivity Center for Fabric is not installed click OK to continue.

5.7.4 Assigning hosts to DS4000 and FAStT volumes


Use this feature to assign hosts to an existing DS4000 or FAStT volume. First select a volume by clicking it in the Volumes panel (Figure 5-50 on page 212), then click the Assign host button on the right side of the panel. You will be presented with a panel as in Figure 5-53. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN, or more than one by holding down the <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected, click OK.

Figure 5-53 Assign host ports to DS4000 or FAStT

The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager, or TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-54 on page 214. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.


Figure 5-54 Tivoli SAN Manager warning

If TotalStorage Productivity Center for Fabric is not installed click OK to continue.

5.7.5 Unassigning hosts from DS4000 or FAStT volumes


To unassign a DS4000 or FAStT volume from a host port, first select a volume by clicking it in the Volumes panel (Figure 5-50 on page 212), then click the Unassign host button on the right side of the panel. You will be presented with a panel as in Figure 5-55. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN, or more than one by holding down the <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected, click OK.

Note: If the Unassign host button is grayed out when you have selected a volume, there are no current host assignments for that volume. If you believe this is incorrect, the Productivity Center common base inventory may be out of step with the device's configuration. This can arise when an administrator makes changes to the device outside of the Productivity Center common base interface. To correct the problem, perform an inventory for the DS4000 or FAStT and repeat the operation. Refer to 5.4, Performing volume inventory on page 194.

Figure 5-55 Unassign host ports from DS4000 or FAStT

TotalStorage Productivity Center for Fabric is not called to perform zoning cleanup in Version 2.1. This functionality is planned for a future release.


5.8 Event Action Plan Builder


IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment.

Understanding Event Action Plans


An Event Action Plan associates one or more event filters with one or more actions. For example, an Event Action Plan can be created to send a page to the network administrator's pager if an event with a severity level of critical or fatal is received by the IBM Director Server. You can include as many event filter and action pairs as needed in a single Event Action Plan. An Event Action Plan is activated only when you apply it to a managed system or group. If an event targets a system to which the plan is applied, and that event meets the filtering criteria defined in the plan, the associated actions are performed. Multiple event filters can be associated with the same action, and a single event filter can be associated with multiple actions. The action templates you can use to define actions are listed in the Actions pane of the Event Action Plan Builder window (see Figure 5-56).

Figure 5-56 Action templates
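The plan/filter/action relationship described above can be sketched as a tiny data model. This is an illustrative sketch only, not the IBM Director API; every class and function name here is invented for the example.

```python
# Sketch (not IBM Director code) of the relationship described above:
# a plan pairs event filters with actions, and fires the actions whose
# filter matches an incoming event on a system the plan is applied to.

class EventActionPlan:
    def __init__(self, name):
        self.name = name
        self.pairs = []        # (filter_fn, action_fn) pairs
        self.targets = set()   # managed systems/groups the plan applies to

    def add(self, filter_fn, action_fn):
        self.pairs.append((filter_fn, action_fn))

    def apply_to(self, system):
        self.targets.add(system)

    def handle(self, event, system):
        """Run every action whose filter matches, but only if the plan
        has been applied to the target system. Returns the actions fired."""
        if system not in self.targets:
            return []
        fired = [action for flt, action in self.pairs if flt(event)]
        for action in fired:
            action(event)
        return fired

# Example: "page" the administrator on critical or fatal events only.
pages = []
plan = EventActionPlan("PageOnCritical")
plan.add(lambda e: e["severity"] in ("critical", "fatal"),
         lambda e: pages.append(e["msg"]))
plan.apply_to("server1")
plan.handle({"severity": "critical", "msg": "disk failure"}, "server1")
plan.handle({"severity": "warning", "msg": "slow I/O"}, "server1")
print(pages)   # ['disk failure']
```

Note how the warning event is filtered out, and how an event on a system the plan is not applied to would fire nothing; this mirrors the activation rule described in the text.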

Creating an Event Action Plan


Event Action Plans are created in the Event Action Plan Builder window. To open this window from the Director Console, click the Event Action Plan Builder icon on the toolbar. The Event Action Plan Builder window is displayed (see Figure 5-57 on page 216).


Figure 5-57 Event Action Plan Builder

Here are the tasks to create an Event Action Plan:
1. To begin, do one of the following:
- Right-click Event Action Plans in the Event Action Plans pane to access the context menu, and then select New.
- Select File → New → Event Action Plan from the menu bar.
- Double-click the Event Action Plan folder in the Event Action Plans pane (see Figure 5-58).

Figure 5-58 Create Event Action Plan

2. Enter the name you want to assign to the plan and click OK to save the new plan. The new plan entry with the name you assigned is displayed in the Event Action Plans pane. The plan is also added to the Event Action Plans task as a child entry in the Director Console (see Figure 5-59 on page 217). Now that you have defined an event action plan, you can assign one or more filters and actions to the plan.


Figure 5-59 New Event Action Plan

Note: You can create a plan without having defined any filters or actions. The order in which you build a filter, action, and Event Action Plan does not matter.

3. Assign at least one filter to the Event Action Plan using one of the following methods:
- Drag the event filter from the Event Filters pane to the Event Action Plan in the Event Action Plans pane.
- Highlight the Event Action Plan, then right-click the event filter to display the context menu and select Add to Event Action Plan.
- Highlight the event filter, then right-click the Event Action Plan to display the context menu and select Add Event Filter (see Figure 5-60 on page 218).


Figure 5-60 Add events to the action plan

The filter is now displayed as a child entry under the plan (see Figure 5-61).

Figure 5-61 Events added to action plan

4. Assign at least one action to at least one filter in the Event Action Plan using one of the following methods:
- Drag the action from the Actions pane to the target event filter under the desired Event Action Plan in the Event Action Plans pane.
- Highlight the target filter, then right-click the desired action to display the context menu and select Add to Event Action Plan.
- Highlight the desired action, then right-click the target filter to display the context menu and select Add Action.

The action is now displayed as a child entry under the filter (see Figure 5-62 on page 219).


Figure 5-62 Action as child of Display Events Action Plan

5. Repeat the previous two steps for as many filter and action pairings as you want to add to the plan. You can assign multiple actions to a single filter and multiple filters to a single plan.

Note: The plan you have just created is not active because it has not been applied to a managed system or a group. In the next section we explain how to apply an Event Action Plan to a managed system or group. For information about editing or deleting a plan, refer to Appendix C, Event management on page 511.

5.8.1 Applying an Event Action Plan to a managed system or group


An Event Action Plan is activated only when it is applied to a managed system or group. To activate a plan, use one of the following methods:
- Drag the plan from the Tasks pane of the Director Console to a managed system in the Group Contents pane or to a group in the Groups pane.
- Drag the system or group to the plan.
- Select the plan, right-click the system or group, and select Add Event Action Plan (see Figure 5-63 on page 220).


Figure 5-63 Notification of Event Action Plan added to group/system(s)

Repeat this step for all associations you want to make. You can activate the same Event Action Plan for multiple systems (see Figure 5-64).

Figure 5-64 Director with Event Action Plan - Display Events

Once applied, the plan is activated and displayed as a child entry of the managed system or group to which it is applied when the Associations → Event Action Plans item is checked.

Message Browser
When an event occurs, the Message Browser (see Figure 5-65 on page 221) pops up on the server console.


Figure 5-65 Message Browser

If the message has not yet been viewed, the Status for that message will be blank. When viewed, a checked envelope icon appears under the Status column next to the message. To see greater detail on a particular message, select the message in the left pane and click the Event Details button (see Figure 5-66).

Figure 5-66 Event Details window

5.8.2 Exporting and importing Event Action Plans


With the Event Action Plan Builder, you can import and export action plans to files. This enables you to move action plans quickly from one IBM Director Server to another or to import action plans that others have provided.

Export
Event Action Plans can be exported to three types of files:
- Archive: Backs up the selected action plan to a file that can be imported into any IBM Director Server.

- HTML: Creates a detailed listing of the selected action plans, including their filters and actions, in HTML format.
- XML: Creates a detailed listing of the selected action plans, including their filters and actions, in XML format.

To export an Event Action Plan, do the following:
1. Open the Event Action Plan Builder.
2. Select an Event Action Plan from those available under the Event Action Plan folder.
3. Select File → Export, then click the type of file you want to export to (see Figure 5-67). If this Event Action Plan will be imported by an IBM Director Server, select Archive.

Figure 5-67 Archiving an Event Action Plan

4. Name the archive and set a location to save it in the Select Archive File for Export window, as shown in Figure 5-68 on page 223.


Figure 5-68 Select destination and file name

Tip: When you export an action plan, regardless of the type, the file is created on a local drive on the IBM Director Server. If an IBM Director Console is used to access the IBM Director Server, then the file could be saved to either the Server or the Console by selecting Server or Local from the Destinations pull-down. It cannot be saved to a network drive. Use the File Transfer task if you want to copy the file elsewhere.

Import
Event Action Plans can be imported from a file. The file must be an Archive export of an action plan from another IBM Director Server. The steps to import an Event Action Plan are as follows:
1. Transfer the archive file to be imported to a drive on the IBM Director Server.
2. Open the Event Action Plan Builder from the main Console window.
3. Click File → Import → Archive (see Figure 5-69 on page 224).


Figure 5-69 Importing an Event Action Plan

4. From the Select File for Import window (see Figure 5-70), select the archive file and location. The file must be located on the IBM Director Server. If using the Console, you must transfer the file to the IBM Director Server before it can be imported.

Figure 5-70 Select file for import

5. Click OK to begin the import process. The Import Action Plan window opens, displaying the action plan to import (see Figure 5-71 on page 225). If the action plan had been assigned previously to systems or groups, you will be given the option to preserve associations during the import. Select Import to complete the import process.


Figure 5-71 Verifying import of Event Action Plan


Chapter 6. TotalStorage Productivity Center for Disk use


This chapter provides a step-by-step guide to configuring and using the Performance Manager functions provided by the TotalStorage Productivity Center for Disk.


6.1 Performance Manager GUI


The Performance Manager graphical user interface can be launched from the IBM Director Console. After logging on to IBM Director, you will see a window as in Figure 6-1. In the rightmost Tasks pane, you will see the Manage Performance launch menu; it is highlighted and expanded in the figure.

Figure 6-1 IBM Director Console with Performance Manager

6.2 Exploiting Performance Manager


You can use the Performance Manager component of TotalStorage Productivity Center for Disk to manage and monitor the performance of the storage devices that TotalStorage Productivity Center for Disk supports. Performance Manager provides the following functions:

Collecting data from devices
Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS) and IBM TotalStorage SAN Volume Controller in the first release.

Configuring performance thresholds


You can use Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria allows Performance Manager to notify you when a threshold has been crossed, enabling you to take action before a critical event occurs.

Viewing performance data
You can view performance data from the Performance Manager database using the gauge application programming interfaces (APIs). These gauges present performance data in graphical and tabular forms.

Using Volume Performance Advisor (VPA)
The Volume Performance Advisor is an automated tool that helps you select the best possible placement of a new LUN from a performance perspective. This function is integrated with Device Manager so that, when the VPA has recommended locations for requested LUNs, the LUNs can be allocated and assigned to the appropriate host without going back to Device Manager.

Managing workload profiles
You can use Performance Manager to select a predefined workload profile, or to create a new workload profile based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server.

The installation of the Performance Manager component onto an existing TotalStorage Productivity Center for Disk server provides a new Manage Performance task tree on the right-hand side of the TotalStorage Productivity Center for Disk console; the tasks it includes are shown in Figure 6-2.

Figure 6-2 New Performance Manager tasks

6.2.1 Performance Manager data collection


To collect performance data for the Enterprise Storage Server (ESS), Performance Manager invokes the ESS Specialist server, setting a particular performance data collection frequency and duration. Specialist collects the performance statistics from the ESS, establishes a connection, and sends the collected performance data to Performance Manager. Performance Manager then processes the performance data and saves it in Performance Manager database tables. From this section you can create data collection tasks for the supported, discovered IBM storage devices. There are two ways to use the Data Collection task to begin gathering device performance data:
1. Drag and drop the data collection task option from the right-hand side of the Multiple Device Manager application onto the storage device you want to create the new task for.


2. Or, right-click a storage device in the center column, and select the Performance Data Collection Panel menu option as shown in Figure 6-3.

Figure 6-3 ESS tasks panel

Either operation results in a new window named Create Performance Data Collection Task (Figure 6-4). In this window you specify:
- A task name
- A brief description of the task
- The sample frequency in minutes
- The duration of the data collection task (in hours)

Figure 6-4 Create Performance Data Collection Task for ESS


In our example, we set up a data collection task on an ESS with Device ID 2105.16603, creating a task named Cottle_ESS with a sample frequency of 5 minutes and a duration of 1 hour. It is possible to add more ESSs to the same data collection task by clicking the Add button on the right-hand side. You can click individual devices, or select multiples by making use of the Ctrl key. See Figure 6-5 for an example of this panel. In our example, we added the ESS with device ID 2105.22513 to the task.

Figure 6-5 Adding multiple devices to a single task
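A quick check of what such a task gathers: the number of samples is simply the duration divided by the sampling interval. The following is a back-of-the-envelope sketch (not a product calculation) using the Cottle_ESS settings of 1 hour at a 5-minute frequency.

```python
def sample_count(duration_hours, frequency_minutes):
    """Number of performance samples a collection task gathers per device:
    the task duration divided by the sampling interval."""
    return (duration_hours * 60) // frequency_minutes

# The Cottle_ESS example: 1 hour at a 5-minute frequency.
print(sample_count(1, 5))   # 12
```

This is worth keeping in mind when choosing a frequency, since the sample count drives how much data the task writes to the Performance Manager database.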

Once we have established the scope of our data collection task and clicked the OK button, the new data collection task is available in the right-hand task column (see Figure 6-6 on page 232). We created the task Cottle_ESS in this example.

Tip: When providing a description for a new data collection task, you may elect to include information about the duration and frequency of the task.


Figure 6-6 A new data collection task

To schedule the task, right-click it (see Figure 6-7 on page 233).


Figure 6-7 Scheduling new data collection task

You will see another window as shown in Figure 6-8.

Figure 6-8 Scheduling task

You have the option to use the job scheduling facility of TotalStorage Productivity Center for Disk, or to execute the task immediately. If you select Execute Now, you will see a panel similar to the one in Figure 6-9 on page 234, providing information about the task name and task status, including the time it was initialized.


Figure 6-9 Task progress panel

If you would rather schedule the task to occur at a future time, or specify additional parameters for the job schedule, you will walk through the panel in Figure 6-10. You may provide a description for the scheduled job. In our example, we created a job named 24March Cottle ESS.

Figure 6-10 New scheduled job panel


6.2.2 Using IBM Director Scheduler function


You may specify additional scheduled job parameters by using the Advanced button; you will see the panel in Figure 6-11. You can also launch this panel from the IBM Director Console: select Tasks → Scheduler, then File → New Job. You can also set up the repeat frequency of the task.

Figure 6-11 New scheduled job, advanced tab

Once you are finished customizing the job options, save the job using either the File → Save As menu or the diskette icon in the top left corner of the advanced panel. When you save with advanced job options, you may provide a descriptive name for the job as shown in Figure 6-12 on page 236.


Figure 6-12 Save job panel with advanced options

You should receive a confirmation that your job has been saved as shown in Figure 6-13.

Figure 6-13 Scheduled job is saved

6.2.3 Reviewing Data collection task status


You can review the task status using Task Status, found in the rightmost Tasks column. See Figure 6-14 on page 237.


Figure 6-14 Task Status

Double-clicking Task Status launches the panel shown in Figure 6-15 on page 238.


Figure 6-15 Task Status Panel

To review a task's status, click the task shown under the Task name column. For example, we selected the task FCA18P, which was aborted, as shown in Figure 6-16 on page 239. The details, with Device ID, Device status, and Error Message ID, are shown in the Device status box. Clicking an entry in the Device status box displays the error message in the Error message box.


Figure 6-16 Task status details

6.2.4 Managing Performance Manager Database


The collected performance data is stored in a backend DB2 database. This database needs to be maintained so that only relevant data is kept. You may decide the frequency for purging old data based on your organization's requirements. The performance database panel can be launched by clicking Performance Database as shown in Figure 6-17 on page 240. It shows the Performance Database Properties panel seen in Figure 6-18 on page 241.


Figure 6-17 Launch Performance Manager database

You can use the performance database panel to specify properties for a performance database purge task. The sizing function on this panel shows used space and free space in the database. You can choose to purge performance data based on the age of the data, the type of the data, and the storage devices associated with the data.
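The age-based purge rule can be sketched as a simple filter. This is illustrative only, not product code; the sample record format is an assumption, and the flag for preserving threshold-exception rows mirrors the "Purge data containing threshold exception information" option offered on this panel.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the purge selection rule: pick the samples that
# are older than the retention limit, optionally keeping rows that carry
# threshold-exception information (needed to display exception gauges).

def select_for_purge(samples, max_age_days, now, keep_exceptions=True):
    cutoff = now - timedelta(days=max_age_days)
    return [s for s in samples
            if s["time"] < cutoff
            and not (keep_exceptions and s.get("exception", False))]

now = datetime(2005, 9, 1)
samples = [
    {"id": 1, "time": datetime(2005, 8, 30)},                    # recent: kept
    {"id": 2, "time": datetime(2005, 7, 1)},                     # old: purged
    {"id": 3, "time": datetime(2005, 7, 1), "exception": True},  # old, but kept
]
to_purge = select_for_purge(samples, max_age_days=10, now=now)
print([s["id"] for s in to_purge])   # [2]
```

Passing `keep_exceptions=False` corresponds to selecting the panel's purge-exceptions option, in which case record 3 would be purged as well.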


Figure 6-18 Properties of Performance database

The Performance database properties panel shows the following:

Database name: The name of the database.
Database location: The file system on which the database resides.
Total file system capacity: The total capacity available to the file system, in gigabytes.
Space currently used on file system: Shown in gigabytes and also as a percentage.
Performance Manager database full: The amount of space used by Performance Manager. The percentage shown is the percentage of available space (total space minus currently used space) used by the Performance Manager database. The following values are used to derive the percentage of disk space full in the Performance Manager database:
a = total capacity of the file system
b = total allocated space for the Performance Manager database on the file system
c = the portion of the allocated space that is used by the Performance Manager database


For any decimal amount over a particular number, the percentage is rounded up to the next largest integer. For example, 5.1% is rounded to and displayed as 6%.

Space status advisor: The Space status advisor monitors the amount of space used by the Performance Manager database and advises you whether you should purge data. The advisor levels are:
- Low: You do not need to purge data now.
- High: You should purge data soon.
- Critical: You need to purge data now.
The disk space thresholds for the status categories are: low if utilization < 0.8, high if 0.8 <= utilization < 0.9, and critical otherwise. That is, the delimiters between low, high, and critical are 80% and 90% full.

Purge database options: Groups the database purge information.
Name: Type a name for the performance database purge task. The name can be from 1 to 250 characters long.
Description (optional): Type a description for the performance database purge task. The description can be from 1 to 250 characters long.
Device type: Select one or more storage device types for the performance database purge. Options are SVC, ESS, or All. (Default is All.)
Purge performance data older than: Select the maximum age for data to be retained when the purge task is run. You can specify this value in days (1-365) or years (1-10). For example, if you select the Days button and a value of 10, the database purge task purges all data older than 10 days when it is run. Therefore, if it has been more than 10 days since the task was run, all performance data would be purged. Defaults are 365 days or 10 years.
Purge data containing threshold exception information: Deselecting this option preserves performance data that contains information about threshold exceptions. This information is required to display exception gauges. This option is selected by default.
Save as task button: When you click Save as task, the information you specified is saved and the panel closes.
The newly created task is saved to the IBM Director Task pane under the Performance Manager Database. Once it is saved, the task can be scheduled using the IBM Director scheduler function.
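The space status categories and the round-up rule described above can be sketched in Python. This is an illustration only; the function names are hypothetical and the product computes these values internally.

```python
import math

def space_status(utilization: float) -> str:
    """Classify Performance Manager database fullness (0.0-1.0).

    Low if utilization < 0.8, high if 0.8 <= utilization < 0.9,
    and critical otherwise.
    """
    if utilization < 0.8:
        return "low"        # no need to purge data now
    if utilization < 0.9:
        return "high"       # purge data soon
    return "critical"       # purge data now

def display_percent(percent: float) -> int:
    """Round any fractional amount up, e.g. 5.1% displays as 6%."""
    return math.ceil(percent)
```

For example, `space_status(0.85)` returns `"high"`, and `display_percent(5.1)` returns `6`.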

6.2.5 Performance Manager gauges


Once data collection is complete, you can use the gauges task to retrieve information about a variety of storage device metrics. Gauges are used to drill down to the level of detail necessary to isolate performance issues on the storage device. To view information collected by the Performance Manager, you must either create a gauge or write a custom script to access the DB2 tables and fields directly.


Creating a gauge
Open the IBM Director and do one of the following tasks: Right-click the storage device in the center pane and select Gauges (see Figure 6-19).

Figure 6-19 Right-click gauge opening

You can click Gauges on the panel shown, and it produces the Job Status window as shown in Figure 6-21 on page 244. It is also possible to launch gauge creation by expanding Multiple Device Manager - Manage Performance in the rightmost column. Drag the Gauges item onto the desired storage device and drop it to open the gauges for that device (see Figure 6-20 on page 244).


Figure 6-20 Drag-n-drop gauge opening

This will produce the Job status window (see Figure 6-21) while the Performance gauges window opens. You will see the Job status window while other selected windows are opening.

Figure 6-21 Opening Performance gauges job status

The Performance gauges window will be empty until a gauge is created for use. We have created three gauges (see Figure 6-22).

Figure 6-22 Performance gauges

Clicking on the Create button to the left brings up the Job status window while the Create performance gauge window opens.


The Create performance gauge window changes values depending on whether the cluster, array, or volume items are selected in the left pane. Clicking on the cluster item in the left pane produces a window as seen in Figure 6-23.

Figure 6-23 Create performance gauge - Performance

Under the Type pull-down, select Performance or Exception.

Performance
Cluster Performance gauges provide details on the average cache holding time in seconds, as well as the percentage of I/O requests that were delayed due to NVS memory shortages. Two Cluster Performance gauges are required per ESS to view the available historical data for each cluster. Additional gauges can be created to view live performance data.

Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, allowing an overall or detailed view of the data.

Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server. If "test" were used as a gauge name, it could not be used for another gauge, even if another storage device were selected, as it would not be unique in the database. Example names: 28019P_C1H would represent the ESS serial number (28019), the performance gauge type (P), the cluster (C1), and historical (H), while 28019E would


represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays would build on that nomenclature to group the gauges by ESS on the Gauges window.

Description: Use this space to enter a detailed description of the gauge. The description appears on the gauge and in the Gauges window.

Metric(s): Click the metric or metrics that are displayed by default when the gauge is opened for viewing. Metrics with the same value under the Units column in the Metrics table can be selected together using either Shift+click or Ctrl+click. The metrics in this field can be changed on a historical gauge after the gauge has been opened for viewing; in other words, a historical gauge for each metric or group of metrics is not necessary. However, these metrics cannot be changed for live gauges. A new gauge is required for each metric or group of metrics desired.

Component: Select a single device from the Component table. This field cannot be changed when the gauge is opened for viewing.

Data points: Selecting this radio button enables the gauge to display the most recent data obtained from currently running performance collectors against the storage device. One most-recent-data gauge is required per cluster and per metric to view live collection data. The Device pull-down displays text informing you whether or not a performance collection task is running against this device. You can select the number of data points to display the last x data points from the date of the last collection. The data collection can be one that is currently running or the most recent one.

Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge uses as a default. The date and time values can be adjusted within the gauge to any value before or after the default values, and the gauge displays any relevant data for the updated time period.
Display gauge: Checking this box displays the newly created gauge after you click the OK button. Otherwise, if left unchecked, the gauge is saved without being displayed. Click the OK button when ready to save the performance gauge (see Figure 6-24 on page 247). In the example shown in Figure 6-24 on page 247, we created a gauge named 22513C1H with the description "average cache holding time". We selected 11 March 2005 as both the starting and ending date. This corresponds with our data collection task schedule.
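The gauge-name rules above (no white space, no special characters, at most 100 characters, unique on the server) can be sketched as a small validator. This is a hedged illustration: the exact allowed character set is an assumption based on the example names such as 28019P_C1H, and the function name is hypothetical.

```python
import re

# Assumed character set: letters, digits, and underscore, as in the
# book's example names (28019P_C1H, 28019E). Length 1-100 characters.
_NAME_RE = re.compile(r"^[A-Za-z0-9_]{1,100}$")

def is_valid_gauge_name(name: str, existing_names: set) -> bool:
    """Check the gauge-name rules: format plus uniqueness on the server."""
    return bool(_NAME_RE.match(name)) and name not in existing_names
```

For example, `is_valid_gauge_name("28019E", {"28019P_C1H"})` returns `True`, while a name containing a space, or a name already in use, returns `False`.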


Figure 6-24 Ready to save performance gauge

The gauge appears after you click the OK button with the Display gauge box checked, or when you click the Display button after selecting the appropriate gauge on the Performance gauges window (see Figure 6-26 on page 248). If you decide to save the gauge without displaying it, you will see the panel shown in Figure 6-25.

Figure 6-25 Saved performance gauges


Figure 6-26 Cluster performance gauge - upper

The top of the gauge contains the following labels:
Graph Name: The name of the gauge
Description: The description of the gauge
Device: The storage device selected for the gauge
Component level: Cluster, Array, or Volume
Component ID: The ID number of the component (cluster, array, or volume)
Threshold: The thresholds that were applied to the metrics

Time of last data collection: Date and time of the last data collection.

The center of the gauge contains the only fields that may be altered, in the Display Properties section. The metrics may be selected either individually or in groups, as long as the data types are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent with percent). Click the Apply button to force a Performance Gauge section update with the new y-axis data. The Start Date, End Date, Start Time, and End Time fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force a Performance Gauge section update with the new x-axis data. For example, we applied the Total I/O Rate metric to the saved gauge; the resultant graph is shown in Figure 6-27 on page 249. The Performance Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 6-27 on page 249).


Figure 6-27 Cluster performance gauge with applied I/O rate metric

Click the Refresh button in the Performance Gauge section to update the graph with the original metrics and date/time criteria. The date and time of the last refresh appear to the right of the Refresh button. The displayed date and time update first, followed by the contents of the graph, which can take up to several minutes longer. Finally, the data used to generate the graph is displayed at the bottom of the window (see Figure 6-28 on page 250). Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 6-32 on page 253). The sort reads the data from left to right, so the results may not be as expected. The gauges for the array and volume components function in the same manner as the cluster gauge created above.
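The left-to-right sort caveat is easy to see with a quick illustration: values sorted as text are compared character by character, so a column of numbers may not sort in numeric order. This is a generic Python illustration, not the product's sorting code.

```python
# Text sort compares character by character: "1" < "2" < "9",
# so "10" sorts before "9".
values = ["9", "10", "2"]
print(sorted(values))           # lexicographic order: ['10', '2', '9']
print(sorted(values, key=int))  # numeric order: ['2', '9', '10']
```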


Figure 6-28 Create Performance Gauge- Lower

Exception
Exception gauges display data only for those active thresholds that were crossed during the reporting period. One Exception gauge displays threshold exceptions for the entire storage device based on the thresholds active at the time of collection. To create an exception gauge, select Exception from the Type pull-down menu (see Figure 6-29 on page 251).


Figure 6-29 Create performance gauge - Exception

By default, Cluster is highlighted in the left pane, and the metrics and component sections are not available.

Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, allowing an overall or detailed view of the data.

Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server.

Description: Use this space to enter a detailed description of the gauge. The description appears on the gauge and in the Gauges window.

Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge uses as a default. The date and time values can be adjusted within the gauge to any value before or after the default values, and the gauge displays any relevant data for the updated time period.

Display gauge: Checking this box displays the newly created gauge after you click the OK button. Otherwise, if left unchecked, the gauge is saved without being displayed. Click the OK button when ready to save the performance gauge. We created an exception gauge as shown in Figure 6-30 on page 252.


Figure 6-30 Ready to save exception gauge

The top of the gauge contains the following labels:
Graph Name: The name of the gauge
Description: The description of the gauge
Device: The storage device selected for the gauge
Threshold: The thresholds that were applied to the metrics

Time of last data collection: Date and time of the last data collection.

The center of the gauge contains the only fields that may be altered, in the Display Properties section. The Start Date and End Date fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force an Exceptions Gauge section update with the new x-axis data. The Exceptions Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 6-31 on page 253).


Figure 6-31 Exceptions gauge - upper

Click the Refresh button in the Exceptions Gauge section to update the graph with the original date criteria. The date and time of the last refresh appear to the right of the Refresh button. The displayed date and time update first, followed by the contents of the graph, which can take up to several minutes longer. Finally, the data used to generate the graph is displayed at the bottom of the window. Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 6-32).

Figure 6-32 Data sort options


Display Gauges
To display previously created gauges, either right-click the storage device and select gauges (see Figure 6-19 on page 243) or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244) to open the Performance gauges window (see Figure 6-33).

Figure 6-33 Performance gauges window

Select one of the gauges and then click Display.

Gauge Properties
The Properties button allows the following fields and choices to be modified:

Performance
- Description
- Metrics
- Component
- Data points
- Date range (date and time ranges)

You can change the data displayed in the gauge from Data points, with an active data collection, to Date range (see Figure 6-34 on page 255). Selecting Date range allows you to choose the Start date and End date using the performance data stored in the DB2 database.


Figure 6-34 Performance gauge properties

Exception
You can change the Type property of the gauge definition from Performance to Exception. For a gauge type of Exception, you can only choose to view data for a Date range (see Figure 6-35 on page 256).


Figure 6-35 Exception gauge properties

Delete a gauge
To delete a previously created gauge, either right-click the storage device and select gauges (see Figure 6-19 on page 243) or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244) to open the Performance gauges window (see Figure 6-33 on page 254). Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation to remove the gauge (see Figure 6-36).

Figure 6-36 Confirm gauge removal

To confirm, click Yes and the gauge will be removed. The gauge name may now be reused, if desired.


6.2.6 ESS thresholds


Thresholds are used to determine watermarks for warning and error indicators for an assortment of storage metrics, including:
- Disk Utilization
- Cache Holding Time
- NVS Cache Full
- Total I/O Requests

Thresholds are opened in either of two ways:
1. Right-click a storage device in the center panel of TotalStorage Productivity Center for Disk and select the thresholds menu option (Figure 6-37).
2. Drag and drop the thresholds task from the right tasks panel in Multiple Device Manager onto the desired storage device, to display or modify the thresholds for that device.

Figure 6-37 Opening the thresholds panel

Upon opening the thresholds submenu, you will see the display shown in Figure 6-38 on page 258, which shows the default thresholds in place for the ESS.


Figure 6-38 Performance Thresholds main panel

On the right-hand side, there are buttons for Enable, Disable, Copy Threshold Properties, Filters, and Properties. If the selected task is already enabled, the Enable button appears greyed out, as in our case. If we attempt to disable a threshold that is currently enabled by clicking the Disable button, a message is displayed as shown in Figure 6-39.

Figure 6-39 Disabling threshold warning panel

You may elect to continue and disable the selected threshold, or cancel the operation by clicking Don't disable threshold. The Copy Threshold Properties button allows you to copy existing thresholds to other devices of a similar type (ESS, in our case). The window in Figure 6-40 on page 259 is displayed.


Figure 6-40 Copying thresholds panel

Note: As shown in Figure 6-40, the copying thresholds panel is aware that we have registered both clusters of our model 800 ESS on our ESS CIM agent host, as indicated by the semicolon-delimited IP address field for the device ID 2105.22219.

The Filters window is another available thresholds option. From this panel, you can enable, disable, and modify existing filter values against selected thresholds, as shown in Figure 6-41.

Figure 6-41 Threshold filters panel

Finally, you can open the Properties panel for a selected threshold, which displays the panel shown in Figure 6-42 on page 260. You have options to acknowledge the values at their current settings, modify the warning or error levels, or select the alert level (none, warning only, and warning or error are the available options).


Figure 6-42 Threshold properties panel

6.2.7 Data collection for SAN Volume Controller


Performance Manager uses the integrated configuration assistant tool (ICAT) interface of a SAN Volume Controller (SVC) to start and stop performance statistics collection on a SAN Volume Controller device. The process for performing data collection on the SAN Volume Controller is similar to that for the ESS. You need to set up a new performance data collection task for the SAN Volume Controller device. Figure 6-43 is an example of the panel you should see when you drag the Data Collection task onto the SAN Volume Controller device, or right-click the device and click Data Collection. As with the ESS data collection task:
- Define a task name and description.
- Select the sample frequency and duration of the task, and click OK.

Note: The SAN Volume Controller can perform data collection at a minimum 15-minute interval.

You may use the Add button to include additional SAN Volume Controller devices in the same data collection task, or use the Remove button to exclude SAN Volume Controllers from an existing task. In our case we are performing data collection against a single SAN Volume Controller.
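The 15-minute minimum noted above is the kind of constraint a scripted task definition might validate before submission. The sketch below is an assumption-labeled illustration: the function name and exception are hypothetical, not part of the product's API.

```python
# Minimum SVC sample interval, per the note above (15 minutes).
SVC_MIN_SAMPLE_MINUTES = 15

def check_svc_sample_frequency(minutes: int) -> int:
    """Reject sample frequencies below the SVC's 15-minute minimum."""
    if minutes < SVC_MIN_SAMPLE_MINUTES:
        raise ValueError(
            f"SVC sample frequency must be at least "
            f"{SVC_MIN_SAMPLE_MINUTES} minutes"
        )
    return minutes
```

For example, `check_svc_sample_frequency(15)` passes, while a 5-minute request raises `ValueError`.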


Figure 6-43 The SVC Performance Data Collection Task

As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data.

6.2.8 SAN Volume Controller thresholds


To view the available Performance Manager Thresholds, you can right-click the SAN Volume Controller device and click Thresholds, or drag the Threshold task from the right-hand panel onto the SAN Volume Controller device you want to query. A panel like the one in Figure 6-44 appears.

Figure 6-44 The SVC performance thresholds panel

SVC has the following thresholds with their default properties:

VDisk I/O rate: Total number of virtual disk I/Os for each I/O group. SAN Volume Controller defaults: Status: Disabled; Warning: None; Error: None.

VDisk bytes per second: Virtual disk bytes per second for each I/O group. SAN Volume Controller defaults: Status: Disabled; Warning: None; Error: None.

MDisk I/O rate:


Total number of managed disk I/Os for each managed disk group. SAN Volume Controller defaults: Status: Disabled; Warning: None; Error: None.

MDisk bytes per second: Managed disk bytes per second for each managed disk group. SAN Volume Controller defaults: Status: Disabled; Warning: None; Error: None.

You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 6-45 on page 262.

Figure 6-45 SAN Volume Controller threshold enable warning

Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 indicate that there is no recommended minimum value for the threshold; the values are therefore entirely user defined. You may provide any reasonable value for these thresholds, keeping in mind the workload in your environment.

To modify the warning and error values for a given threshold, select the threshold and click the Properties button. The panel in Figure 6-46 is shown. You can modify the threshold as appropriate and accept the new values by clicking the OK button.
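The -1.0 sentinel and the enablement rule above can be sketched as follows. This is an illustration under stated assumptions: the helper names are hypothetical, and the warning/error comparison is a plausible reading of "watermarks", not the product's documented algorithm.

```python
# -1.0 marks a warning or error level that has not yet been defined.
UNDEFINED = -1.0

def can_enable(warning: float, error: float) -> bool:
    """A threshold can be enabled only once both levels are defined."""
    return warning != UNDEFINED and error != UNDEFINED

def alert_level(value: float, warning: float, error: float) -> str:
    """Classify a metric sample against the two watermarks (assumed:
    higher values are worse, error watermark above warning)."""
    if value >= error:
        return "error"
    if value >= warning:
        return "warning"
    return "none"
```

For example, `can_enable(-1.0, 50.0)` returns `False` (the warning level is still undefined), matching the notification shown in Figure 6-45.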

Figure 6-46 Modifying threshold warning and error values


6.3 Exploiting gauges


Gauges are a very useful tool and help in identifying performance bottlenecks. In this section we show the drill-down capabilities of gauges. The purpose of this section is not to cover performance analysis in detail for a specific product, but to highlight the capabilities of the tool. You can adopt a similar approach for your own performance analysis.

6.3.1 Before you begin


Before you begin customizing gauges, ensure that enough correct samples of data have been collected in the performance database. This is true for any performance analysis. The data samples you collect must cover an appropriate time period that corresponds with the highs and lows of the I/O workload. They should also cover sufficient iterations of the peak activity to allow analysis over a period of time; this is important when analyzing a pattern. You can use the advanced scheduler function of IBM Director to configure a repetitive task. If you plan to analyze one specific instance of activity, ensure that the performance data collection task covers that specific time period.

6.3.2 Creating gauges example


In this example, we cover the creation and customization of gauges for the ESS. First, we scheduled an ESS performance data collection task at three-hour intervals for 8 days using the IBM Director scheduler function. For details on using the IBM Director scheduler, refer to 6.2.2, Using IBM Director Scheduler function on page 235. To create the gauge, we launched the performance gauges panel as shown in Figure 6-47 by right-clicking the ESS device.

Figure 6-47 Gauges panel

Click the Create button to create a new gauge. You will see a panel similar to Figure 6-48.


Figure 6-48 Create performance gauge

We selected Cluster in the top left corner, the Total I/O Rate metric in the metrics box, and Cluster 1 in the component box. We also entered the following parameters:
Name: 22219P_drilldown_analysis
Description: Drilldown analysis for 22219 ESS
For the date range, we selected our historical data collection sampling period and checked Display gauge. Upon clicking the OK button, we got the next panel, shown in Figure 6-49.


Figure 6-49 Gauge for ESS 22219 Cluster performance

6.3.3 Zooming in on the specific time period


The previous chart shows some peaks of high cluster I/O rate in the period from 6 to 8 April. We decided to zoom in on the peak activity, so we selected a narrower time period as shown in Figure 6-50 on page 266 and clicked the Apply button.


Figure 6-50 Zooming on specific time period for Total IO rate metric

6.3.4 Modify gauge to view array level metrics


For the next chart, we decided to view an array-level metric for the same time period as before. We selected the gauge we created earlier and clicked Properties as shown in Figure 6-51.

Figure 6-51 Properties for a defined gauge

The subsequent panel is shown in Figure 6-52 on page 267. We selected the array-level Avg. Response Time metric for Cluster 1, Device Adapter 1, Loop A, and disk group 2, as circled in the figure.


Figure 6-52 Customizing gauge for array level metric

The resultant chart is shown in Figure 6-53 on page 268.


Figure 6-53 Modified gauge with Avg. response time chart

6.3.5 Modify gauge to review multiple metrics in same chart


Next, we decided to review Total I/O, Reads/sec, and Writes/sec in the same chart for comparison purposes. We selected these three metrics in the gauge properties panel and clicked Apply. The resultant chart is shown in Figure 6-54 on page 269.

Tip: To select multiple metrics for the same chart, click the first metric, hold the Shift key, and click the last metric. If the metrics you plan to choose are not contiguous in the list, hold the Ctrl key instead of the Shift key.


Figure 6-54 Viewing multiple metrics in the same chart

In the chart, Writes and Total I/O overlap, and Reads are shown as zero.

Tip: If you select multiple metrics that do not share the same units for the y-axis, an error is displayed, as shown in Figure 6-55 on page 269.

Figure 6-55 Error displayed if there are no common units
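The same-units rule behind that error is simple to express. The sketch below is an illustration only; the function name and unit strings are assumptions, not the product's internal representation.

```python
def common_unit(selected_metrics):
    """Return True when all selected (metric_name, unit) pairs share
    one y-axis unit, so they can be plotted on the same chart."""
    units = {unit for _, unit in selected_metrics}
    return len(units) == 1

# Same unit: can be charted together.
common_unit([("Total I/O Rate", "ops/s"), ("Reads", "ops/s")])   # True
# Mixed units: the GUI displays the "no common units" error instead.
common_unit([("Total I/O Rate", "ops/s"), ("Avg resp", "ms")])   # False
```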

6.4 Performance Manager command line interface


The Performance Manager module includes a command line interface known as perfcli, located in the directory c:\Program Files\IBM\mdm\pm\pmcli. In its present release, the perfcli utility includes support for ESS and SAN Volume Controller data collection task creation and management (starting and stopping data collection tasks). There


are also executables that support viewing and management of task filters, alert thresholds, and gauges. There is detailed help available at the command line, with information about syntax and specific examples of usage.

6.4.1 Performance Manager CLI commands


The Performance Manager Command Line Interface (perfcli) includes the following commands shown in Figure 6-56.

Figure 6-56 Directory listing of the perfcli commands

startesscollection/startsvccollection: These commands are used to build and run data collection against an ESS or a SAN Volume Controller, respectively.
lscollection: This command is used to list the running, aborted, or finished data collection tasks on the Performance Manager server.
stopcollection: This command may be used to stop data collection against a specified task name.
lsgauge: You can use the lsgauge command to display a list of existing gauge names, types, device types, device IDs, modified dates, and description information.
rmgauge: Use this command to remove existing gauges.
showgauge: This command is used to display performance data output using an existing defined gauge.
setessthresh/setsvcthresh: These two commands are used to set ESS and SAN Volume Controller performance thresholds, respectively.
cpthresh: You can use the cpthresh command to copy threshold properties from one selected device to one or more other devices.
setfilter: You can use setfilter to set or change the existing threshold filters.
lsfilter: This command may be used to display the threshold filter settings for all devices specified.
setoutput: This command may be used to view or modify the existing data collection output formats, including settings for paging, row printing, format (default, XML, or character delimited), header printing, and output verbosity.
lsdev: This command can be used to list the storage devices that are used by TotalStorage Productivity Center for Disk.
lslun: This command can be used to list the LUNs or Performance Manager volumes associated with storage devices.
lsthreshold: This command can be used to list the threshold status associated with storage devices.

lsgauge: This command can be used to list the existing gauge names, gauge type, device name, device ID, date modified and, optionally, device information.
showgauge: Use this command to display performance output by triggering an existing gauge.
showcapacity: This command displays managed capacity, the sum of managed capacity by device type, and the total of all ESS and SAN Volume Controller managed storage.
showdbinfo: This command displays the percent full, used space, and free space of the Performance Manager database.
lsprofile: Use this command to display Volume Performance Advisor profiles.
cpprofile: Use this command to copy Volume Performance Advisor profiles.
mkprofile: Use this command to create a workload profile that you can use later with the mkrecom command to create a performance recommendation for ESS volume allocation.
mkrecom: Use this command to generate and, optionally, apply a performance LUN advisor recommendation for ESS volumes.
lsdbpurge: This command can be used to display the status of database purge tasks running in TotalStorage Productivity Center for Disk.
tracklun: This command can be used to obtain historical performance statistics used to create a profile.
startdbpurge: Use this command to start a database purge task.
showdev: Use this command to display device properties.
setoutput: This command sets the output format for the administrative command-line interface.
cpthresh: This command can be used to copy threshold properties from one device to other devices of the same type.
rmprofile: Use this command to delete performance LUN advisor profiles.
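Because setoutput supports a character-delimited format, perfcli output lends itself to scripting. The sketch below shows one way a script might invoke a perfcli executable and parse delimited output; the delimiter, header row, and wrapper names are assumptions for illustration — consult the CLI's built-in help and the setoutput command for the real layout.

```python
import subprocess

def run_perfcli(command, perfcli_dir=r"c:\Program Files\IBM\mdm\pm\pmcli"):
    """Invoke a perfcli executable (for example, lscollection) and
    return its standard output. Assumes the default install path."""
    result = subprocess.run(
        [perfcli_dir + "\\" + command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def parse_delimited(output, delimiter=","):
    """Parse character-delimited output with an assumed header row
    into a list of dictionaries."""
    lines = [line for line in output.splitlines() if line.strip()]
    header = lines[0].split(delimiter)
    return [dict(zip(header, line.split(delimiter))) for line in lines[1:]]
```

For example, `parse_delimited("name,status\ntask1,running")` yields one record mapping `name` to `task1` and `status` to `running`.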

6.4.2 Sample command outputs


We show some sample commands in Figure 6-57, which illustrates invoking perfcli commands from the Windows command line interface.

Figure 6-57 Sample perfcli command from Windows command line interface

Figure 6-58 on page 272 and Figure 6-59 on page 272 show sample commands run within the perfcli tool.


Figure 6-58 perfcli sample command within perfcli tool.

Figure 6-59 perfcli lslun sample command within perfcli tool

6.5 Volume Performance Advisor (VPA)


The Volume Performance Advisor (VPA) is designed to be an expert advisor that recommends allocations for storage space based on considerations of the size of the request, an estimate of the performance requirement and type of workload, as well as the existing load on an ESS that might compete with the new request. The Volume Performance Advisor will then make a recommendation as to the number and size of Logical Unit Numbers (logical volumes or LUNs) to allocate, and a location within the ESS which is a good placement with respect to the defined performance considerations. The user is given the option of implementing the recommendation (allocating the storage), or obtaining subsequent recommendations.

6.5.1 VPA introduction


Data placement within a large, complex storage subsystem has long been recognized as a storage and performance management issue. Performance may suffer if placement is done casually or carelessly, and it can be costly to discover and correct those performance problems, adding to the total cost of ownership. Performance Manager provides an automated approach to storage allocation through the functions of a storage performance advisor, called the Volume Performance Advisor (VPA). The advisor is designed to automate decisions that could be achieved by an

272

Managing Disk Subsystems using IBM TotalStorage Productivity Center

expert storage analyst, given the time and sufficient information. The goal is to give very good advice by allowing VPA to consider the same factors that an administrator would consider in deciding where best to allocate storage.

Note: At this time, the VPA tool is available for the IBM ESS only.

6.5.2 The provisioning challenge


You want to allocate a specific amount of storage to run a particular workload. You could be a storage administrator interacting through a user interface, or the user could be another system component (such as a SAN management product, file system, DataBase Management System (DBMS), or logical volume manager) interacting with the VPA Application Programming Interface (API). A storage request is satisfied by selecting some number of logical volumes (Logical Unit Numbers, or LUNs). For example, if you ask for 400 GB of storage, a low I/O rate, cache-friendly workload could be handled on a single 400 GB logical disk residing on a single disk array; whereas a cache-unfriendly, high-bandwidth application might need several logical volumes allocated across multiple disk arrays, using LVM, file system, or database striping to achieve the required performance. The performance of those logical disks depends on their placement on physical storage, and on what other applications might be sharing the arrays.

The job of the Volume Performance Advisor (VPA) is to select an appropriate set (number and placement) of logical disks that:
- Considers the performance requirements of the new workload
- Balances the workload across the physical resources
- Considers the effects of the other workloads competing for the resources

Storage administrators and application developers need tools that pull together all the components of the decision process used for provisioning storage: tools to characterize and manage workload profiles, tools to monitor existing performance, and tools to help them understand the impact of future workloads on current performance. What they need is a tool that automates this entire process, which is what VPA for ESS does.

6.5.3 Workload characterization and workload profiles


Intelligent data placement requires a rudimentary understanding of the application workload, and the demand likely to be placed on the storage system. For example, cache-unfriendly workloads with high I/O intensity require a larger number of physical disks than cache-friendly or lightweight workloads. To account for this, the VPA requires specific workload descriptions to drive its decision-making process. These workload descriptions are precise, indicating I/O intensity rates; percentages of read, write, random, and sequential content; cache information; and transfer sizes. This workload-based approach is designed to allow the VPA to correctly match performance attributes of the storage with the workload attributes with a high degree of accuracy. For example, high random-write content workloads might best be pointed to RAID 10 storage, and high cache hit ratio environments can probably be satisfied with fewer logical disks.

Most users have little experience or capability for specifying detailed workload characteristics. The VPA is designed to deal with this problem in three ways:
- Predefined workload definitions based on characterizations of environments across various industries and applications. They include standard OLTP type workloads, such as OLTP High, and Batch Sequential.


- Capturing existing workloads by observing storage access patterns in the environment. The VPA allows the user to point to a grouping of volumes and a particular window of time, and create a workload profile based on the observed behavior of those volumes.
- Creation of hypothetical workloads that are similar to existing profiles, but differ in some specific metrics.

The VPA has tools to manage a library of predefined and custom workloads, to create new workload profiles, and to modify profiles for specific purposes.

6.5.4 Workload profile values


It is possible to change many specific values in the workload profile. For example, the access density may be high because a test workload used small files; it can be adjusted to a more accurate number. The average transfer size always defaults to 8 KB, and should be modified if other information is available about the actual transfer size. The peak activity information should also be adjusted: it defaults to the time when the profile workload was measured, but in an existing environment it should specify the time period for contention analysis between existing workloads and the new workload. Figure 6-60 on page 275 shows a user-defined VPA workload profile.


Figure 6-60 User defined workload profile details example

6.5.5 How the Volume Performance Advisor makes decisions


As mentioned previously, the VPA is designed to take several factors into account when recommending volume allocation:
- Total amount of space required
- Minimum and maximum number of volumes, and sizes of volumes
- Workload requirements
- Contention from other workloads

VPA tries to allocate volumes on the least busy resources, at the same time balancing workload across available resources. It uses the workload profile to estimate how busy internal ESS resources will become if that workload is allocated on those resources; that is, it estimates how busy the RAID arrays, disk adapters, and controllers will become. The workload profile is very important in making that decision. For example, cache hit ratios affect the activity on the disk adapters and RAID arrays. When creating a workload profile from existing data, it is important to pick a representative time sample to analyze. Also, you should examine the I/O per second per GB. Many applications have


access density in the range of 0.1 to 3.0; if it is significantly outside this range, then this might not be an appropriate sample.

The VPA tends to utilize resources that can best accommodate a particular type of workload. For example, high write content makes RAID 5 arrays busier than RAID 10, so VPA biases toward RAID 10. Faster devices will be less busy, so VPA biases allocations to the faster devices. VPA also analyzes the historical data to determine how busy the internal ESS components (arrays, disk adapters, clusters) are due to other workloads. In this way, VPA tries to avoid allocating on already busy ESS components.

If VPA has a choice among several places to allocate volumes, and they appear to be about equal, it is designed to apply a randomizing factor. This keeps the advisor from always giving the same advice, which might cause certain resources to be overloaded if everyone followed that advice. This also means that several usages of VPA by the same user may not necessarily produce the same advice, even if the workload profiles are identical.

Note: VPA tries to allocate the fewest possible volumes, as long as it can allocate on low-utilization components. If the components look too busy, it will allocate more (smaller) volumes as a way of spreading the workload. It will not recommend more volumes than the maximum specified by the user.

VPA may, however, be required to recommend allocation on very busy components. A utilization indicator in the user panels indicates whether allocations would cause components to become heavily utilized. The I/O demand specified in the workload profile for the new storage being allocated is not a Service Level Agreement (SLA); in other words, there is no guarantee that the new storage, once allocated, will perform at or above the specified access density. The VPA will make recommendations unless the available space on the target devices is exhausted.
An invocation of VPA can be used for multiple recommendations. To handle a situation where multiple sets of volumes are to be allocated with different workload profiles, it is important that the same VPA wizard be used for all sets of recommendations. Select "Make additional recommendations" on the View Recommendations page, as opposed to starting a completely new sequence for each separate set of volumes to be allocated. VPA is designed to remember each additional (hypothetical) workload when making additional recommendations.

There are, of course, limitations to the use of an expert advisor such as VPA. There may well be other constraints (such as source and target FlashCopy requirements) that must be considered. Sometimes these constraints can be accommodated with careful use of the tool, and sometimes they may be so severe that the tool must be used very carefully. That is why VPA is designed as an advisor.

In summary, the Volume Performance Advisor (VPA) provides you a tool to help automate the complex decisions involved in data placement and provisioning. In short, it represents a future direction of storage management software: computers should monitor their resources and make autonomic adjustments based on the information. The VPA is an expert advisor that provides you a step in that direction.
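The access-density sanity check described above (I/O per second per GB, typically between 0.1 and 3.0) is simple arithmetic. The following Python sketch is our own illustration of that check; the function names and the treatment of the range as a hard pass/fail threshold are our assumptions, not part of the product:

```python
def access_density(io_per_second: float, capacity_gb: float) -> float:
    """Access density, in I/O operations per second per GB."""
    if capacity_gb <= 0:
        raise ValueError("capacity must be positive")
    return io_per_second / capacity_gb

def is_representative_sample(io_per_second: float, capacity_gb: float,
                             low: float = 0.1, high: float = 3.0) -> bool:
    """Heuristic from the text: a sample whose access density falls far
    outside 0.1-3.0 IO/s per GB may not be appropriate for profiling."""
    return low <= access_density(io_per_second, capacity_gb) <= high

# 600 I/O per second spread over 400 GB gives a density of 1.5 IO/s/GB,
# which is inside the typical range.
```

For example, a 400 GB volume group handling only 10 I/O per second (density 0.025) would be flagged as a questionable sample under this heuristic.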

6.5.6 Enabling the Trace Logging for Director GUI Interface


Enabling GUI logging can be useful for troubleshooting GUI problems, however unlikely, that you may encounter while using VPA. Because this function requires a reboot of the server where TotalStorage Productivity Center for Disk is running, consider doing this before engaging in use of the VPA.

On the Windows platform, follow these steps:
1. Select Start, then Run, and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify the LogOutput value. Set the value equal to 1.
4. Reboot the server.

The output log location for the instructions above is X:\program files\ibm\director\log (where X is the drive where the Director application was installed). The log file for the Director is com.tivoli.console.ConsoleLauncher.stderr.

On the Linux platform, TWGRas.properties turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash:
$ export TWG_DEBUG_CONSOLE=true

6.6 Getting started


In this section, we provide detailed steps for using VPA with predefined performance parameters (workload profiles) that you can utilize for advice on optimal volume placement in your environment. For detailed steps on creating customized workload profiles, refer to 6.7, Creating and managing Workload Profiles on page 303. To use VPA with a customized workload profile, the major steps are:
- Create a data collection task in Performance Manager. In order to utilize the VPA, you must first have a useful amount of performance data collected from the device you want to examine. Refer to Performance Manager data collection on page 229 for more detailed instructions regarding the Performance data collection feature of the Performance Manager.
- Schedule and run a successful performance data collection task. It is important to have an adequate amount of historical data to provide a statistically relevant sampling population.
- Create or use a user-defined workload profile.
- Use the Volume Performance Advisor to:
  - Add devices
  - Specify settings
  - Select a workload profile (predefined or user defined)
  - View profile details
  - Choose candidate locations
  - Verify settings
  - Approve recommendations (or restart the VPA process with different parameters)

6.6.1 Workload profiles


The basic VPA concept, and the storage administrator's goal, is to balance the workload across all device components. This requires detailed ESS configuration information, including all components (clusters, device adapters, logical subsystems, ranks, and volumes).


To express the workload represented by the new volumes, they are assigned a workload profile. A workload profile contains various performance attributes:
- I/O demand, in I/O operations per second per GB of volume size
- Average transfer size, in KB
- Percentage mix of I/O: sequential or random, and read or write
- Cache utilization: percent of cache hits for random reads, and cache misses for random writes
- Peak activity time: the time period when the workload is most active

You can create your own workload profile definitions in two ways:
- By copying existing profiles and editing their attributes
- By performing an analysis of existing volumes in the environment

The second option is known as a Workload Analysis. You may select one or more existing volumes, and the historical performance data for these volumes is retrieved to determine their (average) performance behavior over time.
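The attributes above can be pictured as a simple record. The following Python sketch is purely illustrative; the class and field names are ours, not a product API, and the io_mix_total helper simply expresses the rule (stated later in this chapter) that the four read/write percentages should total 100%:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Illustrative model of a VPA workload profile (field names are ours)."""
    name: str
    io_per_second_per_gb: float      # I/O demand per GB of volume size
    avg_transfer_size_kb: float      # average transfer size, in KB
    seq_read_pct: float              # percentage mix of I/O:
    seq_write_pct: float             #   sequential/random, read/write
    rand_read_pct: float
    rand_write_pct: float
    rand_read_cache_hit_pct: float   # cache hits for random reads
    rand_write_destage_pct: float    # destage (misses) for random writes

    def io_mix_total(self) -> float:
        # The four read/write percentages are expected to total 100%.
        return (self.seq_read_pct + self.seq_write_pct
                + self.rand_read_pct + self.rand_write_pct)
```

A profile built from observed volume behavior would populate these fields from the collected performance data; a "create like" profile would copy an existing record and adjust individual fields.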

6.6.2 Using VPA with predefined Workload profile


This section describes a VPA example using a default workload profile. The purpose of this section is to help you become familiar with the VPA tool. However, we recommend that you generate and use a customized workload profile after gathering performance data; a customized profile will be realistic in terms of your application performance requirements. The VPA provides five predefined (canned) Workload Profile definitions:
1. OLTP Standard - for a general Online Transaction Processing (OLTP) environment
2. OLTP High - for higher demand OLTP applications
3. Data Warehouse - for data warehousing applications
4. Batch Sequential - for batch applications accessing data sequentially
5. Document Archival - for archival applications, write-once, read-infrequently

Note: Online Transaction Processing (OLTP) is a type of program that facilitates and manages transaction-oriented applications. OLTP is frequently used for data entry and retrieval transactions in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturing. Probably the most widely installed OLTP product is IBM's Customer Information Control System (CICS).

6.6.3 Launching VPA tool


The steps to utilize a default workload profile to have the Volume Performance Advisor examine and advise you on volume placement are:
1. In the IBM Director Task pane, click Multiple Device Manager.
2. Click Manage Performance.
3. Click Volume Performance Advisor.
4. You can choose two methods to launch VPA:
a. Drag and drop the VPA icon onto the storage device to be examined (see Figure 6-61).

b. Select the storage device, right-click it, and select Volume Performance Advisor (see Figure 6-62 on page 280).

Figure 6-61 Drag and Drop the VPA icon to the storage device


Figure 6-62 Select ESS and right Click for VPA

If a storage device that is not in the scope of the VPA is selected for the drag and drop step, the message shown in Figure 6-63 opens. Devices such as a CIMOM or an SNMP device generate this error; only the ESS is supported at this time.

Figure 6-63 Error launching VPA example

6.6.4 ESS User Validation


If this is the first time you are using the VPA tool for the selected ESS device, the ESS User Validation panel displays as shown in Figure 6-64 on page 281. If you have already validated the ESS user for VPA usage, this panel is skipped and the VPA settings default panel launches as shown in Figure 6-69 on page 283.


Figure 6-64 ESS User validation window example

In the ESS User Validation panel, specify the user name, password, and port for each of the IBM TotalStorage Enterprise Storage Servers (ESSs) that you want to examine. During the initial setup of the VPA, on the ESS User Validation window, you need to first select the ESS (as shown in Figure 6-65 on page 282) and then enter the correct user name, password, and password verification.

You must click Set after you have input the correct username, password, and password verification in the appropriate fields (see highlighted portion with circle in Figure 6-66 on page 282). When you click Set, the application will populate the data you input (masked) into the correct Device Information fields in the Device Information box (see Figure 6-67 on page 282).
If you do not click Set before selecting OK, the following errors will appear, depending on what data needs to be entered:
- BWN005921E (ESS Specialist username has not been entered correctly or applied)
- BWN005922E (ESS Specialist password has not been entered correctly or applied)

If you encounter these errors, ensure you have correctly input the values in the input fields in the lower part of the ESS user validation window and then retry by clicking OK. The ESS user validation window contains the following fields:
Devices table - Select an ESS from this table. It includes device IDs and device IP addresses of the ESS devices on which this task was dropped.
ESS Specialist username - Type a valid ESS Specialist user name for the selected ESS. Subsequent displays of the same information for this ESS show the user name and password that were entered. You can change the user name by entering a new user name in this field.
ESS Specialist password - Type a valid ESS Specialist password for the selected ESS. Any existing password entries are removed when you change the ESS user name.
Confirm password - Type the valid ESS Specialist password again, exactly as you typed it in the password field.
ESS Specialist port - Type a valid ESS port number. The default is 80.
Remove button - Click to remove the selected information.

Set button - Click to set names, passwords, and ports without closing the panel.


Add button - Click to invoke the Add devices panel.
OK button - Click to save the changes and close the panel.

Figure 6-65 ESS User validation - select ESS

Figure 6-66 Apply ESS Specialist user defined input

Figure 6-67 Applied ESS Specialist user defined input


Click the OK button to save the changes and close the panel. The application will attempt to access the ESS storage device. The error message in Figure 6-68 can indicate use of an incorrect username or password for authentication. Additionally, if you have a firewall and are not adequately authenticating to the storage device, the error may appear. If this occurs, check that you are using the correct username and password and that you have firewall access and are properly authenticating, so that storage device connectivity can be established.

Figure 6-68 Authentication error example

6.6.5 Configuring VPA settings for the ESS diskspace request


After you have successfully completed the User Validation step, the VPA Settings window will open (see Figure 6-69).

Figure 6-69 VPA Settings default panel


You use the Volume performance advisor - Settings window to identify your requirements for host attachment and the total amount of space that you need. You can also use this panel to specify volume number and size constraints, if any. We will begin with our example as shown in Figure 6-70.

Figure 6-70 VPA settings for example

The following are the fields in this window: Total space required (GB) - Type the total space required in gigabytes. The smallest allowed value is 0.1 GB. We requested 3 GB for our example.


Note: You cannot exceed the volume space available for examination on the server(s) you select. To show the error, in this example we selected host Zombie and a total required space of 400 GB. We got the error shown in Figure 6-71 on page 285.
Action: Retry with different values and look at the server log for details.
Solutions:
- Select a smaller Total (volume) space required (GB) value and retry this step.
- Select more hosts, so that adequate volume space is included for this task.
- You may want to select the box entitled Consider volumes that have already been allocated but not assigned in the performance recommendation.

Enabling the Director log file will generate logs for troubleshooting Director GUI components, including the Performance Manager console. In this example, the file we reference is com.tivoli.console.ConsoleLauncher.stderr (com.tivoli.console.ConsoleLauncher.stdout is also useful). A sample log is shown in Figure 6-72 on page 285.

Figure 6-71 Error showing exceeded the space requested

Figure 6-72 Director GUI console errorlog

Specify a volume size range button - Click the button to activate the field, then use the Minimum size (GB) spinner and the Maximum size (GB) spinner to specify the range. In this example, we selected 1 GB as the minimum and 3 GB as the maximum.
Specify a volume quantity range button - Click the button to activate the field, then use the Minimum number spinner and the Maximum number spinner to specify the range.
Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation - If you check this box, VPA will use these types of volumes in the volume performance examination process.

When this box (Consider volumes...) is checked and you click Next, the VPA wizard will open the following warning window (see Figure 6-73).

Figure 6-73 Consider volumes - warning window example

Note: The BWN005996W message is a warning (W). You have selected to reuse unassigned existing volumes, which could potentially cause data loss. Go "Back" to the VPA Settings window by clicking OK if you do not want to consider unassigned volumes. Press the "Help" button for more information.
Explanation: The Volume Performance Advisor will assume that all currently unassigned volumes are not in use, and may recommend the reuse of these volumes. If any of these unassigned volumes are in use, for example as replication targets or for other Data Replication purposes, and these volumes are recommended for reuse, the result could be potential data loss.
Action: Go back to the Settings window and deselect "Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation" if you do not want to consider volumes that may potentially be used for other purposes. If you want to continue to consider unassigned volumes in your recommendations, then continue.

Host Attachments table - Select one or more hosts from this table. This table lists all hosts (by device ID) known to the ESS that you selected for this task. It is important to choose only hosts for volume consideration that are the same server type. It is also important to note that the VPA takes into consideration the maximum volume limitations of the server type, such as Windows (256 volumes maximum) and AIX (approximately 4000 volumes). If you select a volume range above the server limit, VPA displays an error. In our example we used the host Zombie.
Next button - Click to invoke the Choose workload profile window. You use this window to select a workload profile from a list of existing profile templates.

5. Click Next after entering your preferred parameters, and the Choose workload profile window will display (see Figure 6-74 on page 287).
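The constraints collected on this panel (total space, volume size range, volume quantity range, and the per-host volume limit) can be sanity-checked with simple arithmetic before submitting a request. The following Python sketch is our own illustration, not product code; the 256 and 4000 volume limits are the ones mentioned in the text, treated here as hard caps purely for illustration:

```python
import math

# Approximate per-host volume limits mentioned in the text
# (assumption: treated as hard caps for this illustration only).
HOST_VOLUME_LIMITS = {"windows": 256, "aix": 4000}

def feasible_request(total_gb: float, min_vol_gb: float, max_vol_gb: float,
                     max_volumes: int, host_type: str) -> bool:
    """Return True if some volume count/size combination can satisfy
    the request within the stated constraints."""
    limit = min(max_volumes, HOST_VOLUME_LIMITS[host_type.lower()])
    # Fewest volumes needed if every volume is as large as allowed.
    fewest = math.ceil(total_gb / max_vol_gb)
    # That many volumes must fit under the limit, and each volume must be
    # at least min_vol_gb when the space is split evenly among them.
    return fewest <= limit and total_gb / fewest >= min_vol_gb
```

For the example in the text, a 3 GB request with a 1-3 GB volume size range is feasible on a Windows host, whereas a 400 GB request capped at ten 3 GB volumes is not.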


Figure 6-74 VPA Choose workload profile window example

6.6.6 Choosing Workload Profile


You can use the Choose workload profile window to select a workload profile from a list of existing profiles. The Volume performance advisor uses the workload profile and other performance information to advise you about where volumes should be created. For our example we have selected the OLTP Standard default profile type.
Workload profiles table - Select a profile from this table to view or modify. The table lists predefined or existing workload profile names and descriptions. Predefined workload profiles are shipped with Performance Manager. Workload profiles that you previously created, if any, are also listed.
Manage profiles button - Click to invoke the Manage workload profile panel.
Profile details button - Click to see details about the selected profile in the Profile details panel, as shown in Figure 6-75 on page 288. Details include the following types of information:
- Total I/O per second per GB
- Random read cache hits
- Sequential and random reads and writes
- Start and end dates
- Duration (days)


Note: You cannot modify the properties of the workload profile from this panel. The panel options are greyed out (inactive). You can make changes to a workload profile from Manage Profile Create like panel.

Next button - Click to invoke the Choose candidate locations window. You can use this panel to select volume locations for the VPA to consider.

Figure 6-75 Properties for OLTP Standard profile

6. After reviewing the properties for the predefined workload profiles, select the workload profile from the table that most closely resembles your workload requirements. For our scenario, we selected the OLTP Standard workload name from the Choose workload profile window. We are going to use this workload profile for the LUN placement recommendations.
Name - Shows the default profile name. The following restrictions apply to the profile name:
- The workload profile name must be between 1 and 64 characters.
- Legal characters are A-Z, a-z, 0-9, "-", "_", ".", and ":".


- The first character cannot be "-" or "_".
- Spaces are not acceptable characters.
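The naming rules above can be captured in a single regular expression. The following Python check is our own sketch of those rules, not the product's actual validation code:

```python
import re

# 1-64 characters; legal characters: A-Z a-z 0-9 - _ . :
# The first character must not be "-" or "_", and spaces are never allowed.
_PROFILE_NAME_RE = re.compile(r"[A-Za-z0-9.:][A-Za-z0-9._:-]{0,63}")

def is_valid_profile_name(name: str) -> bool:
    """True if 'name' satisfies the workload profile naming rules."""
    return bool(_PROFILE_NAME_RE.fullmatch(name))
```

Under this sketch, a name such as "OLTP_Standard" passes, while "_bad", "has space", or a 65-character name is rejected.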

Description - Shows the description of the workload profile.
Total I/O per second per GB - Shows the value for the selected workload profile.
Average transfer size (KB) - Shows the value for the selected workload profile.
Caching information box - Shows the cache hits and destage percentages:
- Random read cache hits - Range from 1 - 100%. The default is 40%.
- Random write destage - Range from 1 - 100%. The default is 33%.
Read/Write information box - Shows the read and write values. The percentages for the four fields must equal 100%:
- Sequential reads - The default is 14%.
- Sequential writes - The default is 23%.
- Random reads - The default is 36%.
- Random writes - The default is 32%.

Peak activity information box - Because we are currently only viewing the properties of an existing profile, the parameters in this box are not selectable, but you may review it as a reference for subsequent usage. When creating a new profile, this box allows you to input the following parameters:
- Use all available performance data radio button. Select this option if you want to include all available performance data previously collected in consideration for this workload profile.
- Use the specified peak activity period radio button. Select this button as an alternate option (instead of Use all available performance data) for consideration in this workload profile definition.
- Time setting drop-down menu. Select the time setting you want to use for this workload profile: Device time, Client time, Server time, or GMT.
- Past days to analyze spinner. Use this (or manually enter the number) to select the number of days of historical information you want to consider for this workload profile analysis.
- Time Range drop-down lists. Select the Start time and End time using the appropriate fields.

Close button - Click to close the panel. You will be returned to the Choose workload profile window.


6.6.7 Choosing candidate locations


Select the name of the profile you want to use from the VPA Choose workload profile window, and the Choose Candidate Locations window will open (see Figure 6-76). We chose our OLTP Standard workload profile for the VPA analysis.

Figure 6-76 Choose candidate locations window

You can use the Choose candidate locations page to select volume locations for the performance advisor to consider. You can choose to either include or exclude the selected locations from the advisor's consideration. The VPA uses historical performance information to advise you about where volumes should be created. The Choose candidate locations page is one of the panels the performance advisor uses to collect and evaluate the information.
Device list - Displays device IDs or names for each ESS on which the task was activated (each ESS on which you dropped the Volume advisor icon).
Component Type tree - When you select a device from the Device list, the selection tree opens on the left side of the panel. The ESS component levels are shown in the tree. The following objects might be included:
- ESS
- Cluster


- Device adapter
- Array
- Disk group

The component level names are followed by information about the capacity and the disk utilization of the component level. For example, we used the System component level. It shows Component ID - 2105-F20-16603, Type - System, Description - 2105-F20-16603-IBM, Available capacity - 311 GB, Utilization - Low (see Figure 6-76 on page 290).
Tip: You can select the different ESS component types and the VPA will reconsider the volume placement advice based on that particular selection. To familiarize yourself with the options, select each component in turn to determine which component-type-centric advice you prefer before proceeding to the next step.
Select a component type from the tree to display a list of the available volumes for that component in the Candidates table (see Figure 6-76 on page 290). We chose System for this example; it represents the entire ESS system in this case. Click the Add button to add the component selected in the Candidates table to the Selected candidates table. See Figure 6-77; it shows the selected candidate as 2105-F20-16603.

Figure 6-77 VPA Chose candidate locations Component Type tree example (system)


6.6.8 Verify settings for VPA


Click the Next button to invoke the Verify Settings window (see Figure 6-78).

Figure 6-78 VPA Verify settings window example

You can use the Verify settings panel to verify the volume settings that you specified in the previous panels of the VPA.

6.6.9 Approve recommendations


After you have successfully completed the Verify Settings step, click the Next button. The Approve Recommendations window opens (see Figure 6-79 on page 293).


Figure 6-79 VPA Recommendations window example

You use the Recommendations window first to view the recommendations from the VPA, and then to create new volumes based on those recommendations. In this example, the VPA recommends the volume location 16603:2:4:1:1700 in the Component ID column. This means the recommended volume location is the ESS with ID 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700. With this information, you can create the volume manually through the ESS Specialist browser interface, or use the VPA to create it. In the Recommendations window of the wizard, you choose whether the recommendations are to be implemented, and whether to loop around for another set of recommendations. At this time, you have two options (other than to cancel the operation): make your final selection and click Finish, or return to the VPA for further recommendations.
a. If you do not want to assign the volumes using the current VPA advice, or want the VPA to make another recommendation, check only the Make Additional Recommendations box.
b. If you want to use the current VPA recommendation and make additional volume assignments at this time, select both the Implement Recommendations and Make Additional Recommendations check boxes. If you choose both options, you must

first wait until the current set of volume recommendations are created, or created and assigned, before continuing. If you make this type of selection, a secondary window appears which runs synchronously within the VPA. Tip: Stay in the same VPA session if you are going to implement volumes and add new volumes. This enables the VPA to provide advice for your current selections, check for previous assignments, and verify that no other VPA session is processing the same volumes.
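The recommendation's Component ID packs the placement into a colon-separated string. As a sketch only (this parser and its field names are our own, not part of the product), the fields from the example above can be unpacked like this:

```python
# Hypothetical parser for a VPA recommendation Component ID such as
# "16603:2:4:1:1700", which the text reads as ESS ID 16603, Cluster 2,
# Device Adapter 4, Array 1, volume ID 1700. The field layout is taken from
# that single example; this is not a product API.
from dataclasses import dataclass

@dataclass
class VolumeLocation:
    ess_id: str
    cluster: int
    device_adapter: int
    array: int
    volume_id: str  # ESS volume IDs such as "1700" are kept as strings

def parse_component_id(component_id: str) -> VolumeLocation:
    ess, cluster, adapter, array, volume = component_id.split(":")
    return VolumeLocation(ess, int(cluster), int(adapter), int(array), volume)

print(parse_component_id("16603:2:4:1:1700"))
```

A helper like this is only useful if you script against exported recommendation data; within the wizard, the VPA creates the volumes for you.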

6.6.10 VPA loopback after Implement Recommendations selected


In the following example, we show the results of a VPA session. 1. In this example, we decided to implement recommendations and also make additional recommendations, so we selected both check boxes (see Figure 6-80).

Figure 6-80 VPA Recommendation selected check box

2. Click the Continue button to proceed with the VPA advice (see Figure 6-80).


Figure 6-81 VPA results - in progress panel

3. In Figure 6-81, we see that the volumes are being created on the server we selected previously. This process takes a little time, so be patient. 4. Figure 6-82 indicates that the volume creation and assignment to the ESS has completed. Momentarily, the VPA loopback sequence will continue.

Figure 6-82 VPA final results

5. After the volume creation step has successfully completed, the Settings window opens again so that you can add more volumes (see Figure 6-83 on page 296).


Figure 6-83 VPA settings default

For the additional recommendations, we decided to use the same server, but we specified the Volume quantity range instead of the Volume size range for the requested space of 2 GB. See Figure 6-84 on page 297.
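The two request styles are related by simple arithmetic: with a size range the advisor derives the quantity, while with a quantity range it derives the per-volume size from the requested space. A hypothetical illustration (not product logic):

```python
# Illustrative only: for a requested space of 2 GB, a quantity range maps each
# allowed volume count to a per-volume size. The product's actual sizing rules
# may differ (e.g. rounding to supported volume sizes).
def sizes_for_quantity_range(requested_gb, min_qty, max_qty):
    """Return a {quantity: per-volume size in GB} mapping."""
    return {qty: requested_gb / qty for qty in range(min_qty, max_qty + 1)}

print(sizes_for_quantity_range(2.0, 1, 4))
```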


Figure 6-84 VPA additional space request

After clicking Next, the Choose Profile panel opens. We selected the same profile as before, OLTP Standard. See Figure 6-85 on page 298.


Figure 6-85 Choose Profile

After clicking Next, the Choose candidate locations panel opens. We selected Cluster from the Component Type drop-down list. See Figure 6-86 on page 299.


Figure 6-86 Choose candidate location

The Component Type Cluster shows a Component ID of 2105-F20-16603:2, a Type of Cluster, a Descriptor of 2, an Available capacity of 308 GB, and a Utilization of Low. This indicates that the VPA plans to provision additional capacity on Cluster 2 of this ESS. After clicking the Add button, Cluster 2 is a selected candidate for the new volume. See Figure 6-87 on page 300.


Figure 6-87 Choose candidate location - select cluster

Upon clicking Next, the Verify settings panel opens as shown in Figure 6-88 on page 301.


Figure 6-88 Verify settings

After verifying settings and clicking Next, the VPA recommendations window opens. See Figure 6-89 on page 302.


Figure 6-89 VPA Recommendations

Because the purpose of this example is to show only the VPA looping, we decided to clear both check boxes, Implement Recommendations and Make additional recommendations. Clicking Finish completed the VPA example (Figure 6-90 on page 303).


Figure 6-90 Finish VPA panel

6.7 Creating and managing Workload Profiles


The VPA bases its volume placement recommendations on the characteristics of the workload profile. VPA decisions will not be accurate if an improper workload profile is chosen, and this may cause future performance issues for the application. You must have a valid and appropriate workload profile created before using the VPA for any application. Therefore, creating and managing workload profiles is an important task, which involves regular upkeep of the workload profile for each application whose disk I/O is served by the ESS. Figure 6-91 on page 304 shows a typical sequence for managing workload profiles.


[Flowchart: Managing Profiles. Determine the I/O workload type of the target application; create an I/O performance data collection task. If there is a close match with a predefined profile, choose the predefined profile or Create like; if there is no match, choose Create profile. Initiate I/O performance data collection covering peak load times and gather sufficient samples; specify the time period of peak activity; validate the analysis results. If the results are not acceptable, re-validate the data collection parameters; when the results are accepted, save the profile.]
Figure 6-91 Typical sequence for managing workload profiles

Before using the VPA for any additional disk space requirement for an application, you will need to:
Determine the typical I/O workload type of that application
Have performance data collected which covers peak load time periods
You will need to determine the broad category the selected I/O workload fits in, for example, whether it is OLTP High, OLTP Standard, Data Warehouse, Batch Sequential, or Document Archival. This is shown as the highlighted box in the diagram. TotalStorage Productivity Center for Disk provides predefined profiles for these workload types, and it allows you to create additional similar profiles by choosing Create like. If you do not find any match with the predefined profiles, you may prefer to create a new profile. While choosing Create like or Create profile, you will also need to specify historical performance data samples covering the peak load activity time period. Optionally, you may specify additional I/O parameters. Upon submitting the Create or Create like profile, the performance analysis will be performed and the results will be displayed. Depending upon the outcome, you may need to re-validate the parameters for the data collection task and ensure that peak load samples are taken correctly. If the results are acceptable, you may save the profile. This profile can then be referenced by the VPA in the future. In 6.7.1, Choosing Workload Profiles on page 304, we cover the step-by-step tasks using an example.
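To make the "broad category" step concrete, the sketch below maps a few observed I/O characteristics onto the profile names used in this chapter. The thresholds are invented for illustration; the product derives profiles from collected performance samples, not from rules like these.

```python
# Hedged sketch: a rough classifier from observed I/O characteristics to the
# predefined profile categories named in the text. All thresholds here are
# invented for illustration only.
def suggest_profile(read_pct: float, sequential_pct: float, avg_xfer_kb: float) -> str:
    # Large, mostly sequential transfers suggest batch or warehouse workloads.
    if sequential_pct >= 60 and avg_xfer_kb >= 64:
        return "Data Warehouse" if read_pct >= 70 else "Batch Sequential"
    # Small, read-heavy, random transfers suggest OLTP.
    if read_pct >= 50 and avg_xfer_kb <= 16:
        return "OLTP High" if sequential_pct < 10 else "OLTP Standard"
    return "Document Archival"

print(suggest_profile(read_pct=75, sequential_pct=5, avg_xfer_kb=8))
```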

6.7.1 Choosing Workload Profiles


You can use Performance Manager to select a predefined workload profile or to create a new workload profile that is based on historical performance data or on an existing workload profile.

Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server. You can also use a set of Performance Manager panels to create and manage the workload profiles. There are three methods to choose a workload profile, as shown in Figure 6-92.

Figure 6-92 Choosing workload profiles

Note: Using a predefined profile does not require pre-existing performance data, but the other two methods require historical performance data from the target storage device. You can launch the workload profiles management tool using the drag and drop method from the IBM Director console GUI. Drag the Manage Workload Profile task to the target storage device as shown in Figure 6-93.

Figure 6-93 Launch Manage Workload Profile

If you are using the Manage Workload Profile or VPA tool for the first time on the selected ESS device, you will need to complete the ESS user validation. This is described in detail in 6.6.4, ESS User Validation on page 280. The ESS user validation is the same for the VPA and Manage Workload Profile tools. After successful ESS user validation, the Manage Workload Profile panel opens as shown in Figure 6-94.

Figure 6-94 Manage workload profiles

You can create or manage workload profiles using the following three methods: 1. Selecting a predefined workload profile Several predefined workload profiles are shipped with Performance Manager. You can use the Choose workload profile panel to select the predefined workload profile that most closely matches your storage allocation needs. The default profiles shipped with Performance Manager are shown in Figure 6-95.

Figure 6-95 Default workload profiles

You can open the properties panel of the respective predefined profile to verify the profile details. A sample profile for OLTP Standard is shown in Figure 6-75 on page 288. 2. Creating a workload profile similar to another profile You can use the Create like panel to modify the details of a selected workload profile. You can then save the changes and assign a new name to create a new workload profile from the existing profile. To create a like profile, the following tasks are involved: a. Create a performance data collection task for the target storage device - You may need to include multiple storage devices, based on your profile requirements for the application.


b. Schedule the data collection task - You need to ensure the data collection task runs over a sufficient period of time which truly represents the typical I/O load of the respective application. The key is to have sufficient historical data. Tip: The best practice is to schedule the frequency of the performance data collection task in such a way that it covers peak load periods of I/O activity and gathers at least a few samples of peak loads. The number of samples depends on the I/O characteristics of the application.

c. Determine the closest workload profile match - Determine how the new workload profile compares with existing or predefined profiles. Note that it may not be an exact fit, but it should be of a somewhat similar type. d. Create the new similar profile - Using the Manage Workload Profile task, create the new profile. You will need to select the appropriate time period for the historical data which you collected earlier. In our example, we created a similar profile using the Batch Sequential predefined profile. First, we select the Batch Sequential profile and click the Create like button as shown in Figure 6-96.

Figure 6-96 Manage workload profile - create like

It opens the properties panel for Batch Sequential as shown in Figure 6-97 on page 308.


Figure 6-97 Properties for Batch sequential profile

We changed the following values for our new profile: a. Name: ITSO_Batch_Daily b. Description: For ITSO batch applications c. Average transfer size: 20 KB d. Sequential reads: 65% e. Random reads: 10% f. Peak Activity information: We used a time period of the past 24 days, from 12 AM to 11 PM. We saved our new profile (see Figure 6-98 on page 309).
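Conceptually, Create like copies the predefined profile and overrides the fields you change. A sketch with our values (the base Batch Sequential numbers below are placeholders, and these dictionary keys are not a product API):

```python
# Hypothetical base profile; the predefined Batch Sequential values shown here
# are placeholders, not the product's actual defaults.
base = {
    "name": "Batch Sequential",
    "avg_transfer_kb": 64,
    "sequential_reads_pct": 80,
    "random_reads_pct": 5,
}

# "Create like": copy the base profile, then override the fields we changed.
itso_profile = {**base,
                "name": "ITSO_Batch_Daily",
                "description": "For ITSO batch applications",
                "avg_transfer_kb": 20,
                "sequential_reads_pct": 65,
                "random_reads_pct": 10}

print(itso_profile["name"], itso_profile["avg_transfer_kb"])
```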


Figure 6-98 New Profile

This new profile, ITSO_Batch_Daily, is now available in the Manage workload profile panel as shown in Figure 6-99 on page 310. This profile can now be used for VPA analysis. This completes our example.


Figure 6-99 Manage profile panel with new profile

3. Creating a new workload profile from historical data You can use the Manage workload profile panel to create a workload profile based on historical data about existing volumes. You can select one or more volumes as the base for the new workload profile. You can then assign a name to the workload profile, optionally provide a description, and finally create the new profile. To create a new workload profile, click the Create button as shown in Figure 6-100.

Figure 6-100 Create a new workload profile

This launches a new panel for creating a workload profile, as shown in Figure 6-101 on page 311. At this stage, you will need to specify the volumes for performance data analysis. In our example, we selected all volumes. To select multiple volumes, but not all, click the first volume, hold the Shift key, and click the last volume in the list. After all the required volumes are selected (shown as dark blue), click the Add button. See Figure 6-101 on page 311.

Note: The ESS volumes you specify should be representative of the I/O behavior of the application for which you are planning to allocate space using the VPA tool.

Figure 6-101 Create new profile and add volumes

Upon clicking the Add button, all the selected volumes are moved to the Selected volumes box as shown in Figure 6-102 on page 312.


Figure 6-102 Selected volumes and performance period for new workload profile

In the Peak activity information box, you will need to specify the activity sample period for the volume performance analysis. You can select the option Use all available performance data or select Use the specified peak activity period. Based on your application's peak I/O behavior, you may specify the sample period with a start date, duration in days, and start/end time. For the time setting, you can choose from the drop-down box: Device time, Client time, Server time, or GMT.
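The fields on this panel define a daily window repeated over the sample period; a sketch of that computation follows. Reducing the time-basis choice (Device/Client/Server/GMT) to a fixed UTC offset is our assumption for this sketch, not the product's behavior.

```python
# Illustrative computation of the peak-activity sample windows from the panel
# fields (start date, duration in days, start/end hour).
from datetime import date, datetime, timedelta, timezone

def peak_windows(start_date, duration_days, start_hour, end_hour, utc_offset_hours=0):
    """Yield one (start, end) datetime pair per day of the sample period."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    for day in range(duration_days):
        midnight = datetime.combine(start_date + timedelta(days=day),
                                    datetime.min.time(), tz)
        yield (midnight + timedelta(hours=start_hour),
               midnight + timedelta(hours=end_hour))

for start, end in peak_windows(date(2005, 9, 1), 3, 8, 17):
    print(start.isoformat(), "->", end.isoformat())
```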

After you have entered all the fields, click Next. You will see the Review panel as shown in Figure 6-103 on page 313.


Figure 6-103 Review new workload profile parameters

You can specify a name and description for the new workload profile. It is advisable to provide a detailed description which covers:
What is the application name for which the profile is being created?
What application I/O activity does the peak activity sample represent?
When was it created?
Optionally, who created it?
Any other relevant information per your organization's requirements

In our example, we created a profile named New_ITSO_app1_profile. At this point you may click Finish. TotalStorage Productivity Center for Disk then begins the volume performance analysis based on the parameters you have provided. This process may take some time depending upon the number of volumes and the sampling time period, so be patient. Finally, it shows the outcome of the analysis. In our example, we got the results notification message shown in Figure 6-104 on page 314. The analysis yielded results that are not statistically significant, as indicated by the message BWN005965E: Analysis results are not significant. This may indicate that:

a. There is not enough I/O activity on the selected volumes b. The time period chosen for sampling is not correct c. The correct volumes were not chosen You have the option to Save or Discard the profile. We decided to save the profile.

Figure 6-104 Results for Create Profile
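The causes listed for message BWN005965E suggest a simple pre-check you can run on collected samples before saving a profile. This is a hedged sketch with invented thresholds, not the product's significance test:

```python
# Hedged sketch of a pre-check mirroring the likely causes of BWN005965E:
# too few samples, or too little I/O activity on the chosen volumes in the
# chosen window. Thresholds are invented for illustration.
def analysis_looks_significant(samples, min_samples=10, min_avg_iops=1.0):
    """samples: per-interval I/O rates for the chosen volumes and window."""
    if len(samples) < min_samples:
        return False, "not enough samples in the chosen time period"
    avg = sum(samples) / len(samples)
    if avg < min_avg_iops:
        return False, "not enough I/O activity on the selected volumes"
    return True, "ok"

print(analysis_looks_significant([0.1] * 20))
```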

Upon saving the profile, it is now listed in the Manage workload profile panel as shown in Figure 6-105.

Figure 6-105 Manage workload profile with new saved profile

The new profile can now be referenced by VPA for future usage.

6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager
It is possible to install a TotalStorage Productivity Center for Disk console on a server other than the one on which the TotalStorage Productivity Center for Disk code is installed. This allows you to manage TotalStorage Productivity Center for Disk from a secondary location. Having a secondary TotalStorage Productivity Center for Disk console also offloads workload from the TotalStorage Productivity Center for Disk server.


Note: You are only installing the IBM Director and TotalStorage Productivity Center for Disk console code. You do not need to install any other code for the remote console.

In our lab we installed the remote console on a dedicated Windows 2000 server with 2 GB
RAM. You must install all the consoles and clients on the same server. The steps are:

1. Install the IBM Director console. 2. Install the TotalStorage Productivity Center for Disk console. 3. Install the Performance Manager client if the Performance Manager component is installed.

6.8.1 Installing IBM Director Console


You use the IBM Director product CD to install the IBM Director console. When you get to the IBM Director Installation window as shown in Figure 6-106 choose Install IBM Director.

Figure 6-106 IBM Director installation

Next, you will see a panel similar to Figure 6-107 on page 316.


Figure 6-107 IBM Director Console Installation

Choose IBM Director Console Installation. You will see a panel similar to Figure 6-108. Click Next, then accept the terms of the License Agreement as shown in Figure 6-109 on page 317.

Figure 6-108 IBM Director Console InstallShield Wizard


Figure 6-109 License agreement

Click Next; you will see a panel similar to Figure 6-110. Click Next to choose the default program features and program file location as shown in Figure 6-111 on page 318.

Figure 6-110 Server Plus pack information panel


Figure 6-111 Program file location

Click Install as shown in Figure 6-112.

Figure 6-112 Ready to Install Program

The installation is completed as shown in Figure 6-113 on page 319. Now you can proceed with the base remote console installation of TotalStorage Productivity Center for Disk.


Figure 6-113 Install completed

6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console
After installing the IBM Director console, you will need to install the TotalStorage Productivity Center for Disk common base package. Insert the CD-ROM which contains the package, or choose the directory if you have downloaded the code. We show a window of our download directory in Figure 6-114 on page 320. Click Setup.exe to begin the install process.


Figure 6-114 Install directory location for our lab setup

Next, you will see a panel similar to Figure 6-115. Click Next to install the TotalStorage Productivity Center for Disk base package.

Figure 6-115


Next, you will see the Software License Agreement window as shown in Figure 6-116. Select to accept the terms of the license agreement and click Next.

Figure 6-116 Accept License agreement terms

Next, choose default destination directory as shown in Figure 6-117 and click Next.

Figure 6-117 Choose default destination directory

You will see a panel similar to Figure 6-118 on page 322. Select Install a Console and click Next.

Figure 6-118 Install TotalStorage Productivity Center for Disk Console

Next, you will see a preview panel as shown in Figure 6-119.

Figure 6-119 Install preview panel

Next you will see panel similar to Figure 6-120 on page 323. Click Finish to complete the installation process.


Figure 6-120

This completes the common base installation of TotalStorage Productivity Center for Disk console. Next, you will need to install the console for the Performance Manager function of TotalStorage Productivity Center for Disk.

6.8.3 Installing Remote Console for Performance Manager function


After installing the IBM Director console and the TotalStorage Productivity Center for Disk base console, you will need to install the remote console for the Performance Manager function. Insert the CD-ROM which contains the code for TotalStorage Productivity Center for Disk and click setup.exe. In our example, we used the downloaded code as shown in Figure 6-121 on page 324.


Figure 6-121 Window of our lab download directory location

Next, you will see Welcome panel similar to Figure 6-122 on page 325. Click Next.


Figure 6-122 Welcome panel from TotalStorage Productivity Center for Disk installer

Next, select to accept the terms of the license agreement and click Next.

Figure 6-123 Accept the terms of license agreement

Next, choose default destination directory as shown in Figure 6-124 and click Next.

Figure 6-124 Choose default destination directory

Next, choose to install Productivity Center for Disk Client and click Next as shown in Figure 6-125 on page 327.


Figure 6-125 Select Product Type

Next, select both check boxes if you want to install both the console and the command line client for the Performance Manager function. See Figure 6-126. Click Next.

Figure 6-126 TotalStorage Productivity Center for Disk features selection

Next, click Finish to complete the install process.



Figure 6-127 TotalStorage Productivity Center for Disk finish panel

6.8.4 Launching Remote Console for TotalStorage Productivity Center


Now you can launch the remote console from the TotalStorage Productivity Center desktop icon on the remote machine. You will see a window similar to Figure 6-128.

Figure 6-128 TotalStorage Productivity Center launch window

Click Manage Disk Performance and Replication, as highlighted in the figure. This launches the IBM Director remote console. You may log on to the Director server and start using the remote console functions, except for Replication Manager.


Note: At this point, you have installed the remote console for the Performance Manager function only, not for Replication Manager. You may install the remote console for Replication Manager if you require it.


Chapter 7.

TotalStorage Productivity Center for Fabric use


This chapter provides information about how to install the TotalStorage Productivity Center for Fabric remote console. It also describes how it interacts with TotalStorage Productivity Center for Disk to create SAN zones when storage LUNs are added. A basic knowledge of Storage Area Network (SAN) zoning concepts is necessary to understand this chapter. Topics covered:
TotalStorage Productivity Center for Fabric overview
Zoning overview
Supported switches for zoning
Enabling zone control with TotalStorage Productivity Center for Disk
Installing eFix
Installing TotalStorage Productivity Center for Fabric remote console
Zone control integration with TotalStorage Productivity Center for Disk

Copyright IBM Corp. 2004, 2005. All rights reserved.


7.1 TotalStorage Productivity Center for Fabric overview


TotalStorage Productivity Center for Fabric (formerly known as IBM Tivoli SAN Manager (TSANM)) is a SAN fabric management tool focused on discovering and monitoring the health of the SAN islands that exist within an organization. A SAN island is a group of SAN switches, storage devices, and hosts that are connected together to form one network. Multiple SAN switches that are not connected to each other are considered separate islands. TotalStorage Productivity Center for Fabric uses a combination of agents installed on SAN-connected servers and SNMP calls directly to SAN switch hardware to discover and monitor SAN health and operations.

TotalStorage Productivity Center for Fabric also has the ability to view, change, and create SAN fabric zones on supported switches. Zoning functions are available through a GUI for direct manipulation of fabric zoning, or as API functions for external software to call.

TotalStorage Productivity Center for Disk integrates with TotalStorage Productivity Center for Fabric in two ways. TotalStorage Productivity Center for Fabric can be launched from the TotalStorage Productivity Center for Disk Director console by using a task option. TotalStorage Productivity Center for Disk also integrates with the TotalStorage Productivity Center for Fabric zoning API to enhance the functions of storage volume management. In practice this means that when you use TotalStorage Productivity Center for Disk to create or change disk volumes on a supported subsystem, TotalStorage Productivity Center for Fabric is called through the API to check for existing valid zones, or to help you make the necessary zoning changes to allow the selected host port(s) to access the new disk. This can speed up the end-to-end time needed to present new storage to a host, and reduces the opportunity for zoning errors, as the administrator is walked through the necessary steps in a controlled way. 
Note: TotalStorage Productivity Center for Disk V2.1 does not make cleanup zoning adjustments when storage LUNs are removed or unassigned from a host. This function is expected in a future release. IBM Tivoli SAN Manager (TSANM) is not compatible with TotalStorage Productivity Center for Disk for performing zoning functions; TSANM must be upgraded to TotalStorage Productivity Center for Fabric.

7.1.1 Zoning overview


There are two common types of fabric zoning in use today. They are known by various names, but are often called hard zoning and soft zoning. Hard zoning, also known as port zoning, is a method by which an administrator chooses which physical switch ports can talk to each other, creating a hardware zone. Soft zoning, also known as World Wide Name (WWN) zoning, is a method that logically associates switch-connected devices together irrespective of the physical port they are connected to or moved to. Attention: The TotalStorage Productivity Center for Fabric zone control function supports the soft zoning method only. If you plan to use hard zoning in your SAN implementation, you will not be able to make use of this function.
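Soft zoning's membership test can be pictured with a minimal data model: a zone is a named set of WWNs, and two ports can communicate if some zone in the active configuration contains both. This is a concept sketch with made-up WWNs and zone names, not the Fabric product's API:

```python
# Concept sketch of soft (WWN) zoning. The zone name and WWN values are
# invented examples; membership is by WWN, not by physical switch port.
zones = {
    "host1_ess": {
        "10:00:00:00:c9:2b:aa:01",  # example host HBA WWN
        "50:05:07:63:00:c0:11:22",  # example ESS port WWN
    },
}

def can_communicate(wwn_a, wwn_b):
    """True if any active zone contains both WWNs."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())

print(can_communicate("10:00:00:00:c9:2b:aa:01", "50:05:07:63:00:c0:11:22"))
```

Because the test is on WWNs, recabling a host HBA to a different switch port does not change the result, which is exactly why soft zoning tolerates port moves.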


7.1.2 Supported switches for zoning


For TotalStorage Productivity Center for Fabric to be able to perform zone changes to a SAN fabric, the switch hardware needs to be supported. The following link takes you to the latest device compatibility page for devices supported by TotalStorage Productivity Center for Fabric:
http://www-306.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

Figure 7-1 TotalStorage Productivity Center for Fabric device compatibility Web page

Either scroll down this page to the Switches section or jump directly to it, as shown in Figure 7-1. If the switch you plan to use is not listed, or is not a re-branded equivalent of one that is listed, then it is not supported by TotalStorage Productivity Center for Fabric. Important: Do not assume that a switch will support zoning changes just because it appears on the device compatibility Web page. Not all switches support all functions that TotalStorage Productivity Center for Fabric can provide, so it is important to look at the specific details of the device you plan to use. Locate the switch you plan to use and click it to view the detailed breakdown of its functional support. Figure 7-2 on page 335 shows a device support details page. If you see a Yes in Zone Control Supported (either In-band or Out-of-band), then TotalStorage Productivity Center for Fabric will be able to work with TotalStorage Productivity Center for Disk to perform zone administration at LUN allocation and assignment time.


There are two methods by which TotalStorage Productivity Center for Fabric can communicate with a switch to effect zone changes. The method used is determined by the switch vendor. In-band method: The switch accepts zone change control information through instructions sent to it over the Fibre Channel network (in-band). Out-of-band method: The switch accepts zone change information through instructions sent to its IP network interface, known as out-of-band. Important: For switches that use the in-band zone control method, you will need to deploy at least one TotalStorage Productivity Center for Fabric agent on a server that is fibre-connected to the SAN fabric you want to control. For switches that use the out-of-band zone control method, the TotalStorage Productivity Center for Fabric manager machine talks directly to the switch over TCP/IP and no agents are required to implement this function. The out-of-band method requires SNMP network access between the TotalStorage Productivity Center for Fabric manager machine and the SAN switch. It may be necessary for an organization to make firewall or network changes to allow this to take place. Out-of-band is simpler than in-band to set up because it does not require an agent on a SAN-connected host. If you establish that an in-band agent will be required to perform zoning changes with your switch, and you currently don't have in-band agents, there are a number of things to consider. A TotalStorage Productivity Center for Fabric agent needs more IP ports than SNMP to communicate with the Fabric server, and requires CPU, RAM, and disk resources on the SAN-connected server. To learn how to deploy TotalStorage Productivity Center for Fabric agents, refer to the redbook IBM TotalStorage Productivity Center - Getting Started, SG24-6490.


Figure 7-2 Device support Web page for IBM 2109-F08 and 2109-F16 switch

7.1.3 Deployment
TotalStorage Productivity Center for Fabric can run as a stand-alone fabric manager or integrate with TotalStorage Productivity Center for Disk. It runs on the same common infrastructure of DB2 and WebSphere as TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You can choose to install TotalStorage Productivity Center for Fabric on the same server as your TotalStorage Productivity Center for Disk installation or on a separate one. A key consideration for this choice is the amount of RAM installed in the system hosting the services.


Tip: To install TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, and TotalStorage Productivity Center for Fabric on a single server machine you will need at least 2.5 GB of RAM. Consider installing 4 GB.

If TotalStorage Productivity Center for Fabric is installed on a separate machine from your TotalStorage Productivity Center for Disk system, you will need to install the TotalStorage Productivity Center for Fabric remote console on the TotalStorage Productivity Center for Disk system so that it can be launched from the Director console and used on the same machine.

Note: The remote console is not needed for the zoning API function to operate. Once configured, the API calls are made from TotalStorage Productivity Center for Disk directly to the TotalStorage Productivity Center for Fabric manager machine. It is therefore not necessary to install the TotalStorage Productivity Center for Fabric remote console if you do not plan to use it. Installing the remote console on the same machine allows fabric and disk management from a single console.

7.1.4 Enabling zone control


To configure TotalStorage Productivity Center for Disk to use the TotalStorage Productivity Center for Fabric zone control functions, add the location details of the Fabric manager machine in the TotalStorage Productivity Center for Disk configuration panel. Note that in Version 2.1 this panel (Figure 7-4 on page 338) still carries the old product names: Multiple Device Manager is now TotalStorage Productivity Center for Disk, and Tivoli SAN Manager is now TotalStorage Productivity Center for Fabric. Launch the configuration panel by selecting Configure MDM as shown in Figure 7-3 on page 337.


Figure 7-3 Launching MDM configuration

Fill in the following entries to enable communications with TotalStorage Productivity Center for Fabric:

TSANM host - Enter the IP address or host name of the server where the TotalStorage Productivity Center for Fabric manager is installed.

Note: If TotalStorage Productivity Center for Fabric is installed on the same machine as TotalStorage Productivity Center for Disk, you still need to enter its host name or IP address in the box.

TSANM Port - The port number that TotalStorage Productivity Center for Fabric is using for communications. The default is 9550. It is only necessary to change this value if the default port was changed when the TotalStorage Productivity Center for Fabric manager was installed.

TSANM password - Enter the password used to communicate with TotalStorage Productivity Center for Fabric. This password was set when TotalStorage Productivity Center for Fabric was installed.
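As an illustration, the three fields might be filled in as follows. The host name and password here are invented placeholders, not values from any real environment:

```text
TSANM host:     fabricmgr.example.com   (or an IP address such as 10.1.1.20)
TSANM Port:     9550                    (default; change only if altered at install)
TSANM password: ********                (set when the Fabric manager was installed)
```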


Figure 7-4 Configure TotalStorage Productivity Center for Fabric communications

7.1.5 TotalStorage Productivity Center for Disk eFix


Before you attempt to integrate the zoning functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Fabric, you must install an eFix.

Important: At the time of writing, you must install this eFix for zoning integration with TotalStorage Productivity Center for Fabric to function. This is not optional; the integration will not function at all without it.

To obtain this eFix, contact your support center. The eFix is held in the level two support database as Apar#C44702 (defect#14925). This eFix cannot be downloaded from an Internet site.

7.1.6 Installing the eFix


Once you have obtained the eFix package, use these instructions to install it on the TotalStorage Productivity Center for Disk server. The instructions apply to the system running TPC, specifically TPC for Disk and Replication Base Server 2.1.0. TPC for Disk and Replication must be fully installed before applying this fix.

Extract the eFix zip file (for Windows) or tar file (for Linux), containing one ear file and four TWGExt files, to a directory accessible by the TPC system (<efix directory>).

If it is not already running, start the IBM Director Server. On Windows, start the IBM Director Support Program service or run the command net start twgipc. On Linux, run the command twgstart. Since twgstart returns asynchronously, wait until twgstat returns Active status. WebSphere should be running as a result of starting IBM Director Server.

Launch the WebSphere Administrative Console, either by choosing Start → Programs → IBM WebSphere Application Server v5.1 → Administrative Console (Windows only), or by entering the following Internet address into your browser:

http://localhost:9090/admin

Note to Linux users: If you have difficulty using the Administrative Console on Linux, try the Netscape Communicator 7.1 browser, which is based on Mozilla 1.0. This browser release is not officially supported by the WebSphere Application Server product, but users have been able to access the console successfully with it. Alternatively, run the WebSphere Administrative Console remotely from Internet Explorer on a Windows system.

Log in to the WebSphere Administrative Console using the WebSphere username and password.

In the WebSphere Administrative Console, expand the Applications menu and choose the Enterprise Applications link.

In the Enterprise Applications table, select the checkbox for only the DMCoserver application and click the Update button, opening the Preparing for the application update panel.

Enter the full path name of the ear file, "<efix directory>\DMCoserver.ear", into the Path text field and click Next.

Note: If you click the Cancel button, you will not be able to complete the install.

On the second panel of "Preparing for the application update", accept the defaults and click Next (you may have to scroll the wizard panel down to see the Next button).

For each of the "Install New Application" wizard panels, Steps 1 through 13, accept the defaults and click Next (again, you may have to scroll the wizard panel down to see the Next button). In Step 14, accept the defaults and click Finish. The "Installing" panel opens. Once the install is complete, the panel displays output similar to the following:

If there are EJB's in the application, the EJB Deploy process may take several minutes. Please do not save the configuration until the process is complete. Check the SystemOut.log on the Deployment Manager or Server where the application is deployed for specific information about the EJB Deploy process as it occurs.
ADMA5106I: Application DMCoserver uninstalled successfully.
ADMA5016I: Installation of DMCoserver started.
ADMA5005I: Application DMCoserver configured in WebSphere repository
....
....
ADMA5013I: Application DMCoserver installed successfully.
Application DMCoserver installed successfully.
If you want to start the application, you must first save changes to the master configuration. Save to Master Configuration
If you want to work with installed applications, then click Manage Applications. Manage Applications

Choose the Save to Master Configuration link, opening the Save panel.


In the Save panel, click the Save button. When the save operation is complete (the Web browser logo in the top right corner stops moving), the home page of the Administrative Console opens. Click Logout to log out of the Administrative Console.

Close the IBM Director Console if it is running, then stop the IBM Director Server. On Windows, stop the IBM Director Support Program service or run the command net stop twgipc. On Linux, run the command twgstop. This also stops WebSphere.

Copy the four TWGExt files into the folder <IBM Director root>\classes\extensions (for example, on Windows, C:\Program Files\IBM\Director\classes\extensions or, on Linux, /opt/IBM/director/classes/extensions). Confirm that the existing files should be overwritten.

Start the IBM Director Server. On Windows, start the IBM Director Support Program service or run the command net start twgipc. On Linux, run the command twgstart. Since twgstart returns asynchronously, wait until twgstat returns Active status.

Start the IBM Director Console. The eFix has now been applied.
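The Linux portion of the stop/copy/start sequence in this procedure can be sketched as a small script. This is an illustrative sketch only, not part of the eFix package: the directory paths and the polling timeout are assumptions to adjust for your installation.

```shell
#!/bin/sh
# Sketch (Linux): stop IBM Director, copy the eFix TWGExt files over the
# installed ones, restart Director, and wait for Active status.

wait_for_active() {
    # twgstart returns asynchronously, so poll twgstat (up to $1 tries)
    tries=$1
    while [ "$tries" -gt 0 ]; do
        if twgstat 2>/dev/null | grep -q Active; then
            echo Active
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    echo "Timed out waiting for Active"
    return 1
}

apply_efix_files() {
    efix_dir=$1
    ext_dir=$2
    twgstop                                 # stop IBM Director (also stops WebSphere)
    cp -f "$efix_dir"/TWGExt* "$ext_dir"/   # overwrite the four TWGExt files
    twgstart                                # restart IBM Director
    wait_for_active 120
}

# Example: apply_efix_files /tmp/efix /opt/IBM/director/classes/extensions
```

Remember that the ear file update through the WebSphere Administrative Console must be completed first; this script only covers the file-copy and restart steps.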

7.2 Installing Fabric remote console


This section walks through installing the TotalStorage Productivity Center for Fabric remote console on the server where TotalStorage Productivity Center for Disk is installed. This allows you to launch and use the Fabric functions when the TotalStorage Productivity Center for Fabric server is installed on another machine. This install was executed from an install image extracted onto a network shared drive.

Important: When installing any of the TotalStorage Productivity Center software from a disk image, be sure to use short, simple path names that do not contain spaces. Notice in Figure 7-5 that the image is contained in a single-level directory.

The console directory forms part of the extracted image. To launch the remote console install, navigate to the console directory directly under the directory used to extract the software image, as shown in Figure 7-5.

Figure 7-5 Launch TotalStorage Productivity Center for Fabric remote console install


The InstallShield Wizard panel will display for a few moments while the installer loads (see Figure 7-6).

Figure 7-6 InstallShield Wizard

Select the install language as in Figure 7-7 and click OK.

Figure 7-7 Select install language

Figure 7-8 confirms that you are installing the TotalStorage Productivity Center for Fabric Console V2.1.0. Click Next.

Figure 7-8 Welcome panel

If you accept the license terms, select the radio button and click Next, as shown in Figure 7-9 on page 342.


Figure 7-9 License agreement

Select the preferred install directory or accept the default as in Figure 7-10 and click Next.

Figure 7-10 Fabric install directory

Enter the fully qualified Host Name or IP address of the server where TotalStorage Productivity Center for Fabric server is installed as seen in Figure 7-11 on page 343. The Port Number will default to 9550. Only change this if the default was changed on the TotalStorage Productivity Center for Fabric server at install time.


Figure 7-11 TotalStorage Productivity Center for Fabric host name

Figure 7-12 shows that the default base Port Number is 9560. TotalStorage Productivity Center for Fabric console requires 25 additional ports starting from this number. We recommend you do not change this value.

Figure 7-12 Enter base port number
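Before installing, you may want to confirm that nothing else is already listening in the console's port range. The sketch below simply derives the candidate port numbers; the count of 25 ports starting at the base follows the panel text above, and the `netstat` conflict check in the comment is an assumption about your platform's tooling rather than a documented procedure.

```shell
#!/bin/sh
# Derive the port range the Fabric remote console is expected to use:
# 25 ports starting at the base port (per the install panel text).
BASE=${BASE:-9560}
COUNT=25

ports_in_range() {
    i=0
    while [ "$i" -lt "$COUNT" ]; do
        echo $((BASE + i))
        i=$((i + 1))
    done
}

# Example conflict check on Linux (assumes netstat is available):
#   for p in $(ports_in_range); do
#       netstat -lnt | grep -q ":$p " && echo "port $p already in use"
#   done
ports_in_range
```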

Enter the password that the remote console will use to authenticate with the TotalStorage Productivity Center for Fabric manager server (Figure 7-13 on page 344). This password would have been set when the TotalStorage Productivity Center for Fabric manager server was installed and it cannot be changed in this panel. Click Next to continue.


Figure 7-13 Host authentication password

Figure 7-14 shows the next panel, where you select the drive that will be used to install NetView. TotalStorage Productivity Center for Fabric uses NetView as its primary interface, and NetView is installed in the background as part of this install process. Choose a local drive only.

Figure 7-14 NetView installation drive

Enter a password for NetView as shown in Figure 7-15 on page 345. The installer will create a NetView user for the NetView service to run under. This is a new password and does not need to match any others previously entered.


Figure 7-15 NetView password

The panel shown in Figure 7-16 confirms the location for the TotalStorage Productivity Center for Fabric code install and the disk space required. Click Next to start the installation process.

Figure 7-16 Install confirmation

Figure 7-17 on page 346 shows the installation progress. This can take around 15 minutes to complete.

Attention: A reboot is required to complete the installation process.


Figure 7-17 Installation progress panel

7.3 TotalStorage Productivity Center for Disk integration


This section describes how zoning configuration is performed when a new LUN is created and assigned to a host using TotalStorage Productivity Center for Disk. The configuration and eFix steps must be completed before this feature becomes available. See 7.1.4, Enabling zone control on page 336 and 7.1.6, Installing the eFix on page 338 for details. The following example shows zone control working in conjunction with LUN assignment on a DS4000 (FAStT) disk subsystem; however, the zoning panels are the same for all other supported disk subsystems. From the TotalStorage Productivity Center for Disk console (see Figure 7-18 on page 347), right-click the device and select Volumes to invoke the volume management panel for this device.


Figure 7-18 Launch volume management for disk subsystem

Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. Refer to Performing volume inventory on page 194 for details.

Figure 7-19 on page 348 shows the Volumes management panel for all current assignments. Select the Create button to start the volume creation and assignment process for a new LUN.


Figure 7-19 Volume management panel

Specify the volume characteristics as seen in Figure 7-20. From the list of Defined host ports select the host(s) that will use this LUN. You can select multiple hosts using the <Ctrl> key. For more detailed information about using the panel refer to 5.7.3, Creating DS4000 or FAStT volumes on page 212.

Figure 7-20 Create volume panel

Important: Only hosts previously defined to the DS4000 (FAStT) subsystem will be visible in the Create volume panel. Use the IBM DS4000 Storage Manager to define host World Wide Names (WWNs) and host names before starting this process. You will need to run Perform Inventory Collection for TotalStorage Productivity Center for Disk to recognize new WWNs created in the IBM DS4000 Storage Manager. See 5.4, Performing volume inventory on page 194 for more details.


Figure 7-21 appears for a few seconds while TotalStorage Productivity Center for Disk communicates with TotalStorage Productivity Center for Fabric to retrieve current zone information from the SAN. The appearance of this panel indicates that the TotalStorage Productivity Center for Fabric configuration and the eFix installation have been successful.

Figure 7-21 Loading zone panel

Attention: If you see the message panel as seen in Figure 7-22 the TotalStorage Productivity Center for Fabric server is communicating but not managing the SAN in which the host and/or disk subsystem reside. You can continue to perform the LUN creation process but the zoning action will not take place. Click OK if you want to continue creating the LUN or Cancel if you want to stop the process.

Figure 7-22 Unmanaged SAN warning

Now specify the zone properties you want to create, as seen in Figure 7-23 on page 350. This panel shows:

Active ZoneSet - The ZoneSet that is currently running on the switch fabric you are working with. This value is for information only and cannot be changed on this panel.

ZoneSets to verify Zoning - The ZoneSet that TotalStorage Productivity Center for Disk will check against to see if a valid zone for this host/disk combination exists. Select the ZoneSet you want to work with using the drop-down arrow. If this SAN fabric has only one ZoneSet defined, it appears here by default.

Host ports - Lists the WWN of the host port(s) selected to be assigned to the new LUN. You cannot change this value on the panel. If it is not showing the intended WWN, click Cancel to return to the Create volume panel and reselect the WWN (Figure 7-20 on page 348).

Storage device ports - Lists the WWN ports that the disk subsystem is presenting to the SAN. Select the ports to be zoned to the host; you can select multiple ports using the <Ctrl> key.

Click OK to continue. TotalStorage Productivity Center for Disk now checks for an existing zone in the specified ZoneSet that meets the requirement.


Figure 7-23 Specify zone properties

If a valid zone already exists, no additional zoning changes are needed and you will see an information panel as in Figure 7-24. Although it might look like one at first glance, this is not an error message. You will see it if you have existing LUNs from the selected disk subsystem already defined to this host. Click OK to continue.

Figure 7-24 Zone already exists

If a new host zone needs to be created for the host/disk combination, the Create a new zone for the selected volume panel appears (Figure 7-25 on page 351). The majority of this panel confirms the zoning action that is about to be executed. Provide the following information:

Zone name - Enter the name that you want to assign to this zone.

Zone set actions - Choose whether to make this zoning change effective immediately or to only update the ZoneSet for future activation.


Figure 7-25 Create a new zone for the selected volume

Click OK to continue. The Creating zone panel (Figure 7-26) will display while the action takes place.

Figure 7-26 Creating zone

A panel will appear to show the zone has been created as in Figure 7-27. Click OK to continue and perform volume creation.

Figure 7-27 Zone created successfully

Figure 7-28 on page 352 will appear and show the volume creation results as they happen.


Figure 7-28 Volume creation results

The final panel to appear will be the results of the zone creation as in Figure 7-29.

Figure 7-29 Create zone results

The zoning and LUN creation task is now complete.
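For reference, the zone that results from this procedure is a conventional WWN zone grouping the selected host and storage ports under the name you supplied. A hypothetical example is shown below; the zone name and all WWNs are invented for illustration:

```text
zone: TPC_host1_DS4000
    member: 10:00:00:00:c9:2b:aa:01    (host HBA port WWN)
    member: 20:04:00:a0:b8:0c:dd:01    (storage device port WWN)
    member: 20:05:00:a0:b8:0c:dd:02    (storage device port WWN)
```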

7.4 Launching TotalStorage Productivity Center for Fabric


TotalStorage Productivity Center for Fabric can be launched from the TotalStorage Productivity Center for Disk Director console by selecting Launch Tivoli SAN Manager as seen in Figure 7-30 on page 353.


Figure 7-30 Launch Tivoli SAN Manager

Note: For this function to work, you need to have installed either the TotalStorage Productivity Center for Fabric manager or the TotalStorage Productivity Center for Fabric remote console on the same machine as the TotalStorage Productivity Center for Disk installation. See 7.2, Installing Fabric remote console on page 340 for details of installing the TotalStorage Productivity Center for Fabric remote console.

Figure 7-31 on page 354 shows an example of the TotalStorage Productivity Center for Fabric console that appears when launched. The top left of the four panels shows the four SAN islands that it is managing. The bottom two panels show switch-to-host connections for two of the switches.


Figure 7-31 TotalStorage Productivity Center for Fabric console

From this interface you can display detailed information about SAN elements such as switches and hosts, view zoning information, and view host-to-device relationships. The GUI uses color-coded icons to indicate which SAN elements are OK or in error. You can also launch the fabric zoning tool to view and change fabric zones using the standards-based zoning functions (on a supported switch).


Chapter 8. TotalStorage Productivity Center for Replication use


This chapter provides information for configuring and using the TotalStorage Productivity Center for Replication component of the TotalStorage Productivity Center. In this chapter, we describe:

- Concepts and terminology of replication
- Step-by-step instructions for creating groups, pools, and replication sessions
- Session management using the GUI and CLI

Copyright IBM Corp. 2004, 2005. All rights reserved.

355

8.1 TotalStorage Productivity Center for Replication overview


Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. TotalStorage Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. TotalStorage Productivity Center Version 2.1 manages two types of copy services: Continuous Copy (also known as Peer-to-Peer Remote Copy, PPRC, or Remote Copy) and Point-in-Time Copy (also known as FlashCopy).

TotalStorage Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary copy services operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. TotalStorage Productivity Center for Replication is designed to control and monitor copy services operations in large-scale client environments.

TotalStorage Productivity Center for Replication is implemented by applying predefined policies to Groups and Pools, which are groupings of LUNs. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. It manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

8.1.1 Supported Copy Services


TotalStorage Productivity Center for Replication V2.1 supports FlashCopy and Synchronous PPRC for ESS. Future releases will add other copy services functions and support additional storage devices. Check the current TotalStorage Productivity Center for Replication documentation for the required ESS LIC, ESS CLI, and CIM Agent levels. The supported products list can be found at the following Web sites:
http://www-1.ibm.com/servers/storage/support/software/tpcrep/installing.html http://www.ibm.com/storage/support

Then select Storage software in the Product family list, select TPC for Replication, and select the Install and use tab.

The ESS Copy Services supported with TotalStorage Productivity Center for Replication V2.1 include:

ESS PPRC Synchronous remote copy:
- Add / delete volume pairs
- Full background copy
- Freeze / Run
- Suspend / resume
- Query status of the session, paths, and pairs

ESS FlashCopy:
- Full background copy


PPRC
PPRC is a function of a storage server that constantly updates a secondary copy of a volume to match changes made to a primary volume. The primary and the secondary volumes can be on the same storage server or on separate storage servers. PPRC differs from FlashCopy in two essential ways. First, as the name implies, the primary and secondary volumes can be located at some distance from each other. Second, and more significantly, PPRC is not aimed at capturing the state of the source at some point in time, but rather aims at reflecting all changes made to the source data at the target. PPRC is application independent. Because the copying function occurs at the disk subsystem level, the host's operating system or application has no knowledge of its existence. In contrast, host-based mirroring is controlled by software at the operating system or file system level; the storage subsystem has no knowledge of it. Table 8-1 summarizes the characteristics of both approaches.
Table 8-1 Comparison of PPRC and host-based mirroring

Peer-to-Peer Remote Copy:
- Operation is performed by the storage subsystem, transparent to the host operating system.
- The functionality is the same for all operating systems and applications.
- Read and write operations are sent to the primary volume only.
- There is a unidirectional relationship from the primary to the secondary volume.
- Failure recovery is different for the primary and secondary volumes.

Host-based mirroring:
- Operation is performed by host software or a host bus adapter, transparent to the storage subsystem.
- The functionality depends on the capabilities of the operating system or host bus adapter.
- Write operations are sent to both volumes; read operations are sent to either volume, depending on read policy.
- The relationship between the volumes is symmetric.
- Failure recovery is identical for both volumes.

FlashCopy
FlashCopy makes a single point-in-time copy of a LUN, also known as a time-zero copy. The target copy is available as soon as the FlashCopy command has been processed. FlashCopy provides an instant, or point-in-time, copy of an ESS logical volume. Point-in-time copy functions give you an instantaneous copy, or view, of what the original data looked like at a specific point in time. The point-in-time copy created by FlashCopy is typically used where you need a copy of production data with minimal application downtime. It can be used for backup, testing of new applications, or copying a database for data mining purposes. The copy looks exactly like the original source volume and is instantly available.

TotalStorage Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups, and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. TotalStorage Productivity Center for Replication uses different names for copy services than ESS:

- Point-in-Time Copy is equivalent to FlashCopy on ESS
- Continuous Synchronous Remote Copy is equivalent to Peer-to-Peer Remote Copy on ESS


Figure 8-1 TotalStorage Productivity Center for Replication - manager tasks

Figure 8-1 illustrates the tasks you can perform from the Manage Replication group, which represents TotalStorage Productivity Center for Replication:

- Create and manage groups, which are collections of volumes grouped together so that they can be managed concurrently.
- Check the status of paths between storage subsystems, which are required for remote copy functionality.
- Create and manage pools, which are collections of target volumes.
- Run the wizard for creating a session:
  - Select copy type
  - Select source group
  - Select target pool
  - Save session or start a replication session

Monitor, terminate, or suspend running sessions. A user can also perform these tasks with the TotalStorage Productivity Center for Replication command-line interface, which is described in 8.3, Using Command Line Interface (CLI) for replication on page 407.

8.1.2 Replication session


A replication session is a set of copy relationships that are maintained as a unit in a manner that provides consistency, especially across box or other hardware boundaries. The replication session associates a pool with a group and gives them a particular copy relationship: either a continuous synchronous remote copy or a point-in-time copy. TotalStorage Productivity Center for Replication supports the session concept, in which multiple pairs are handled as a consistent unit. You can create and manage copy relationships between source and target volume pairs or source volume groups, and among target pools, through a Replication Manager copy session.

The Replication Manager Sessions panel shows sessions and their associated status. The status indicates whether the volume is a source, a target, or both, and it shows the copy mode of the volume. You can also use this panel to assess whether current replication activities are proceeding normally or abnormally. When you are creating a replication session, you can select source and target volume pairs or volume groups, then establish a continuous synchronous remote copy (remote copy) or point-in-time copy (FlashCopy) relationship between them. The Sessions panel includes the following options:

Create - Invokes the Create Session wizard, which you can use to create copy relationships for a new session.
Delete - Deletes an existing session.
Flash - Starts a created or terminated session (Point-in-Time Copy only).
Start - Starts a created, suspended, or terminated session (Remote Copy only).
Properties - Displays the Session Properties panel for an existing session.
Suspend (consistent) - Suspends an existing session, which results in a consistent target copy if there are no errors.
Suspend (immediate) - Stops an existing session with no guarantee of consistency.
Terminate - Stops an existing session and withdraws the relationships.

8.1.3 Storage group


A storage group is a collection of storage units that jointly contain all the data for a specified set of storage units, such as volumes. The storage units in a group must be from storage devices of the same type. Groups can be created to identify sets of volumes that need to be managed as a consistent unit. A general purpose group can be used as a container for volumes that share some association, for example, a group of volumes that are all associated with a specific application. After a storage group is created, you can perform the following tasks:

- Add volumes
- Delete volumes
- Change the description of the group

A storage group is managed by a Replication Manager session and used as a collection of source volumes for a copy.

8.1.4 Storage pools


A storage pool is an aggregation of storage resources on a storage area network (SAN) that you have set aside for a particular purpose. For example, you could use a storage pool for targets of copy operations that a collection of storage devices on the SAN can use. The storage devices can be from different vendors but must be a type that TotalStorage Productivity Center for Replication supports.


8.1.5 Relationship of group, pool, and session


This section illustrates the interdependency between a replication group, pool, and session in the context of the Replication Manager. To review, the definitions are:

Group: A set of volumes containing related data, which are managed as a unit.
Pool: Volumes set aside as copy services targets; these must not be in use by any other application.
Session: A set of copy relationships which are maintained as a unit to provide consistency across storage and server hardware boundaries.

TotalStorage Productivity Center for Replication provides the ability to copy a group to a pool, in which case it creates the valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. Sessions are sets of multiple pairs that are managed as a consistent unit, from which freeze and run functions can be performed when errors occur. The session can also be viewed as a consistency group. Figure 8-2 graphically depicts the interactions of groups, pools, and a session. It shows one group of related volumes on a source ESS (volumes S1 and S2) that we want to copy to another target pool of volumes (the T volumes). Once we have identified and created the source volumes in the group and the target volumes in a pool, we can establish the relationship.

Figure 8-2 Relationship of a group, pool and session (source volumes S1 and S2 in the group are copied, by Remote or FlashCopy, to target volumes T1 and T2 in the pool; T3 and T4 remain unassigned; the copy relationships form the session)

Our example session shows that S1 is associated with T1, and similarly S2 with T2. The T1 and T2 volumes are now persistently bound to the relationship, whereas T3 and T4 are still available for use. TotalStorage Productivity Center for Replication can automatically create the source to target relationship on your behalf. Once created, these volumes are part of a session, or consistency group. This means that any error on any of the volumes in this session could trigger a suspend across all the volumes to ensure data consistency. Events such as loss of access to a source subsystem or loss of the PPRC links are examples of conditions that could trigger a freeze event.
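As a rough illustration, the binding shown in Figure 8-2 can be modeled like this: a session pairs each source volume in the group with a free target from the pool, and the remaining pool volumes stay available. This sketch and all its names are our own, not a Replication Manager API:

```python
# Illustrative model only: how a session binds group sources to pool targets.
# Class and attribute names are invented for this sketch.

class Session:
    def __init__(self, group, pool):
        free = list(pool)                 # pool volumes not yet bound
        self.pairs = {}                   # persistent source -> target bindings
        for source in group:
            self.pairs[source] = free.pop(0)
        self.free = free                  # still available for other use


group = ["S1", "S2"]                      # source volumes (the group)
pool = ["T1", "T2", "T3", "T4"]           # target volumes (the pool)
session = Session(group, pool)

print(session.pairs)  # {'S1': 'T1', 'S2': 'T2'}
print(session.free)   # ['T3', 'T4']
```

In the real product, any error on a volume in one of these bound pairs would trigger a suspend across all pairs in the session to preserve consistency.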

360

Managing Disk Subsystems using IBM TotalStorage Productivity Center

8.1.6 Copyset and sequence concepts


A copyset incorporates all the volumes that make up an instance of a given copy type. In other words, it comprises the source volume, target volume, and the copy relationships. With Replication Manager you can manually select target volumes, or have Replication Manager select them for you. Subsequent releases of TotalStorage Productivity Center for Replication will increase the number of volumes in a copy set, and a session will be able to manage from one to thousands of copy sets.

A sequence includes the set of all copy relationships at any given stage of a copy operation. For Continuous Synchronous Remote Copy and Point-in-Time Copy there is only one sequence. A sequence shares the same pool criteria policy.

In Figure 8-3, S1 and S2 are members of the same group but of different copysets. You can visualize this as two copysets, along with their target volumes. When you create the copy session, Replication Manager automatically maps the disk in each copyset to an appropriate available disk, or you can choose the targets manually if you want to.

Figure 8-3 Replication Manager sequence relationship example (one sequence containing copyset S1 and copyset S2, each copied by Remote or FlashCopy to targets T1 and T2 respectively)

Sequences will be further utilized in subsequent releases of TotalStorage Productivity Center as more complex copy types are supported.
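To make the copyset idea concrete, here is a hedged sketch of automatic target mapping; the function and field names are our own, not product code. Each copyset pairs one source with one unused target, and the resulting set of relationships forms the sequence. We assume size-matched selection, since the source and target volumes of a copy relationship must be the same size:

```python
# Sketch of automatic copyset mapping; names are invented for illustration.

def map_copysets(sources, targets):
    """Pair each (name, size) source with an unused target of equal size."""
    unused = list(targets)
    sequence = []                             # one sequence of copy relationships
    for src_name, src_size in sources:
        match = next((t for t in unused if t[1] == src_size), None)
        if match:
            unused.remove(match)
        sequence.append({"source": src_name,
                         "target": match[0] if match else None,
                         "valid": match is not None})
    return sequence


sources = [("S1", 10), ("S2", 20)]            # two copysets in one group
targets = [("T1", 10), ("T2", 20)]
print(map_copysets(sources, targets))
```

A copyset left with no valid target corresponds to the "non approved" copysets that the verification procedure in 8.2.12 asks you to fix by hand.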

8.2 Exploiting TotalStorage Productivity Center for Replication


This section describes how to set up and use the TotalStorage Productivity Center for Replication component. To create and start a session for remote copy or point-in-time copy, perform the steps shown in Figure 8-4 on page 362.


Figure 8-4 Steps for creating a replication copy session (Create a Group, Create a Pool, Create a Session, Verify a Session, Check Paths, Start a Session, Manage Replication)

8.2.1 Before you start


Before you start using Replication Manager, make sure that:
- The CIMOM for ESS is operational and you have registered all ESSs you want to manage.
- You have access to the ESS from the CIMOM server. Run the following command, located in the ESS CLI folder:
  rsTestConnection.exe
- The ESS Copy Services servers are defined to the CIMOM using the addserver command. Each ESS cluster which acts as a copy services server must be defined to the ESS CIMOM. Refer to Register ESS server for Copy services on page 141.
- The ESSs you will use are at the required LIC level. TotalStorage Productivity Center for Replication V2.1 requires LIC level 2.4.1 or above for ESS 750 and ESS 800. ESS models F10 and F20 require LIC 2.3.256 or above.
- The paths between the ESSs you want to replicate are defined using the ESS Specialist.

8.2.2 Creating a storage group


TotalStorage Productivity Center for Replication uses the groups and sessions you define to manage the replication process. Perform the following steps to create a Replication Manager group:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication (see Figure 8-1 on page 358).
3. Double-click Groups. The Groups panel opens (see Figure 8-5 on page 363).



Figure 8-5 Replication Manager groups

4. Click Create. The Create Group wizard opens (see Figure 8-6 on page 364).
5. Click the device shown in the Device Components pane and select a logical storage subsystem (LSS).

Note: Device Components shown in the group window do not use the same names defined in the Group Contents pane of the IBM Director console (see Figure 8-1 on page 358). The Device Components pane uses the format device_type.serial_number. In our example, Device Component ESS.2105-16603 in Figure 8-6 indicates ESS 2105 F20 16603 in Figure 8-1 on page 358.

6. Select one or more volumes (press Ctrl and click to select multiple volumes) from the Available Volumes pane of the Create group pane. Click Add (see Figure 8-6 on page 364). You can also click Select all if you want to add all available volumes to a group. In our example we chose two volumes from ESS F20 (16603) and two volumes from ESS 800 (22513).


Figure 8-6 Select volumes for the new group example

Note: Although you can only select volumes from one LSS at a time, you can select different LSSs within the same Create Group session. As you select each LSS, the Available volumes pane updates the list of volumes that are available for the selected device.

7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save group window opens (see Figure 8-7 on page 365).


Figure 8-7 Save group setup example

9. Enter a name for the new group in the Name field. The name is required, must not exceed 250 characters, and may not contain special characters such as spaces.
10. Enter a description for the new group in the Description field. The description is optional and can be 0 - 250 characters.
11. Click Finish to save the new group and close the wizard.

Result
The new group appears in the Groups window (see Figure 8-8). In our example we created two groups which will be used for Point-in-time copy and Remote Copy.

Figure 8-8 Groups window example


8.2.3 Modifying a storage group


Use the Group properties panel to modify one or more properties of a Replication Manager group of source volumes, for example to add or remove volumes from a group. Perform the following steps to modify a Replication Manager group:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab (see Figure 8-1 on page 358).
2. Click Manage Replication.
3. Double-click Groups. The Groups window opens.
4. Select the group to be modified from the Groups list.
5. Click Properties. The Group Properties window opens (see Figure 8-9). You can edit the text in the Description window.

Figure 8-9 Group properties

6. To change volumes which belong to the group, click Update. The Group properties window with volumes opens (similar to the one shown in Figure 8-10 on page 367).


Figure 8-10 Group properties for a selected group panel

7. To add volumes to the group: select one or more volumes (using Ctrl) in the Available volumes panel, then click Add.
8. To remove volumes from the group: select one or more volumes (using Ctrl) in the Selected volumes panel, then click Remove.

Attention: Check whether any existing defined sessions use volumes which you want to remove from the group you are updating.

9. Click OK to submit your changes and close the window.

8.2.4 Viewing storage group properties


You can use the Replication Manager group properties panel to view the properties of a selected group.

Note: You must have created and saved a group before you can view its properties.

Perform the following steps to view the properties of a Replication Manager group:
1. Expand Multiple Device Manager in the IBM Director Console Tasks pane.


2. Click Manage Replication.
3. Click Groups. The Groups panel opens.
4. In the Groups table, select the group that you want to view (see Figure 8-8 on page 365).
5. Click Properties. The Properties panel opens for the selected group. You can view the following information:
- Group name
- Description of the group
- The table of the volumes that are managed by the group, which shows:
  - volume ID
  - device (for example ESS.2105-16603)
  - volume location (logical storage subsystem)
  - volume type (FB for open systems)
  - volume size

8.2.5 Deleting a storage group


You can use this procedure to delete a selected Replication Manager group from the Groups list.

Note: Before you delete a group, make sure that no session uses the group for replication.

Perform the following steps to delete a Replication Manager group:
1. Expand the Multiple Device Manager tab in the IBM Director Console Tasks pane.
2. Click Manage Replication.
3. Click Groups (see Figure 8-1 on page 358). The Groups panel opens. In the Groups list, select the group that you want to delete (see Figure 8-11).

Figure 8-11 Groups window

4. Click Delete. A window opens asking to verify the delete request (see Figure 8-12).


Figure 8-12 Delete Group confirmation

5. Click Yes to delete the group. Alternatively, click No to cancel the delete.

Result of Delete Group


The selected group is no longer displayed in the list of groups in the Groups window.

8.2.6 Creating a storage pool


You can perform this task to create pools of volumes which will be used as a set of target volumes for copy operations. Perform the following steps to create a storage pool:
1. From the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 8-13).

Figure 8-13 Replication manager Pools panel example

4. Click Create. The Create Pool Wizard opens (see Figure 8-14 on page 370).


Figure 8-14 Select volumes example for creating a pool

5. Click the device shown in the Device Components pane and select a logical storage subsystem (LSS).

Note: Device Components shown in the Group window do not use the same names defined in the Group Contents panel in the IBM Director Console (see Figure 8-1 on page 358). The Device Components pane uses the format device_type.serial_number. In our example, Device Component ESS.2105-16603 in Figure 8-6 indicates ESS 2105 F20 16603 in Figure 8-1 on page 358.

6. Select one or more volumes (press Ctrl and click for multiple selection) in the Available volumes pane and click Add. You can also click Select all if you want to add all available volumes to a pool. In our example we chose two volumes from ESS F20 (16603) and two volumes from ESS 800 (22513).

Important: The size of the source and target volume of a copy relationship must be equal.

7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save pool window opens (see Figure 8-15 on page 371).
9. Enter a name (required), description (optional) and location (optional).

Note: We recommend you enter a Location name, which helps in the automatic allocation of target volumes when creating a session.

10. Click Finish to save the new pool.

Figure 8-15 Save pool window

Result of creating a storage pool


The new pool is added to the Pools table as shown in Figure 8-16.

Figure 8-16 Created pool


Note: You do not have to use all volumes of a pool when you create a session. Additionally, a pool, and even the same volume from a pool, can be defined as a target for multiple sessions.

8.2.7 Modifying a storage pool


You can use the Pool properties panel to modify one or more properties of a Replication Manager pool of target volumes. Perform the following steps to modify a Replication Manager pool:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 8-16 on page 371).
4. Select the pool to be modified in the Pools table and click Properties. The Pool properties window opens (see Figure 8-17).

Figure 8-17 Pool properties

5. You can change the text in the Description panel and the Location.

Attention: Changing the Location name can destroy a session which uses the pool you are modifying.

6. To change the volumes which belong to the pool, click Update. The Pool properties window with volumes opens (similar to the one shown in Figure 8-14 on page 370).
7. To add volumes to the pool: select one or more volumes (using Ctrl) in the Available volumes panel.


Then click Add.
8. To remove volumes from the pool: select one or more volumes (using Ctrl) in the Selected volumes panel, then click Remove.

Attention: Check whether any defined sessions use volumes which you want to remove from the pool you are modifying.

9. Click OK to commit the changes and close the window, or click Cancel if you want to cancel the modifications.

8.2.8 Deleting a storage pool


Perform this task to delete a selected Replication Manager storage pool from the Pools table. Perform the following steps to delete a Replication Manager pool:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 8-18). In the Pools list, select the pool that you want to delete.

Figure 8-18 Pools window

4. Click Delete. A window with the message Are you sure you want to delete pool pool_name? opens as shown in Figure 8-19 on page 374.


Figure 8-19 Delete a pool confirmation

5. Click Yes to delete the pool or No to cancel.

Result of deleting a storage pool


The selected pool is removed from the list of pools in the Pools table.

8.2.9 Viewing storage pool properties


You can view information about a Replication Manager storage pool in the Pool properties window.

Note: You must have created and saved a pool before you can view its properties.

A storage pool is a predefined set of direct access storage device (DASD) volumes used to store groups of logically related data according to user requirements for service or according to storage management tools and techniques.

1. Expand Multiple Device Manager in the IBM Director Console Tasks panel.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens.
4. In the Pools table, select the pool that you want to view.
5. Click Properties. The properties window opens for the selected pool (see Figure 8-17 on page 372). You can view the following information:
- Pool name
- Description of the pool
- Location name
- The table of the volumes that are managed by the pool, which shows:
  - volume ID
  - device (for example ESS.2105-16603)
  - volume location (logical storage subsystem)
  - volume type (FB for open systems)
  - volume size


8.2.10 Storage paths


TotalStorage Productivity Center for Replication provides a graphical method to view the pre-existing relationships and links between logical storage subsystems.

Important: Check path availability before starting remote copy sessions.

The ability to create paths is not supported within TotalStorage Productivity Center for Replication V2.1. You must use the Copy Services function launched from the ESS Specialist to create paths. When a Replication Manager session is initiated, the paths in effect at the time are retained, and restored on subsequent restarts of the session.

To view the paths created:
1. From the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Paths. The Paths panel opens (see Figure 8-20). In our example we use the highlighted ESCON connection between ESS 800 (22513) and ESS F20 (16603).

Figure 8-20 Paths between ESSs used for remote copy

8.2.11 Point-in-Time Copy: Creating a session


If you have created a Replication Manager group and pool, you can define a session which will run the copy task. This section describes Point-in-Time Copy, which creates an instant copy of a volume on the same storage server. With TotalStorage Productivity Center for Replication you can define a set of many instant copy tasks in the same session, which will run all tasks at the same time. This provides consistent data spread across many volumes on different storage servers.

Perform the following steps to create a Point-in-Time Copy session:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.


3. Double-click Groups. The Groups window opens (see Figure 8-8 on page 365).
4. Select the group which you want to copy and click Replicate. The Create Session wizard opens for the group you chose (see Figure 8-22).

Alternatively:
1. In the IBM Director Console Tasks panel, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens (see Figure 8-21).

Figure 8-21 Session window

4. Select the Create session action. The Create session window opens (see Figure 8-22). Choose Point-in-Time Copy and click Next.

Figure 8-22 Create session window with Point-in-Time Copy selection

Note: You can define another session which uses the same group.

5. The Choose source group window opens (see Figure 8-23 on page 377). Choose the Group name which you want to copy and click Next. If you ran the wizard from the Groups window, you see only the group which you selected before.


Figure 8-23 Choosing source group for replication

6. The Choose a target pool window opens as shown in Figure 8-24.

Figure 8-24 Choosing target pool for point in time copy

7. In the Location filter field, enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter.
8. Click Apply to see the volumes of all locations which meet the criteria.
9. Select the All listed locations radio button if you want to use volumes from more than one location, or select the Select single location radio button, select the location from the Location pane, and click Next.

Note: We recommend you enter the entire location name in the Location filter field instead of using wildcards. Remember, the location name is case sensitive.

10.Enter the session Name and Description in the Create session - Set session settings panel (see Figure 8-25 on page 378).


Figure 8-25 Set session settings window

Select one of the following options in the Session approval pane:
- Automatic: indicates that you allow Replication Manager to automatically create the relationship between source and target volumes
- Manual: indicates that you want to select volumes and approve the relationships

11. Click Next. The Review session properties window opens. Verify your input and click Finish to submit (see Figure 8-26).

Figure 8-26 Review session properties panel

12.The session will be created and a new window opens with a message that the command completed successfully. If you get a message as shown in Figure 8-29 on page 380 refer to 8.2.12, Creating a session: Verifying source-target relationship on page 379. 13.In the Sessions pane you can see the newly created session (see Figure 8-27 on page 379).


Figure 8-27 Sessions window with created session.

If the session was created successfully, select Flash from the Session actions pull-down to run a Point-in-Time Copy session (see Figure 8-28). We recommend you verify the source-target volumes before running a session. To verify relationships, refer to 8.2.12, Creating a session: Verifying source-target relationship on page 379.

Figure 8-28 Running Point-in-Time copy

14.Now you can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 8-38 on page 384. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.
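The Location filter used when choosing the target pool accepts an asterisk only as the first or last character of the filter, and matching is case sensitive. A rough sketch of those semantics as we understand them (the function is ours, not the product's):

```python
# Hedged sketch of the Location filter: '*' is allowed only as the first or
# last character of the filter, and matching is case sensitive.

def location_matches(filt, location):
    if filt.startswith("*") and filt.endswith("*"):
        return filt.strip("*") in location          # substring match
    if filt.startswith("*"):
        return location.endswith(filt[1:])          # suffix match
    if filt.endswith("*"):
        return location.startswith(filt[:-1])       # prefix match
    return location == filt                         # exact, case-sensitive


locations = ["Mainz", "MainzDR", "mainz"]
print([l for l in locations if location_matches("Mainz*", l)])  # ['Mainz', 'MainzDR']
print([l for l in locations if location_matches("mainz", l)])   # ['mainz']
```

The last example shows why the Note above recommends entering the entire location name: "mainz" does not match "Mainz".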

8.2.12 Creating a session: Verifying source-target relationship


When you create a session, Replication Manager can automatically create relationships between the source volumes in a group and the target volumes in a pool (if you chose Automatic Session Approval as in Figure 8-25 on page 378). If Replication Manager could not set the relationships, you get a message like the one in Figure 8-29.

Tip: We recommend you check the pairs of source and target volumes before starting a session. Even if you chose Automatic Session Approval and got a message that the session was created successfully, you should check that the relationships are set correctly.


Figure 8-29 Creating session - error message

Perform the following steps to verify source-target volume pairs:
1. If you got a message that the create command completed with errors, click Details (see Figure 8-29). The window with messages opens, and you can see detailed messages (see Figure 8-30).

Figure 8-30 Detailed messages

Close both windows. You can see the created session in the Sessions pane. In our example in Figure 8-31 on page 381, we created a session named FC_F20_800.


Figure 8-31 Sessions window with created session.

2. Click the session you want to verify in the Sessions panel.
3. Click the Please select drop-down and choose Properties. The Session properties window opens. Click the Copyset tab (see Figure 8-32).
4. The number under Non approved copysets indicates how many relationships could not be created automatically when the session was created. In our example, we chose the Automatic Session Approval method; two pairs were set automatically, however the next two were not approved (see Figure 8-32). Click Copyset details. The Copyset window opens as shown in Figure 8-33 on page 382.

Figure 8-32 Session properties window, Copyset tab

5. Select the Invalid Copyset to see details of the last result and click Modify copyset target. In our example two pairs are approved and two are not valid and should be modified as shown in Figure 8-33 on page 382.


Figure 8-33 Sessions copysets

Tip: The Copyset ID is related to the source volume of the copy pair.

6. The Choose Target window opens. Select a target volume to create a copy pair with the source volume and click Next. In our example (see Figure 8-34) the source volume is 1300 and we have two available targets, 1304 and 1305.

Figure 8-34 Choose Target window

7. The Choose Target Verify window opens. If it shows the correct target volume for modifying the copyset, click Finish to approve.
8. Perform steps 5 - 7 for all copysets which are invalid, which means that their source-target pairs were not set and approved.
9. If all copysets are correct, you will see the status shown in Figure 8-35 on page 383. Select the modified copyset to verify that the last result says the relationship was successfully created.

Figure 8-35 Approved copysets

10. Go back to the Session properties window, Copyset tab (see Figure 8-36) and click Refresh. If you modified all copysets correctly, you should get the result shown in Figure 8-36.

Figure 8-36 Session properties window - status of corrected copysets

11. Go back to the main Sessions window. Select the Session actions pull-down and click Flash to run a Point-in-Time Copy session as shown in Figure 8-37 on page 384. The Confirmation window opens; click Yes to run or No to cancel.


Figure 8-37 Running Point-in-Time copy

12. You can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 8-38. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.

Figure 8-38 FlashCopy pairs created and run by TotalStorage Productivity Center for Replication
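The verification procedure above boils down to: list the copysets, assign a target to any that are not approved, and refresh until every copyset is valid. A compact sketch of that loop (the helper names are hypothetical; in the product this is done through the Copyset tab and the Modify copyset target wizard):

```python
# Hypothetical sketch of the copyset verification loop described in 8.2.12.

def verify_session(copysets, choose_target):
    """Assign a target to every non-approved copyset, then report status."""
    for cs in copysets:
        if not cs["approved"]:
            cs["target"] = choose_target(cs["source"])
            cs["approved"] = cs["target"] is not None
    return all(cs["approved"] for cs in copysets)


copysets = [
    {"source": "1300", "target": None, "approved": False},   # invalid pair
    {"source": "1301", "target": "1306", "approved": True},  # already approved
]
# In our example, source volume 1300 had targets 1304 and 1305 available.
print(verify_session(copysets, lambda src: "1304"))  # True
```

Only when this check returns true for every copyset (all approved, none invalid) should you run Flash on the session.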


8.2.13 Continuous Synchronous Remote Copy: Creating a session


If you have created a Replication Manager group and pool, you can define a session which will run the copy task. This section describes Remote Copy, which creates a synchronous copy of a volume on another (or the same) storage server. You can define a set of many pairs of mirrored volumes in the same session, which runs all tasks at the same time to keep consistent data spread across many volumes on different storage servers.

Perform the following steps to create a Remote Copy session:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Groups. The Groups window opens (see Figure 8-8 on page 365).
4. Select the group which you want to copy and click Replicate. The Create Session wizard opens for the chosen group (see Figure 8-39).

Figure 8-39 Create session window with Continuous Synchronous Remote Copy selection

Alternatively:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens (see Figure 8-40).
4. Select the Create session action. The Create session window opens (see Figure 8-39).

Figure 8-40 Session window

5. Choose Continuous Synchronous Remote Copy and click Next. The Choose source group window opens. Choose the Group name which you want to copy and click Next (see Figure 8-41 on page 386). If you ran the wizard from the Groups window, you see only the group which you selected before. The Choose a target pool window opens as shown in Figure 8-42 on page 386.

Figure 8-41 Choosing source group for remote copy replication

6. In the Location filter field, enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter.
7. Click Apply to see the volumes of all locations which meet the criteria.
8. Select All listed locations if you want to use volumes from more than one location, or select a single location, then select the correct location and click Next.

Note: Remember, the location name is case sensitive.

Figure 8-42 Choosing target pool for point in time copy

9. The Set session settings window opens. Enter the name and description (see Figure 8-43 on page 387).
10. Select one of the following options in the Session approval panel:
- Automatic: indicates that you allow Replication Manager to automatically create a relationship between the source and target volume.


- Manual: indicates that you want to select volumes and approve the relationship.

Figure 8-43 Set session settings window

11. Click Next. The Review session window opens. Validate the information and click Finish to submit (see Figure 8-44).

Figure 8-44 Creating session review

12.The session will be created and a new window opens with a message that the command completed successfully as shown in Figure 8-45 on page 388. If you get a message as shown in Figure 8-29 on page 380, read 8.2.12, Creating a session: Verifying source-target relationship on page 379.


Figure 8-45 Continuous Synchronous Remote Copy session created successfully

13. In the Sessions window you can see the newly created session (see Figure 8-46).

Figure 8-46 Sessions window with created Continuous Synchronous Remote Copy session.

14. If the session was created successfully, select the session you want to run, select Session actions, and click Start to run a Remote Copy session (see Figure 8-47). However, we recommend you verify the source-target volumes before running a session. To verify relationships, read 8.2.12, Creating a session: Verifying source-target relationship on page 379.

Figure 8-47 Starting Remote Copy session

15. You can see in the ESS Specialist interface that a Remote Copy is running as shown in Figure 8-48 on page 389. In our example there are two pairs of Remote Copy between volumes on two different ESSs running in the same session.

Figure 8-48 Remote copy pairs created and run by TotalStorage Productivity Center for Replication

8.2.14 Managing a Point-in-Time copy


From the Session window you can perform the following actions for a Point-in-Time Copy session (the actions are different for Continuous Synchronous Remote Copy):
- Create a new session
- Delete a defined session
- Flash: start a session
- Properties: view and change the properties of a session
- Terminate a started session

Using any copy services requires that you create an accurate plan before running and a detailed plan for future management. Any mistake can cause loss of data, for example if you use a wrong volume as the target of a copy session. Therefore we recommend you verify all pairs in a session before starting the copy process; see 8.2.12, Creating a session: Verifying source-target relationship on page 379.

Sessions window
When you create, verify, and run a session you can monitor its status in the main Session window, which gives you basic information about a given session. Each session can include many pairs of volumes which are in copy relationships and form a consistent group. If the status of a given session is not optimal, you need to review the properties for that session to check if there is a general problem or if it is related to a certain pair of volumes.


Perform the following steps to check the basic status of a session:
1. Click Multiple Device Manager in the IBM Director Task panel.
2. Click Manage Replication.
3. Click Sessions. The Sessions window opens (also called the main Session window).

There are eight fields in the Sessions window:
a. Name: the name of the session.
b. Status: can have one of the following values:
   - Normal (green icon): Point-in-Time Copy was invoked successfully.
   - Medium (yellow icon): A session is not started or was terminated.
   - Severe (red icon): An error occurred.
c. State: can have one of the following values:
   - Defined: a session is created and not started, or was terminated.
   - Active: a session is running.
d. Group: name of the group of volumes which are the sources of the copy pairs.
e. Copy Type: Point-in-Time Copy or Continuous Synchronous Remote Copy.
f. Recoverable: indicates if any sequences in the session are considered recoverable.
g. Shadowing: indicates if any part of the session is shadowing data.
h. Volume Exceptions: shows the total number of volumes which are in an exception state.

Before starting a created session, you should see the following field values, as shown in Figure 8-49:
- Status: Medium
- State: Defined
- Recoverable: No
- Shadowing: No
- Volume Exceptions: No

Figure 8-49 Defined state of Point-in-Time Copy

When you have successfully flashed a new or terminated session, you will see the following parameter values, as shown in Figure 8-50:


- Status: Normal (green)
- State: Active (changed from Defined)
- Recoverable: Yes

Figure 8-50 Flashed Point-in-Time Copy session.
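The Status and State values above can be summarized with a small model of our own (purely illustrative, not product code): a session starts out Defined with Medium status, and a successful Flash moves it to Active, Normal, and Recoverable:

```python
# Illustrative model of the Sessions window fields before and after Flash.

defined = {"status": "Medium", "state": "Defined",
           "recoverable": "No", "shadowing": "No", "volume_exceptions": "No"}

def flash(session):
    """Model a successful Flash of a Point-in-Time Copy session."""
    flashed = dict(session)                    # leave the original untouched
    flashed.update(status="Normal", state="Active", recoverable="Yes")
    return flashed


active = flash(defined)
print(active["state"], active["status"], active["recoverable"])  # Active Normal Yes
```

Terminating the session would move it back toward the Defined/Medium values, which is why a terminated session can be flashed again.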

Properties window
The Sessions window shows the status of a session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Sessions window opens.
4. Select the session which you want to manage and select the Properties session action. The Properties window opens.
5. There are three tabs in the Properties window:
a. General - shows general information about a session. Compared to the Sessions window, you get additional information such as the Description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
b. Copyset - lets you check whether all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session: Verifying source-target relationship on page 379.
c. Sequence - this tab is mostly used to see detailed information about the status of a session, especially when used together with the Pairs window.

The General tab shows basic information, like the Sessions window. For example, when you have flashed a session, you should see the following values (see Figure 8-51 on page 392):
- Copy type - Point-in-Time Copy
- State - Active
- Status - Normal
- Group - the name of the group used for this session
- Source Volumes - the number of volumes in the group
- Approval status - Automatic or Manual


Figure 8-51 General tab in Properties window for flashed session

The Copyset tab generally does not change while managing a session unless some error occurs. Figure 8-52 shows the status in our environment.

Figure 8-52 Copyset tab for correctly defined session with 4 pairs of volumes

To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copysets in the session. You can check for problems in the following tables: The Copyset table indicates whether a copyset is invalid. The Last Result column displays the latest message issued for a copyset and indicates why it is invalid.


The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair. For more details refer to Creating a session: Verifying source-target relationship on page 379.

The Sequence tab is the most useful when you manage replication sessions, especially during synchronization. You can see which volume pairs are synchronized and the status of the others. In the Sequence panel the following columns are available:
- Recoverable - true or false. Indicates whether all pairs in a sequence are recoverable.
- Exception - yes or no. Indicates whether at least one pair is in an exception state.
- Shadowing - yes or no. Indicates whether all pairs are in a shadowing state.
- Exception volumes - shows the number of volumes which are in an exception state.
- Recoverable pairs - shows the number of volume pairs which are recoverable.
- Shadowing pairs - shows the number of volume pairs which are in a shadowing state.
- Total pairs - shows the total number of pairs in a sequence.
- Recoverable timestamp - shows the time when a session was suspended.

Following is an example from our environment of the different states of a replication session. After you create or terminate a session, you will see the Sequence tab as shown in Figure 8-53.

Figure 8-53 Sequence tab in Session properties window for defined Point-in-Time Copy session

When the session is created or terminated, it is in the Defined state. You can see the following values in the Sequences pane:
- Name - Local point in time copy sequence
- Recoverable - false - it is not recoverable
- Exception - No - there are no exceptions
- Shadowing - No - the sequence is not shadowing
- Exception volumes - 0 - no volume is in an exception state
- Recoverable pairs - 0 - no pair is recoverable
- Shadowing pairs - 0 - no pair is shadowing

- Total pairs - 4 (in our example) - the total number of pairs is four
- Recoverable timestamp - n/a - not available

In the Sequence states panel you see that four pairs are in the Defined state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-54.

Figure 8-54 Pair of Point-in-Time Copy session in defined state

The Sequence Flashed Target pairs window contains the following information:
- Source Volume - the source volume of a pair; includes the type and number of the ESS and the volume number
- Target Volume - the target volume of a pair
- State - Defined - means the session is created or terminated but not running
- Recoverable - No - indicates whether a pair is flashed
- Shadowing - No
- New - Yes - indicates it is a new session
- Timestamp
- Last result - the code of the last result; you can see a description in the Last result panel if you click a pair in the Pairs panel

When you flash a new or terminated session, you will see the Sequence tab as shown in Figure 8-55.


Figure 8-55 Sequence tab in Session properties window

Notice the values of the following columns:
- Recoverable - true
- Recoverable timestamp - the time when the Point-in-Time Copy session was successfully flashed

The Sequence Flashed Target pairs window shown in Figure 8-56 shows the successfully flashed volumes.

Figure 8-56 Pairs of successfully flashed volumes.

8.2.15 Managing a Continuous Synchronous Remote Copy


From the Sessions window you can perform several tasks for Continuous Synchronous Remote Copy (the tasks are different from those for Point-in-Time Copy):
- Create a new session

- Delete a defined session
- Properties (view and change the properties of a session)
- Start a session
- Suspend an already started and synchronized session
- Terminate a started session

Using any copy services requires that you create an accurate plan before running them, and a detailed plan for managing the copy services. Any mistake can cause loss of data, for example when you use the wrong volume as the target of a copy session. Therefore we recommend that you verify all pairs in a session before starting the copy process. Refer to Creating a session: Verifying source-target relationship on page 379.

Sessions window
When you create, verify, and run a session, you can monitor its status in the main Sessions window, which gives you basic information about each session. Each session can include many pairs of volumes which are in copy relationships and form a consistency group. If the status of a given session is not optimal, you need to review the properties of that session to check whether there is a general problem or whether it is related to a certain pair of volumes.

Perform the following steps to check the basic status of a session:
1. Click Multiple Device Manager in the IBM Director Task panel.
2. Click Manage Replication.
3. Click Sessions. The Sessions window opens (also called the main Session window).
4. There are eight fields in the Sessions window:
a. Name - the name of the session
b. Status - can have one of the following values:
   - Normal (green icon): All source volumes are replicating in both directions and the copy is active. All volumes were established successfully and are synchronized.
   - Medium (yellow icon): The session is not started, was terminated, or is synchronizing but at least one volume is not synchronized with its source.
   - Severe (red icon): An error caused a hardware device to respond at multiple addresses or, for a fibre-channel connection, a volume failed to be established.
c. State - can have one of the following values:
   - Defined: The session is created and not started, or was terminated.
   - Active: The session is running.
d. Group - the name of the Group of volumes which are the sources of the copy pairs
e. Copy Type - Point-in-Time Copy or Continuous Synchronous Remote Copy (as described in this chapter)
f. Recoverable - indicates whether any sequences in the session are considered recoverable
g. Shadowing - indicates whether any part of the session is shadowing data
h. Volume Exceptions - shows the total number of volumes which are in an exception state

After you have created a session, before starting it you should see the following values for several fields (see Figure 8-57 on page 397):
- Status - Medium


- State - Defined
- Recoverable - No
- Shadowing - No
- Volume Exceptions - No

Figure 8-57 Defined state

When you start a new session or resume a suspended session, you will see the following values (see Figure 8-58):
- Status - Medium (still not optimal)
- State - Active (changed from Defined)
- Recoverable - No
- Shadowing - Yes (changed)
- Volume Exceptions - No

Figure 8-58 Synchronizing (copy pending) status

If all pairs in a session are synchronized, you should see the following values (see Figure 8-59 on page 398):
- Status - Normal (changed; now it is the optimal state)
- State - Active
- Recoverable - Yes (changed; now you can recover data in case of disaster)

- Shadowing - Yes
- Volume Exceptions - No

Figure 8-59 Synchronized (full-duplex) status

If a session is suspended, you should see the following values (see Figure 8-60):
- Status - Normal
- State - Active
- Recoverable - Yes
- Shadowing - No
- Volume Exceptions - No

Figure 8-60 Suspended status

Properties window
The Sessions window shows the status of a session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Sessions window opens.
4. Select the session which you want to manage and select the Properties session action. The Properties window opens.

5. There are three tabs in the Properties window:
- General - shows general information about a session. Compared to the Sessions window, you get additional information such as the Description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
- Copyset - lets you check whether all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session: Verifying source-target relationship on page 379.
- Sequence - this tab is mostly used to see detailed information about the status of a session, especially together with the Pairs window.

The General tab shows basic information, like the Sessions window. For example, when you have created a session, before starting it you should see the values in Figure 8-61:
- Copy type - Continuous Synchronous Remote Copy
- State - Defined
- Status - Medium
- Group - the name of the group used for this session
- Source Volumes - the number of volumes in the group
- Approval status - Automatic or Manual

Figure 8-61 General tab in Properties window for defined session

The Copyset tab information generally does not change while managing a session unless some error occurs. You should see the following status as shown in Figure 8-62 on page 400.


Figure 8-62 Copyset tab in Properties window for correctly defined session

To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copysets in the session. You can check for problems in the following tables:
- The Copyset table indicates whether a copyset is invalid. The Last Result column displays the latest message issued for a copyset and indicates why it is invalid.
- The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair.

For additional details refer to Creating a session: Verifying source-target relationship on page 379.

The Sequence tab is the most useful when you manage replication sessions; especially during synchronization you can see which volume pairs are synchronized and the status of the others. In the Sequence panel the following columns are available:
- Recoverable - true or false. Indicates whether all pairs in a sequence are recoverable.
- Exception - yes or no. Indicates whether at least one pair is in an exception state.
- Shadowing - yes or no. Indicates whether all pairs are in a shadowing state.
- Exception volumes - shows the number of volumes which are in an exception state.
- Recoverable pairs - shows the number of volume pairs which are recoverable.
- Shadowing pairs - shows the number of volume pairs which are in a shadowing state.
- Total pairs - shows the total number of pairs in a sequence.
- Recoverable timestamp - shows the time when a session was suspended.

The following is an example from our environment showing different states of a replication session. If you have created a session or terminated a running session, you will see the Sequence tab as shown in Figure 8-63 on page 401.


Figure 8-63 Sequence tab in Session properties window for defined session

When the session is created or terminated, it is in the Defined state. You can see the following values in the Sequences panel:
- Recoverable - false - it is not recoverable
- Exception - No - there are no exceptions
- Shadowing - No - the sequence is not shadowing
- Exception volumes - 0 - no volume is in an exception state
- Recoverable pairs - 0 - no pair is recoverable
- Shadowing pairs - 0 - no pair is shadowing
- Total pairs - 2 - the total number of pairs is two
- Recoverable timestamp - n/a - not available

In the Sequence states panel you see that two pairs are in the Defined state. For more details, select the sequence in the Sequences panel and click Pairs. The Sequence Remote Target pairs window opens as shown in Figure 8-64.

Figure 8-64 Pairs of remote mirror session in defined state


The Sequence Remote Target pairs window contains the following columns:
- Source Volume - the source volume of a pair; includes the type and number of the ESS and the volume number
- Target Volume - the target volume of a pair
- State - Defined - means the session is created or terminated but not running
- Recoverable - No - indicates whether a pair is synchronized
- Shadowing - No
- New - Yes - indicates it is a new session
- Timestamp
- Last result - the return code of the last result; you can see a description in the Last result panel if you click a pair in the Pairs panel

When you start a new session or resume a suspended session, you will see the Sequence tab as shown in Figure 8-65.

Figure 8-65 Sequence tab in Session properties window for just started session

The following columns have changed their values:
- Shadowing - yes
- Shadowing pairs - 2

The corresponding Sequence Remote Target pairs window is shown in Figure 8-66 on page 403.


Figure 8-66 Pairs of just started remote mirror session

If one volume is synchronized but another is still synchronizing you will see the status as shown in Figure 8-67.

Figure 8-67 Sequence tab in a Session properties window for partially synchronized session

One pair is in the Duplex (synchronized) state and the other pair is still in the synchronizing state. Notice that the Recoverable state is still false, because not all pairs are synchronized. To see which pair is in the full duplex state, click Pairs (see Figure 8-68 on page 404).


Figure 8-68 Pairs of partially synchronized session

In our example, one pair is in Duplex state (volume 1703 on ESS F20 16603 is synchronized with volume 1301 on ESS 800 22513) while the second pair is still synchronizing. When all pairs in a session are synchronized, you will see the status as shown in Figure 8-69.

Figure 8-69 Sequence tab in Session properties window in synchronized state

Notice that the Recoverable status is true, which means that all pairs are in the Duplex (synchronized) state. The same status is shown for each pair separately in the Sequence Remote Target pairs window, as shown in Figure 8-70 on page 405.


Figure 8-70 Pairs of fully synchronized session

When a session is fully synchronized, you can suspend it to obtain a consistent state of the data on the remote site. If you successfully suspend a session, you will see the Sequence tab information shown in Figure 8-71.

Figure 8-71 Sequence tab in Session properties window in successfully suspended state

You can see the following values in the Sequence tab panel:
- Recoverable - true - it is recoverable
- Exception - No - there are no exceptions
- Shadowing - No - the sequence is not shadowing
- Exception volumes - 0 - no volume is in an exception state
- Recoverable pairs - 2 - two pairs are recoverable
- Shadowing pairs - 0 - no pair is shadowing
- Total pairs - 2 - the total number of pairs is two
- Recoverable timestamp - the time when the session was successfully suspended

In the Sequence states panel you see that two pairs are in the Suspended state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-72 on page 406.


Figure 8-72 Pairs of successfully suspended session

For a successfully suspended session, you should see the following values in the Pairs window:
- State - Suspended
- Recoverable - Yes
- Shadowing - No
- New - No

Important: Remember to check that a session is successfully synchronized before you invoke the suspend command. Otherwise you will get invalid and inconsistent data on the remote site.

If you suspend a session which was not synchronized, you will see the information in the Sequence tab as shown in Figure 8-73.

Figure 8-73 Sequence tab in suspended but not recoverable state

You can see the following values in the Sequences panel:
- Recoverable - false - it is not recoverable
- Exception - No - there are no exceptions


- Shadowing - No - the sequence is not shadowing
- Exception volumes - 0 - no volume is in an exception state
- Recoverable pairs - 0 - no pair is recoverable
- Shadowing pairs - 0 - no pair is shadowing
- Total pairs - 2 - the total number of pairs is two
- Recoverable timestamp - n/a - recovery is not possible, so there is no time information

Notice that the state is Suspended but the session is not recoverable. In the Sequence states panel you see that two pairs are in the Suspended state but the Recoverable value is false. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-74.

Figure 8-74 Pairs of suspended not synchronized session

When a session was suspended in an inconsistent state, you will see the following values in the Pairs pane:
- State - Suspended
- Recoverable - No
- Shadowing - No
- New - Yes

8.3 Using Command Line Interface (CLI) for replication


This section introduces the command-line interface (CLI) for TotalStorage Productivity Center for Replication. We focus on the main commands used for managing sessions. See IBM TotalStorage Productivity Center for Disk and Replication: Command-Line Interface User's Guide, SC30-4109, for a detailed description of all available commands. Using the CLI you can create and delete sessions, groups, pools, and related copy pairs, as well as run, suspend, and terminate replication sessions. You can use the CLI installed together with TotalStorage Productivity Center for Replication on the main server, or install the CLI on another machine and invoke commands remotely. See Installing CIM agent for ESS on page 124 for installation instructions.


repcli utility
To use the CLI you have to run the repcli utility. The default folder location of the CLI for Replication Manager is c:\Program Files\IBM\mdm\rm\rmcli. The utility can run commands in interactive mode, run a single command, or run a set of commands from a script.

The syntax of the repcli command is:

repcli [ { -ver | -overview | -script file_name | command | - } ] [ { -help | -h | -? } ]

where:
-ver
   Displays the current version.
-overview
   Displays overview information about the repcli utility, including command modes, standard command and listing parameters, syntax diagram conventions, and user assistance.
-script file_name
   Runs the set of command strings in the specified file outside of a repcli session. You must specify a file name. The format options specified using the setoutput command apply to all commands in the script. Output from successful commands routes to stdout; output from unsuccessful commands routes to stderr. If an error occurs while one of the commands in the script is running, the script exits at the point of failure and returns to the system prompt. Example:
repcli -script start_backup.scr

command_string
   Runs the specified command string outside of a repcli session. Example:
repcli lssess
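As a sketch of what a command file for script mode might contain, assuming one repcli command string per line: the file name start_backup.scr is the hypothetical name from the example above, and the session name FC_F20_800 is the one used in the examples later in this section. The -quiet flag suppresses the confirmation prompt so the script does not block waiting for input.

```
flashsess -quiet FC_F20_800
lssess FC_F20_800
```

You would then invoke the file with repcli -script start_backup.scr as shown above.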

CLI commands for Replication


Following is a list of all commands available in the CLI for TotalStorage Productivity Center for Replication V2.1:

approvecpset, chcpset, chgrp, chsess, chtgtpool, exit, flashsess, generatecpset, help, lscpset, lsdev, lsgrp, lslss, lspair, lspath, lsseq, lssess, lstgtpool, lsvol, mkcpset, mkgrp, mkpath, mksess, mktgtpool, quit, repcli, rmcpset, rmgrp, rmpath, rmsess, rmtgtpool, setattribute, setoutput, showattribute, showcpset, showdev, showgrp, showmessage, showsess, showtgtpool, startsess, stopflashsess, stopsess, suspendsess

In this section we focus on the commands most used for managing replication sessions:
- flashsess - start a Point-in-Time Copy session
- lspair - show information about the copy pairs of a session
- lsseq - show information about the sequences of a session
- lssess - show details about all or filtered sessions
- setoutput - change the default format for output
- showsess - show details about a certain session
- startsess - start a Continuous Synchronous Remote Copy session
- stopflashsess - terminate a Point-in-Time Copy session
- stopsess - terminate a Continuous Synchronous Remote Copy session
- suspendsess - suspend a Continuous Synchronous Remote Copy session

8.3.1 Session details


Before you start a session, check its status using the lssess or showsess command. Use the lssess command to get basic or detailed information about all sessions or to find sessions which fulfill specific criteria. The showsess command displays detailed information about a given session.

lssess command
lssess [ { -help|-h|-? } ] [ { -l (long)|-s (short) } ] [-fmt default|xml|delim|stanza] [-p on|off] [-delim char] [-hdr on|off] [-r #] [-v on|off] [-cptype flash|pprc] [-state defined|active] [-status norm|warn|sev|unknown] [-recov yes|no] [-shadow yes|no] [-err yes|no] [session_name ... | -]

-s
   An optional parameter that displays only the session name.
-l
   Displays more details: the default output plus approval type, pool criteria, copysets, non-approved, invalid, and description.
-cptype flash | pprc
   An optional parameter that displays only the sessions with the copy type specified.
-state defined | active
   Displays only the sessions that are in the state specified.
-status norm | warn | sev
   Displays only the sessions that have the status specified.
-recov yes | no


   An optional parameter that is set to yes or no to indicate whether the session can be considered recoverable, based on whether any sequences in the session can be considered recoverable.
-shadow yes | no
   An optional parameter that indicates whether any part of the session is shadowing data.
-err yes | no
   An optional parameter that shows sessions that have errors or no errors.
session_name [,...] | -
   An optional parameter that displays only the sessions with the session name specified. Separate multiple session names with a comma between each name. If no session name is specified, all sessions are displayed unless another filter is used.

In our example you should see the following results for the created sessions using the lssess command, as shown in Example 8-1.
Example 8-1   lssess - defined sessions

repcli> lssess
Name            Status  State   Group    Type  Recover Shadow Err
=================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc  No      No     No
FC_F20_800      warning Defined FC_src   flash No      No     No
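Because lssess output is fixed-format text, it is easy to post-process in a script. The following sketch filters captured lssess output (the sample from Example 8-1) for sessions whose Status is not normal; with the live CLI you would pipe repcli lssess into the same awk filter instead of using a captured string.

```shell
# Captured `repcli lssess` output (the two defined sessions from Example 8-1)
lssess_output='Name            Status  State   Group    Type  Recover Shadow Err
=================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc  No      No     No
FC_F20_800      warning Defined FC_src   flash No      No     No'

# Skip the header and separator lines, then print the name of every
# session whose Status column (field 2) is not "normal"
echo "$lssess_output" | awk 'NR > 2 && $2 != "normal" { print $1 }'
```

Both defined sessions are reported, since their status is warning until they are started and synchronized.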

showsess command
You can see details about a certain session using lssess with the -l parameter or the showsess command:

showsess session_name

This command shows the following information (like lssess -l):
- Name - session name.
- Copy type - Point-in-Time Copy or Continuous Synchronous Remote Copy.
- State - Defined or Active.
- Status - Unknown, Normal, Low, Medium, Severe, or Fatal.
- Group - name of the group of source volumes.
- Source volumes - shows the number of volumes in the group being replicated by this session.
- Approval status - Automatic or Manual.
- Copysets - shows the number of copysets that the session is managing.
- Non-approved - indicates the number of copysets that have yet to be verified.
- Invalid copysets - indicates the number of copysets that were determined to be invalid.
- Seq - valid sequence names are Remote Target for remote copy and Flashed Target for point-in-time copy. Use quotes around the entire flag, for example "Flashed Target:location=RTP".
- Pool Criteria - location exact name or filter.
- Shadow - yes or no. Indicates whether the session is shadowing data.
- Recov - yes or no. Indicates whether all pairs in the session are recoverable.

- Approve - yes or no. Indicates whether all copysets are approved.
- Description - user-defined session description.

In Example 8-2 you can see the result of the showsess command for the defined sessions. You can compare the parameters to the information you get using the graphical interface, as described in Managing a Continuous Synchronous Remote Copy on page 395.
Example 8-2   showsess - defined sessions

repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow No
Recover No
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli> showsess FC_F20_800
Name FC_F20_800
Type flash
State Defined
Status warning
Group FC_src
Source Volumes 4
Approval Status Automatic
Copysets 4
Non-approved 0
Invalid 0
Seq "Flashed Target"
Pool Criteria P%
Shadow No
Recover No
Err No
Approve Yes
Description Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.
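The label/value layout of showsess output also lends itself to scripted checks. This sketch pulls single-word attributes out of captured showsess output (an abbreviated sample based on Example 8-2); multi-word labels such as Source Volumes would need a slightly smarter match, and with the live CLI you would capture the output of repcli showsess <name> instead.

```shell
# Abbreviated `showsess` output for the defined PPRC session (Example 8-2)
showsess_output='Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Shadow No
Recover No'

# Print the value of a single-word attribute label
session_attr() {
  echo "$showsess_output" | awk -v key="$1" '$1 == key { print $2 }'
}

session_attr State     # prints: Defined
session_attr Recover   # prints: No
```

A batch job can use the Recover value returned this way as a guard before invoking any action that requires a synchronized session.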

8.3.2 Starting a session


This section shows the commands to start a replication session.

flashsess command
To run a created or terminated Point-in-Time Copy session, invoke the flashsess command:

flashsess [-quiet] session_name [. . .]

session_name
   Specifies the session name to be activated. Separate multiple session names with white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).
-quiet
   An optional parameter that turns off the confirmation prompt for this command.

Note: In a batch program use the -quiet parameter where available; otherwise the program will wait for your confirmation.

Example 8-3 shows an example of the flashsess command.
Example 8-3   flashsess command

repcli> flashsess -quiet FC_F20_800
AWN007110I Command completed successfully.

To start a created, terminated or suspended Continuous Synchronous Remote Copy session invoke the startsess command as shown in Example 8-4:
Example 8-4   startsess command

repcli> startsess PPRC_800_to_F20
AWN007100I Command completed successfully.

Example 8-5 shows the status of the started sessions. The Point-in-Time Copy completed successfully, which the normal status and the Yes value of the Recover parameter confirm. However, the Continuous Synchronous Remote Copy session is running (Active state) but not yet synchronized, which shows in the Recover and Status parameters.
Example 8-5   lssess - started sessions

repcli> lssess
Name            Status  State  Group    Type  Recover Shadow Err
================================================================
PPRC_800_to_F20 warning Active PPRC_src pprc  No      Yes    No
FC_F20_800      normal  Active FC_src   flash Yes     Yes    No
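Because startsess returns before synchronization completes, a batch procedure should not suspend the session until the Recover value reported by lssess or showsess becomes Yes (see the Important note earlier in this chapter). A minimal sketch of that guard logic, with a list of simulated poll results standing in for repeated repcli showsess calls:

```shell
# Simulated "Recover" values from successive polls; a real script would
# run `repcli showsess PPRC_800_to_F20` and extract the Recover line
# each iteration, with a sleep between polls.
polls="No No Yes"

suspended=no
for recover in $polls; do
  if [ "$recover" = "Yes" ]; then
    # All pairs are synchronized (Duplex), so suspending now leaves
    # consistent data on the remote site:
    #   repcli suspendsess PPRC_800_to_F20
    suspended=yes
    break
  fi
done
echo "suspended=$suspended"
```

The loop only reaches the suspend step once the session is recoverable, which is exactly the condition the Important note requires before a suspend.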

lsseq command
Use two additional commands, lsseq and lspair, to get more details about the current state of sessions.

lsseq [ { -l |-s } ] [-recov yes|no] [-shadow yes|no] [-err yes|no] session_name

-s
   An optional parameter that displays volumes only.
-l
   An optional parameter that displays all valid output. This is the default.
-recov yes | no
   An optional parameter that indicates whether any sequences in the session can be considered recoverable.
-shadow yes | no


   An optional parameter that indicates whether or not the sequence is shadowing (copying) the data.
-err yes | no
   An optional parameter that shows sessions that have errors or no errors.
session_name
   Specifies the session name.

In Example 8-6 you can find the time when the Point-in-Time Copy was run in the Recov Timestamp column. For the Continuous Synchronous Remote Copy session, one pair is synchronized, which shows in the Recov Pairs column. The Recov state will change to Yes when all pairs are synchronized.
Example 8-6 lsseq - started sessions
repcli> lsseq FC_F20_800
Name           Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
=====================================================================================================
Flashed Target Yes   No  Yes    0        4           4            4           2005/04/12 16:34:00 PDT
repcli>
repcli> lsseq PPRC_800_to_F20
Name          Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target No    No  Yes    0        1           2            2           n/a
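A sequence row from lsseq can drive a simple progress report. Note that the sequence name (Remote Target) is two words, so with default awk field splitting the Recov Pairs and Total Pairs values land in fields 7 and 9; the row below is the one captured in Example 8-6.

```shell
# Captured lsseq data row for the Remote Target sequence (Example 8-6):
# Name(2 words) Recov Err Shadow ErrVols RecovPairs ShadowPairs TotalPairs Timestamp
lsseq_row='Remote Target No No Yes 0 1 2 2 n/a'

# Report how many pairs are recoverable out of the total
echo "$lsseq_row" | awk '{ printf "%d of %d pairs recoverable\n", $7, $9 }'
```

For this row the report is "1 of 2 pairs recoverable", matching the partially synchronized state discussed above.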

lspair command
lspair [ { -l |-s } ] { -seq sequence_name|-cpset source_vol_id } [-state defined|active|duplex|suspended|synch|flashed] [-recov yes|no] [-shadow yes|no] [-new yes|no] [-err yes|no] session_name | -

You can use the lspair command to list the source and target of the copy service pairs and their status.

-s
   An optional parameter that displays information about pairs only.
-l
   An optional parameter that displays the default output, including pairs.
-seq sequence_name
   Displays only pairs of the sequence name specified. Mutually exclusive with -cpset.
-cpset source_vol_id
   Specifies the source volume ID of the copy set on which you want a list of pairs. Mutually exclusive with -seq.
-state defined | active | duplex | suspended | synch | flashed
   An optional parameter that displays only pairs in the state specified.
-recov yes | no
   An optional parameter that displays only pairs in the corresponding recoverable state.
-shadow yes | no
   An optional parameter that displays only pairs in the corresponding shadowing state.
-new yes | no
   An optional parameter that displays only pairs that are in the new state specified.

-err yes | no
   An optional parameter that displays only pairs that are in the error state.
session_name
   The session name by which the pairs are identified.

In Example 8-7 you can see details about the volume pairs. For the Continuous Synchronous Remote Copy session, one pair of volumes is synchronized, which shows the Duplex state, but the second one is still synchronizing.
Example 8-7 lspair - started sessions
repcli> lspair -seq 'Flashed Target' FC_F20_800
Source Target State Recov Shadow New Copyset Timestamp Last result
====================================================================================================================================
ESS:2105.16603:VOL:1702 ESS:2105.16603:VOL:1706 Flashed Yes Yes No ESS:2105.16603:VOL:1702 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.16603:VOL:1703 ESS:2105.16603:VOL:1705 Flashed Yes Yes No ESS:2105.16603:VOL:1703 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1300 ESS:2105.22513:VOL:1305 Flashed Yes Yes No ESS:2105.22513:VOL:1300 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1301 ESS:2105.22513:VOL:1304 Flashed Yes Yes No ESS:2105.22513:VOL:1301 2005/04/12 16:34:00 PDT IWNR2016I
repcli>
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
============================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes Yes No ESS:2105.22513:VOL:1302 n/a IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 SYNCHRONIZING No Yes Yes ESS:2105.22513:VOL:1303 n/a IWNR2011I

When all volume pairs of the Continuous Synchronous Remote Copy session are synchronized, you should see results as shown in Example 8-8.
Example 8-8 Duplex state of Continuous Synchronous Remote Copy session
repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Active
Status normal
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow Yes
Recover Yes
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli>
repcli> lsseq PPRC_800_to_F20
Name Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target Yes No Yes 0 2 2 2 n/a
repcli>
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
=====================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes Yes No ESS:2105.22513:VOL:1302 n/a IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Duplex Yes Yes No ESS:2105.22513:VOL:1303 n/a IWNR2011I
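When the synchronization check is scripted rather than done by eye, the State column of the lspair output can be tested programmatically. The following Python sketch is an illustration, not part of the product: it assumes lspair default tabular output captured to a string (as in the examples above), where data rows begin with an ESS volume ID and the third whitespace-separated field is the State column.

```python
def all_pairs_duplex(lspair_output: str) -> bool:
    """Return True when every data row of lspair output shows Duplex state.

    Data rows are recognized by their leading ESS volume ID, for example
    ESS:2105.22513:VOL:1302; header and separator lines are skipped.
    """
    duplex = True
    for line in lspair_output.splitlines():
        fields = line.split()
        # Only rows that start with a volume ID carry pair state.
        if fields and fields[0].startswith("ESS:"):
            if fields[2] != "Duplex":
                duplex = False
    return duplex
```

A wrapper script could capture the lspair output periodically and proceed (for example, suspend the session for a backup) only once this check succeeds.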

8.3.3 Suspending a session


TotalStorage Productivity Center for Replication allows you to suspend a Continuous Synchronous Remote Copy session when you require a consistent state of the data at the remote site, which can be
Managing Disk Subsystems using IBM TotalStorage Productivity Center

used, for example, to take a backup. All changes are recorded, and when you restart a suspended session, only the modified data is copied to the remote volumes to return to the synchronized state.

suspendsess command
You can use suspendsess to suspend a Continuous Synchronous Remote Copy session. To restart a session, invoke the startsess command.

Note: To keep data consistency, use the -type consist parameter with the suspendsess command.

suspendsess [ { -help|-h|-? } ] [-quiet] -type consist|immed session_name ... | -
-quiet
An optional parameter that turns off the confirmation prompt for this command.
-type consist | immed
Specifies the type of suspend operation. Specify consist to freeze a PPRC session, or specify immed (for immediately) to stop a session.
session_name [...] | -
Specifies the session name to be suspended. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).

Example 8-9 shows the suspendsess command.
Example 8-9 suspendsess

repcli> suspendsess -quiet -type consist PPRC_800_to_F20
AWN007140I Command completed successfully.

When a Continuous Synchronous Remote Copy session is suspended, you should see results as shown in Example 8-10 from the lssess command. Notice that the session is recoverable and is not shadowing.
Example 8-10 lssess - suspended session

repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes No No

lspair command
Invoke the lspair command to see that all volume pairs are suspended and the time when the session was frozen, as shown in Example 8-11.
Example 8-11 lspair - suspended session
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
======================================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Suspended Yes No No ESS:2105.22513:VOL:1302 2005/04/12 19:43:25 PDT IWNR2015I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Suspended Yes No No ESS:2105.22513:VOL:1303 2005/04/12 19:43:25 PDT IWNR2015I

8.3.4 Terminating a session


This section details the commands used to terminate a replication session.

stopflashsess command
You can use the stopflashsess command at any point during the life of a Point-in-Time Copy session once that session is in the active state. This command withdraws all relationships between volumes on the storage subsystem. Example 8-12 shows an example of the stopflashsess command.
Example 8-12 stopflashsess

repcli> stopflashsess -quiet FC_F20_800
AWN007150I Command completed successfully.
repcli> lssess FC_F20_800
Name Status State Group Type Recover Shadow Err
==========================================================
FC_F20_800 warning Defined FC_src flash No No No
repcli> showsess FC_F20_800
Name FC_F20_800
Type flash
State Defined
Status warning
Group FC_src
Source Volumes 4
Approval Status Automatic
Copysets 4
Non-approved 0
Invalid 0
Seq "Flashed Target"
Pool Criteria P%
Shadow No
Recover No
Err No
Approve Yes
Description Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.

stopsess command
To stop a Continuous Synchronous Remote Copy session, you can use the stopsess command at any point during the life of a session once that session is in the active state. This command withdraws the relationship on the hardware.

stopsess [-quiet] session_name [. . .]
-quiet
An optional parameter that turns off the confirmation prompt for this command.
session_name [...] | -
Specifies the session name to be stopped. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).

Example 8-13 shows an example of the stopsess command.
Example 8-13 stopsess

repcli> stopsess -quiet PPRC_800_to_F20
AWN007120I Command completed successfully.
repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
================================================================


PPRC_800_to_F20 warning Defined PPRC_src pprc No No No
repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow No
Recover No
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.

Output format
This section details the commands that control the output format of repcli commands.

setoutput command
You can use the setoutput command to set the output format for repcli commands or, when invoked with no parameters, to display the current output settings. The output format set by this command remains in effect for the duration of a command session or until the options are reset.

setoutput [ { -help|-h|-? } ] [-p on|off] [-r #] [-fmt default|xml|delim|stanza] [-delim character] [-hdr on|off] [-v on|off]

-? | -h | -help
Displays a detailed description of this command, including syntax, parameter descriptions, and examples. If you specify a help option, all other command options are ignored.
-fmt
Specifies the format of the output. You can specify one of the following values:
default
Specifies that output is displayed in a tabular format using spaces as the delimiter between the columns. This is the default value.
delim
Specifies that output is displayed in a tabular format using the specified character to separate the columns. If you use a shell metacharacter (for example, * or \t) as the delimiting character, enclose the character in single quotation marks (') or double quotation marks ("). A blank space is not a valid character.
xml
Specifies that output is displayed in XML format.
stanza
Specifies that output is displayed in rows.

Chapter 8. TotalStorage Productivity Center for Replication use

417

-delim character
Specifies the character used to separate the columns when the -fmt delim parameter is used.
-p
Specifies whether to display one page of text at a time or all text at once.
off
Displays all text at one time. This is the default value when the repcli command is run in single-shot mode.
on
Displays one page of text at a time. Pressing any key displays the next page. This is the default value when the repcli command is run in interactive mode.
-hdr
Specifies whether to display the table header.
on
Displays the table header. This is the default value.
off
Does not display the table header.
-r number
Specifies the number of rows per page to display when the -p parameter is on. The default is 24 rows. You can specify a value from 1 to 100.
-v
Specifies whether to enable verbose mode.
off
Disables verbose mode. This is the default value.
on
Enables verbose mode.

Example 8-14 shows the current output settings.
Example 8-14 Default output settings

repcli> setoutput
Paging Rows Format Headers Verbose Banner
==========================================
On 22 Default On Off Off

If you want to use an output format other than the default more than once per repcli session, use the setoutput command. There are also output parameters for the following commands:
- lssess
- lspair
- lsseq
The output parameters for these commands are:
[-fmt default|xml|delim|stanza] [-delim character] [-hdr on|off] [-v on|off]


The syntax is the same as for the setoutput command. See the different output formats for the lssess command in Example 8-15, Example 8-16, Example 8-17, and Example 8-18 on page 420.
Example 8-15 default output format

repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes Yes No

Example 8-16 XML output format

repcli> lssess -fmt xml PPRC_800_to_F20
<IRETURNVALUE>
<INSTANCE CLASSNAME="RM_Session"><PROPERTY NAME="session_name" TYPE="string"><VALUE TYPE="string">PPRC_800_to_F20</VALUE></PROPERTY><PROPERTY NAME="cptype" TYPE="string"><VALUE TYPE="string">pprc</VALUE></PROPERTY><PROPERTY NAME="state" TYPE="string"><VALUE TYPE="string">Active</VALUE></PROPERTY><PROPERTY NAME="status" TYPE="string"><VALUE TYPE="string">normal</VALUE></PROPERTY><PROPERTY NAME="srcgrp" TYPE="string"><VALUE TYPE="string">PPRC_src</VALUE></PROPERTY><PROPERTY NAME="shadow" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="recov" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="err" TYPE="string"><VALUE TYPE="string">No</VALUE></PROPERTY></INSTANCE>
</IRETURNVALUE>
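The XML format is the easiest to consume from another program. As a sketch (not product code), the PROPERTY/VALUE structure shown in Example 8-16 can be parsed with Python's standard ElementTree module; the sample string below is an abbreviated, hypothetical two-property instance:

```python
import xml.etree.ElementTree as ET

def parse_rm_session(xml_text: str) -> dict:
    """Extract PROPERTY name/value pairs from lssess -fmt xml output."""
    root = ET.fromstring(xml_text)
    props = {}
    for prop in root.iter("PROPERTY"):
        value = prop.find("VALUE")
        # VALUE may be absent for an unset property; record None in that case.
        props[prop.get("NAME")] = value.text if value is not None else None
    return props

# Abbreviated sample, shaped like the full Example 8-16 output.
sample = (
    '<IRETURNVALUE><INSTANCE CLASSNAME="RM_Session">'
    '<PROPERTY NAME="session_name" TYPE="string">'
    '<VALUE TYPE="string">PPRC_800_to_F20</VALUE></PROPERTY>'
    '<PROPERTY NAME="state" TYPE="string">'
    '<VALUE TYPE="string">Active</VALUE></PROPERTY>'
    '</INSTANCE></IRETURNVALUE>'
)
session = parse_rm_session(sample)
```

After parsing, session["state"] yields "Active"; the same function should work unchanged on the full property list.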

Example 8-17 stanza output format

repcli> lssess -fmt stanza PPRC_800_to_F20
Name PPRC_800_to_F20
Status normal
State Active
Group PPRC_src
Type pprc
Recover Yes
Shadow Yes
Err No
repcli> lssess -l -fmt stanza PPRC_800_to_F20
Name PPRC_800_to_F20
Status normal
State Active
Group PPRC_src
Type pprc
Recover Yes
Shadow Yes
Err No
Approval Status Automatic
Pool Criteria F20
Copysets 2
Non-approved 0
Invalid 0
Description Remote copy of 2 volumes from ESS 800 to F20
Seq "Remote Target"
Source Volumes 2
Approve Yes


Example 8-18 delim output format

repcli> lssess -fmt delim -delim ',' PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err
===============================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No

Note: Use the lssess -l command instead of showsess in batch programs, because showsess produces results in only one format, stanza. See Example 8-19.

Example 8-19 shows a sample lssess -l command that can easily be used in a batch program.
Example 8-19 Using lssess with -l (long) parameter in delim format
repcli> lssess -l -fmt delim PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err,Approval Status,Pool Criteria,Copysets,Non-approved,Invalid,Description,Seq,Source Volumes,Approve
==================================================================================================================================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No,Automatic,F20,2,0,0,Remote copy of 2 volumes from ESS 800 to F20,"Remote Target",2,Yes
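As an illustration of why the delimited long format suits batch programs, the header and data row of Example 8-19 can be zipped into a field dictionary with a few lines of Python. This sketch assumes the repcli> prompt line has already been stripped and that no field value contains the delimiter (which holds for this output):

```python
def parse_lssess_delim(output: str) -> dict:
    """Map column headers to values for one-row `lssess -l -fmt delim` output.

    Expects the header line, the '=' separator line, and one data row;
    the separator line is discarded.
    """
    lines = [ln for ln in output.splitlines() if ln and not ln.startswith("=")]
    headers = lines[0].split(",")
    values = lines[1].split(",")
    return dict(zip(headers, values))
```

A batch program can then branch on fields such as the session State or Copysets count without any column-position arithmetic.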


Chapter 9.

Problem determination
This chapter provides information that will aid in troubleshooting TotalStorage Productivity Center installation and configuration issues. In this chapter, we describe:
- Checking the TotalStorage Productivity Center host, including:
  - IBM Director logfiles
  - IBM DB2 database logfiles, health monitoring, and table content checking
  - IBM WebSphere Administrative Console message browser usage, and how to enable tracing
- Checking the CIMOM server (SLP host)
- Tips and hints for validating that the CIM agents are running on your storage devices
- Locations for relevant logfiles on the SLP host/CIMOM server

Copyright IBM Corp. 2004, 2005. All rights reserved.

421

9.1 Troubleshooting tips: Host configuration


In this section, we address methods of tracing configuration issues on your TotalStorage Productivity Center host.

9.1.1 IBM Director logfiles


There are extensive logging capabilities within the IBM Director framework that can be used to isolate issues with TotalStorage Productivity Center for Disk CIMOM discovery or other events. To view the IBM Director event logs, click the Events task in the right-hand column to expand the hierarchy of available event log filters (all events, warning, critical, fatal). To view all events, double-click All Events. A second window appears and populates with the events that have been logged in the Director console in the past 24 hours.

Figure 9-1 IBM Director Event Log

In our case, note the large quantity of "User ID/Password Incorrect from Server on CIMOM" messages. For us, these messages were indicative of CIM agents (not ours) residing in the local subnet that were not configured to accept our TotalStorage Productivity Center for Disk administrative username and password (superuser/password). The Director console reports that it has been rejected when accessing such a CIM agent.

9.1.2 Using Event Action Plans


IBM Director can produce a significant quantity of events in a very brief period of time. This can make searching for specific events difficult. Director supports the creation of Event Action Plans to filter events by any number of categories. Further, you can apply actions to specific events, for example, to generate new logfile outputs, to send messages to the console, or to e-mail server administrators. In our example, we create an action plan to filter the event log for the discovery of new CIMOMs. Aside from filtering the whole event log for these types of messages, we create an action to broadcast a message to our team that a new CIMOM has been detected by TotalStorage Productivity Center for Disk. See Event Action Plan Builder on page 215 for detailed information.

9.1.3 Restricting discovery scope in TotalStorage Productivity Center


Refer to the tips provided in SLP configuration recommendation on page 39 to learn more about how to restrict the scope of the discovery on the TotalStorage Productivity Center server. This should result in improved performance (for example, TotalStorage Productivity Center for Disk is not receiving authentication errors from the misconfigured CIMs).

9.1.4 Following discovery using Windows raswatch utility


To follow the process of device discovery, you can use raswatch, an IBM Director executable that can be accessed from the Windows 2000 command prompt by running:
raswatch -dev_mgr -high

Figure 9-2 is an example of the raswatch during the trace of TotalStorage Productivity Center discovery.

Figure 9-2 Using raswatch to trace TotalStorage Productivity Center discovery

The raswatch output can be very verbose and will scroll off the screen very quickly. Consider logging the output to a file using raswatch -dev_mgr -high > c:\testlog.txt. This allows you to open the raswatch output file in the Notepad editor and search for IP address or hostname strings that validate the TotalStorage Productivity Center discovery process.
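Searching the captured raswatch output for your devices can itself be scripted. The following is a minimal, hypothetical Python sketch: the logfile name follows the redirection above, and the line contents are simply whatever raswatch emitted.

```python
def grep_discovery(log_text: str, patterns: list) -> list:
    """Return the log lines that mention any of the given IPs or hostnames."""
    hits = []
    for line in log_text.splitlines():
        if any(p in line for p in patterns):
            hits.append(line)
    return hits

# Hypothetical usage against the captured raswatch output:
# with open(r"c:\testlog.txt") as f:
#     for line in grep_discovery(f.read(), ["9.11.192.145", "esscim1"]):
#         print(line)
```

The addresses shown in the commented usage are placeholders; substitute the CIMOM IPs and hostnames from your own environment.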

9.1.5 DB2 database checking


To validate that the DB2 databases are functioning as they should, we make use of the DB2 UDB tools to check overall database health, and to confirm that the tables we expect to be populated do get populated, for example, following a data collection task. The DB2 UDB tool that can be used to review the logfiles is called the Journal. To open the Journal utility, click Start, Programs, IBM DB2, General Administration Tools, Journal. You should see a window like Figure 9-3 on page 424.


Figure 9-3 DB2 Journal message viewer

Additional information about viewing and managing events logged in the DB2 journal can be found in the help menus. By default, DB2 instance health checking is disabled. It is advisable to enable the health monitor: even the default alert thresholds that take effect when the health monitor is enabled give the TotalStorage Productivity Center administrator at least some insight into issues with the DB2 instance. You can open the DB2 Health Center by clicking Start, Programs, IBM DB2, Monitoring Tools, Health Center.


Figure 9-4 DB2 Health Center panel

Remember that by default, the Health Center monitoring is disabled. You can tell by the green circle on our DB2 instance and associated databases that we have enabled monitoring. At present we have no issues that have generated any alerts. Figure 9-5 on page 426 shows the typical default threshold settings for each of the TotalStorage Productivity Center for Disk databases.


Figure 9-5 DB2 Object Health Indicator settings panel

Aside from viewing the DB2 events, or ensuring that event monitoring is enabled, we can also review the contents of specific database tables to ensure that we are receiving data and that the appropriate tablespaces are being populated. For example, if we have just performed an ESS data collection task, we should have entries in the following three tables in the PMDATA database:
- VPCCH - Volume data
- VPCRK - Array data
- VPCLUS - Cluster data
We can review the contents of these tables from within the DB2 Control Center by navigating through the tree on the left-hand side, from the system, to the instance, to the database and its tables, as shown in Figure 9-6 on page 427.
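A quick alternative to browsing the tables is to count their rows from the DB2 command line processor. The Python sketch below only builds the db2 CLP command strings for the three tables listed above; it does not execute them, and the PMDATA schema qualifier is an assumption you should verify on your own system.

```python
# Tables populated by an ESS data collection task, per the text above.
PMDATA_TABLES = {
    "VPCCH": "Volume data",
    "VPCRK": "Array data",
    "VPCLUS": "Cluster data",
}

def row_count_commands(schema: str = "PMDATA") -> list:
    """Build db2 CLP statements that count the rows in each collection table."""
    return [
        'db2 "SELECT COUNT(*) FROM %s.%s"' % (schema, table)
        for table in PMDATA_TABLES
    ]
```

Running each generated statement in a DB2 command window after a data collection task should return a non-zero count for a populated table.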


Figure 9-6 Viewing the database tables

To view the contents of a specific table, right-click the table you want to view and select Sample Contents. You should see a table like the one in Figure 9-7 on page 428.


Figure 9-7 Sample database contents

The presence of these rows in this table tells us that we have successfully performed a data collection task against the ESS with serial number 22219. We should also see data in the other tables cited above.

9.1.6 IBM WebSphere tracing and logfile browsing


The first WebSphere logfile of interest is the startServer.log. This file is updated each time you perform the startup of the WebSphere application server. Figure 9-8 on page 429 shows the WebSphere start logfile.


Figure 9-8 WebSphere Application Server server start logfile

A considerable amount of application level information can be obtained from within the IBM WebSphere framework, using the Administrative Console.

9.1.7 SLP and CIM Agent problem determination


We have already outlined some procedures for ensuring that the CIM agents are correctly configured in your TotalStorage Productivity Center environment in Chapter 4, CIMOM installation and configuration on page 119.

Configuration guideline summary


1. It is advisable that the TotalStorage Productivity Center for Disk host and the SLP agent host (if they are on different servers) reside in their own subnet, isolated from other devices. This reduces the possibility that network traffic generated by CIM agents outside of your control will impact your TotalStorage Productivity Center for Disk host.
2. Make a list of the CIMOMs you intend to have registered on your SLP. The list should include:
   - IP address of the CIM agent
   - Type and version of the CIM agent (SAN Volume Controller, ESS, FAStT)
   This list can be used later as a starting point for the creation of your slp.reg file (Persistency of SLP registration on page 175).
3. Test that the CIM agents you intend to register with your SLP host are active (Confirming the ESS CIMOM is available on page 148).
4. Ensure that the username and password on each CIM agent match your TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication username and password. For ESS devices this should always be the case, since we register each ESS in the CIM agent using the setdevice command:
   addess <ip address of ESS> <specialist username> <specialist password>
   For SVC, the TotalStorage Productivity Center for Disk superuser name and password must be synchronized by creating the same account credentials in the SAN Volume Controller Console GUI.


5. Follow the TotalStorage Productivity Center for Disk discovery process using raswatch (Figure 9-2 on page 423) for evidence that the new CIM agent:
   a. Has been detected
   b. Has allowed TotalStorage Productivity Center for Disk to authenticate correctly
   The same activities may be traced in the IBM Director logfiles (Figure 9-1 on page 422).

9.1.8 Enabling SLP tracing


It is possible to modify the slp.conf file to enable verbose tracing of SLP registrations and other events of interest during problem determination. The following lines from the slp.conf file can be modified to enable SLP logging. Simply remove the semicolon from the ;net.slp.traceMsg = true line, and restart the SLP service to put the change into effect.

#----------------------------------------------------------------------------
# Tracing and Logging
#----------------------------------------------------------------------------

# A boolean controlling printing of messages about traffic with DAs.
# Default is false.
;net.slp.traceDATraffic = true

# A boolean controlling printing of details on SLP messages. The fields in
# all incoming messages and outgoing replies are printed. Default is false.
;net.slp.traceMsg = true

# A boolean controlling printing details when a SLP message is dropped for
# any reason. Default is false.
;net.slp.traceDrop = true

# A boolean controlling dumps of all registered services upon registration
# and deregistration. If true, the contents of the DA or SA server are
# dumped after a registration or deregistration occurs. Default is false.
;net.slp.traceReg = true


Figure 9-9 Enabling SLP tracing in slp.conf

The more detailed output from SLP tracing is written to the SLP logfile, located in C:\WINNT\slpd.log.

Important: After the required tracing information has been gathered, you should disable SLP tracing. The logfile can become very large.
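When tracing is toggled often, editing slp.conf by hand becomes tedious. The following Python sketch is a hypothetical helper (not shipped with the product) that enables or disables one of the trace settings shown in Figure 9-9 by removing or re-adding the leading semicolon; remember to restart the SLP service after writing the file back.

```python
def set_slp_trace(conf_text: str, setting: str, enabled: bool) -> str:
    """Enable or disable an slp.conf trace setting such as net.slp.traceMsg.

    Enabling strips the leading ';' from the matching line; disabling
    re-adds it. All other lines pass through unchanged.
    """
    out = []
    for line in conf_text.splitlines():
        stripped = line.lstrip(";")
        if stripped.startswith(setting):
            out.append(stripped if enabled else ";" + stripped)
        else:
            out.append(line)
    return "\n".join(out)
```

For example, set_slp_trace(text, "net.slp.traceMsg", True) activates message tracing, and calling it again with False restores the commented-out line.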


9.1.9 ESS registration


Any time the ESS Specialist username or password is changed, the ESS registration with the relevant CIMOM must be updated to reflect the change.

Important: If the ESS CIMOM registration change is not made, TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication functions such as data collection will fail.

9.1.10 Viewing Event entries


The Event Log window enables you to perform the following tasks:

Viewing All Logged Events


When you start the Event Log without specifying a filter or managed system, up to the last 100 events received over the last 24 hours are displayed. To view all logged events, double-click the Event Log in the Tasks pane of the Director Console or right-click and select the Open option. The Event Log is started and displays all logged events.

Viewing Events by Filter Characteristics


Director supplies predefined filters, and you can create user-defined filters to reduce the number of displayed events to only those that meet a filtering criterion. To view a filtered list of events from all managed systems, click the + sign next to the Event Log icon in the Tasks pane to display the event filters (see Figure 9-10), then double-click the event filter you want to apply (see Figure 9-11 on page 432).

Note: The day and time of day characteristics of a filter do not apply when used in this context.

Figure 9-10 IBM Director Console - Event Log


Figure 9-11 Minor Events Filter

Viewing Events by System


IBM Director supplies predefined system groups, and you can create user-defined groups to limit the number of displayed events to only those that meet a filtering criterion and originate from a specified managed system or group of systems. To view a filtered list of events from a single managed system or group, either drag the icon of the managed system or group from the Groups pane onto the event filter in the Tasks pane, or drag the event filter from the Tasks pane onto the managed system or group in the Groups pane. For example, dragging the Harmless Events filter onto the IBM Director Systems group produces a window similar to the one in Figure 9-12 on page 433.


Figure 9-12 Harmless events

Deleting Events from the Event Log


To delete an event entry while viewing events in the Event Log, right-click the entry to display the context menu, then select Delete. You can also highlight one or more events and select the trash can icon, or use the Edit → Delete menu item.

Creating an Event Filter for a Selected Event


To create a filter for a specific event, right-click the event, then select Create a Filter from the context menu (see Figure 9-13). The Event Filter Builder dialog is displayed.

Figure 9-13 Create specific filter


Changing the number of entries viewed in the Event Log


The number of event entries that are displayed can be controlled by specifying:
- Total number of entries displayed
- Time interval for entries displayed
By default, the Event Log displays the last 100 events over the last 24 hours. To change the number of entries displayed, select Options → Set Log View Count from the menu bar. The maximum number of event entries that can be viewed is equal to the maximum size of the event log. To change the time interval for entries displayed, select Options → Set Time Range from the menu bar.

9.2 Replication Manager problem determination


In this section, we present some methods and log sources for troubleshooting the Replication Manager component. Debugging almost any Replication Manager problem requires the WebSphere Application Server trace, and most problems also require an ICAT trace at a minimum. In general, RM provides two major functions:
- Setting up a copy session
- Controlling a copy session
The majority of Replication Manager problems are symptoms of a problem with the underlying interface to the hardware. Replication Manager communicates with ICAT, which for ESS communicates with ESSNI, which talks to Copy Services, which talks to the actual ESS microcode. Any breakdown in communications along this path causes Replication Manager to behave incorrectly or not function at all. These are the major categories of interface problems:

1. Lack of state changes (indications) coming from the ESS to Replication Manager. Each time a copy relationship on a volume changes state, Replication Manager is notified using an indication. An indication is a CIM event which is delivered asynchronously from the actual underlying event. Replication Manager uses indications to update its knowledge of the physical copy relationships as they change dynamically. Loss of indications causes Replication Manager to appear to be stuck or not to have worked at all. If indications do not arrive after the Start operation is performed on a session, then the session stays in defined state and does not appear to be operating. Another symptom of loss of indications is that the volume relationship states as reported by the Copy Services application on the ESS do not match the states reported by Replication Manager.

2. Unexpected freeze (suspend) operations can be seen when Replication Manager loses connection to the ICAT, or ICAT loses connection to the ESS, for longer than 90 seconds. Replication Manager periodically checks that all ESSs are alive, and initiates a freeze when there is an active session on an ESS and that ESS does not respond to a presence check. Upon a timeout when waiting for ESS status, Replication Manager performs a freeze operation, since the timeout could be the first symptom of a disaster.

3. Extremely long durations can be seen to display the session properties panel or the path status panel. Replication Manager issues hardware queries to display these panels. Under some circumstances, when the ICAT or ESSNI does not respond to the query, the user will see an extremely long response time, or the user interface may hang completely.

9.2.1 Diagnosing an indications problem


In the WebSphere Application Server (WAS) trace.log file, look for trace entries similar to this example (the actual trace entry is on one line). The token (HWLAYER) identifies which Replication Manager subcomponent wrote this trace entry. In this example, the NIPPRCLogicalPathEvent item is the specific indication which was received.
[5/28/04 9:55:22:609 MST] 68e5f2b1 HWLAYER > com.ibm.storage.hw.ess.cim.ESSIndicationHandler handleIndication(CIMEvent) (9.11.192.145) CIM_ProcessIndication: NIPPRCLogicalPathEvent has occured

Seeing trace entries of this type shows that Replication Manager is receiving indications properly. If Replication Manager is not receiving indications, then no entries of this type will be seen surrounding the event in question. If Replication Manager is not seeing indications, then usually one or more layers of the software stack need to be restarted.
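When a trace.log is large, counting the indication entries around the failing operation can quickly show whether indications stopped arriving. A small Python sketch, keyed on the CIM_ProcessIndication text shown in the sample entry above:

```python
def count_indications(trace_text: str) -> int:
    """Count WebSphere trace entries that record a received CIM indication."""
    return sum(
        1
        for line in trace_text.splitlines()
        if "CIM_ProcessIndication" in line
    )
```

A count of zero around the event in question points at the loss-of-indications problem described above, in which case one or more layers of the software stack usually need to be restarted.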

9.2.2 Restarting the replication environment


An unknown hardware layer error message might appear immediately after installing Replication Manager. You might receive an unknown hardware layer message on the first Start operation for a Continuous Remote Copy session, or first Flash operation for a FlashCopy session, after installing Replication Manager. If this occurs, restart IBM Director and try the operation again. If the problem is still not resolved after restarting all of the system components in turn, then capture problem determination information including the ICAT logs, the WebSphere Application Server logs, and a state save on the ESS.

9.3 Enabling trace logging


To allow logging of the console output, you need to set up the Director Stdout Logging function. This is especially important for problem determination using the GUI. On the Windows platform, follow these steps:
1. Select Start → Run and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify the LogOutput value. Set it equal to 1.
4. Reboot the server.
The output log location from the instructions above is X:\Program Files\IBM\Director\log (where X is the drive where the Director application was installed).
On the Linux platform, the TWGRas.properties file turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash:
$ export TWG_DEBUG_CONSOLE=true
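Equivalently, the Windows registry change above can be applied by importing a registry file. This is a sketch only: it assumes LogOutput is stored as a DWORD value; if your installation stores it as a string value, set it to "1" instead.

```reg
Windows Registry Editor Version 5.00

; Enable Director Stdout Logging (assumes LogOutput is a DWORD value)
[HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion]
"LogOutput"=dword:00000001
```

As with the manual edit, the server must still be rebooted for the change to take effect.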

9.3.1 Enabling WebSphere Application Server trace


It is very useful to enable the WebSphere Application Server trace tool when troubleshooting WebSphere Application Server related problems. This section details the steps to make the most use of this tool. To enable the traces of the WebSphere-based components of

Chapter 9. Problem determination


TotalStorage Productivity Center, the corresponding logging and trace settings must be configured with the WebSphere Administrative Console. Tracing is disabled by default. Use the following steps to change the logging state:
1. Launch the WebSphere Application Server Administrative Console at the following URL:
http://servername:9090/admin

This will redirect the browser to the secure login page, and afterward the login goes to the WebSphere Application Server Administrative root page (see Figure 9-14):

Figure 9-14 WebSphere Application Server Admin root URL example

2. Click Servers (see Figure 9-15 on page 437).


Figure 9-15 WebSphere Application Server trace tool - select servers example

3. Click Application Servers (see Figure 9-16 on page 438).


Figure 9-16 WebSphere Application Server trace tool - select application servers example

4. Click server1 (see Figure 9-17 on page 439).


Figure 9-17 WebSphere Application Server trace tool - select server 1 example

5. Click Logging and Tracing (see Figure 9-18 on page 440).


Figure 9-18 WebSphere Application Server trace tool - select logging and tracing example

6. Click Diagnostic Trace (see Figure 9-19 on page 441).


Figure 9-19 WebSphere Application Server trace tool - select diagnostic trace example

7. Select the Enable trace check box. Enter the required trace entries into the trace specification box, separated by colons. Insert all of the trace specifications in the following table which might be used by TotalStorage Productivity Center. The table provides the default TotalStorage Productivity Center for Replication specifications (see Table 9-1 on page 442) for the trace.


Table 9-1 MDM default trace specifications

Component                                             Default
---------------------------------------------------   --------------------------------
General format                                        Comp=level=state, where:
                                                        Comp is the component to trace
                                                        level is the amount of trace
                                                        state* is enabled or disabled
Replication Manager Element Catalog                   ELEMCAT=all=enabled
Replication Manager Hardware layer                    HWLAYER=all=enabled
Replication Manager Session Manager                   REPMGR=all=enabled
Replication Manager integration with Device Manager   DMINT=all=enabled

*This is the value which should be set all the time, unless otherwise specified.

For TotalStorage Productivity Center for Replication, the full setting is:
REPMGR=all=enabled:HWLAYER=all=enabled:DMINT=all=enabled:ELEMCAT=all=enabled

The remaining settings on this page control how much trace is captured before it is overwritten. The best settings depend on the actual server configuration, but here are some guidelines:
- Always choose the setting which sends the trace to a file.
- 20 MB is a good size to use for each trace file.
- Enable at least one historical file. The more history is available, the better, since many TotalStorage Productivity Center tasks are long-running and may produce a lot of trace data.
- Be sure to leave sufficient free space. The total trace will take up the number of historical files plus 1, multiplied by the size of each file.

Recommended settings:
- 20 MB per file
- 10 historical files (unless there is not enough disk space on the server)

Tip: The default file name is ${SERVER_LOG_ROOT}/trace.log and it is best to keep this default whenever possible. This ensures that the automated tools that collect log and trace information can find the files. If the log files need to be written to a different location, for example to a different disk to manage free disk space, it is better to change the environment variable SERVER_LOG_ROOT. Refer to the WebSphere Application Server documentation for information about how to change this environment variable.

Figure 9-20 on page 443 is a sample window showing several values that have been changed.
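The colon-separated trace-specification format, and the disk-space rule above (historical files plus 1, multiplied by the file size), can be sketched as follows. The helper functions are our own illustration, not part of any IBM tool:

```python
# Build a WebSphere trace specification from Comp=level=state triplets,
# joined with colons, following the format described in Table 9-1.
def build_trace_spec(components, level="all", state="enabled"):
    return ":".join("%s=%s=%s" % (c, level, state) for c in components)

# The full TotalStorage Productivity Center for Replication setting:
spec = build_trace_spec(["REPMGR", "HWLAYER", "DMINT", "ELEMCAT"])

# Worst-case trace disk usage: (historical files + 1) * size of each file.
def trace_disk_mb(size_mb=20, historical_files=10):
    return (historical_files + 1) * size_mb
```

With the recommended settings (20 MB per file, 10 historical files), `trace_disk_mb()` evaluates to 220 MB of disk space to reserve.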


Figure 9-20 WebSphere Application Server trace tool - several trace values changed

8. After making all changes, click OK, and then click Save to save the changes.
Tip: To change trace settings immediately without restarting WebSphere Application Server, make the equivalent change in the Runtime tab instead of the Configuration tab. When Apply or OK is clicked from the Runtime tab, the change takes effect immediately.


9.4 Enabling trace logging


To allow logging of the console output, you need to set up the Director Stdout Logging function. This is especially important for problem determination with the GUI. On the Windows platform, follow these steps:
1. Select Start → Run and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify the LogOutput value. Set it equal to 1.
4. Reboot the server.
The output log location from the instructions above is X:\Program Files\IBM\Director\log (where X is the drive where the Director application was installed).
On the Linux platform, the TWGRas.properties file turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash:
$ export TWG_DEBUG_CONSOLE=true
IBM WebSphere Application Server: To enable the traces of the WebSphere-based components of TotalStorage Productivity Center, the corresponding logging and trace settings must be configured with the WebSphere Administrative Console. Tracing is disabled by default. To change the logging state:
1. Open the WebSphere Administrative Console and click Servers → Application Servers.
2. Select the application server (the default is server1).
3. Click Logging and Tracing in the window that appears on the right-hand side.
4. Click Diagnostic Trace; you will see the current values for the logging settings.
5. To enable tracing, ensure that the Enable trace check box is selected.

9.4.1 ESS user authentication problem


While creating a workload profile for Volume Performance Advisor (VPA) use, you may encounter an ESS user authentication problem. If you are creating a workload profile for the first time for any ESS, you need to specify the ESS Specialist user name and password. Upon launching Manage Workload Profile for a new ESS from the IBM Director Console, you should see a panel similar to Figure 9-21 on page 445. This panel allows you to specify the ESS Specialist user name and password for VPA use. If you do not see this panel and instead get an error, you may need to download patch 15484 from the IBM support Web site. This patch is for the IBM Director console. To apply this patch:
1. Download the patch; it consists of the file mdmpmconsole.jar.
2. If you want to be able to revert to the original state, back up the following file: c:\Program Files\IBM\Director\classes\mdm\lib\mdmpmconsole.jar
3. Copy mdmpmconsole.jar to the same directory as in the previous step.
4. Restart the IBM Director server from the TotalStorage Productivity Center for Disk server: Start Menu → Control Panel → Administrative Tools → Services → IBM Director Server.

5. You may need to wait for some time until IBM Director has restarted.

Figure 9-21 ESS user validation panel

9.4.2 SVC Data collection task failure due to previous running task
The Performance Manager data collection task may fail due to a previously running task, since SVC data collection allows only one such task to run at a time. You may need to stop the previous data collection task. You can stop the task using the Performance Manager command line interface (perfcli) tool or from the SVC Console.
To stop the task from the CLI tool, go to the C:\Program Files\IBM\mdm\pm\pmcli directory and run the command:
stopsvcollection -devtype svc <task_name>
You will then be asked to confirm whether to stop the task; respond Y (yes).
Alternatively, you may launch the SVC Console Web browser interface. After logging into the SVC console, choose Clusters under the My Work column, click the check box for the respective SVC cluster in the Clusters column, and click Go. In the next panel, select Manage Cluster. You will see a panel similar to Figure 9-22 on page 446.


Figure 9-22 Manage Cluster for SVC console

Choose Stop Statistics Collection as shown in the figure. The next panel is shown in Figure 9-23 on page 447.


Figure 9-23 Stopping data collection for SVC

Click Yes. This will stop all the performance data collection for SVC.


Chapter 10. Database management and reporting


This chapter provides information about how to maintain the DB2 database used by the components of TotalStorage Productivity Center. Topics include deleting old data and exporting and importing the database for backup. Also included in this chapter is an example custom report created from the TotalStorage Productivity Center for Disk Performance Manager (PM) tables, as well as suggestions for additional report content. You must have performance data collected prior to creating reports.

Copyright IBM Corp. 2004, 2005. All rights reserved.


10.1 DB2 database overview


DB2 UDB is a relational database management system (RDBMS) that enables you to create, update, and control relational databases using the Structured Query Language (SQL). The DB2 UDB family of products is designed to meet the information needs of small and large businesses alike. IBM's DB2 database software is the worldwide market share leader in the relational database industry. It is a multimedia, Web-ready relational database management system delivering leading capabilities in reliability, performance, and scalability with less skill and fewer resources. DB2 is built on open standards for ease of access and sharing of information, and is the database of choice for customers and partners developing and deploying critical solutions.

TotalStorage Productivity Center for Disk uses DB2 UDB as the backbone for its data storage and reporting functions. It is important to understand how TotalStorage Productivity Center for Disk allocates and uses DB2 resources so that you can efficiently customize and use the information provided by the Performance Management function. TotalStorage Productivity Center for Disk incorporates IBM DB2 Express Version 8.1.2 with FixPak 2. DB2 Express is a specially tailored database offering for worldwide small and medium business (SMB) customers.

10.2 Database purging in TotalStorage Productivity Center


Data collected from performance data collection tasks is stored in a TotalStorage Productivity Center DB2 database. Two database functions enable you to manage Performance Manager data:

Database-size monitoring
The sizing function on this panel shows used space and free space in the database. The Space status advisor monitors the amount of space used by the Performance Manager database and advises you whether you should purge data. The advisor levels are:
- Low: You do not need to purge data now.
- High: You should purge data soon.
- Critical: You need to purge data now.
The disk space thresholds for the status categories are: low if utilization < 0.8, high if 0.8 <= utilization < 0.9, and critical otherwise. That is, the delimiters between low, high, and critical are 80% and 90% full.

Database purging
You use the Performance Manager database panel to specify properties for a performance database purge task. You can purge performance data based on the age of the data, the type of data, and the storage devices associated with the data.
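The Space status advisor thresholds quoted above can be written out as a small sketch. The function name and shape are illustrative only; the real check runs inside the product:

```python
# Map database utilization (0.0 - 1.0) to the Space status advisor level:
# low below 80% full, high from 80% up to (but not including) 90%,
# and critical at 90% or above.
def advisor_level(utilization):
    if utilization < 0.8:
        return "low"
    elif utilization < 0.9:
        return "high"
    return "critical"
```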


After you specify the database purge information, it is saved as a noninteractive IBM Director task. You schedule all performance data-collection tasks using the IBM Director scheduler function.

10.2.1 Performance Manager database panel


To access the Performance Manager database panel (see Figure 10-2 on page 452), use the path shown in Figure 10-1: IBM Director Task pane → Multiple Device Manager → Manage Performance → Performance Database → Performance Manager Database

Figure 10-1 Accessing Performance Manager Database panel


Figure 10-2 Purge database definition example

The current database information is shown. Use this panel to specify the properties for a new performance database purge task. The fields are:
- Name: Type a name for the performance database purge task, from 1 to 250 characters.
- Description (optional): Type a description for the performance database purge task, from 1 to 250 characters.
- Device type: Select one or more storage device types for the performance database purge.
- Purge performance data older than: Select the maximum number of days or years that you want the performance data to reside in the database before it is purged.
- Purge data containing threshold exception information: When you select this check box, you choose to purge exception data.
- Save as task: When you click Save as task, the information you specified is saved and the panel closes.
The newly created task is saved as a noninteractive task in the IBM Director Task pane under Performance Manager Database. All performance database tasks can be scheduled using the IBM Director scheduler function, as seen in Figure 10-3 on page 453.


Figure 10-3 Scheduling a database purge task

Right-click the newly created database purge task (Figure 10-3) to schedule it for execution. Execution is either immediate or scheduled as seen in Figure 10-4.

Figure 10-4 Executing a database purge task
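The "purge performance data older than" rule can be sketched as a simple age filter. The record layout and field names below are invented for illustration; the real purge task runs against the Performance Manager DB2 tables:

```python
from datetime import datetime, timedelta

# Keep only samples whose timestamp falls within the retention window.
def apply_purge(records, older_than_days, now):
    cutoff = now - timedelta(days=older_than_days)
    return [r for r in records if r["timestamp"] >= cutoff]

now = datetime(2005, 9, 1)
samples = [
    {"device": "ESS", "timestamp": datetime(2005, 8, 30)},  # recent, kept
    {"device": "SVC", "timestamp": datetime(2005, 5, 1)},   # old, purged
]
kept = apply_purge(samples, older_than_days=90, now=now)
```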

10.3 IBM DB2 tool suite


The IBM DB2 tool suite, which is installed as a component of TotalStorage Productivity Center, provides a GUI to help you define and manage systems and databases. In the TotalStorage Productivity Center context, the tool suite can also be used to view and extract

data gathered from the storage devices you are monitoring. A component of the tool suite is an interface to the online DB2 Support Web site resources. In this section we briefly describe the tools and provide examples of their use with TotalStorage Productivity Center.
Tip: For detailed information and usage examples of the GUI tools for DB2 UDB Express, see An Introduction to DB2 UDB Express GUI tools (Part 1):
http://www-106.ibm.com/developerworks/db2/library/techarticle/0307chong/0307chong.html

And, An Introduction to DB2 UDB Express GUI tools (Part 2 of 2):


http://www-106.ibm.com/developerworks/db2/library/techarticle/0308chong/0308chong.html

To access the DB2 tool suite, use the path: Start → Programs → IBM DB2. The following main menu options are available for use with the TotalStorage Productivity Center databases or any other DB2 database instance you may have on your TotalStorage Productivity Center server. We put the most emphasis on the Command Line Tools in a TotalStorage Productivity Center reporting framework.
- Command Line Tools
- Development Tools
- General Administration Tools
- Information
- Monitoring Tools
- Set-up Tools
Tip: For detailed information, use the DB2 Tool Suite help screens or the DB2 online information at the following URL:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp

10.3.1 Command Line Tools


The Command Line Tools options are:
- Command Center
- Command Line Processor
- Command Window

Command line processor


The command line processor (CLP) command (in either case) is typed at the command prompt. The command is sent to the command shell by pressing the Enter key. Output is automatically directed to the standard output device. Piping and redirection are supported. The user is notified of successful and unsuccessful completion. Following execution of the command, control returns to the operating system command prompt, and the user may enter more commands.

Before accessing a database, you must perform preliminary tasks, such as starting DB2 with START DATABASE MANAGER. You must also connect to a database before it can be queried. Connect to a database by doing one of the following:
- Issue the SQL CONNECT TO database statement (see Figure 10-5 on page 455 or Data Extraction using DB2 Command Line Processor Interface on page 487).
- Establish an implicit connection to the default database defined by the environment variable DB2DBDFT.

If a command exceeds the character limit allowed at the command prompt, a backslash (\) can be used as the line-continuation character. When the command line processor encounters the line-continuation character, it reads the next line and concatenates the characters contained on both lines. Alternatively, the -t option can be used to set a line-termination character. In this case, the line-continuation character is invalid, and all statements and commands must end with the line-termination character. For more information, use the DB2 UDB online help.

In current releases of DB2 UDB, the CLP starts in interactive mode, which is indicated by a DOS-like command prompt, db2=>. In this mode, you may enter one DB2 UDB command or one SQL statement by typing it at the prompt and pressing the Enter key. Figure 10-5 shows an example query in an IBM DB2 Command Line Processor window.

Figure 10-5 DB2 UDB command line processor example

In this example, a connect DB2 UDB command was executed to connect to the TotalStorage Productivity Center Performance Manager database named PMDATA (the TotalStorage Productivity Center performance database alias). After this command executes, you can enter a SELECT SQL statement against any of the PMDATA tables in the database. The commands are not case sensitive, but the user ID (MDMSUID) and password (MDMSPW) are case sensitive, based on how they were defined in the database setup during installation or thereafter. The interactive mode is exited by typing QUIT and pressing Enter.

The DB2 UDB tool suite also has another CLP which operates in a non-interactive mode: the Command Window. It may be opened from the path: Start → IBM DB2 → Command Line Tools → Command Window. SQL queries are invoked by starting each SQL statement with the characters db2, for example, db2 connect to pmdata. This CLP has the same case-sensitivity requirements as the Command Line Processor. For additional examples of the Command Line Processor, refer to Data Extraction using DB2 Command Line Processor Interface on page 487.

10.3.2 Development Tools


To access the DB2 Development Tools use the path:

Start → Programs → IBM DB2 → Development Tools
The Development Tools options are:
- Development Center
- Project Deployment Tools
The DB2 Development Center provides an easy-to-use development environment for creating, installing, and testing stored procedures. It allows you to focus on creating your stored procedure logic rather than the details of registering, building, and installing stored procedures on a DB2 server. Additionally, with Development Center, you can develop stored procedures on one operating system and build them on other server operating systems. Development Center is a graphical application that supports rapid development. Using Development Center, you can perform the following tasks:
- Create new stored procedures.
- Build stored procedures on local and remote DB2 servers.
- Modify and rebuild existing stored procedures.
- Test and debug the execution of installed stored procedures.

10.3.3 General Administration Tools


Menu path: Start → Programs → IBM DB2 → General Administration Tools. The General Administration Tools options are:
- Control Center
- Journal
- Replication Center
- Task Center

Control Center
A GUI for snapshot and event monitoring. For snapshots, it allows you to define performance variables in terms of the metrics returned by the database system monitor and graph them over time. For example, you can request that it take a snapshot and graph the progression of a performance variable over the last eight hours. Alerts can be set to notify the DBA when certain thresholds are reached. For event monitors, it allows you to create, activate, start, stop, and delete event monitors. See the online help for the Control Center for more information (also see Control Center on page 482).

Journal
You can start the Journal by selecting its icon from the Control Center toolbar. The Journal allows you to monitor pending jobs, running jobs, and job histories; review results; display the recovery history and alert messages; and show the log of DB2 messages.

Replication Center
The Replication Center stores the initial information about registered sources, subscription sets, and alert conditions in the control tables. The Capture program, the Apply program, and the Capture triggers update the control tables to indicate the progress of replication and to coordinate the processing of changes. The Replication Alert Monitor reads the control tables that have been updated by the Capture program, Apply program, and the Capture triggers to understand the problems and progress at a server.

Task Center
Use the Task Center to create, schedule, and run tasks. You can create the following types of tasks:

- DB2 scripts, which contain DB2 commands
- OS scripts, which contain operating system commands
- MVS shell scripts to run on OS/390 and z/OS operating systems
- JCL scripts to run in a host environment
- Grouping tasks, which contain other tasks
Task schedules are managed by a scheduler, while the tasks are run on one or more systems, called run systems. You define the conditions for a task to fail or succeed with a success code set. Based on the success or failure of a task, or group of tasks, you can run additional tasks, disable scheduled tasks, and take other actions.
Tip: You can also define notifications to send after a task completes. You can send an e-mail notification to people in your contacts list, or you can send a notification to the Journal.

10.3.4 Monitoring Tools


To access the DB2 Monitoring Tools use the path: Start → Programs → IBM DB2 → Monitoring Tools. The DB2 Monitoring Tools options are:
- Event Analyzer
- Health Center
- Indoubt Transaction Manager
- Memory Visualizer

Event Analyzer
The Event Analyzer GUI is used for viewing file event monitor traces. Information collected on connections, deadlocks, overflows, transactions, statements, and subsections is organized and displayed in a tabular format. See the online help for the Event Analyzer for more information.

Health Center
Use the Health Center GUI tool to set up thresholds that, when exceeded, will prompt alert notifications, or even actions to relieve the situation. In other words, you can have the database manage itself!

10.4 DB2 Command Center overview


The IBM DB2 Command Center provides tools for database management, and SQL capabilities for data compilation and extraction. You can export the data you retrieve and use it as the basis of management reporting, SAN environment problem determination, and examination of host server application performance at the storage server level. Any of the query commands used in this book, in addition to your own custom queries, can be set up as scripts with the IBM DB2 Command Center feature. This section describes some of the functions available to you in the Command Center and provides examples. Use the Command Center to execute DB2 commands and SQL statements, to execute z/OS or OS/390 host system console commands, to work with command scripts, and to view a graphical representation of the access plan for explained SQL statements.


Working within the DB2 UDB Command Center, you can run SQL statements, DB2 UDB commands, and operating system commands in an interactive mode. As with most database GUI tools, you first connect to the database that you want to run your queries against. From there, the Command Center can display a list of tables to which you have access. The Command Center can also assist in writing the query by allowing you to pick table names, column names, filters, conditions, predicates, and other table specifics from its windows.

You can also execute a stack of SQL statements within the Script tab portion of the window. Multiple SQL statements can be executed as a unit of work (UOW), which means each statement must complete successfully for the others to complete successfully. If any statement fails, the work done by all previously completed statements will be rolled back.

In addition to the Command Center, you may want to use the IBM DB2 Control Center. These tools share much of the same functionality, but each has specific capabilities. Which tool you use will depend upon what type of information you want to extract, and what your needs are regarding output of the data: screen output, file output, or both.
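The unit-of-work behavior described above (all statements commit together, or the completed ones roll back) can be sketched as follows. SQLite is used here as a stand-in because no live DB2 instance is assumed, and the table name is invented for the example:

```python
import sqlite3

# Execute a stack of SQL statements as one unit of work: commit only if
# every statement succeeds, otherwise roll back the completed ones.
def run_as_unit_of_work(conn, statements):
    cur = conn.cursor()
    try:
        for stmt in statements:
            cur.execute(stmt)
        conn.commit()
        return True
    except sqlite3.Error:
        conn.rollback()
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.commit()

# The second INSERT violates the primary key, so the first is rolled back.
ok = run_as_unit_of_work(conn, ["INSERT INTO t VALUES (1)",
                                "INSERT INTO t VALUES (1)"])
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```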

10.4.1 Command Center navigation example


To open the Command Center use the path: Start → Programs → IBM DB2 → Command Line Tools → Command Center. The Command Center opens as shown in Figure 10-6 on page 459.


Figure 10-6 Open Command Center example

Tip: Alternatively, if the Control Center is open, click the Command Center icon. The Command Center opens.

You can use the toolbar icons (see Figure 10-7) to open DB2 tools, view the legend for Command Center objects, and view DB2 information.

Figure 10-7 Command Center toolbar example

The toolbar icons are:

Execute Executes the SQL statements, DB2 CLP commands, scripts, or MVS system commands that you enter on the Interactive or Script page. The results are displayed on the Query Results and the Access Plan pages.


Control Center Opens the Control Center so that you can display all of your systems, databases, and database objects and perform administration tasks on them.

Replication Center Opens the Replication Center so that you can design your replication environment and set up your replication environment.

Satellite Administration Center Opens the Satellite Administration Center so that you can set up and administer satellites and the information that is maintained in the satellite control tables.

Data Warehouse Center Opens the Data Warehouse Center so that you can manage Data Warehouse objects.

Task Center Opens the Task Center so that you can create, schedule, and execute tasks.

Information Catalog Center Opens the Information Catalog Center so that you can manage your business metadata.

Health Center Opens the Health Center so that you can work with alerts generated while using DB2.

Journal Opens the Journal so that you can schedule jobs that are to run unattended and view notification log entries.

License Center Opens the License Center so that you can display license status and usage information for the DB2 products installed on your system and use the License Center to configure your system for license monitoring.


Development Center Opens the Development Center so that you can develop stored procedures, user-defined functions, and structured types.

Contacts Opens the Contacts window where you can specify contact information for individual names or groups.

Tools Settings Opens the Tools Settings notebook so that you can customize settings and properties for the administration tools and for replication tasks.

Legend Opens the Legend window that displays all of the object icons available in the Command Center by icon and name.

Retrieve Table Data Retrieves the data for the table you have executed SQL statements against and displays it on the Query Results page.

Create Access Plan Creates the access plan for the current SQL statement and displays it on the Access Plan page.

Information Center Opens the Information Center so that you can search for help on tasks, commands, and information in the DB2 library.

Help Displays help for getting started with the Command Center. Tip: We suggest you use this extremely useful Help feature to navigate the DB2 Tool Suite until you are comfortable with the function provided in the DB2 Express Tool Suite.


10.5 DB2 Command Center custom report example


It is very important to consider the following when you create your SQL scripts, scheduled script tasks, or database query tasks. These types of activities add overhead to your TotalStorage Productivity Center host processor. The following tips can help make these tasks more efficient, quicker to return results, and easier to use in problem determination, administrator notification, and processor load management:
- For normal daily queries, it is better to use smaller queries run concurrently than one large, complex query.
- The more processors and memory your TotalStorage Productivity Center host has, the faster your queries will run and, theoretically, the more complex your queries can be.
- The more applications you have running concurrently on your TotalStorage Productivity Center host when you are executing SQL queries, the slower your host performance, and the longer it will take for your queries to complete.
- The more granular and frequent your PM and DM data collections are, the more load is placed on the TotalStorage Productivity Center server, which in turn affects your SQL query completion speed (and vice versa). Keep this in mind when you are creating, scheduling, and executing these tasks.
- For storage server monitoring SQL queries, query a minimal number of values that are important to you as an administrator. You can set up key performance indicators in your queries which are appropriate indicators of system performance health.
- More complex reports can be created by consolidating output from smaller queries, then compiling and further formatting the data exported from the TotalStorage Productivity Center database in a spreadsheet application. From there, you can set up macros to sort and filter the data into a final report which considers only system spikes, for investigating bottlenecks and threshold exceptions.
Use your TotalStorage Productivity Center event notifications to guide subsequent database searches for pertinent information, and use your ESS Specialist host-volume relationships as corresponding information during problem determination. TotalStorage Productivity Center database queries can be defined to drill down to the volume ID or another suitably granular level. You can then correlate this with your ESS Specialist host-volume definitions and determine which host applications were in use during suspect time periods, in order to rectify those problems.

10.5.1 Extracting LUN data report


The level of detail available in the DB2 database takes us all the way down to the LUN level, which requires several reports to demonstrate. The steps needed to build this type of report are detailed in order in the following sequence. Step 1 lists the TotalStorage Productivity Center table and the column within the table in the format (Table:Column). The information extracted from the tables is used in subsequent steps.
1. Run a base report against an Enterprise Storage Server (a Model 800 in this example), broken down by serial number, displaying the following information (either across or down) in the header of the report output (P_TASK, M_MACH_SN, and M_CLUSTER_N are keys across the tables VPVPD, VPCFG, and VPCLUS):
   - Type (VPVPD:M_MACH_TY)
   - Model (VPVPD:M_MODEL_N)
   - Serial (VPVPD:M_MACH_SN)
   - RAM (VPVPD:M_RAM)
   - NVS (VPVPD:M_NVS)
   - from-date/time (VPCLUS:PC_DATE_B/PC_TIME_B)
   - to-date/time (VPCLUS:PC_DATE_E/PC_TIME_E)

Next, display three columns, with Date over the left column, Cluster 1 over the middle column, and Cluster 2 over the right column. Sort by date/time in the left column (VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E). The left column is keyed by:
   - VPCLUS:P_TASK
   - VPCLUS:M_MACH_SN
   - VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E

Under each cluster column, display sub-column headers of I/O Rate, Avg Cache Hold Time, and NVS % Full. The center and right columns are keyed by VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E and VPCLUS:M_CLUSTER_N. Under the center and right columns, display rows with the following:
   - I/O rate (VPCLUS:Q_CL_IO_RATE)
   - Average cache hold time (VPCLUS:Q_CL_AVG_HOLD_TIME)
   - NVS % full (VPCLUS:Q_CL_NVS_FULL_PRCT)

2. Select a cluster from step 1 to investigate further.
3. Build a report, broken down by Logical Subsystem (LSS), displaying the following information in the header:
   - Type (M_MACH_TY)
   - Model (M_MODEL_N)
   - Serial (M_MACH_SN)
   - Cluster (M_CLUSTER_N)
   - from-date/time (PC_DATE_B/PC_TIME_B)
   - to-date/time (PC_DATE_E/PC_TIME_E)

   Sort by date/time (PC_DEV_DATE_B/PC_DEV_TIME_B and PC_DEV_DATE_E/PC_DEV_TIME_E) and sub-sort by Device Adapter number (M_CARD_NUM). Display row data with corresponding headers for:
   - DA ID # (M_CARD_NUM)
   - Loop A or B (M_LOOP_ID)
   - Array ID (M_ARRAY_ID)
   - Array Type (M_STOR_TYPE)
   - Average ms to satisfy all requests to this array (PC_IOR_AVG)
   - % Time Array Busy (Q_SAMP_DEV_UTIL)
   - Total I/O read/writes to this array (Q_IO_TOTAL)
   - Total sequential read/writes to this array (Q_IO_SEQ)

4. Select an LSS from step 2 to investigate further.
5. Build a report, broken down by loop, displaying the following information in the header:
   - Type (M_MACH_TY)
   - Model (M_MODEL_N)
   - Serial (M_MACH_SN)
   - Cluster (M_CLUSTER_N)
   - LSS (M_LSS_LA)
   - DA # (M_CARD_NUM)
   - Loop (M_LOOP_ID)
   - from-date/time (PC_DATE_B/PC_TIME_B)
   - to-date/time (PC_DATE_E/PC_TIME_E)

   Sort by date/time (PC_DEV_DATE_B/PC_DEV_TIME_B and PC_DEV_DATE_E/PC_DEV_TIME_E) and sub-sort by array (M_ARRAY_ID). Display column headers for:
   - Array ID (M_ARRAY_ID)
   - Array Type (M_STORE_TYPE)
   - # of write requests issued to this array (PC_IO_WRITE)
   - # of ms to satisfy reads to this array (PC_RT_READ)
   - # of ms to satisfy writes to this array (PC_RT_WRITE)
   - Avg I/O rate for all requests (PC_IOR_AVG)
   - Avg # of ms to satisfy all requests to this array (PC_MSR_AVG)
   - Bytes read per second from this array (PC_RBT_AVG)
   - Bytes written per second to this array (PC_WBT_AVG)
   - % time array busy (Q_SAMP_DEV_UTIL)
   - Total I/Os issued to this array (Q_IO_TOTAL)
   - Total sequential read/write requests to this array (Q_IO_SEQ)

6. Select an array from step 3 to investigate further.
7. Build a report, broken down by volume, displaying the following information in the header:
   - Type (M_MACH_TY)
   - Model (M_MODEL_N)
   - Serial (M_MACH_SN)
   - Cluster (M_CLUSTER_N)
   - LSS (M_LSS_LA)
   - DA # (M_CARD_NUM)
   - Loop (M_LOOP_ID)
   - Array (M_ARRAY_ID)
   - from-date/time (PC_DATE_B/PC_TIME_B)
   - to-date/time (PC_DATE_E/PC_TIME_E)

   Sort by date/time (PC_DEV_DATE_B/PC_DEV_TIME_B and PC_DEV_DATE_E/PC_DEV_TIME_E) and sub-sort by volume (M_VOL_NUM). Display column headers for:
   - # of the logical volume (M_VOL_NUM)
   - F / C (M_VOL_TY)
   - LUN serial or SSID+base device address (M_VOL_ADDR)


10.5.2 Command Center report


The preceding global-to-granular reporting sequence can be carried out through the DB2 Tool Suite Command Center. The individual component reports can be scheduled in the DB2 Tool Suite and the output parsed and compiled into a spreadsheet format (for example, table data exported in .WKS format) for an application such as Lotus 1-2-3 or Excel. Once the data is in the worksheet, the native macro functionality of the spreadsheet can be used to process it further into graphical reports, summary reports, problem analysis documents for root cause analysis, performance analysis of SAN components, and so on, to meet the particular needs of your organization and SAN environment.
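Where the spreadsheet-macro step above sorts and filters the exported data, the same post-processing can be scripted. The following Python sketch assumes a hypothetical two-column CSV export (the column names follow the PMDATA tables discussed in this chapter, but the exported layout itself is an assumption) and keeps only the rows above a spike threshold:

```python
# A minimal stand-in for the spreadsheet-macro step: summarize rows
# exported from the database in CSV form and keep only the "spikes".
# The column names and values below are invented for illustration.
import csv
import io

EXPORTED = """M_MACH_SN,PC_IOR_AVG
2105.22219,120.5
2105.22219,480.0
2105.30157,95.2
2105.30157,510.7
"""

def spikes(csv_text, threshold):
    """Return (serial, io_rate) rows whose I/O rate exceeds the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["M_MACH_SN"], float(row["PC_IOR_AVG"]))
            for row in reader
            if float(row["PC_IOR_AVG"]) > threshold]

if __name__ == "__main__":
    for serial, rate in spikes(EXPORTED, threshold=400.0):
        print(f"{serial}: {rate}")
```

In practice the input would be a .csv file exported from the Execution History window rather than an inline string.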

PMDATA - VPVPD table report example


In this section we present a detailed example of using the Command Center to extract data from the TotalStorage Productivity Center Performance Manager (PMDATA) database. We use the information provided in step 1 of DB2 Command Center custom report example on page 462.
1. Use the following menu path to open the IBM DB2 Tool Suite Command Center: Start → Programs → IBM DB2 → Command Line Tools → Command Center.
2. Click the Interactive button in the middle left of the window (see Figure 10-8 on page 466). This gives you the option of executing single SQL statements, DB2 CLP commands, or MVS system commands. You can also run an existing command script, or one that you create as in this example.


Figure 10-8 Open Command Center window select Interactive button example

3. Utilize the performance data collected from your storage servers via the PM database tables (PMDATA). With the Command Center window open, click the Database Connection button (...) to the right of the Database Connection bar. The Select Database window opens. Figure 10-9 on page 467 shows an example of selecting the PM database (PMDATA) in the Command Center.


Figure 10-9 Connect to PMDATA database example

4. Once you have selected the database you want to work with, you can use the previously described functions to manage information within the database, extract data, or set up SQL queries using the Interactive or SQL Assist options. For our example, we use only the PMDATA database. You could just as easily utilize the DM data (DMCOSERV database) to retrieve related information (such as asset data), or the IBMDIR database to manage information in the IBM Director tables. We will proceed to run a base report against a specific Model 800 ESS. Note: We could also run this report against all similar storage server types for which performance data has previously been collected and stored in the PMDATA database. To do this, we would include specific, or all, M_MACH_SN values (and their associated values) available in the database. 5. Having connected to the PMDATA database, we will use the SQL Assist function within the Command Center for our query example. Click the SQL Assist button to begin the SQL query script definition (see Figure 10-10 on page 468).


Note: Use SQL Assist to create SQL statements. With SQL Assist and some knowledge of SQL, you can create SQL SELECT statements; in some environments, you can also create INSERT, UPDATE, or DELETE statements. SQL Assist is a tool that uses an outline and details panels to help you organize the information you need to create an SQL statement. It is available in the Control Center, the Command Center, the Replication Center, and the Development Center; see the online help for more information. SQL Assist and other functions within the DB2 Tool Suite incorporate context-sensitive (mouse-over) help pop-up windows to aid you in navigating and making menu selections within the tools.

Figure 10-10 Connect to PMDATA database (Interactive) example

6. The SQL Assist window opens (see Figure 10-11 on page 469). Click the Select radio button, in the middle right area of the window, since in this example we are only going to retrieve data from the tables. The radio button options available in this window are Select, Insert, Update, and Delete. We recommend issuing only Select statements against your production database, because a Select only reads data. If you want to manipulate the database further, make a copy of the database (not the production backup) and work with the copy.

Figure 10-11 SQL Assist window

7. Notice that the lower pane of the SQL Assist window shows the initial syntax of a SELECT statement. We now go through the steps to build a complete SQL SELECT query to view (or extract) data from the PMDATA database. In the upper left-hand pane, called Outline, double-click the FROM (Source Tables) icon; the Details pane opens in the center of the SQL Assist window with the Available Tables tree listed. Select DB2ADMIN to find the tables to use in the query (see Figure 10-12 on page 470).


Figure 10-12 Selecting the DB2ADMIN table button pull-down listing of available tables

8. We now select the VPVPD table, which contains a storage server configuration snapshot. Use the slider on the Available Tables pane until you can click the VPVPD table, then click the > button; the table name is populated into the Selected Source Tables pane in the upper right corner of the SQL Assist window (see Figure 10-13 on page 471). Notice how the VPVPD table selection has been entered automatically into the rudimentary SELECT statement in the lower pane of the window. This pane shows the validated SQL statement you have created thus far and keeps track as you proceed through the SQL Assist function.


Figure 10-13 Selecting VPVPD table

9. Now that you have selected the table you want to query, click the SELECT (Result Columns) icon in the Outline pane on the left of the window. The DB2ADMIN.VPVPD (Instance.Table) icon appears in the pane. Click the + button on this icon to expand the list of VPVPD columns in the table tree (see Figure 10-14 on page 472).


Figure 10-14 View the VPVPD available columns

10.Next, select the VPVPD columns to include in the SQL statement. You can do this in several ways: click a single column name at a time, select multiple column names by clicking them while holding down the Shift key, or select all the columns by clicking the >> button. In our example, we selected the columns we wanted by holding down the Shift key while clicking the column names. Now click the > button to populate the Result Columns pane (which is greyed out until you make your selections). After a moment, the column names appear in the right-hand pane and in the validated SQL query statement building in the lower SQL Assist pane (see Figure 10-15 on page 473). User-defined field variables appear in the validated SQL statement.


Figure 10-15 Select VPVPD columns for our SQL query

11.Next, click the Where (Row filter) icon in the Outline pane. This presents a table list in the Available Columns pane from which you can select where, and how, you want to filter the query (see Figure 10-16 on page 474).


Figure 10-16 Where (Row filter) for column M_MACH_SN values (note mouse over help)

12.Now define the statement to return results where the M_MACH_SN value (the ESS serial number) equals 2105.22219. Place the cursor in the Value field. You can either type a specific value or use the pull-down arrow to bring up other options. One of the options gives you the opportunity to see field values already in the table; this opens a subsequent screen showing current column values, from which you can select how many results to display. The default is 25 rows; you can increase or decrease this value through the menu. After you have made your selection for the value, click the > button to enter it into the Search Condition pane and the validated SQL pane (see Figure 10-17 on page 475).


Figure 10-17 M_MACH_SN column value selection

13.We will not be using the Group By or Having SQL query functions for this simple query example; see the help screens for further information on those options. Now click Order By (Sort Criteria) in the Outline pane. The Available Columns pane shows the VPVPD table content tree (column names). Click the P_CDATE column (the performance collection date), then click the > button; the column name appears in the Sort Columns pane. You have the option to select ASC (ascending) or DSC (descending) sort order; ASC is the default. Leave ASC as-is (see Figure 10-18 on page 476).


Figure 10-18 Order By P_CDATE ascending order value definition

14.Now that you have completed building a query statement, click the Run button to view the results (see Figure 10-19).

Figure 10-19 VPVPD table query statement results

15.After reviewing the results of the query, click the OK button to return the SQL code to the main Command Center Interactive window. Notice the mouse-over pop-up window (see Figure 10-20 on page 477). The SQL code from this example is shown in Example 10-1 on page 477.


Example 10-1 SQL code sample

SELECT VPVPD.M_MACH_SN, VPVPD.M_MACH_TY, VPVPD.M_MODEL_N, VPVPD.M_CLUSTER_N,
       VPVPD.M_RAM, VPVPD.M_NVS, VPVPD.P_CDATE, VPVPD.P_CTIME
  FROM DB2ADMIN.VPVPD AS VPVPD
 WHERE VPVPD.M_MACH_SN = '2105.22219 '
 ORDER BY VPVPD.P_CDATE ASC

Figure 10-20 Return SQL code we created to the Command Center, Interactive window

16.Now save the SQL code for future use; you can schedule it as a recurring task or use it ad hoc. Click the Interactive menu on the menu bar at the top of the window and then click Save Command As... (see Figure 10-21 on page 478).


Figure 10-21 Save Command As... an ASCII file for later use
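The shape of the generated query can be smoke-tested outside DB2. The following Python sketch replays an equivalent SELECT against a throwaway SQLite table (the DB2ADMIN schema qualifier is dropped because SQLite has no schemas, and the sample rows are invented):

```python
# Replays the shape of the generated SELECT against a throwaway SQLite
# table. The DB2ADMIN qualifier is dropped (SQLite has no schemas) and
# the sample rows are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE VPVPD (
    M_MACH_SN TEXT, M_MACH_TY TEXT, M_MODEL_N TEXT, M_CLUSTER_N INTEGER,
    M_RAM INTEGER, M_NVS INTEGER, P_CDATE TEXT, P_CTIME TEXT)""")
con.executemany("INSERT INTO VPVPD VALUES (?,?,?,?,?,?,?,?)", [
    ("2105.22219", "2105", "800", 1, 8192, 2048, "2004-06-09", "10:00:00"),
    ("2105.22219", "2105", "800", 2, 8192, 2048, "2004-06-08", "10:00:00"),
    ("2105.30157", "2105", "800", 1, 8192, 2048, "2004-06-08", "10:00:00"),
])

# Filter on one serial number and sort ascending by collection date.
rows = con.execute("""
    SELECT M_MACH_SN, M_MODEL_N, P_CDATE
      FROM VPVPD
     WHERE M_MACH_SN = ?
     ORDER BY P_CDATE ASC""", ("2105.22219",)).fetchall()
for r in rows:
    print(r)  # the 2004-06-08 row sorts before 2004-06-09
```

This only verifies the filtering and ordering logic; the real query, of course, runs against the DB2 PMDATA tables.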

PMDATA - VPCCH table percentages and averages ad hoc report


This SQL query and report draws on the contents of the VPCCH performance data table. In this example we query only selected information from the table for any 2105 model storage server (LIKE '%2105%'), but you could also set up the query to view results by time, date/time, a particular storage server, or at a wide range of granular levels. What you query for depends largely on what you want to examine for your performance checks and reporting, and on how granular you need to be for root cause analysis (see Example 10-2 and Figure 10-22 on page 479).
Example 10-2 Sample SQL for VPCCH table

SELECT VPCRK.M_MACH_SN, VPCRK.Q_IO_TOTAL, VPCRK.PC_B_HR_PRCT, VPCRK.PC_IOR_AVG,
       VPCRK.PC_MSR_AVG, VPCRK.PC_RBT_AVG, VPCRK.PC_WBT_AVG
  FROM DB2ADMIN.VPCRK AS VPCRK
 WHERE VPCRK.M_MACH_SN LIKE '%2105%'
 ORDER BY VPCRK.M_MACH_SN ASC, VPCRK.PC_IOR_AVG DESC, VPCRK.PC_MSR_AVG DESC,
          VPCRK.PC_RBT_AVG DESC, VPCRK.PC_WBT_AVG DESC, VPCRK.PC_B_HR_PRCT DESC

478

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Figure 10-22 Sample VPCCH query

PMDATA - VPCRK query


Now create a query for the average millisecond time to satisfy all subsystem I/O requests issued to a logical array:
1. Look for a high average in the VPCRK.PC_MSR_AVG column. Starting from the following query, you will investigate in a more granular fashion in the subsequent query (Example 10-3). The result is shown in Figure 10-23 on page 480.
Example 10-3 PMDATA VPCRK query

SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E,
       VPCRK.PC_MSR_AVG, VPCRK.Q_IO_TOTAL, VPCRK.Q_SAMP_DEV_UTIL
  FROM DB2ADMIN.VPCRK AS VPCRK
 WHERE VPCRK.M_MACH_SN LIKE '%2105%'
 ORDER BY VPCRK.PC_DEV_DATE_E ASC, VPCRK.Q_SAMP_DEV_UTIL DESC, VPCRK.PC_MSR_AVG DESC
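The "look for a high average" step can also be done in a small script once the rows are exported. This Python sketch ranks invented sample rows by their average millisecond response time; the tuple layout is an assumption for illustration, not the actual export format:

```python
# Illustrates "look for a high PC_MSR_AVG": rank sample rows by average
# millisecond response time and keep the worst offenders. The field
# layout and values are invented for illustration.
samples = [
    # (end date, end time, avg ms per request, device utilization %)
    ("2004-06-08", "10:00:00", 4.2, 35.0),
    ("2004-06-08", "10:15:00", 28.7, 92.0),
    ("2004-06-08", "10:30:00", 6.1, 40.0),
    ("2004-06-08", "10:45:00", 19.3, 88.0),
]

def worst(rows, n):
    """Return the top n samples by average response time, worst first."""
    return sorted(rows, key=lambda r: r[2], reverse=True)[:n]

for row in worst(samples, 2):
    print(row)  # 10:15:00 (28.7 ms) first, then 10:45:00 (19.3 ms)
```

The suspect intervals this surfaces are then the ones to drill into with the more granular query that follows.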


Figure 10-23 VPCCH high level query

2. Drill down with the next query for the suspect time period to see which arrays were involved. You could do the same with any of the tables that have data you want to examine. The SQL query is shown in Example 10-4; the query result is in Figure 10-24 on page 481.
Example 10-4 VPCRK SQL query for specific time period

SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E, VPCRK.M_LSS_LA,
       VPCRK.M_ARRAY_ID, VPCRK.M_DDM_NUM, VPCRK.M_CARD_NUM, VPCCH.M_VOL_NUM,
       VPCCH.M_VOL_ADDR, VPCCH.M_VOL_TY, VPCRK.PC_MSR_AVG
  FROM DB2ADMIN.VPCRK AS VPCRK, DB2ADMIN.VPCCH AS VPCCH
 WHERE VPCRK.M_MACH_SN LIKE '%2105%'
   AND VPCCH.M_MACH_SN LIKE '%2105%'
   AND VPCRK.PC_DEV_DATE_E = '2004-06-08'
 ORDER BY VPCRK.PC_DEV_DATE_E ASC, VPCRK.Q_SAMP_DEV_UTIL DESC, VPCRK.PC_MSR_AVG DESC


Figure 10-24 VPCRK granular SQL query example

From the information above, you can determine the date, time of day, rank number, volume number, and volume address for the time period examined by the previous query. All the hits in this report indicate that the volumes are OS/390-assigned storage (value C in the M_VOL_TY column). Tip: You could save this query and run it as a scheduled task from the DB2 Tool Suite, or export the data for further manipulation and presentation in a spreadsheet application. You could also set up SQL query tasks to run on a schedule, using the TotalStorage Productivity Center gauge reports to determine which areas need further investigation. The information derived from these queries of the PM database tables correlates with the information you can derive from the ESS Specialist, so you can determine which hosts and associated applications are causing performance concerns. For further information and examples of SQL queries, refer to the redbook IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016.
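The final correlation step, mapping suspect volumes back to ESS Specialist host-volume definitions, could be sketched as follows. All volume numbers and host names here are invented for illustration:

```python
# Sketch of correlating suspect volumes (from a drill-down query) with
# ESS Specialist host-volume definitions. All names are invented.
suspect_volumes = ["100A", "100B", "2003"]

# Hypothetical host-volume relationships taken from ESS Specialist.
host_volumes = {
    "100A": "mvs_prod1",
    "100B": "mvs_prod1",
    "2003": "aix_db01",
}

# The distinct hosts whose applications should be investigated for the
# suspect time period.
hosts = sorted({host_volumes[v] for v in suspect_volumes if v in host_volumes})
print(hosts)  # ['aix_db01', 'mvs_prod1']
```

In practice the host-volume map would come from the ESS Specialist definitions rather than a hard-coded dictionary.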

10.6 Exporting collected performance data to a file


The IBM TotalStorage Productivity Center DB2 Tool Suite includes features that enable you to export collected performance data in spreadsheet (WorkSheet Format, WKS) or Comma Separated Variable (CSV) format. You can do this from the PM Performance Data collection task Execution History window, which can be reached in several ways. Following is an example:
1. Open the Scheduler (Month, Week, Day, or Job calendar view).
2. Right-click the specific task you want to export.
3. Click the Open Execution History... option.
4. Right-click the Export option of the specific task that was scheduled. The Spreadsheet (.csv) pull-down menu appears.
5. Right-click the Spreadsheet (.csv) option; the Export Comma Separated Value Format window opens.
6. In the Export Comma Separated Value Format window, enter the File Name and Drive, and specify where you want to save the file (see Figure 10-25).

Figure 10-25 Export Comma Separated Value Format window example

10.6.1 Control Center


The DB2 UDB Tool Suite includes the Control Center, which provides insight into the database you are using. You can use the Control Center to manage systems, DB2 Universal Database instances, DB2 Universal Database for OS/390 subsystems, databases, and database objects such as tables and views. In the Control Center, you can display all of your systems, databases, and database objects and perform administration tasks on them. From the Control Center, you can also open other centers and tools to help you optimize queries, jobs, and scripts, perform data warehousing tasks, create stored procedures, and work with DB2 commands. The following is a brief overview of how to discover useful information about the TotalStorage Productivity Center for Disk database and use it as the basis for creating your query statements. Tip: Within the DB2 Tool Suite, there are context-sensitive pop-up help windows to aid you in navigating through the menu selections and tasks.


Important: Never make any modifications to your existing TotalStorage Productivity Center database. If you want to learn and experiment, create a new database or export your production database and perform operations on the exported database only. Once you have opened the DB2 UDB Control Center, you can drill down to your TotalStorage Productivity Center database (PMDATA in this example) by using the explorer window on the left-hand side of the window (Figure 10-26).

Figure 10-26 Control center main window, tables view

On the right-hand side of the Control Center main window you can view the tables of the PMDATA database (because the Tables folder is highlighted on the left-hand side). In the following graphic we explore the VPTHRESHOLD table further by viewing its columns (Figure 10-27 on page 484). You do this by double-clicking the particular table whose details you want to view. We have selected the Columns tab at the top left side of the window; the column attributes are listed under the window column headers. You can also view the table's Keys, Check Constraints, or general table attributes.


Figure 10-27 Control center VPTHRESHOLD columns explore example

From this window, you can explore the table further by selecting the tabs in the upper portion of the window. We now view the Primary Key(s) window for the CNODE table (Figure 10-28). This is very useful information when you are creating your own query statements, and it reduces the research time spent digging through hardcopy documentation.

Figure 10-28 Control center primary key window example

From this window you can also display any SQL statement you are currently generating within the Control Center, estimate the size of the table and determine its current size, or add unique or foreign key associations. Additional database information is available from within the Control Center, and useful help is available as needed.


10.6.2 Data extraction tools, tips and reporting methods


One of the most frequently asked questions about TotalStorage Productivity Center is: How can I extract performance data from TotalStorage Productivity Center, so that I can keep and use it outside the product for management reports or for problem determination at a granular level? This section contains useful information about the different tools and methods for extracting, manipulating, and exporting data from the TotalStorage Productivity Center database. We also examine the requirements and important safe database practices that avoid causing unnecessary grief to yourself and your data.

Reporting tools
In this section we outline some of the processes for getting the most out of the TotalStorage Productivity Center database using applications and tools outside of the TotalStorage Productivity Center product. This is not an exhaustive list, but these are things to keep in mind when making decisions about exporting data and creating and disseminating custom reports:
- IBM DB2 Express (or any Version 8 full-featured IBM DB2 product) is required on the query system (laptop, mobile computer, or desktop).
- Portable programming languages and other tools for data extraction and parsing:
  - REXX
  - C and C++ (can be compiled and disseminated in an AIX environment)
  - ESSCLI (asset/capacity data only)
  - CLI
  - Python
  - QMF
- Spreadsheet applications:
  - Microsoft Excel
  - IBM Lotus 1-2-3
- Quick print-to-screen reports:
  - Parsed and formatted SQL query output
- Data output types:
  - Data to files (compressed and uncompressed)
  - Binary, ASCII, and so on
  - DELimited (ASCII)
  - WKS (worksheet)

IBM DB2 DataJoiner


DB2 DataJoiner enables you to view all your data (for example, IBM, multivendor, relational, non-relational, local, remote, and geographic) as though it were local data. These are the highlights of this product:
- With a single SQL statement, you can access and join tables located across multiple data sources without needing to know the source location.
- Native support for popular relational data sources: DB2 Family, Informix, Microsoft SQL Server, Oracle, Sybase SQL Server, Teradata, and others.
- Client access (using DB2 clients) from a multitude of platforms, including Java (using JDBC).


- Integrated replication administration.
- DDL statements to easily create, drop, and alter data source mappings, users, data types, and functions (user-defined and built-in).
- Excellent performance and intelligent use of pushdown and remote query caching.
Refer to the following Web site for more information about IBM DataJoiner:
http://www.ibm.com/software/data/datajoiner/

QMF for Windows


QMF for Windows provides a Windows or Java interface to build queries or execute predefined queries, with easy-to-use, point-and-click, drag-and-drop form creation for fast aggregation, grouping, or formatting performed directly in the query results. It provides easy manipulation of, and integration with, important commercial or custom Windows applications such as spreadsheets, desktop databases, and executive information systems. DB2 QMF Version 8.1 transforms business data into a visual information platform for the entire enterprise with visual data on demand. Highlights of this release include:
- Support for DB2 Universal Database Version 8 functionality, including IBM DB2 Cube Views, long names, Unicode, and enhancements to SQL.
- The ability to easily build OLAP analytics, SQL queries, pivot tables, and other business analysis and reports with simple drag-and-drop actions.
- Visual information appliances, such as executive dashboards, that offer rich interactive functionality specific to virtually any information need.
- A database explorer for easily browsing, identifying, and referencing database assets.
- DB2 QMF for WebSphere, a tool that lets any Web browser become a zero-maintenance thin client for visual on-demand access to enterprise DB2 business data.
- Simplified packaging for easier ordering.
For more information about QMF for Windows, refer to the following Web sites:
http://www.ibm.com/software/data/qmf/ http://www.rocketsoftware.com/qmf/

You can download the free QMF for Windows Try and Buy version from the following Web site:
http://www-3.ibm.com/software/data/qmf/reporter/june98/downloads.html

Other SQL Query tools


In this section we discuss applications, other than the IBM DB2 Tool Suite, from which Structured Query Language (SQL) statements can be written and executed. This is far from an exhaustive list of the tools and applications available for implementing SQL queries in your environment. You can run an SQL statement using one of several platform-specific tools; the best approach is to determine which interface, tool, or application is most appropriate for your particular needs. For DB2 Universal Database (UDB) on UNIX and Intel platforms, you can use the Command Center or the Command Line Processor (CLP) (see Data Extraction using DB2 Command Line Processor Interface on page 487). You may also be familiar with tools such as Query Management Facility (QMF) for Windows, a graphical user interface (GUI) that connects to any DB2 UDB.


There are numerous other tools and applications, such as IBM DB2 Intelligent Miner, IBM Object REXX, and LotusScript, which contain powerful scripting and report-formatting capabilities and can access DB2 UDB on UNIX or Intel platforms, IBM eServer iSeries, and z/OS, as well as any database manager connected to DataJoiner. Refer to the following Web sites for more information about these tools: DB2 Intelligent Miner:
http://www.ibm.com/software/data/iminer

Object REXX:
http://www.ibm.com/software/awdtools/obj-rexx/

LotusScript:
http://www.ibm.com/software/data/db2/db2lotus/db2lscpt.htm

There is no direct way to print the built-in reports or save the report files directly from TotalStorage Productivity Center. However, you can issue standard SQL statements to extract the data; all asset, capacity, and performance data is available in the form of DB2 tables. The DB2 UDB management tools will help you use your table data in the most efficient manner.

Data Extraction using DB2 Command Line Processor Interface


It is very important to exercise caution when editing the DB2 tables or creating or dropping indexes: these activities risk losing the information within the tables and completely corrupting the DB2 database on the host machine. We strongly recommend that you back up your original database and do any database manipulation on the backup copy. Note: The TotalStorage Productivity Center database has already been optimized through indexing. There is no need to perform any further indexing on the database.

Export and import utilities for IBM DB2 CLP Interface


DB2 UDB provides export and import utilities. These utilities operate on logical objects as opposed to physical objects. For example, you can use an export command to copy an individual table to a file system file. At some later time, you might want to restore the table, in which case you would use the import command. Although export and import can be used for backup and restore operations, they are really designed for moving data, for example, for workload balancing or migration.
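The logical export/import idea can be illustrated in miniature. The following Python sketch stands in for the DB2 EXPORT and IMPORT utilities, moving one table between two SQLite databases through a delimited buffer (the table and rows are invented):

```python
# The export/import idea at the logical-table level, sketched with SQLite
# and the csv module in place of the DB2 EXPORT/IMPORT utilities.
import csv
import io
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE VPVPD (M_MACH_SN TEXT, M_MODEL_N TEXT)")
src.executemany("INSERT INTO VPVPD VALUES (?, ?)",
                [("2105.22219", "800"), ("2105.30157", "800")])

# "Export": write the table rows to a delimited buffer (a file in practice).
buf = io.StringIO()
csv.writer(buf).writerows(src.execute("SELECT * FROM VPVPD"))

# "Import": load the delimited data into a table in another database.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE VPVPD (M_MACH_SN TEXT, M_MODEL_N TEXT)")
dst.executemany("INSERT INTO VPVPD VALUES (?, ?)",
                csv.reader(io.StringIO(buf.getvalue())))

count = dst.execute("SELECT COUNT(*) FROM VPVPD").fetchone()[0]
print(count)  # 2
```

As with db2move, note that only the table data moves this way; dependent objects (views, triggers, and so on) would need to be recreated separately.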

Chapter 10. Database management and reporting

487

Usage notes for the db2move tool:

- db2move exports, imports, or loads user-created tables. If a database is to be duplicated from one operating system to another, db2move only helps you move the tables. You also need to move all other objects associated with the tables, such as aliases, views, triggers, and user-defined functions. db2look is another DB2 UDB tool that helps you move some of these objects by extracting the Data Definition Language (DDL) statements from the database.
- When the export, import, or load APIs are called by db2move, the FileTypeMod parameter is set to lobsinfile; that is, LOB data is kept in files separate from the PC/IXF files. There are 26 000 file names available for LOB files.
- The LOAD action must be run locally on the machine where the database and the data file reside. When the load API is called by db2move, the CopyTargetList parameter is set to NULL; that is, no copying is done. If logretain is on, the load operation cannot be rolled forward later: the tablespace where the loaded tables reside is placed in backup pending state and is not accessible. A full database backup, or a tablespace backup, is required to take the tablespace out of backup pending state.
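As an illustrative sketch of a typical db2move invocation from an operating system command prompt (this assumes a working DB2 environment; PMDATA is the Performance Manager database used elsewhere in this chapter, and -io is a standard db2move import option):

```
db2move pmdata export
db2move pmdata import -io INSERT
```

The export step writes a db2move.lst control file plus one PC/IXF file per table into the current directory; the import step replays that list against the PMDATA database cataloged on the target system.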

SQL commands to extract data to a file example


In this section we show how to extract specific DB2 table information for use in another application, or to export to another DB2 database. SQL commands are used to redirect DB2 SELECT statement output to a file in a Windows or Linux environment. The commands in the following example can be executed from a Command Prompt, from the DB2 Command Window (db2 must prefix every command line), or from the Command Line Processor. We use the Command Line Processor in the examples below, on a Windows platform.

1. Choose an export file format: DEL (delimited ASCII, which by default produces comma-separated values), WSF (WorkSheet Format), or IXF (Integrated Exchange Format).

   Note: The Integrated Exchange Format (IXF) data interchange architecture is a host file format designed to enable exchange of relational database structure and data. The personal computer (PC) version of the IXF format (PC/IXF) is a database manager adaptation of the host IXF format. A PC/IXF file contains a structured description of a database table or view. Data that was exported in PC/IXF format can be imported or loaded into another DB2 database.

2. Create a folder for the output files of the data extract, for example:
   mkdir c:\ibmout
   (Windows 2000 command window, or use Windows Explorer to create a new folder.)

3. Select Start, Programs, IBM DB2, Command Line Tools, and click Command Line Processor (CLP).

4. Connect to DB2 using the following command in the CLP window:
   connect to pmdata user db2admin using db2admin

   Note: The screen response should be a few lines confirming that you are connected and showing the DB2 level (8.1.2 in our environment). Replace the word db2admin following the keyword using with the actual password of your database instance.

5. Issue the following command to extract the data from the VPVPD table (which contains cluster-level and storage server-level configuration data, generated at the start of Performance Data Collection), substituting a folder name of your choice and replacing mmdd with the date of the extract:
   export to c:\ibmout\vpvpdmmdd.txt of del select * from vpvpd

6. Issue the following command to extract a specific day's worth of data from the VPCRK table (logical array-level performance data), substituting the date to be extracted and making the same file name substitution:
   export to c:\ibmout\vpcrkmmdd.txt of del select * from vpcrk where pc_date_b = 'mm/dd/yyyy'

Note: Be patient while this process takes place; the prompt returns when the process is complete. The complexity of your SQL statement, the amount of data to be extracted, and the background host processor load all affect how long the command takes.
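The DEL files produced by these export commands are plain delimited ASCII (comma-separated, with character columns in double quotes by default), so they can be post-processed with standard tools. A minimal Python sketch (the sample values are invented for illustration):

```python
import csv
import io

# A DEL export is delimited ASCII: comma-separated, with character columns
# wrapped in double quotes. The sample below mimics the first three VPVPD
# columns (task sequence number, machine serial, machine type); the values
# are made up for illustration only.
sample_del = '1,"7514531","2105"\n2,"7514531","2105"\n'

rows = list(csv.reader(io.StringIO(sample_del)))
for task, serial, mach_type in rows:
    print(task, serial, mach_type)
```

The same parsing works on the vpvpdmmdd.txt file produced by the export command above, by replacing io.StringIO(...) with open() on that path.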

Redirect output to a file


The following commands can be used to redirect output to a file instead of to your screen.

1. Start the DB2 Command Line Processor. The backslash (\) character is used to continue a statement onto another line.

2. Enter the following commands, pressing the Enter key at the end of each command line entry:
   connect to pmdata user db2admin using db2admin
   quit
   db2 (select * from vpvpd) > c:\ibmout\vpvpdmmdd.txt
   db2 (select * from vpcrk where pc_date_b = 'mm/dd/yyyy') > \
   c:\ibmout\vpcrkmmdd.txt

The same extraction can also be performed through the DB2 Tools suite Command Line Interface or Command Center. The commands are not case sensitive and are presented here with explanations:

a. Connect to the database:
   connect to pmdata user db2admin using db2admin
   where pmdata is the TotalStorage Productivity Center DB2 database, connecting as user db2admin with the password db2admin.

b. SELECT statement example:
   select * from vpclul
   This SQL query selects all column information from table vpclul; all rows are implied by the asterisk (*).

Note: Data is stored as a matrix with columns (field names) and rows (field values). For more information about relational database tables, see the redbook IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016.


Tip: You can use the FETCH FIRST clause in your SQL statements when testing your queries and scripts. This limits large query output to however many rows you define in the clause. It is placed as the last line of the SQL statement (the semicolon indicates to the CLP the end of the SQL statement):
   fetch first 10 rows only;
Remember to remove the clause when your testing is completed, so that your scripts return their complete output.

10.7 Database backup and recovery overview


A database can become unusable because of hardware failure, software failure, or both. You may, at one time or another, encounter storage problems, power interruptions, or application failures, and different failure scenarios require different recovery actions. You can protect your data against the possibility of loss by having a well rehearsed recovery strategy in place. DB2 UDB provides a range of facilities for backing up, restoring, and rolling data forward, which enable you to build a recovery procedure.

Good warehousing practice covers the reliability of the target databases together with the warehouse control databases. Protecting only the target databases is not enough if you want to keep an operational service running satisfactorily. Similarly, maintaining only a backup of the warehouse metadata helps your IT staff but does not satisfy the users who need the data from the target database.

Some of the questions that you should answer when developing your recovery strategy are:

- Will the database be recoverable?
- How much time can be spent recovering the database?
- How much time will pass between backup operations?
- How much storage space can be allocated for backup copies and archived logs?
- Will tablespace-level backups be sufficient, or will full database backups be necessary?
- What level of complexity is acceptable for the value of the data?
- Can I recreate any lost data from other sources?
- What database skills are available?

Your database recovery strategy should ensure that all information is available when it is required for database recovery. It should include a regular schedule for taking database backups. You should also include in your overall strategy procedures for recovering command scripts, applications, user-defined functions (UDFs), stored procedure code in operating system libraries, and load copies.
The concept of a database backup is the same as any other data backup: taking a copy of the data and then storing it on a different medium in case of failure or damage to the original. The simplest case of a backup involves shutting down the database to ensure that no further transactions occur, and then simply backing it up. You can then rebuild or recover the database if it becomes damaged or corrupted in some way. Different recovery methods are discussed to fit with your data warehouse business requirements.

Planning considerations
Planning is one of the most important considerations before beginning to do database backups. We cover the factors that should be weighed against one another when planning for recovery, for example, the type of database, backup windows, and the relative speed of backup and recovery methods. We also introduce various backup methods.


In general terms, DB2 offers a number of options for backup and recovery management to meet the needs of a wide range of applications. The simpler backup and recovery options provide data protection with minimal administrator skill or effort. Other, more powerful options give greater levels of data protection but require more administrator skill and more effort to maintain.

If your organization has existing high levels of skill with DB2 or other relational databases, you may already have standard operating procedures for protecting databases. If your organization is less skilled in this area, you may want to choose a simple backup and recovery process that does not require a lot of new administrator skill or effort.

Speed of recovery
If you ask users how quickly they want you to be able to recover lost data, they usually answer immediately. In practice, however, recovery takes time. The actual time taken depends on a number of factors, some of which are outside your control (for example, hardware may need to be repaired or replaced). Nevertheless, there are certain things that you can control and that will help to ensure that recovery time is acceptable:

- Develop a strategy that strikes the right balance between the cost of backup and the speed of recovery.
- Document the procedures necessary to recover from the loss of different groups or types of data files.
- Estimate the time required to execute these procedures (and do not forget the time involved in identifying the problem and the solution).
- Set user expectations realistically, for example, by publishing service levels that you are confident you can achieve.

Backup and recovery considerations


DB2 automatically takes care of problems caused by power interruptions. It will automatically restart and return a database to the state it was in at the time of the last complete transaction. Media and application failures are more severe. The simplest case of a backup involves shutting down the database to ensure that no further transactions occur, and then just backing it up. If a database needs to be restored to a point beyond the last backup, then logs are required to reapply any changes made by transactions that committed after the backup was made. You can:

- Back up a database to a fixed disk, a tape, or a location managed by a storage management product.
- Back up a database that is active or inactive.
- Back up a database immediately or schedule backups for a later time.
- Back up a complete database or only selected table spaces.

Additional resources for DB2 backup and recovery are available in the redbooks Backing Up DB2 Using Tivoli Storage Manager, SG24-6247, and DB2 Warehouse Management: High Availability and Problem Determination Guide, SG24-6544.
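As a sketch of these choices in DB2 UDB V8 CLP syntax (the directory and tablespace name are placeholders; the online forms require archive logging, which IBM does not support for the TotalStorage Productivity Center database):

```
-- Offline (database inactive) full backup:
backup database pmdata to C:\db2_backups
-- Online full backup (archive logging required):
backup database pmdata online to C:\db2_backups
-- Tablespace-level online backup:
backup database pmdata tablespace (USERSPACE1) online to C:\db2_backups
```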

Database Logging
In DB2 UDB databases, log files are used to keep records of all data changes; they are specific to DB2 UDB activity. Logs record the actions of transactions. If there is a crash, the logs are used to replay and redo committed transactions during recovery. Logging is always on for regular tables in DB2 UDB, although it is possible to mark some tables or columns as NOT LOGGED, and to declare and use USER temporary tables. There are two kinds of logging:

- Circular logging (default). This is the TotalStorage Productivity Center database default logging type.
- Archive logging.

In addition, capture logging is available for replication purposes. Each type of logging corresponds to the method of recovery you want to perform: circular logging is used if the maximum recovery you want to perform is crash or restore recovery; archive logging is used if you want to be able to perform rollforward recovery.

Note: IBM does not recommend or support the use of archive logging with the TotalStorage Productivity Center product database.

Circular logging
Circular logging is the default behavior when a new database is created (the logretain database configuration parameter is set to NO). With this type of logging, only full, offline backups of the database are valid. As the name suggests, circular logging uses a ring of online logs to provide recovery from transaction failures and system crashes. The logs are used in a round-robin fashion and retained only to the point of ensuring the integrity of current transactions.

Circular logging does not allow you to roll a database forward through transactions performed after the last full backup operation; all changes occurring since the last backup operation are lost. Only crash recovery and restore recovery can be performed with this type of logging. Active logs are used during crash recovery to prevent a failure (system power or application error) from leaving a database in an inconsistent state. The data changes are recorded in the log files, and when all the units of work are committed or rolled back in a particular log file, the file can be reused.

The number of log files used by circular logging is defined by the logprimary and logsecond database configuration parameters. If units of work running in the database use all the primary log files without reaching a point of consistency, secondary log files are allocated one at a time. Figure 10-29 shows the circular logging log path.

Figure 10-29 Circular logging log path example
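To confirm which logging mode a database uses, you can inspect its configuration from the CLP. The sketch below shows the relevant lines (the output wording varies slightly by DB2 level, and the values shown are the DB2 defaults):

```
get db cfg for pmdata
-- Circular logging is indicated by output lines such as:
--   Log retain for recovery enabled    (LOGRETAIN)  = OFF
--   Number of primary log files        (LOGPRIMARY) = 3
--   Number of secondary log files      (LOGSECOND)  = 2
```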


Archive logging
Archive logging is used specifically for rollforward recovery. You can configure this logging mode by setting the logretain database configuration parameter to RECOVERY. Rollforward recovery can use both archived logs and active logs to rebuild a database or a tablespace, either to the end of the logs or to a specific point in time. The rollforward utility achieves this by reapplying committed changes found in the following three types of log files:

- Active logs. Crash recovery also manipulates active logs, using them to place the database into a consistent state. They contain transaction records that have not been committed, as well as committed transaction information that has not yet been written to the database on disk. Active log files are located in the LOGPATH directory.
- Online archived logs. When the changes in an active log are no longer needed for normal processing, the log is closed and becomes an archived log. An archived log is said to be online when it is stored in the database log path directory (see Figure 10-30).

Figure 10-30 Online Archival logging log path example

- Offline archived logs. An archived log is said to be offline when it is no longer found in the database log path directory. When you want to use archive logging (see Figure 10-31 on page 494), you must make provision for the logs to be stored away from the database. This is done in DB2 UDB by specifying a userexit parameter and interfacing to a suitable archive manager. Full documentation is supplied in the DB2 manuals (see Online resources on page 528).


Figure 10-31 Offline Archival logging log path example

Database recovery
A database restore recreates the database from a backup; the database will exist as it did at the time the backup completed. If archive logging was in use before the database crash, it is then possible to roll forward through the log files to reapply any changes made since the backup was taken, either to the end of the logs or to a specific point in time. The granularity available on the last transaction needs to be weighed against database performance.

Important: Log files are just as important as the backup files. It is not possible to restore the database and roll it forward without the log files.
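A sketch of the corresponding CLP commands (the timestamp and path are placeholders taken from a hypothetical backup image; the rollforward step applies only when archive logging was in effect, which is not the TotalStorage Productivity Center default):

```
restore database pmdata from C:\db2_backups taken at 20050901120000
rollforward database pmdata to end of logs and stop
```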

10.8 Backup example


This example uses a simple method of backup for TotalStorage Productivity Center. It does not require archive logging to be configured and therefore requires the minimum of database management, administration, and planning. Backups taken in this way are performed with the database offline; to achieve this, TotalStorage Productivity Center must be stopped while the backup takes place. The following example script stops TotalStorage Productivity Center, performs a backup of all the DB2 databases, then restarts TotalStorage Productivity Center. In our test environment the backup took less than 7 minutes. There are two files:

TPC_backup.bat - the script you run
database_list - the DB2 scripted list of databases to back up


File: database_list

This file contains a line for each database to back up. Depending on the TotalStorage Productivity Center components you have installed, this list may vary. You can use the DB2 Control Center to establish the full list of databases in your installation. In this example the backup data will reside in C:\db2_backups; you need to create this directory before using this process.
Example 10-5
backup database DIRECTOR to C:\db2_backups without prompting;

backup database DMCOSERV to C:\db2_backups without prompting;
backup database ELEMCAT to C:\db2_backups without prompting;
backup database ESSHWL to C:\db2_backups without prompting;
backup database PMDATA to C:\db2_backups without prompting;
backup database REPMGR to C:\db2_backups without prompting;
backup database TOOLSDB to C:\db2_backups without prompting;

File: TPC_backup.bat

This is the script you run. It stops the IBM Director service, which closes all connections to the DB2 databases and allows DB2 to take an offline backup. After the backup completes, the script restarts the IBM Director service.
Example 10-6 TPC_backup.bat

@ECHO ON
@REM This is a sample backup script
@REM to backup TotalStorage Productivity Center
@REM for Disk and Replication
@REM -------------------------------------------
@REM Stopping TotalStorage Productivity Center
@REM -------------------------------------------
net stop "IBM Director Support Program"
@REM Starting backup of DB2 databases
@REM --------------------------------
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 -tvf C:\scripts\database_list
@REM Restarting TotalStorage Productivity Center
@REM -------------------------------------------
net start "IBM Director Support Program"

Note that the service name contains spaces, so it must be enclosed in double quotes on the net stop and net start commands.
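A matching restore script can follow the same stop, act, start pattern. The sketch below is an assumption rather than part of the product: it restores a single database from the backup directory used in this example; add one restore line per database, or drive it from a script file as the backup does:

```
@ECHO ON
@REM Sample restore sketch (hypothetical) for one database
net stop "IBM Director Support Program"
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 restore database PMDATA from C:\db2_backups without prompting
net start "IBM Director Support Program"
```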


496

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Appendix A.

TotalStorage Productivity Center DB2 table formats


This appendix contains the TotalStorage Productivity Center DB2 table formats that can be used to create customized reports.

Copyright IBM Corp. 2004, 2005. All rights reserved.

497

A.1 Performance Manager tables


The following are the Performance Manager DB2 table formats used in the reports in Chapter 10, Database management and reporting on page 449.

A.1.1 VPVPD table


The VPVPD table contains cluster-level and storage server-level configuration data generated at the start of Performance Collection. The key is P_TASK,M_MACH_SN,M_CLUSTER_N.
Table A-1   VPVPD

P_TASK (INTEGER NOT NULL)
   Sequence number of the performance collection task that read this data from the storage server.
M_MACH_SN (CHAR(12) NOT NULL)
   Serial number of this storage server.
M_MACH_TY (CHAR(4))
   The higher level identifier for the storage server product, for example 2105 for the IBM Enterprise Storage Server 2105.
M_MODEL_N (CHAR(3))
   The model number for the storage server, for example E20.
M_CLUSTER_N (SMALLINT NOT NULL)
   Cluster number for this cluster.
M_RAM (INTEGER)
   Amount of random access memory in this cluster, in megabytes.
M_NVS (INTEGER)
   Amount of non-volatile storage in this cluster, in megabytes.
P_CDATE (DATE)
   The date of this snapshot of the configuration for this storage server, collected by the performance collector.
P_CTIME (TIME)
   The time of day of this configuration snapshot.

A.1.2 VPCFG table


The VPCFG table contains logical array configuration data generated at start of Performance Collection. The key is P_TASK, M_MACH_SN, M_CLUSTER_N, M_CARD_NUM, M_LSS_LA, M_ARRAY_ID.
Table A-2   VPCFG

P_TASK (INTEGER NOT NULL)
   Sequence number of the performance collection task that read this data from the storage server.
M_MACH_SN (CHAR(12) NOT NULL)
   Serial number of this storage server.
M_CLUSTER_N (SMALLINT NOT NULL)
   Cluster number for this logical array.
M_CARD_NUM (SMALLINT NOT NULL)
   Card number of the adapter associated with this logical array.
M_LSS_LA (INTEGER NOT NULL)
   An ESS internally generated logical subsystem identifier.
M_ARRAY_ID (CHAR(8) NOT NULL)
   An ESS internally generated logical array identifier.
M_LOOP_ID (CHAR(1))
   SSA loop identifier (for example, A or B) associated with the disk group containing this logical array. If this ESS has the AAL feature enabled, this field contains the value X.
M_GRP_NUM (SMALLINT)
   Identifying number of the disk group containing this logical array.
M_DISK_NUM (SMALLINT)
   Disk number of the disk group (and final identifier of the logical array) if an independent disk, 0 otherwise.
M_DBL_WIDE (CHAR(1) NOT NULL DEFAULT 'S')
   Attribute of an array: S if single wide strip size (32K), D if double wide strip size (64K).
M_DDM_NUM (INT4)
   Number of physical disks being used by the logical array, excluding spares.
M_STOR_TYPE (SMALLINT NOT NULL DEFAULT -1)
   Storage type of the logical array: 0 - JBOD, 1 - RAID5, 2 - RAID10.
M_DDM_SIZE (INTEGER NOT NULL DEFAULT -1)
   Smallest capacity, in units of GB*10, among physical disks used by the logical array.
M_DDM_SPEED (INTEGER NOT NULL DEFAULT -1)
   Slowest speed, in units of RPMs, among physical disks used by the logical array.

A.1.3 VPVOL table


The VPVOL table contains logical volume configuration data; generated at start of Performance Collection. The key is P_TASK,M_MACH_SN,M_LSS_LA,M_VOL_NUM.
Table A-3   VPVOL

P_TASK (INTEGER NOT NULL)
   Sequence number of the performance collection task that read this data from the storage server.
M_MACH_SN (CHAR(12) NOT NULL)
   Serial number of this storage server.
M_LSS_LA (INTEGER NOT NULL)
   An internally generated logical subsystem identifier.
M_VOL_NUM (INTEGER NOT NULL)
   Identifying number of this logical volume (and lowest level identifier of the logical volume).
M_VOL_TY (CHAR(1))
   Character F if an open systems (fixed block) volume, C if an S/390 volume.
M_VOL_ADDR (CHAR(8))
   LUN serial number if the logical volume is an open systems (fixed block) volume, SSID + base device address if an S/390 volume.

A.1.4 VPCCH table


The VPCCH table contains volume-level performance data (for I/O requests, or "command chains", including those causing cache/DASD transfers).
Table A-4   VPCCH

P_TASK (INTEGER NOT NULL)
   Sequence number of the performance collection task that read this data from the storage server.
PC_INDEX (INTEGER)
   An internally generated, consecutive number that uniquely identifies the sample statistics gathered for one collection.
M_MACH_SN (CHAR(12) NOT NULL)
   Serial number of this storage server.
M_CLUSTER_N (SMALLINT)
   Cluster number for this logical volume.
M_LSS_LA (INTEGER NOT NULL)
   An internally generated logical subsystem identifier.
M_ARRAY_ID (CHAR(8))
   An ESS internally generated logical array identifier.
M_VOL_NUM (INTEGER NOT NULL)
   Identifying number of this logical volume (and lowest level identifier of the logical volume).
PC_DATE_B (DATE NOT NULL)
   Date that this sample time period began (that is, performance counters were collected).
PC_TIME_B (TIME NOT NULL)
   The time of day that this sample time period began (that is, performance counters were collected).
PC_DATE_E (DATE)
   Date that this sample time period ended (that is, performance counters were collected again).
PC_TIME_E (TIME)
   The time of day that this sample time period ended (that is, performance counters were collected again).
PC_N_IO_R (INTEGER)
   Number of normal (non-sequential) I/O read requests (command chains that contained at least one search or read command but no write command) in this time period for this logical volume.
PC_N_IO_W (INTEGER)
   Number of normal (non-sequential) I/O write requests (command chains that contained at least one write command).
PC_N_CH_R (INTEGER)
   Number of cache hits for normal (non-sequential) I/O read requests (normal, read command chains that were completed without requiring access to any DASD).
PC_N_CH_W (INTEGER)
   Number of cache hits for normal (non-sequential) I/O write requests (normal, write command chains that were completed without requiring access to any DASD).
PC_S_IO_R (INTEGER)
   Number of sequential I/O read requests (sequential mode command chains that contain at least one search or read command but no write commands).
PC_S_IO_W (INTEGER)
   Number of sequential I/O write requests (sequential mode command chains that contain at least one write command).
PC_S_CH_R (INTEGER)
   Number of cache hits for sequential I/O read requests (sequential mode, read command chains that were completed without requiring access to any DASD).
PC_S_CH_W (INTEGER)
   Number of cache hits for sequential I/O write requests (sequential mode, write command chains that were completed without requiring access to any DASD).
PC_D2C (INTEGER)
   Number of disk to cache track transfers for non-sequential I/O requests (number of tracks transferred successfully from DASD to cache, excluding sequential mode next track promotions).
PC_SEQ_D2C (INTEGER)
   Number of disk to cache track transfers for sequential I/O requests (number of tracks transferred successfully from DASD to cache due to sequential mode next track promotions).
PC_C2D (INTEGER)
   Number of cache to disk track transfers (number of tracks transferred from cache to DASD asynchronous to transfers from the channel).
PC_RHR_AVG (SMALLINT)
   Cache hit ratio for read I/Os (total number of cache hits for read requests / total number of read requests).
PC_WHR_AVG (SMALLINT)
   Cache hit ratio for write I/Os (total number of cache hits for write requests / total number of write requests).
PC_THR_AVG (SMALLINT)
   Overall cache hit ratio (total number of cache hits for all requests / total number of requests).
PC_SHR_AVG (SMALLINT)
   Cache hit ratio for sequential I/Os (total number of cache hits for sequential requests / total number of sequential requests).
PC_NHR_AVG (SMALLINT)
   Cache hit ratio for normal (non-sequential) I/Os (total number of cache hits for non-sequential requests / total number of non-sequential requests).
PC_RMR_IO (INTEGER)
   Number of record mode read I/O requests (number of command chains associated with a record access mode read operation, where the chain contains no write commands).
PC_RMR_CH (INTEGER)
   Number of record mode read cache hits (number of record mode read requests which were completed without requiring any access to DASD).
PC_RMRHR_AVG (SMALLINT)
   Cache hit ratio for record mode reads (number of record mode read cache hits / number of record mode read requests).
PC_DFW_IO (INTEGER)
   Number of DASD fast write I/O requests (same as normal write I/O requests).
PC_DFW_DELAY (INTEGER)
   Number of DASD fast write-delayed requests (requests of this type delayed due to NVS space constraints).
PC_DFW_DELAY_PRCT (SMALLINT)
   (DASD fast write-delayed requests / total I/O requests) * 100.
PC_INT_SECS (SMALLINT)
   Number of seconds in this time period.
PC_B_HR_PRCT (SMALLINT NOT NULL)
   Percent (0 - 100) of the time period of this sample (in the hour of the start time).
P_OWNER (INTEGER)
   Internally generated identifier of the creator of this record.
P_COMM (SMALLINT)
   Zero if normal, a negative value if the location of this logical volume cannot be identified using the VPCFG and VPVOL tables.
PC_Q_W_PR (INTEGER NOT NULL)
   Number of quick-write promote operations.
M_CARD_NUM (SMALLINT NOT NULL)
   Card number of the adapter associated with this logical volume.
M_LOOP_ID (CHAR(1) NOT NULL)
   SSA loop identifier (for example, A or B) associated with the disk group containing the logical volume.
M_GRP_NUM (SMALLINT NOT NULL)
   Identifying number of the disk group containing this logical volume.
M_DISK_NUM (SMALLINT NOT NULL)
   Disk number of the disk group, if an independent disk, 0 otherwise.
M_VOL_TY (CHAR(1))
   Character F if an open systems (fixed block) volume, C if an S/390 volume.
M_VOL_ADDR (CHAR(8))
   LUN serial number if the logical volume is an open systems (fixed block) volume, SSID + base device address if an S/390 volume.
TIME_LEVEL (CHAR(1) NOT NULL)
   Identifies the time level of the statistic: S for sample, H for hourly.
PC_DEV_DATE_B (DATE NOT NULL)
   Device date that this sample time period began.
PC_DEV_TIME_B (TIME NOT NULL)
   Device time of day that this sample time period began.
PC_DEV_DATE_E (DATE)
   Device date that this sample time period ended.
PC_DEV_TIME_E (TIME)
   Device time of day that this sample time period ended.
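As an example of using these table formats for a customized report, the query below is a sketch (the column selection and the date literal are illustrative, not a product-supplied report): it joins VPCCH to VPVOL on their shared key columns to list per-volume read activity and average read hit ratio for one day.

```
select v.m_vol_addr,
       sum(c.pc_n_io_r + c.pc_s_io_r) as read_ios,
       avg(c.pc_rhr_avg)              as avg_read_hit_ratio
from   vpcch c, vpvol v
where  c.p_task    = v.p_task
and    c.m_mach_sn = v.m_mach_sn
and    c.m_lss_la  = v.m_lss_la
and    c.m_vol_num = v.m_vol_num
and    c.pc_date_b = '09/01/2005'
group by v.m_vol_addr
order by read_ios desc;
```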

Appendix A. TotalStorage Productivity Center DB2 table formats

503


Appendix B.

Worksheets
This appendix contains worksheets that are meant to be used during the planning and installation of the TotalStorage Productivity Center. The worksheets are meant as examples, so you can decide not to use them if, for example, you already have all or most of the information collected somewhere. If the tables are too small for your handwriting, or you want to store the information in an electronic format, use a word processor or spreadsheet application, taking our examples as a guide to create your own installation worksheets. In this appendix you will find the following worksheets:

- User IDs and passwords
- Storage device information:
  - IBM Enterprise Storage Server
  - IBM FAStT
  - IBM SAN Volume Controller


B.1 User IDs and passwords


We have created a table to record the user IDs and passwords that you will use during the installation of IBM TotalStorage Productivity Center, for reference during the installation of the components and for future add-ons and agent deployment. Use it for planning purposes. One of the following worksheets is needed for each machine where at least one of the components or agents of Productivity Center is installed. For example, you might have multiple DB2 databases or logon accounts, and you need to remember the IDs of each DB2 instance individually.

B.1.1 Server information


Table B-1   Productivity Center server

Server machine hostname:     _______________________
IP address:                  ____.____.____.____
Configuration information:   _______________________

In Table B-2, simply mark whether each manager or component is going to be installed on this machine.
Table B-2   Managers/components installed

Manager/Component                       Installed (y/n)?
Productivity Center for Disk            ______
Productivity Center for Replication     ______
Productivity Center for Fabric          ______
Productivity Center for Data            ______
Tivoli Agent Manager                    ______
DB2                                     ______
WebSphere                               ______

B.1.2 User IDs and passwords to lock the key files


Table B-3 can be used to note the passwords that you used to lock the key files. For each default key file name, record the key file name and the password that you used.

Table B-3 Passwords used to lock the key files

- MDMServerKeyFile.jks: key file name ____________, password ____________
- MDServerTrusFile.jks: key file name ____________, password ____________
- agentTrust.jks: key file name ____________, password ____________

Enter the user IDs and passwords that you used during the installation in Table B-4 below. Depending on the selected managers and components, some of the lines will not be used for this machine.

Table B-4 User IDs used on this machine (for each element, record the user ID and password you entered; the default or recommended user ID is shown)

- Suite Installer: Administrator
- DB2: db2admin (1)
- IBM Director (see also below): Administrator (1)
- Resource Manager: manager (2)
- Common Agent (see also below): AgentMgr (2)
- Common Agent: itcauser (2)
- TotalStorage Productivity Center universal user: tpcsuid (1)
- Tivoli NetView: (3)
- IBM WebSphere: ____________
- Host Authentication: ____________

Notes: 1. This account can have whatever name you like. 2. This account name cannot be changed. 3. The DB2 administrator user ID and password are used here; see Fabric Manager User IDs on page 51.


B.2 Storage device information


This section contains worksheets that you can use to gather important information about the storage devices that TotalStorage Productivity Center will manage. You will need this information during the configuration of the Productivity Center. Some of the information is needed before you install the device-specific CIM Agent, because the agent sometimes depends on a specific code level. Determine whether there are firewalls in the IP path between the TotalStorage Productivity Center server(s) and the devices that might block the necessary communication. In the first column of each table, enter as much information as possible so that you can identify the devices later on.

B.2.1 IBM Enterprise Storage Server


Use Table B-5 to collect the information about your ESS devices.

Important: Check the device support matrix for the associated CIM Agent.

Table B-5 Enterprise Storage Server

- Name, location, organization
- Both IP addresses
- LIC level
- ESS Specialist user name
- ESS Specialist password
- CIM Agent hostname and protocol


B.2.2 IBM FAStT


Use Table B-6 to collect the information about your FAStT devices. Check the device support matrix before you install.
Table B-6 FAStT devices

- Name, location, organization
- Firmware level
- IP address
- CIM Agent hostname and protocol


B.2.3 IBM SAN Volume Controller


Use Table B-7 to collect the information about your SVC devices.
Table B-7 SAN Volume Controller devices

- Name, location, organization
- Firmware level
- IP address
- User ID
- Password
- CIM Agent hostname and protocol


Appendix C. Event management
This appendix contains additional information about the IBM Director options that can be used to build Event Action Plans. This information complements the information in Event Action Plan Builder on page 215.


C.1 Event management introduction


IBM Director event management enables you to identify, categorize, and automatically initiate actions in response to network events. Event management has the following functions:

- Maintains a log of all events that are received and logged by the IBM Director Server. Using the Event Log, you can view details on all logged events or subsets of logged events. To view a subset of events, you must apply an event filter to the set of events. Director supplies predefined event filters as well as an Event Filter Builder dialog that enables you to build filters that are specific to your site's needs.
- Maintains a history of actions performed in response to events.
- Executes predefined or user-defined responses to selected events. By applying an event action plan to a managed system or group of systems, you can initiate actions based on the needs of your working environment.

C.1.1 Understanding events and event actions


An event is a means of identifying a change of state of a process or device on the network, so that a notification of the change can be generated and tracked. For example, an event identifies when a device changes from online on the network to offline, or when a critical resource threshold, such as virtual memory utilization, is met. IBM Director uses the following criteria to identify the characteristics of an event, such as its origin, cause, and severity:

- Date: Identifies the day the event was generated.
- Time: Identifies the time of day the event was generated.
- Event Type: Provides origination information and descriptive detail to help identify the source and cause of the event.
- Event Text: Provides additional descriptive detail to help identify the cause of the event.
- System Name: Identifies the name of the managed system from which the event was received.
- Severity: Identifies the urgency of the event.
- Category: Identifies whether the event signifies a problem (Alert) or an all-clear condition (Resolution).
- Sender Name: Identifies the source from which the event was sent.
- Group Name: Identifies the name of the group associated with the managed system targeted in the event. Group Name appears only in group events.
- Extended Attributes: Identifies additional keywords and keyword values that can further qualify some categories of events, such as SNMP.
- System Variables: Identifies user-defined system variables that help test and track the status of network resources.

Events that are generated as a result of monitoring a group of managed systems are called group events. Events that are generated as the result of monitoring one or more individual systems are called individual events. Actions define the steps to take in response to an event, for example, entering the event in the event log, sending a message that includes the text of the event, or executing a command. Actions that can be taken in response to an event are created using the predefined action templates described in Event Actions on page 519.
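The event characteristics listed above can be thought of as fields of a record. The sketch below models them as a Python dataclass for discussion purposes only; the class, field names, and helper method are illustrative assumptions, not IBM Director's actual data structures or API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative model of an event record using the characteristics described
# above. This is a hypothetical sketch, not IBM Director code.
@dataclass
class Event:
    date: str                  # day the event was generated
    time: str                  # time of day the event was generated
    event_type: str            # origin information, e.g. "Director.Topology.Offline"
    event_text: str            # additional descriptive detail about the cause
    system_name: str           # managed system the event was received from
    severity: str              # Fatal, Critical, Minor, Warning, Harmless, Unknown
    category: str              # "Alert" (problem) or "Resolution" (all-clear)
    sender_name: str = ""      # source from which the event was sent
    group_name: Optional[str] = None          # present only for group events
    extended_attributes: dict = field(default_factory=dict)  # e.g. SNMP keywords

    def is_group_event(self) -> bool:
        # Group events result from monitoring a group of managed systems;
        # individual events carry no group name.
        return self.group_name is not None

ev = Event(date="2005-09-01", time="14:02",
           event_type="Director.Topology.Offline",
           event_text="System went offline",
           system_name="server01", severity="Critical", category="Alert")
```

An event without a group name, such as `ev` above, is an individual event; supplying `group_name` would mark it as a group event.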

C.1.2 Understanding event filters


An event filter describes a set of characteristics, for example, severity and event type, that is used to select a single event or a group of events. When applied to a managed system, group of systems, or group, an event filter can be used to control which events are displayed for viewing in the event log and which events trigger the initiation of specific actions.

Types of event filters


You can create the following types of event filters:

- Simple Event Filters: The default, general-purpose filter type.
- Exclusion Event Filters: Allow the exclusion of selected event types, in addition to the options given by Simple Event Filters.
- Threshold Event Filters: Allow the selection of an interval or count threshold that must be met, in addition to the options given by Simple Event Filters.
- Duplication Event Filters: Allow duplicate events to be ignored, in addition to the options given by Simple Event Filters.

Exclusion, threshold, and duplication event filters are useful in filtering events from one server to another.

Predefined event filters


Director supplies a few event filters that you can use to respond to events on the basis of severity, and to create new event filters. These predefined filters can also be deleted. Predefined event filters are listed in the Event Filters pane of the Event Action Plan Builder window (see Figure C-1 on page 514).


Figure C-1 Event Action Plan Builder

Creating an Event Action Plan on page 215 describes how to associate a predefined event filter with an event action plan.

User-defined event filters


IBM Director provides a facility, the Event Filter Builder, that you can use to create your own filters according to the needs of your environment. To create a filter, choose from one or more event categories in the dialog window, such as the time and day of the week the event occurred, the severity of the event, the originator of the event, and the type of event. Each filtering characteristic is represented as a field name when you view the event log.

Event filter builder


Use this dialog to create an event filter or change an existing event filter (see Figure C-2 on page 515). The fields in the dialog are as follows:


Figure C-2 Simple Event Filter Builder

Any
By default, Any is selected for all filtering categories, indicating that all filtering criteria apply. You must deselect Any before you can select or enter filtering criteria for a specific filtering category.

Event Type
The Event Type tab is the most important tab: it is here that you select the events that you want to activate the event action plan. Use it to specify the source or sources of the events that are to be processed by this filter.

Severity
Identifies the urgency of the event. Severity is typically used in action plans because it identifies potentially urgent problems requiring immediate attention. You can select multiple levels of severity as filtering criteria; logical OR applies to multiple selections. For example, if you select Fatal and Critical, the filtering criteria match if the originator of the event classifies the event as Fatal or as Critical. Severity levels, in order from most severe to least severe, are:

- Fatal: The application that issued the event has assigned a severity level indicating that the source of the event has already caused the program to fail and should be resolved before the program is restarted.
- Critical: The application that issued the event has assigned a severity level indicating that the source of the event may cause program failure and should be resolved immediately.
- Minor: The application that issued the event has assigned a severity level indicating that the source of the event should not cause immediate program failure, but should be resolved.
- Warning: The application that issued the event has assigned a severity level indicating that the source of the event is not necessarily problematic, but may warrant investigation.
- Harmless: The application that issued the event has assigned a severity level indicating that the event is for information only; no potential problems should occur.
- Unknown: The application that generated the event did not assign a severity level.
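The OR rule for multiple selected severities can be sketched in a few lines. The helper below is hypothetical, for illustration only; it is not part of IBM Director.

```python
# Sketch of the severity filtering rule: selecting several severities
# matches an event whose severity equals ANY of the selected levels
# (logical OR). An empty selection is treated here as "Any".
# Hypothetical helper, not IBM Director code.
SEVERITY_LEVELS = ["Fatal", "Critical", "Minor", "Warning", "Harmless", "Unknown"]
# (listed from most severe to least severe, for reference)

def severity_matches(selected, event_severity):
    if not selected:          # "Any" is in effect: every severity passes
        return True
    return event_severity in selected

# Selecting Fatal and Critical matches a Critical event, but not a Warning.
print(severity_matches({"Fatal", "Critical"}, "Critical"))   # True
print(severity_matches({"Fatal", "Critical"}, "Warning"))    # False
```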

Day/Time
Enables you to specify day and time ranges for a filter. Specifying a day and time range in a filter adds control over when actions are run and when they are not. Use the pull-down menus to select values in each category, then click the Add button when you finish the selections; your settings are added to the selections pane. You can create as many day/time range entries as you like. Each time you create a day/time range entry, click Add to add the entry to the list in the selections pane. To remove an entry from the selections pane, click the entry, then click the Delete button.

The time zone that applies to the day/time filtering entries is the time zone in which the IBM Director Server is located. If your console is not in the same time zone as the server, the difference in time zones is shown above the selections pane. For example, if the IBM Director Server is located in New York and your console is located in California, the time zones displayed and used are Eastern Standard Time (EST), and the following is displayed above the selections pane: Server Time - Local Time = 3 Hours.

Day of the Week
Use the pull-down menu to select the day of the week to which this filter is to apply. Weekday (Monday - Friday) and weekend (Saturday and Sunday) selections are available.

Starting Time
Use the pull-down menu to select the starting time of an interval within which this filter is active.

Ending Time
Use the pull-down menu to select the ending time of an interval within which this filter is active.

Add
Adds your day and time selections to the list in the selections pane. You can add multiple day/time entries to the list.

Delete
Deletes a day/time entry from the list of entries in the selections pane. To delete an entry, select it, then click this button.

Block queued events
Select this check box to avoid filtering on events that had to be queued for transmission to the IBM Director Server. Multiple events can be queued for transmission to the IBM Director Server if the managed system for which the event was generated cannot send the event at the time of its occurrence. This option can be useful if the timing of the event is important, or if you want to avoid filtering on multiple queued events that are sent all at once when the IBM Director Server becomes accessible.
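The day/time range check described above can be sketched as follows. The helper and its tuple format are hypothetical, for illustration only; in IBM Director the comparison is made in the server's time zone, as noted above.

```python
from datetime import time

# Sketch of day/time range filtering: an event passes if its day of week
# and time of day fall inside at least one configured entry.
# Hypothetical helper, not IBM Director code.
def in_day_time_ranges(entries, day_of_week, event_time):
    """entries: list of (days, start, end), where days is a set of day names."""
    for days, start, end in entries:
        if day_of_week in days and start <= event_time <= end:
            return True
    return False

# Example: filter active on weekdays during business hours, 09:00 to 17:00.
business_hours = [({"Mon", "Tue", "Wed", "Thu", "Fri"}, time(9, 0), time(17, 0))]
```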

Category
The category specifies the resolution status of the event as a filtering criterion:

- Alert: Signifies a problem.
- Resolution: Signifies that the problem has been resolved and is no longer a problem.

Extended Attributes
Enables you to qualify the filtering criteria using additional keywords and keyword values that can be associated with some categories of events, such as SNMP. These additional keywords and corresponding values are referred to as the event's extended attributes. This category can be particularly useful for narrowing the filtering criteria to a lower level of detail, for example, to isolate one or more values originating from a specific system. You can also view the extended attributes of a specific event: open the Event Log task in the Tasks pane of the Director Console and select an appropriate event from the list. The event's extended attributes, if present, are displayed at the bottom of the Event Details panel, below the Sender Name category.

Because event types are hierarchical, an event with a particular event type has its associated extended attributes as well as the extended attributes of its parent event types. For example, the event type Director.Topology.Offline has extended attributes for Director.Topology.Offline and Director.Topology. You can specify keywords and values in Extended Attributes only if one event type is selected. If the current event type is set to Any, Extended Attributes is disabled. Extended Attributes is also disabled if multiple event types are selected. If the Extended Attributes panel is enabled for a specific event type but no keywords are listed, the IBM Director Server is not aware of any keywords that can be used for filtering.

An event meets the filtering criteria as follows:

- If you select multiple keywords, the values received must match all values of all selected keywords (Boolean AND).
- If you specify multiple values for a single keyword, the value received must match at least one of the values specified for the keyword (Boolean OR).

Any
By default, this check box is selected, indicating that all extended attributes are filtered on. Deselect Any to select specific keyword/value pairs.

Keywords
Select the keywords on which you want to filter. If no keywords are listed, the IBM Director Server has not been made aware of, or has not published, the keywords for the selected event category. You can select multiple keywords.

Values
Specifies a value for the keyword on which you want to filter. You can specify multiple values, but you cannot specify a range of values. To enter multiple values for a single keyword, use the Add key each time you want to add a value. Boolean OR is used to determine whether an event's extended attributes meet the filtering criteria for multiple values of a single keyword. If you enter more than one keyword/value pair, Boolean AND is used to determine whether an event's extended attributes meet the filtering criteria (all keyword values must be true).

Case Sensitive
Select this option if the specified keyword value should be filtered as case sensitive.

Update
Allows you to change the value of a selected keyword/value pair. Select a keyword/value pair, select Values to change the corresponding value, then select Update to make the change take effect.

Delete
Deletes a selected keyword/value pair as a selection criterion.
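The Boolean AND/OR matching rules for extended attributes can be sketched compactly. The function below and its dictionary interface are assumptions for illustration, not IBM Director's implementation.

```python
# Sketch of extended-attribute matching: multiple selected keywords combine
# with Boolean AND, and multiple values for a single keyword combine with
# Boolean OR. Hypothetical helper, not IBM Director code.
def extended_attributes_match(criteria, event_attrs, case_sensitive=True):
    """criteria maps each selected keyword to the list of acceptable values."""
    def norm(value):
        return value if case_sensitive else value.lower()
    for keyword, values in criteria.items():
        received = event_attrs.get(keyword)
        if received is None:
            return False                      # AND: every keyword must be present
        if norm(received) not in [norm(v) for v in values]:
            return False                      # OR: any one listed value suffices
    return True
```

For example, filtering on `{"community": ["public", "private"]}` matches an event whose `community` attribute is either value, while adding a second keyword to the criteria requires that keyword to match as well.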

Frequency
This only appears for Duplication and Threshold Event Filters. Interval For Duplication Event Filters, the Interval field can be used without using the Count field (Count=0). Interval specifies a window of time that begins when an event meets the filtering criteria. The first occurrence of an event that meets the criteria triggers associated actions and starts a countdown of the units that define the interval. For example, if you enter 10 and select seconds, a 10-second timer starts when an event meets the filtering criteria. If Count is set to 0, all other instances of an event meeting the criteria do not trigger associated actions during the interval. If Interval is set to a value greater than 0 and Count is set to a value greater than 0, after the first occurrence of an event meets the filtering criteria, the value entered in Count (n) specifies the number of times an event must meet the criteria within the interval before associated actions can be triggered again. If an event meets the criteria for the nth time within the interval, the next time (n+1) an event meets the criteria, associated actions are triggered, the count is reset, and the interval is reset. For Threshold Event Filters, the Interval field must be used in conjunction with the Count field. Interval specifies a window of time that begins when an event meets the filtering criteria. The first occurrence of an event that meets the criteria does not trigger associated actions, but starts a countdown of the units that define the interval. For example, if you enter 10 and select minutes, a 10-minute timer starts when an event meets the filtering criteria. The value entered in Count specifies the number of times (n-1) an event has to meet the criteria before associated actions are triggered. The first n-1 events that occur within the interval do not cause associated actions to trigger. 
The nth time an event meets the criteria within the interval, associated actions are triggered, the count is reset, and the interval is reset. Count For both duplication and threshold event filters, the Count field can be used without using the Interval field (value=0 for selected type of interval). For Duplication Event Filters, Count must be an integer from 0 to 100 and specifies the number of duplicate events to ignore after the first occurrence of an event meets the filtering criteria. For example, if you enter 5 in Count, an event must meet the criteria 6 times after the first event meets the criteria to trigger associated actions again. If you specify an interval and Count is set to the value 0, the first time the criteria are met the associated actions trigger, the interval countdown begins, and no actions are triggered during the interval.

518

Managing Disk Subsystems using IBM TotalStorage Productivity Center

For Threshold Event Filters, Count must be an integer from 1 to 100. Count specifies the number (n-1) of events that must meet the filtering criteria before associated actions are triggered. The first n-1 events are ignored. For example, if you enter the value 5 in Count, the first 4 duplicate events are ignored and the fifth event triggers associated actions.
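The threshold counting behavior (with Count = n, the first n-1 matching events within the interval are ignored, and the nth triggers the actions and resets both counters) can be modeled with a small simulation. The class below is a simplified sketch using abstract numeric timestamps rather than a real clock; it is not IBM Director code.

```python
# Simplified model of a Threshold Event Filter's Interval/Count behavior,
# as described above. Timestamps are plain numbers (e.g. seconds).
# Illustrative sketch, not IBM Director code.
class ThresholdFilter:
    def __init__(self, count, interval):
        self.count = count          # nth matching event triggers the actions
        self.interval = interval    # window length started by the first event
        self.window_start = None
        self.seen = 0

    def on_event(self, timestamp):
        """Return True if this matching event should trigger the actions."""
        if self.window_start is None or timestamp - self.window_start > self.interval:
            # The first event (re)starts the interval but does not trigger.
            self.window_start = timestamp
            self.seen = 1
        else:
            self.seen += 1
        if self.seen == self.count:
            # Threshold reached: trigger, then reset count and interval.
            self.window_start = None
            self.seen = 0
            return True
        return False

# With Count = 3, every third matching event inside the window triggers.
f = ThresholdFilter(count=3, interval=10)
hits = [f.on_event(t) for t in (0, 1, 2, 3, 4, 5)]
```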

Excluded Event Type


This tab appears only for Exclusion Event Filters. Use it to identify sources of events within the network that you want to exclude from the event filtering criteria specified using the Event Type tab. That is, you can filter on a specified group of events but exclude certain events that meet the criteria selected on this page. The exclusion filter can also be useful for identifying the criteria that do not apply, rather than identifying all the criteria that do apply.

System Variables
This tab is enabled only if one or more system variables exist. You can create a system variable using the Set Event System Variable event action. System Variables are user-defined keyword/value pairs that are known only to the local IBM Director Server. You can further qualify the filtering criteria by specifying a system variable. These user-defined system variables are not associated with the operating system's environment variables in any way. Refer to Understanding System Variables in the IBM Director Help for more information about how to use system variables.

C.1.3 Event Actions


The Event Actions pane lists the predefined action types (see Figure C-3 on page 520). With the exception of Add Event to Event Log, each type of event action must be customized, either by double-clicking it or by using the right-click Customize menu item. Event actions specify the reactions that you want IBM Director to take when an event occurs.


Figure C-3 Default Event Actions

In this section we discuss some of the possible actions.

Event action right-click options


All event actions have right-click context menus. These menus include several tools for troubleshooting and maintaining the actions.

The menu functions are:

- Customize: Enables the creation of custom actions.
- Add to Event Action Plan: Adds the action to the currently selected action plan.
- Show Implementations: Displays the systems or groups to which this action has been applied.
- Rename: Allows a new name to be assigned to this action.
- Update: Makes it possible to modify the tasks performed by the action.
- Delete: Removes the action. If the action is in use on a group or system, the software notifies the user and prompts for a second verification before removal.
- Test: Executes the task(s) associated with the action.

Creating an action
Following are the steps to create an action:

1. From the Actions pane, right-click Send an Event Message to a Console User, and click Customize. IBM Director sorts actions alphabetically and executes them in that order.
2. Fill in the fields using event data substitution variables (see Figure C-4 on page 521). For more information about event data substitution variables, see Event Data Substitution on page 521.


Figure C-4 Customize Action window

3. Select File → Save As to save the action, and enter the name of the action. In the example, the name System_text Event Message was used.
4. The new action now appears as a subentry listed under Send an Event Message to a Console User (see Figure C-5).

Figure C-5 New Action - System_text Event Message

C.1.4 Event Data Substitution


Some event actions allow you to include event-specific information as part of the text message. Including event information is referred to as event data substitution. Refer to the help associated with a specific event action template for information about where event data substitution can be used. Event data is substituted into the text of an event message through keywords. When used in a message, a keyword must be preceded by the ampersand symbol (&). The keywords are:
- &date: Specifies the date the event occurred.
- &time: Specifies the time the event occurred.
- &text: Specifies the event details, if supplied by the event.
- &type: Specifies the event type criteria used to trigger the event.
- &severity: Specifies the severity level of the event.
- &system: Specifies the name of the system for which the event was generated.
- &sender: Specifies the name of the system from which the event was sent. This keyword returns null if unavailable.
- &group: Specifies the group to which the target system belongs and is being monitored. This keyword returns null if unavailable.
- &category: Specifies the category of the event.
- &pgmtype: Specifies a dotted representation of the event type using internal type strings.
- &timestamp: Specifies the coordinated time of the event (milliseconds since 1/1/1970 12:00 AM GMT).
- &rawsev: Specifies the non-localized string of the event severity (FATAL, CRITICAL, MINOR, WARNING, HARMLESS, UNKNOWN).
- &rawcat: Specifies the non-localized string of the event category (ALERT, RESOLVE).
- &corr: Specifies the correlator string of the event. Related events, such as those from the same monitor threshold activation, will match this.
- &snduid: Specifies the unique ID of the event sender.
- &sysuid: Specifies the unique ID of the system associated with the event.
- &prop:filename#propname: Specifies the value of the property string propname from property file filename (relative to \tivoliWg\classes).
- &sysvar:varname: Specifies the event system variable varname. This keyword returns null if a value is unavailable.
- &slotid:slot-id: Specifies the value of the event detail slot with the non-localized ID slot-id.
- &md5hash: Specifies the MD5 hash code (CRC) of the event data (a good event-specific unique ID).
- &hashtxt: Specifies a full replacement for the field with an MD5 hash code (32-character hex code) of the event text.
- &hashtxt16: Specifies a full replacement for the field with a short MD5 hash code (16-character hex code) of the event text.
- &otherstring: Specifies the value of the detail slot with the localized label that matches otherstring. This keyword returns OTHERSTRING if unavailable.

Note: When you specify an event data substitution keyword containing more than one word, substitute the underscore character ("_") for each space between words. For example, to use the keyword "User Logon" you must enter "User_Logon" in the text of the event message. A sample entry containing this keyword might be: "User &User_Logon just logged on to the system."

Example of message text with event data substitutions:
Please respond to the event generated for &system, which occurred &date. The text of the event was &text with a severity of &severity.
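A minimal sketch of this substitution mechanism, including the underscore rule for multi-word keywords, follows. The function and its dictionary interface are assumptions for illustration, not IBM Director's implementation.

```python
import re

# Sketch of event data substitution: keywords in the message text are
# introduced by "&", and a multi-word keyword such as "User Logon" is
# written "&User_Logon". Hypothetical helper, not IBM Director code.
def substitute(template, event):
    """event maps keyword names (e.g. 'date', 'severity') to their values."""
    def expand(match):
        keyword = match.group(1)
        # Try the keyword as written, then with underscores read as spaces.
        value = event.get(keyword, event.get(keyword.replace("_", " ")))
        return "null" if value is None else str(value)
    return re.sub(r"&(\w+)", expand, template)

msg = substitute("Please respond to the event generated for &system, "
                 "which occurred &date. The text of the event was &text "
                 "with a severity of &severity.",
                 {"system": "server01", "date": "2005-09-01",
                  "text": "Disk offline", "severity": "Critical"})
```

In this sketch, a keyword with no value available (for example, &sender on an event without a sender) expands to "null", matching the behavior documented for those keywords above.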

C.1.5 Updating Event Plans, Filters, and Actions


Use the procedures described here to make changes to an existing event filter, action, or event action plan. Creating an Event Action Plan describes procedures for creating an event filter, action, or event action plan.

Updating an Event Filter


To change the contents of an event filter:

1. Click the Event Action Plan Builder button on the toolbar of the Management Console to display the Event Action Plan Builder window. The filters are listed in the Event Filters pane.
2. Right-click the filter you want to edit to display the context menu, then select Update. The Event Filter Builder dialog is displayed with the filter's current settings.
3. Make whatever changes you like, then select File → Save or click the Save icon to save your changes. Your changes take effect immediately for all instances where you have applied the filter.


Deleting an Event Filter


You can remove both predefined and user-defined filters. To remove a filter:

1. Click the Event Action Plan Builder button on the toolbar of the Management Console to display the Event Action Plan Builder window. The filters are listed in the Event Filters pane. If a filter has been associated with an event action plan, it is also displayed in the Event Action Plans pane under the plan with which it is associated.
2. To remove an application of a filter in the Event Action Plans pane, locate the event action plan with which the filter is associated, and do one of the following:
   - Right-click the filter to display the context menu, then select Delete.
   - Select the filter, then click the Delete icon in the toolbar.

   To remove a filter and all of its applications in event action plans, do one of the following:
   - Right-click the filter in the Event Filters pane and select Delete.
   - Select the filter, then click the Delete icon in the toolbar.

Note: The Delete icon applies to all highlighted items in the window. Ensure that only the items you want to delete are highlighted before you proceed.

If the filter is included in an event action plan, you are prompted to confirm that you want to remove all instances of the filter.

Editing a Custom Action


To change the contents of a customized action:

1. Click the Event Action Plan Builder button on the toolbar of the Management Console to display the Event Action Plan Builder window. The custom actions are listed under the corresponding action templates in the Actions pane.
2. Right-click the custom action you want to edit to display the context menu, then select Update. The action's dialog is displayed with the current settings.
3. Make whatever changes you like, then select File → Save or click the Save icon to save your changes and close the dialog. Your changes take effect immediately for all instances where you have applied the custom action.

Copying a Custom Action


It is often easier to create an action from an existing custom action with settings similar to those you need. To create a new action using the contents of an existing custom action:

1. Click the Event Action Plan Builder button on the toolbar of the Management Console to display the Event Action Plan Builder window. The custom actions are listed under the corresponding action templates in the Actions pane.
2. Right-click the custom action you want to use as a basis for the new action to display the context menu, then select Update. The action's dialog is displayed with the current settings.
3. Make whatever changes are needed for the new action, then select File → Save As or click the Save As icon to save your changes. A dialog is displayed prompting for the name of the action.
4. Enter the name you want to assign to the action and click OK to save your changes and close the dialog. The new custom action is added as a child entry of the corresponding action template.


Removing a Custom Action


To remove a customized action:

1. Click the Event Action Plan Builder icon on the toolbar of the Management Console to display the Event Action Plan Builder window. The custom actions are listed under the corresponding action templates in the Actions pane; click the expansion icon to expand the tree of customized actions. If the custom action has been associated with an event action plan, it is also displayed in the Event Action Plans pane under the filter with which it is associated.
2. To remove an application of a custom action in the Event Action Plans pane, locate the event action plan with which the action is associated, and do one of the following:
   - Right-click the action to display the context menu, then select Delete.
   - Select the action, then click the Delete icon in the toolbar.

   To remove a customized action and all of its applications in event action plans, do one of the following:
   - Right-click the custom action in the Actions pane to display the context menu, then select Delete.
   - Select the custom action, then click the Delete icon in the toolbar.

Note: The Delete icon applies to all highlighted items in the window. Ensure that only the items you want to delete are highlighted before you proceed.

You are prompted to confirm that you want to remove all instances of the action.

Removing an Event Action Plan


To remove an event action plan:

1. Select the Event Action Plan Builder icon in the toolbar of the Director Console to display the Event Action Plan Builder window.
2. Do one of the following:
   - Right-click an event action plan to display the context menu, then select Delete.
   - Select the event action plans you want to delete, then select Edit → Delete from the menu bar.
   - Select the event action plans you want to delete, then click the Delete icon in the toolbar.

Removing a Distributed Event Action Plan

To delete a distributed event action plan from a top-level server and all distributed servers to which the plan has been propagated:
1. In the Event Action Plans pane of the Event Action Plan Builder window, click the square box next to the Distributed Event Action Plan icon to display all user-defined plans.
2. Right-click the plan that you want to delete, then select Delete. You are prompted to confirm that you want to delete the plan.
3. Select Yes. If you have applied the plan to one or more systems, you are prompted again to confirm the deletion, because all implementations of the plan on the network are deleted as well.
4. Select Yes. The user-defined plan and all implementations of the plan are removed from the network.
Appendix C. Event management


Deleting from a server

To remove a distributed event action plan from a Director server:
1. In the Group Contents pane of the Director Console, right-click in an open area and select Associations → Distributed Event Action Plans. All managed systems to which one or more distributed event action plans have been applied now display a distributed event action plans icon.
2. Click the square box next to the server to display the distributed event action plans icon.
3. Click the square box next to the distributed event action plans icon to expand the list of plans.
4. Right-click the plan that you want to delete, then select Delete. The plan is immediately deleted from the server.


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see "How to get IBM Redbooks" on page 528. Note that some of the documents referenced here may be available in softcopy only.
- IBM TotalStorage Productivity Center: Getting Started, SG24-6490
- IBM TotalStorage SAN Volume Controller, SG24-6423
- IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886
- IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
- IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services with IBM Eserver zSeries, SG24-5680
- IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016
- IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services in Open Environments, SG24-5757
- DB2 Warehouse Management: High Availability and Problem Determination Guide, SG24-6544
- DB2 UDB/WebSphere Performance Tuning Guide, SG24-6417
- Up and Running with DB2 for Linux, SG24-6899
- IBM TotalStorage Business Continuity Solutions Guide, SG24-6547
- IBM TotalStorage Enterprise Storage Server PPRC Extended Distance, SG24-6568

Other publications
- DFSMS Advanced Copy Services, SC35-0428
- z/OS DFSMSdfp Advanced Services, SC26-7400
- IBM TotalStorage Enterprise Storage Server Web Interface User's Guide, SC26-7448
- IBM TotalStorage Enterprise Storage Server Command-Line Interface User's Guide, SC26-7494
- IBM TotalStorage SAN Multiple Device Manager Command-Line Interface Guide, SC26-7585
- IBM TotalStorage SAN Multiple Device Manager Configuration Guide, SC26-7586
- IBM TotalStorage SAN Multiple Device Manager CIM Agent Developer's Reference, SC26-7587

Copyright IBM Corp. 2004, 2005. All rights reserved.


Online resources
These Web sites and URLs are also relevant as further information sources:
- Storage Networking Industry Association Web site
  http://www.snia.org/
- Distributed Management Task Force, Inc. Web site
  http://www.dmtf.org/
- IBM TotalStorage Productivity Center technical support
  http://www-1.ibm.com/servers/storage/support/virtual/tpc.html
- Reference material concerning IBM TotalStorage Productivity Center and the products mentioned in this redbook
  http://www.storage.ibm.com/servers/storage/software/index.html
- DB2 Universal Database for Linux, UNIX and Windows Technical Support
  http://www-306.ibm.com/software/data/db2/udb/support.html
- DB2 Technical Support, Version 8 Information Center and PDF product manuals
  http://www-306.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/v8pubs.d2w/en_main

How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM
IBM Support and downloads
ibm.com/support

IBM Global Services
ibm.com/services




Back cover

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Install and customize Productivity Center for Disk
Install and customize Productivity Center for Replication
Use Productivity Center to manage your storage

IBM TotalStorage Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server, and FAStT. TotalStorage Productivity Center includes the IBM Tivoli Bonus Pack for SAN Management, bringing together device management with fabric management, to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator has the ability to configure storage devices, manage the devices, and view the Storage Area Network from a single point.

This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities.

This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation and configuration of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and how to use them. It is intended for anyone wanting to learn about TotalStorage Productivity Center and how it complements an on demand environment, and for those planning to install and use the product.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7097-01 ISBN 0738493848
