Mary Lovelace
Gerd Becker
Dan Edwards
Shayne Gardener
Mikael Lindstrom
Craig McAllister
Norbert Pott
ibm.com/redbooks
International Technical Support Organization
December 2009
SG24-7718-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xiii.
This edition applies to Version 6, Release 1, of IBM Tivoli Storage Manager (product number 5698-B22).
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Chapter 15. IBM Tivoli Storage Manager Data Protection for Mail: Exchange 6.1 . . 235
15.1 System requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Contents v
15.1.1 Data Protection for Microsoft Exchange V6.1 on Windows for x86. . . . . . . . . . 236
15.1.2 Microsoft Exchange Server 2003 SP2 or later . . . . . . . . . . . . . . . . . . . . . . . . . 236
15.1.3 Data Protection for Microsoft Exchange V6.1 on Windows for x64. . . . . . . . . . 236
15.1.4 Compatibility issues with earlier versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
15.1.5 Backup methods supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
15.2 Individual Mailbox Restore feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
15.2.1 Individual Mailbox Restore limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
15.2.2 Tivoli Storage Manager 6.1 Mailbox Restore features . . . . . . . . . . . . . . . . . . . 239
15.2.3 Exchange Server: Mailbox Restore. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
15.2.4 Tivoli Storage Manager Mailbox Restore limitations . . . . . . . . . . . . . . . . . . . . . 242
15.2.5 The restoremailbox command line parameter. . . . . . . . . . . . . . . . . . . . . . . . . . 242
Chapter 16. Installation and upgrade planning for Tivoli Storage Manager V6.1 . . . 245
16.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.2 Upgrade strategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.2.1 What you can and cannot do with Tivoli Storage Manager V6.1 . . . . . . . . . . . 246
16.2.2 Upgrade considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
16.3 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
16.3.1 System requirements for the V6.1 server system . . . . . . . . . . . . . . . . . . . . . . . 248
16.3.2 System requirements for the V6.1 reporting and monitoring. . . . . . . . . . . . . . . 250
16.3.3 Client environment requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
16.3.4 Tivoli Storage Manager Client compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
16.4 Database capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
16.4.1 Overview of the four different log types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
16.4.2 Recovery logs summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
16.5 Planning an upgrade from V5 to V6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
16.5.1 Database restructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
16.5.2 Estimating the upgrade time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
16.5.3 Space requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
16.5.4 Work sheet for planning space for the V6.1 server . . . . . . . . . . . . . . . . . . . . . . 263
16.5.5 High level process for upgrading the server to V6.1 . . . . . . . . . . . . . . . . . . . . . 263
16.6 Naming best practices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
16.7 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
16.8 Upgrading an existing system versus a new system. . . . . . . . . . . . . . . . . . . . . . . . . 266
16.8.1 Comparison of methods for moving data to the V6.1 database . . . . . . . . . . . . 267
16.8.2 Details of the database upgrade process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
16.8.3 Tivoli Storage Manager V6.1 upgrade utilities . . . . . . . . . . . . . . . . . . . . . . . . . 274
16.9 Upgrade scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
16.9.1 Scenario 1: New system, media method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
16.9.2 Upgrading the server using the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
16.9.3 Scenario 2: New system, network method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
16.9.4 Upgrading using the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
16.9.5 Scenario 3: Same system, media method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
16.9.6 Summary of the wizard method scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
16.9.7 Scenario 4: Same system, network method . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
16.9.8 Upgrading the server using the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
16.9.9 Hybrid upgrade migration method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
16.10 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
16.10.1 Testing the upgrade process for a server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
16.10.2 Test by extracting data from a separate copy of the server . . . . . . . . . . . . . . 289
16.10.3 Test by extracting data from the production server. . . . . . . . . . . . . . . . . . . . . 290
Part 7. Installation, customization, and upgrade of Tivoli Storage Manager V6.1 Server and Client 301
18.11 Using the Tivoli Storage Manager configuration wizard . . . . . . . . . . . . . . . . . . . . . 378
18.12 Creating the server instance manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
18.12.1 Manually creating a Tivoli Storage Manager instance . . . . . . . . . . . . . . . . . . 393
18.12.2 Running multiple server instances on a single system . . . . . . . . . . . . . . . . . . 398
18.12.3 Configuring server and client communications . . . . . . . . . . . . . . . . . . . . . . . . 399
18.12.4 TCP/IP options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
18.12.5 Named Pipes options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
18.12.6 Shared memory options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
18.12.7 SNMP DPI subagent options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
18.12.8 Monitoring the server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
18.12.9 Network connection types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
18.13 Debugging techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
18.13.1 Investigating log messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
18.13.2 How to completely remove Deployment Engine . . . . . . . . . . . . . . . . . . . . . . . 403
18.14 Gathering logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Chapter 19. Tivoli Storage Manager V6.1 Backup-Archive Client update and installation
changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
19.1 Backup-Archive Client updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
19.1.1 New function in Tivoli Storage Manager V6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . 408
19.1.2 Related commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
19.2 Installation of the Tivoli Storage Manager V6.1 client . . . . . . . . . . . . . . . . . . . . . . . . 411
19.2.1 Migrating from earlier versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
19.2.2 Considerations for migrating between processor architectures . . . . . . . . . . . . 412
19.2.3 Unicode considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
19.2.4 Additional migration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
19.2.5 Upgrading Open File Support or online image . . . . . . . . . . . . . . . . . . . . . . . . . 414
19.2.6 NDMP support requirements (Extended Edition only) . . . . . . . . . . . . . . . . . . . 414
19.2.7 Installing from the Tivoli Storage Manager DVD . . . . . . . . . . . . . . . . . . . . . . . . 415
19.2.8 Installation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
19.2.9 Installation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Part 8. Tivoli Storage Manager V6.1 monitoring, reporting, ISC, and Administration Center . . . . . . 429
Chapter 20. Monitoring and reporting in Tivoli Storage Manager V6.1 . . . . . . . . . . . 431
20.1 Monitoring and reporting overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
20.1.1 Administration Center: Health Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
20.1.2 Administration Center: Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
20.1.3 Tivoli Storage Manager Monitoring and Reporting . . . . . . . . . . . . . . . . . . . . . . 434
20.2 Monitoring and reporting installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
20.3 Installing the Monitoring and Reporting feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
20.4 Business Intelligence and Reporting Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
23.1 The basics of planning the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
23.2 Upgrade scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
23.3 Upgrading from V5.5 to V6.1 step by step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
23.3.1 Modifying the server before the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
23.3.2 Upgrade steps: V5.5 server to V6.1 on Windows platform . . . . . . . . . . . . . . . . 528
23.3.3 Summary of the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
23.4 Steps after V6.1 server is started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
23.4.1 Initial verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
23.4.2 Database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
23.5 Sample commands to run for database upgrade validation . . . . . . . . . . . . . . . . . . . 557
23.6 Common database maintenance tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
23.7 Scripting and reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
23.7.1 SQL function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
23.7.2 SQL syntax enforcement examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
23.8 How to roll back to V5 if needed or restart the process . . . . . . . . . . . . . . . . . . . . . 560
23.9 Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
23.10 Gathering logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
23.11 Upgrade for NAS TOC data on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
23.11.1 Steps for the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
23.11.2 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
xii Tivoli Storage Manager V6.1 Technical Guide
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™, AIX®, DB2®, DPI®, DS6000™, DS8000®, FlashCopy®, GDPS®, GPFS™, HACMP™, HyperSwap®, IBM®, MQSeries®, NetView®, OS/390®, Passport Advantage®, POWER®, POWER5™, ProtecTIER®, Redbooks®, Redbooks (logo)®, SANergy®, System i®, System p®, System Storage™, System z®, Tivoli®, TotalStorage®, WebSphere®, XIV®, z/OS®, zSeries®
ITIL is a registered trademark, and a registered community trademark of the Office of Government
Commerce, and is registered in the U.S. Patent and Trademark Office.
Snapshot, Network Appliance, SnapMirror, SnapLock, FlexVol, FilerView, Data ONTAP, NetApp, and the
NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries.
AMD, AMD Opteron, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro
Devices, Inc.
SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other
countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S.
and other countries.
mySAP, SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several
other countries.
VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the
United States and/or other jurisdictions.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Itanium, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Preface
This IBM® Redbooks® publication provides details of changes, updates, and new functions
in IBM Tivoli® Storage Manager Version 6.1. We also cover all the new functions of Tivoli
Storage Manager that have become available since the publication of IBM Tivoli Storage
Manager Version 5.4 and Version 5.5 Technical Guide, SG24-7447.
This book is for customers, consultants, IBM Business Partners, and IBM and Tivoli staff who
are familiar with earlier releases of Tivoli Storage Manager and who want to understand what
is new in Version 6.1. Because we target an experienced audience, we use certain shortcuts
to commands and concepts of Tivoli Storage Manager. If you want to learn more about Tivoli
Storage Manager functionality, see IBM Tivoli Storage Management Concepts, SG24-4877,
and IBM Tivoli Storage Manager Implementation Guide, SG24-5416.
This publication should be used in conjunction with the manuals and readme files provided
with the products and is not intended to replace any information contained in them.
Figure 1 The team: Dan, Craig, Mary, Mikael, Gerd, Shayne, Norbert
Gerd Becker is a Project Manager for EMPALIS GmbH, a Premium IBM Business Partner in
Germany. He has more than 25 years of IT experience, including over 13 years of experience
with storage management products such as DFSMS and Tivoli Storage Manager. His areas
of expertise include IBM Tivoli Storage Manager implementation projects and education at
customer sites, including mainframe environments (OS/390®, VSE, VM, and Linux® for
zSeries®). He holds several certifications, including technical and sales, and is an IBM Tivoli
Certified Instructor. He has developed and taught several storage classes for IBM Education
Services in Germany, Switzerland, and Austria. He has been Chairman of the Guide Share
Europe (GSE) user group for more than six years. He is author of the Redbooks publication,
IBM Tivoli Storage Manager Technical Guide 5.3, participated in the beta test for Tivoli
Storage Manager Version 5.5 and 6.1, and is a member of the Tivoli Storage Manager
Advisory Council.
Shayne Gardener is a Tivoli Storage Consultant based in the United Kingdom as a member
of the EMEA Global Response Team. He has nearly 20 years of customer-facing experience
in computer support. He has an HND in Computing from Gloucestershire University in
Cheltenham, United Kingdom. He has nearly 10 years of service with IBM. His skill areas
include IBM Tivoli Storage Manager and its complementary products along with Professional
and Technical Certification. He is certified as an IBM Certified Deployment Professional -
Tivoli Storage Manager V6.1, an IBM Certified Specialist - Tivoli Storage Manager FastBack
V5.5, an IBM Certified Solution Advisor - Tivoli Storage Solutions 2009 and is also certified for
the ITIL® V3 Foundation Certificate in IT Service Management.
Mikael Lindstrom is an IT Specialist for IBM ITD Sweden working as a team leader for
Storage and as a Technology lead for Tivoli Storage Manager. He has nine years of IT
experience and has been working for IBM since 2006. Mikael has worked with the Tivoli
Storage Manager server and client on Windows® and AIX platforms since 2002, including
three years of experience designing and implementing Tivoli Storage Manager solutions on
Windows and AIX platforms. He has participated in the Tivoli Storage Manager V6.1 Beta
program. He is a certified Tivoli Storage Manager Storage Administrator and certified Tivoli
Storage Manager Deployment Professional in V5 and V6 and is the Tivoli Storage Manager
officer of the Tivoli User Group in Sweden.
Craig McAllister is a Tivoli Consultant who has specialized in storage management and
closely related topics since 1998. He has worked for IBM United Kingdom since the year
2000 and he supports clients all over the region for presales and services engagements with
Tivoli Storage Manager and TotalStorage® Productivity Center. Craig has authored several
Redbooks publications, including IBM Tivoli Storage Manager Versions 5.4 and 5.5 Technical
Guide, SG24-7447.
Norbert Pott is an IBM Tivoli Storage Manager Support Specialist in Germany. He works for
the Tivoli Storage Manager back-end support team and provides support to customers
worldwide. He has 27 years of experience with IBM, over 18 years of experience in IT, and
more than 11 years of experience with the Tivoli Storage Manager product, starting with
ADSM Version 2.1.5. His areas of expertise include Tivoli Storage Manager client
development and in-depth problem determination skills. He is an
author of the Redbooks publications, IBM Tivoli Storage Manager Version 5.3 Technical
Workshop Presentation Guide, SG24-6774, IBM Tivoli Storage Manager Implementation
Guide, SG24-5416, IBM Tivoli Storage Management Concepts, SG24-4877, and IBM Tivoli
Storage Manager Versions 5.4 and 5.5 Technical Guide, SG24-7447.
Barry Fruchtman
Colin Dawson
Donald Moxley
Jo Lay
Ken Hannigan
Matthew Anglin
Michael G. Sisco
Tivoli Storage Manager Server development
Alexei Kojenov
Stefan Bender
Tivoli Storage Manager Client Development
Andy Ruhl
Benjamin Schockert
John Wang
Todd Owczarzak
Wolfgang Beuttler
Tivoli Storage Manager Software Support
Clare M Byrne
Gary Spizizen
Holly King
Liudyte Baker
Tivoli Storage Manager Information Development
Roger Stakkestad
IBM SWG Norway
Cyrus Niltchian
Tricia Jiang
Technology Sales Enablement
Charles Nichols
Dave Canan
Randy Larson
Robert Elder
Tomas Hepner
Zong Ling
Performance and ATS
Urs Moser
Integrated Technology Delivery, Server Systems Operations
Austen M Cook
Tashfique Hossain
Storage System Test
Joerg Pohlmann
IBM Global Services, Canada
Peter Kask
IBM Innovation Center - Stockholm, Sweden
Konstantin Arnold
Biozentrum
Pharmazentrum Information Technology / Div. of Bioinformatics
Swiss Institute of Bioinformatics (SIB)
Dieter Unterseher
NetApp®
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Optional software modules allow business-critical applications that must run 24x365 to utilize
Storage Manager's centralized data protection with no interruption to their service. Optional
software extensions also allow SAN-connected computers to use the SAN for data protection
data movements, and provide Hierarchical Storage Management to automatically move
unused data files from online disk storage to offline tape storage. Storage Manager Extended
Edition expands on the data backup and restore and managed data archive and retrieve
capabilities of the base Storage Manager by adding disaster planning capability, NDMP
control for NAS filers, and support for large tape libraries.
Figure 1-1 shows the interrelation of the components in IBM Tivoli Storage Manager.
The figure depicts the server database and recovery log, the storage repository, and the
servers, clients, and application systems connected through the storage area network. Note
that the ISC server can be run on the same server as the TSM server.
IBM Tivoli Storage Manager helps ensure recoverability through the automated creation,
tracking, and vaulting of reliable recovery points.
IBM Tivoli Storage Manager Extended Edition provides the following support:
Base IBM Tivoli Storage Manager (for basic backup-archive using a tape library with up to
four drives and 48 slots)
Disaster Recovery Manager
NDMP (for selected network-attached storage devices)
Large tape libraries (more than four drives or 48 slots)
IBM Tivoli Storage Manager for Storage Area Networks and IBM Tivoli Storage Manager for
Space Management can be used with either IBM Tivoli Storage Manager or IBM Tivoli
Storage Manager Extended Edition.
Additional Tivoli products working in conjunction with Tivoli Storage Manager are described in
“IBM Tivoli Storage Manager for products” on page 13.
A timeline figure appeared here, showing the release history of the product and noting that
ADSM marketing and sales moved from IBM Storage Systems to IBM Tivoli Software. The
release dates, as read (approximately) from the figure, are:
ADSM V1.1: 1993
ADSM V1.2: 1995
ADSM V2.1: 1997
ADSM V3.1: 01/1999
TSM V3.7: 09/1999
TSM V4.1: 07/2000
TSM V4.2: 06/2001
TSM V5.1: 04/2002
TSM V5.1.5: 10/2002
TSM V5.2: 06/2003
TSM V5.2.2: 12/2003
TSM V5.3: 12/2004
TSM V5.4: 01/2007
TSM V5.5: 11/2007
TSM V6.1: 03/2009
You can find information about upgrading to and from various versions of Tivoli Storage
Manager server and client in the appropriate installation guides. Also, you can check the
Tivoli Storage Manager Version 6.1 information center for new installation instructions:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.nav.doc/t_installing.html
This feature allows the client system to directly write data to, or read data from, storage
devices attached to a storage area network (SAN), instead of passing or receiving the
information over the network. Data movement is thereby off-loaded from the LAN and from
the Tivoli Storage Manager server, making network bandwidth available for other uses. For
instance, using the SAN for client data movement decreases the load on the Tivoli Storage
Manager server and allows it to support a greater number of concurrent client connections.
The storage agent, a component of the feature, makes LAN-free data movement possible.
See also the relevant user guide for your system. For AIX it is IBM Tivoli Storage Manager for
SAN for AIX Storage Agent User's Guide, Version 6.1, SC23-9797.
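As an illustrative sketch of how a backup-archive client is pointed at the LAN-free path, a
dsm.sys stanza on an AIX client might include the options below. ENABLELANFREE and
LANFREECOMMMETHOD are documented client options, but the server name and address
here are placeholders, and the full option set required for your environment is in the Storage
Agent User's Guide.

```
* Illustrative dsm.sys stanza for LAN-free data movement (AIX client).
* Server name and address are placeholders only.
SErvername         TSMSRV1
   COMMMethod          TCPip
   TCPServeraddress    tsmsrv1.example.com
   ENABLELANFREE       YES
   LANFREECOMMMETHOD   TCPip
```

With ENABLELANFREE set to YES, backup and restore data flows through the storage agent
to SAN-attached devices rather than over the LAN to the server.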
1.5.2 Tivoli Storage Manager HSM for Windows Version 6.1 additions and changes
IBM Tivoli Storage Manager HSM for Windows provides space management for Microsoft®
Windows NTFS file systems. File migration policies can be defined by an administrator using
the HSM for Windows GUI. File migration eligibility is determined by include and exclude
policy criteria such as file type (extension) and various criteria related to the age of a file
(creation, modification, last access).
HSM for Windows helps free administrators and users from file system pruning tasks. HSM
for Windows is designed to assist administrators to more effectively manage Windows NTFS
disk storage by automatically migrating files selected based on administrator-established
policy to less expensive storage devices, while preserving Windows NTFS file accessibility.
See also IBM Tivoli Storage Manager HSM for Windows Administration Guide, Version 6.1,
SC23-9795, and Using the Tivoli Storage Manager HSM Client for Windows, REDP-4126.
1.5.3 Tivoli Storage Manager for Space Management, additions and changes
The IBM Tivoli Storage Manager for Space Management client for UNIX and Linux (the HSM
client) migrates files from your local file system to distributed storage and can then recall the
files either automatically or selectively. Migrating files to storage frees space for new data on
your local file system and takes advantage of lower-cost storage resources that are available
in your network environment.
Tivoli Storage Manager for Space Management is available for AIX JFS2 and GPFS, Linux
GPFS, Solaris VxFS, and HP-UX JFS file systems. Also refer to the IBM Tivoli Storage
Manager for Space Management for UNIX and Linux User's Guide, Version 6.1,
SC23-9794-00.
Announcement letters can be found using keyword Tivoli Storage Manager at:
http://www-01.ibm.com/common/ssi/index.wss
You can see the original Tivoli Storage Manager V6.1 announcement letter at:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS209-004
Information about additional Tivoli Storage Manager V6.1 products can be found at:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS209-088
For further details about the separate products, see the relevant parts of the Tivoli Storage
Manager announcement letters as found using the product keywords, for example, Tivoli
Storage Manager for Mail, at:
http://www-01.ibm.com/common/ssi/index.wss
You can also consult the installation and users guides for the different products and platforms
at:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp
Note: Be aware that these products have separate license features. Be sure to register
these licenses to enable the desired functions.
Data Protection for Exchange performs online backups and restores of Microsoft Exchange
Server storage groups.
New features
Data Protection for Exchange 6.1 provides the new mailbox restore feature.
With the Data Protection for Exchange 6.1 mailbox restore feature, you can perform individual
mailbox recovery and item-level recovery operations in Microsoft Exchange Server 2003 or
Microsoft Exchange Server 2007 environments using Data Protection for Exchange backups.
Note: Mailbox restore tracks and stores mailbox location history, which is used to
automate mailbox restore operations. This causes a slight delay before each backup.
Mailbox restore applies to backups that are taken with Data Protection for Exchange:
For Exchange Server 2003 environments, mailbox restore applies to Data Protection for
Exchange proprietary backups only. For Exchange Server 2003, mailbox restore
operations cannot be performed using VSS backups.
For Exchange Server 2007 environments, mailbox restore applies to any Data Protection
for Exchange proprietary backups or VSS backups.
Data Protection for Exchange 6.1 (and later) maintains mailbox location history. No
mailbox location history is available for backups taken with prior versions. When restoring
from these prior version backups, if the mailbox to be restored from has been moved or
deleted since the time of the backup, the /mailboxoriglocation parameter is necessary.
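A hedged illustration of such a restore follows (the mailbox name and original-location values are hypothetical, and the exact command syntax can vary by release; check the Data Protection for Exchange documentation for your version):

```
tdpexcc restoremailbox "Jane Doe" /MAILBOXORIGLOCATION=EXCHSRV1,StorageGroup1
```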
Supported environments
The hardware and software requirements for IBM Tivoli Storage Manager for Mail Version 6.1
are documented at:
http://www-01.ibm.com/support/docview.wss?&uid=swg21318434
New features
The following new features are provided:
Automatic classification of Microsoft SharePoint content, based on business importance
and modification frequency, allows the creation of custom backup plans to help optimize
storage space and system resources.
A new enhanced graphical user interface (GUI) can streamline user interaction for typical
tasks and can improve the end-user experience.
Added support is provided for reusable backup templates to assist in the standardization
of common backup settings.
Item level backup data can be indexed for easier retrieval of data on restore.
A new fast backup method is available to help leverage SharePoint's change logs.
Supported environments
The supported operating systems and system requirements for Tivoli Storage Manager for
Microsoft SharePoint are documented at:
http://www-01.ibm.com/support/docview.wss?rs=667&uid=swg21378227
Tivoli Storage Manager for Enterprise Resource Planning is specifically optimized to help
protect your vital SAP data. An administration assistant helps maximize administrator
productivity by helping to simplify administration, configuration, and monitoring of Tivoli
Storage Manager for Enterprise Resource Planning in production environments. This
powerful solution helps enable administrators to effectively, consistently, and reliably manage
backup and recovery of multiple SAP systems with large volumes of data.
The Tivoli Storage Manager for Enterprise Resource Planning software module allows
multiple SAP database servers to share a single Tivoli Storage Manager server to
automatically manage the backup data. As the intelligent interface to SAP databases, Tivoli
Storage Manager for Enterprise Resource Planning V6.1 supports heterogeneous
environments with large volume data backups, data recovery, data cloning, and disaster
recovery of multiple SAP database servers.
Tivoli Storage Manager for Enterprise Resource Planning V6.1 has enhancements to take
advantage of enhancements in Tivoli Storage Manager for Advanced Copy Services V6.1.
New features
Here we describe the new functions and improvements in IBM Tivoli Storage Manager for
ERP V6.1. Note that SAP AG has discontinued the use of the term mySAP in favor of SAP.
The following new functionality has been added to Version 6.1 of Data Protection for
SAP (Oracle or DB2):
Executable files on Windows platforms (except Java applets) now bear a digital signature.
InstallAnywhere has replaced InstallShield as the installation vehicle.
As of version 7.1, the SAP BR*Tools components have a facility for invoking snapshot
(in SAP terminology, volume) backups and restores. Such requests received by Tivoli
Storage Manager for ERP are redirected to the Tivoli Storage Manager for Advanced
Copy Services (ACS) product (if it is installed). To facilitate the interaction of Tivoli Storage
Manager for ACS with Tivoli Storage Manager for ERP when the user wants to perform a
Tivoli Storage Manager backup of the snapshots produced, certain parameters have been
added to the Tivoli Storage Manager for ERP profile for use by Tivoli Storage Manager for
ACS. For more information, refer to the Tivoli Storage Manager for ACS documentation.
AIX 6.1 is now supported.
Supported environments
The list of IBM Tivoli Storage Manager for Enterprise Resource Planning V6.1.0 requirements
is documented at:
http://www-01.ibm.com/support/docview.wss?rs=667&uid=swg21321826
By integrating hardware and software-based snapshot capabilities with IBM Tivoli Storage
Manager and its data protection components for Microsoft Exchange, Microsoft SQL, IBM
DB2 UDB, Oracle, and SAP, you can help manage your snapshot backup operations and
leverage the performance, scheduling, and media management functions of Tivoli Storage
Manager to help ensure that your application servers are operational 24 hours a day.
Tivoli Storage Manager for Copy Services provides the integration with Microsoft Volume
Shadow Copy Service (VSS) and VSS providers for snapshots. Tivoli Storage Manager for
Advanced Copy Services provides the integration with IBM FlashCopy® as supported by IBM
System Storage™ SAN Volume Controller (SVC), IBM System Storage DS6000™, IBM
System Storage DS8000®, and other snapshot mechanisms.
Supported environments
The IBM Tivoli Storage Manager for Advanced Copy Services V6.1 requirements are
documented at:
http://www-01.ibm.com/support/docview.wss?rs=3043&uid=swg21321830
The hardware and software requirements for IBM Tivoli Storage Manager for Copy Services
V6.1 are documented at:
http://www-01.ibm.com/support/docview.wss?rs=3042&uid=swg21321332
Example 3-1 shows the relationship between the WWN and serial number of the VTL and the
SAN discovery output on the Tivoli Storage Manager server.
Example 3-1 WWN and Serial Number of the SAN Adapters and virtual drives on a VTL
1. Port:-
Name: 0a
Role: Frontend
Port WWNN: 500a09800000de30
Port WWPN: 510a09820000de30
Topology: Link Down
Port ID: 0x0
Loop ID: 0x0
2. Port:-
Name: 0b
Role: Backend
Port WWNN: 500a09800000de30
Port WWPN: 510a09830000de30
Topology: Private Loop
Port ID: 0xef
Loop ID: 0x0
..
lines deleted
..
21. Virtual Drive:-
Virtual Library: testpc_vtl
Virtual Drive: Drive0
Serial Number: 77e846640f01a098045df0
Vendor ID: IBM
Product ID: ULTRIUM-TD4
Barcode:
22. Virtual Drive:-
Example 3-2 shows how the SAN devices are mapped in Tivoli Storage Manager, which we
can query with the query san f=d command.
Use this information to check the SAN devices and map them to the corresponding element
address. With this new functionality, the server can also automatically discover the correct
device address for virtual tape libraries and update the path definitions.
Virtual Tape Libraries (VTLs) maintain volume space allocation after Tivoli Storage Manager
has deleted a volume and returned it to a scratch state. The VTL has no knowledge that the
volume was deleted, so it keeps the full size of the volume allocated, which can be extremely
large depending on the devices being emulated. As multiple volumes return to scratch, the
VTL maintains their allocated sizes and can run out of storage space.
The only way for the VTL to realize that a volume has been deleted and its space can be
reallocated is to write to the beginning of the newly returned scratch volume. The VTL will
then see the volume as available. Tivoli Storage Manager can relabel volumes that have just
been returned to scratch if the RELABELSCRATCH parameter is specified.
This optional parameter has been added to the DEFINE and UPDATE LIBRARY commands
and is intended for use with VTLs. It specifies whether the server relabels volumes that have
been deleted and returned to scratch. The syntax is:
RELABELSCRatch=Yes|No
When this parameter is set to Yes, a LABEL LIBVOLUME operation is started and the
existing volume label is overwritten.
Note: If you have both virtual and real volumes in your VTL, both types will be relabeled
when this parameter is enabled. If the VTL includes real volumes, specifying this option
could impact performance. This function is only available for SCSI libraries.
To determine if the RELABELSCRATCH parameter is set to Yes, you can issue the QUERY
LIBRARY command, as shown in Example 3-4.
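As an illustration (the library name VTLLIB is hypothetical), the parameter can be enabled on an existing library definition and then verified with QUERY LIBRARY:

```
update library vtllib relabelscratch=yes
query library vtllib format=detailed
```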
RECLAIMDELAY
This option delays the reclamation of a SnapLock volume, allowing remaining data to expire,
so that there is no need to reclaim the volume.
It specifies the number of days to delay the reclamation of a SnapLock volume. Before
reclamation of a SnapLock volume begins, the Tivoli Storage Manager server allows the
specified number of days to pass, so that any files remaining on the volume have a chance to
expire. The default reclaim delay period is four days, and it can be set anywhere from 1 to
120 days. In Example 3-5 we specify that the number of days to delay reclamation is 30 days.
Example 3-5
setopt reclaimdelay 30
ANR2119I The RECLAIMDELAY option has been changed in the options file.
dsmserv.opt:
NDMPPREFDATAINTERFACE 192.168.111.81
SANDISCOVERY ON
RECLAIMDELAY 30
RECLAIMPERIOD
This option allows you to set the number of days for the reclamation period of a SnapLock
volume.
It specifies the number of days allowed for the reclamation period of a SnapLock volume.
After the retention of a SnapLock volume has expired, the Tivoli Storage Manager server will
reclaim the volume within the specified number of days if there is still data remaining on the
volume. The default reclaim period is 30 days and can be set anywhere from 7 to 365 days.
In Example 3-6 we specify 30 days as the reclamation period for our SnapLock volume:
Example 3-6
setopt reclaimperiod 30
ANR2119I The RECLAIMPERIOD option has been changed in the options file.
dsmserv.opt:
NDMPPREFDATAINTERFACE 192.168.111.81
SANDISCOVERY ON
RECLAIMDELAY 30
RECLAIMPERIOD 30
Note: The reclamation period does not begin until the RECLAIMDELAY period has
expired.
The following changes have been implemented to the Tivoli Storage Manager server for
HP-UX passthru device driver support:
The Tivoli Storage Manager device driver package no longer includes the ddtrace utility,
the Tivoli Storage Manager kernel module mod.o for HP-UX 11i v1, or the tsmtape, tsmchgr,
and tsmoptc modules for HP-UX 11i v2. Two new device configuration tools, autoconf and
tsmdlst, are included in the device driver package and are installed to the
/opt/tivoli/tsm/devices/bin directory unless you specify another location.
The Tivoli Storage Manager passthru device driver is packaged with the Tivoli Storage
Manager server and storage agent packages.
The sctl driver must be loaded into the kernel before devices are configured for the Tivoli
Storage Manager passthru device driver. Issue the following command to verify that the
sctl driver is installed.
>lsdev | grep sctl
If the driver has been loaded, you will see output similar to this:
lsdev | grep sctl
203 -1 sctl ctl
The HP-UX stape, sdisk, and schgr native drivers are required for device configuration for
the Tivoli Storage Manager passthru device driver. To verify that these drivers are loaded
in the kernel, issue the following commands from any directory. You should see output
similar to what is listed with each command:
– stape:
>lsdev | grep stape
lsdev | grep stape
205 -1 stape tape
– sdisk:
>lsdev | grep sdisk
lsdev | grep sdisk
188 31 sdisk disk
– schgr:
>lsdev | grep schgr
lsdev | grep schgr
231 29 schgr autoch
The autoconf utility uses the tsmddcfg script to configure devices and calls the tsmdlst utility
to display all devices that have been configured by the passthru device driver. The device
information is saved in lbinfo, mtinfo, and optinfo in the devices bin directory.
Note: You can also run autoconf with the -f option. Autoconf will issue ioscan to scan the
system before configuring devices. This might take several minutes.
To prevent potential data integrity problems, ensure that Tivoli Storage Manager devices can
be accessed only through Tivoli Storage Manager passthru special files. If a device is
controlled by the passthru driver and also by one of the stape, schgr, or sdisk drivers, you
need to delete the corresponding device special files that are created by those drivers.
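On HP-UX, the affected special files can be located with ioscan and removed with rmsf (a sketch; the device file shown is hypothetical, so verify the correct special files on your system before deleting anything):

```
# List tape devices and the special files created for them
ioscan -fnC tape
# Remove a stape-created special file so only the passthru file remains
rmsf /dev/rmt/0m
```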
If there are no changes to the device hardware path on the system during the migration from
the Tivoli Storage Manager kernel device driver to the passthru device driver, Tivoli Storage
Manager device names should remain the same.
New commands, utilities, and options are available for the V6.1 server because of changes in
database operations and new functions.
Table 4-2 lists the new server utilities in Tivoli Storage Manager V6.1.
BACKUP DB
The SET DBRECOVERY command must be run first to set a device class for database
backups. An incremental database backup is now a backup of all changes since the last full
backup. In earlier versions of the server, an incremental backup was a backup of all changes
since either the last full backup or the last incremental backup. For an example, see 6.3.3,
“DR site recovery scenario” on page 129.
BACKUP/RESTORE NODE, QUERY NASBACKUP
The commands support creating SnapMirror to Tape images of file systems on NetApp file
servers.
DEFINE/UPDATE DEVCLASS
Device formats have been added for some operating systems.
DEFINE/DELETE/QUERY/UPDATE SPACETRIGGER
The space trigger commands now support space triggers only for storage pools. The
database and log space triggers are no longer available. See “Triggered automatic backups”
on page 98.
DEFINE VOLUME
The maximum capacity of a volume in a DISK storage pool is 8 TB.
EXPIRE INVENTORY
Expiration can be run for specific nodes and node groups, or for all nodes in a policy
domain. The types of data to be examined for expiration can also be specified. See
Chapter 9, “Expiration enhancements” on page 161.
QUERY DRMSTATUS
QUERY LOG
See Example 5-33 on page 99.
QUERY SESSION
A new field in the output indicates the actions that occurred during the session.
QUERY STATUS
Output is changed. Obsolete options are removed, and the database backup trigger is
removed.
DSMSERV (starting the server)
New options are available for specifying the owning user ID for the server instance on
startup. The new options are also available for other DSMSERV utilities.
DSMSERV FORMAT
Obsolete parameters are removed. New parameters are added to specify the directories for
database space, and the maximum size and locations of the recovery log. This utility is used
to format a database for installation of a new server. See “Database configuration” on
page 53 and “LOG configuration” on page 55.
DSMSERV LOADFORMAT
This utility is used only for formatting a new, completely empty database. An empty
database is used only as part of the process of upgrading an earlier version of the server to
V6.1. After you format an empty database, you use the DSMSERV INSERTDB utility to
insert data that was extracted from the database of an earlier version of the server. See
“Database configuration” on page 53 and “LOG configuration” on page 55.
DSMSERV RESTORE DB (restore a database to its most current state, or restore a
database to a point in time)
Volume history is now required for restoring the database. All restore operations use
roll-forward recovery. The function for restoring individual database volumes was removed.
The server no longer manages database volumes. See Example 5-45 on page 109.
TXNGROUPMAX
The default value is increased from 256 to 4096. Check whether the server options file has
this option:
If the server options file does not include this option, the
server automatically uses the new default value.
If the server options file includes a value for the option, the
server uses that specified value. If the specified value is less
than 4096, consider increasing this value, or removing the
option so that the server uses the new default value.
Increasing the value or using the new default value can
improve the performance for data movement operations such
as storage pool migration and storage pool backup.
For a detailed discussion of the new TXNGROUPMAX default, refer to Chapter 10, “Changes
to the TXNGROUPMAX default” on page 177.
CONVERT ARCHIVE
The operation that this command performed is no longer needed.
DEFINE LOGCOPY
Instead of log volume copies, you can specify a log mirror to have the active log protected
by a mirror copy. For information about the directories that are used for the logs, use the
QUERY LOG command.
EXTEND LOG
Server options are available for increasing the size of recovery logs. See “LOG
configuration” on page 55.
QUERY SQLSESSION
The information that this command supplied is no longer in the server database. SQL
SELECT settings are replaced by syntax options that are available in a DB2 SELECT
command.
RESET BUFPOOL
The BUFPOOLSIZE option has been eliminated, therefore this command is not needed.
RESET LOGCONSUMPTION
RESET LOGMAXUTILIZATION
SET LOGMODE
Logging mode for the database is now always roll-forward mode.
SET SQLDATETIMEFORMAT
SET SQLDISPLAYMODE
SET SQLMATHMODE
The commands are replaced by options in the DB2 SELECT command syntax.
UNDO ARCHCONVERSION
UPDATE ARCHIVE
The operations that these commands performed are no longer needed.
Table 4-8 shows deleted server utilities in Tivoli Storage Manager V6.1.
DSMSERV DISPLAY DBBACKUPVOLUME
Information about volumes used for database backup is available from the volume history
file. The volume history file is now required to restore the database.
DSMSERV DUMPDB The operation that this utility performed is no longer needed.
DSMSERV EXTEND LOG
This utility is replaced by the following server options:
ACTIVELOGSIZE
ACTIVELOGDIR
MIRRORLOGDIR
With these options, you can add recovery log space if the log is full
when the server is down.
DSMSERV LOADDB
The operation that this utility performed is no longer needed.
DSMSERV RESTORE DB (restore a single database volume to its most current state;
restore a database to a point in time when a volume history file is unavailable)
The server does not track individual database volumes in V6.1. The volume history file is
required to perform database restore operations.
DSMSERV UNLOADDB
The operation that this utility performed is no longer needed.
Table 4-9 shows deleted server options in Tivoli Storage Manager V6.1.
BUFPOOLSIZE
The server adjusts the value of buffer pool size dynamically.
DBPAGESHADOWFILE
LOGPOOLSIZE
The server uses its own fixed-size recovery log buffer pool.
MIRRORREAD
MIRRORWRITE
Mirroring of the active log is supported, but not of the database. Provide availability
protection for the database by locating the database on devices that have high availability
characteristics.
The list is available in the Information Center located at the following Web site:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6
The list is also available in the Tivoli Storage Manager Messages publication for V6.1.
In this chapter we provide a brief history of the proprietary database that came with previous
versions of the IBM Tivoli Storage Manager server and introduce the new DB2 database used
with Tivoli Storage Manager V6.1. We explain how to estimate the space requirements for the
database and log, and we guide you through a complete backup and restore cycle for your
database.
We give you a list of helpful DB2 commands and system utilities, and you learn how to
configure DB2 and the DB2 Control Center, a graphical administration tool, to connect to
a Tivoli Storage Manager database.
At the end of the chapter, we offer some tips for collecting diagnostic information, in case
it is required.
This proprietary database was chosen for two primary reasons at the time:
The Tivoli Storage Manager proprietary database was portable across platforms.
The V1R1 of ADSM shipped for the MVS and VM platforms. In subsequent releases, this
platform coverage expanded to OS/2, AIX, HP, Sun, Windows, and so on. This expanded
platform coverage was possible because the proprietary DB package was embedded in
the product and the architecture was designed to be platform independent. At the time Tivoli
Storage Manager was first being delivered, DB2 was not available on all of the platforms
where Tivoli Storage Manager was expected to run, or to which it was expected to be
ported.
The Tivoli Storage Manager proprietary database was chosen for performance.
The Tivoli Storage Manager proprietary database does not have locking or other common
database serialization constructs built into it. It was optimized more for performance than
what typical database products provided at the time.
The database has surpassed every expectation in terms of scalability and performance,
supporting up to 530 GB databases. Over time, however, Tivoli Storage Manager
development had to develop and maintain code that essentially keeps alternative indices for
every table and can audit and correct referential integrity problems between tables. The
database does not support secondary indexes, and a lot of code had to be written to
implement and maintain alternate index information to speed searches. In addition, Tivoli
Storage Manager development has written and maintained its own SQL engine for query
processing and to load the data warehouse from the server database.
Here are some significant characteristics of the Tivoli Storage Manager proprietary database:
Locking is done at an "advisory" level within the application itself.
The database does not implement locking at a record level; instead, there are latch
semantics at a page level that latch a page in “exclusive” mode when a given record on
the page is being inserted, updated, or deleted.
The database package does not provide or maintain indexes for tables.
The Tivoli Storage Manager application maintains the tables and any apparent index or
alternate table view that is currently needed or supported by implementing these as
separate tables. For example, if a given object on the server is represented by a record
entry in tables A, B, and C, it is the application’s responsibility to insert all three of these
records when the object is created, to make any necessary updates across these records if
the object is updated, and to delete all three of these records if the object is deleted.
1 ARIES/NT: A Recovery Method Based on Write-Ahead Logging for Nested Transactions was described by IBM
Fellow C. Mohan and K. Rothermel. You can find the original paper at http://www.vldb.org/conf/1989/P337.PDF
As Tivoli Storage Manager has evolved over the years, enhancements to the database have
generally been small. There has been investment in the database, such as the
DUMP/LOAD/AUDITDB, BACKUP/RESTORE, and SQL enhancements, but that investment
has been made primarily on an as-needed basis.
Do not take this list as a complete reference or explanation of the actual capabilities; it is
shown here only to illustrate advanced capabilities compared to the proprietary Tivoli Storage
Manager database. These and additional functions are provided directly within DB2 or other
database products. In older versions of the product, in contrast to Tivoli Storage Manager
Version 6, equivalents had to be implemented and maintained in the application code itself
rather than in the database.
Over the years, Tivoli Storage Manager administrators developed a good sense of how to use
existing commands and utilities to configure for best performance. Most of these commands
and utilities have become obsolete because the database itself now takes care of these
concerns.
Figure 5-1 illustrates the various components involved when an operation is made to the
integrated proprietary database.
Figure 5-1 Tivoli Storage Manager proprietary database and log components (application
code issuing DB operations against the database and recovery log)
The changes that were necessary to utilize and exploit DB2 as the Tivoli Storage Manager
database were as follows:
In order to use DB2 as the database, the existing DB component was eliminated. It was
replaced with a component known as a remote database (RDB). In this context, “remote”
implies a database that is separate from the Tivoli Storage Manager server itself. The
RDB component is responsible for managing the Tivoli Storage Manager interaction with
the available DB2 call level interface (CLI) APIs. This interface also provides the
management of the available DB2 administrative APIs.
The existing recovery log processing and component is superseded by DB2. The
component is removed from the server initialization processing. DB2 owns and manages
the recovery log functionality using its own recovery log capabilities and management.
Transaction management now considers DB2 as a participant on a transaction.
The Tivoli Storage Manager DB component has always been a participant in a transaction
and now the RDB component will also be a participant in any database related
transaction. The RDB component is integrated in Tivoli Storage Manager so that it allows
for the existing two-phase commit processing. Historically Tivoli Storage Manager has
provided its own transaction manager and the DB2 semantics fit well in the existing model.
The pre-Tivoli Storage Manager V6.1 locking scheme was implemented to “protect”
sections of the database based on decisions made by the application code. However,
DB2 has its own locking and access control strategies, so the Tivoli Storage Manager
server code was updated to continue to provide the appropriate access control.
Memory management
The primary memory usage item for the server has historically been the server buffer pool
specified by the BUFPOOLSIZE option. The buffer pool is the in-memory cache for the
database pages used for pre-Tivoli Storage Manager V6.1 database management
operations. With the transition to DB2, the buffer pool space shifts to the DB2 buffer pool
where it is used for DB2's own database management operations on the Tivoli Storage
Manager server’s behalf. There is also additional memory that DB2 will utilize for its own
operating and management. You can consider the DBMEMPERCENT option to be the
replacement for the BUFPOOLSIZE option. However, it represents much more than the
historic buffer pool, because it represents the amount of RAM that DB2 can use for everything,
such as the buffer pools (plural now) and the sort heap.
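For illustration, a dsmserv.opt fragment capping the DB2 memory (80 is an arbitrary example value) might look like:

```
* dsmserv.opt - limit DB2 to 80% of system RAM on the server's behalf
DBMEMPERCENT 80
```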
On UNIX systems, when you start the Tivoli Storage Manager server, the server attempts to
change the ulimit values to unlimited. In general, this helps to ensure optimal performance
and to assist in debugging. If you start the server as a non-root user, these attempts to
change the ulimits might fail. To ensure proper server operation when running as a non-root
user, make sure that you set the ulimits as high as possible, preferably to unlimited, before
starting the server.
Even if you are running the server as root, all DB and log directories and files must be
writable by the database instance user ID. The server no longer writes the DB and log files;
DB2 does.
The suggested updates provided by the db2osconf utility are the minimum settings required to
run DB2 on your system. To run both Tivoli Storage Manager and DB2, further changes
beyond the db2osconf suggestions are required (see Table 5-1 and Table 5-2).
Details about the db2osconf utility are available at the IBM DB2 Database for Linux, UNIX,
and Windows Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?tab=search&searchWord=db2osconf&maxHits=500
Additional details on changing kernel parameters are available at the IBM DB2 Database for
Linux, UNIX, and Windows Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?tab=search&searchWord=kernel&parameters&maxHits=500
You can use the information in Table 5-3 to determine the minimum values that you should set
to run Tivoli Storage Manager and DB2 together on your target operating system.
Figure: Tivoli Storage Manager V6.1 database and log directories - TSM DB, active log with
optional log mirror (MirrorLogDir), archive log (ArchiveLogDir), optional failover archive log
(ArchFailoverLogDir), and TSM storage pools (disk, tape)
If you can estimate the maximum number of files that might be in server storage at any time,
you can estimate the database size from the following information:
Each stored version of a file requires about 600 to 1000 bytes of database space.
Each cached file, copy storage pool file, and active-data pool file requires about 100 to
200 bytes of database space.
Overhead can require up to 25% in additional space.
The size of the database depends on the number of client files to be stored and the method
by which the server manages them. In the following examples, the computations are probable
maximums. In addition, the numbers are not based on using file aggregation. In general,
aggregation of small files reduces the required database space. Assume the following
numbers for a Tivoli Storage Manager system:
Versions of files
The following considerations apply:
Backed up files:
Up to 500,000 client files might be backed up. Storage policies call for keeping up to three
copies of backed up files:
500,000 files x 3 copies = 1,500,000 files
Archived files:
Up to 100,000 files might be archived copies of client files.
Space-managed files:
Up to 200,000 files migrated from client workstations might be in server storage.
File aggregation does not affect space-managed files.
At 600 bytes per file, the space required for these files is:
(1,500,000 + 100,000 + 200,000) x 600 = 1.0 GB
Cached files, copy storage pool files, and active-data pool files require about 0.5 GB of
database space.
Overhead
About 1.5 GB is required for file versions, cached copies, copy storage pool files, and
active-data pool files. Allow up to 50% additional space (or 0.7 GB) for overhead.
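The arithmetic in the preceding example can be sketched as a short calculation (a sketch only, using the example figures from the text; the per-file byte cost is the low end of the quoted 600 to 1000 byte range):

```python
# Database space estimate, using the example figures from the text.
BYTES_PER_VERSION = 600      # low end of the 600-1000 bytes per stored file version
POOL_COPY_GB = 0.5           # cached/copy/active-data pool files (from the text)

backed_up = 500_000 * 3      # three retained copies per backed-up file
archived = 100_000
space_managed = 200_000

version_bytes = (backed_up + archived + space_managed) * BYTES_PER_VERSION
version_gb = version_bytes / 1_000_000_000
subtotal_gb = version_gb + POOL_COPY_GB
with_overhead_gb = subtotal_gb * 1.5          # allow up to 50% additional space

print(f"file versions:  {version_gb:.2f} GB")   # about 1.08 GB, roughly the 1.0 GB in the text
print(f"subtotal:       {subtotal_gb:.2f} GB")  # about 1.5 GB
print(f"with overhead:  {with_overhead_gb:.2f} GB")
```

As the text notes, these are estimates only; the real database size also depends on factors such as directory counts and path lengths.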
During SQL queries of the server, intermediate results are stored in temporary tables that
require space in the free portion of the database. Therefore, using SQL queries requires
additional database space. The more complicated the queries, the greater the space that is
required.
Note: In the preceding examples, the results are estimates. The actual size of the
database might differ from the estimate because of factors such as the number of
directories and the length of the path and file names. As a best practice, periodically
monitor your database and adjust its size as necessary.
Adding a new database directory after the initial load causes a REORG of the database.
Because a REORG is expensive and disruptive, avoid adding directories later if possible.
For example, if you need 100 GB of server storage, your database should be between 1 GB
and 5 GB.
You can use the worksheet in Table 5-4 to help you plan the amount and location of storage
needed for the V6.1 server. The worksheet covers three storage areas: the database, the
active log, and the archive log.
The amount of storage space for the database is managed automatically. The database
space can be spread across multiple directories. After you specify the directories for the
database, the server uses the disk space available to those directories as required.
Plan for 33 - 50% more than the space that is used by the V5 database. (Do not include
allocated but unused space for the V5 database in the estimate.) Some databases can grow
temporarily during the upgrade process; consider providing up to 80% more than the space
that is used by the V5 database.
Estimation steps
You can estimate the amount of space that the database will require by completing the
following steps:
1. Use the QUERY DB FORMAT=DETAILED command to determine the number of used
database pages in your V5 database.
2. Multiply the number of used database pages by 4096 to get the number of used bytes.
3. Add 33 - 50% to the used bytes to estimate the database space requirements.
Consider testing the upgrade of the database to get a more accurate estimate. Not all
databases will grow as much as the suggested 33 - 50% increase in space.
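As an illustration of the three steps, here is a minimal sketch. The page count shown is a hypothetical value; substitute the number reported by QUERY DB FORMAT=DETAILED for your own V5 server.

```python
# Estimate V6 database space from the number of used V5 database pages.
PAGE_SIZE = 4096                  # bytes per V5 database page

used_pages = 1_500_000            # hypothetical value from QUERY DB FORMAT=DETAILED
used_bytes = used_pages * PAGE_SIZE

low_estimate = used_bytes * 1.33  # add 33% to the used bytes
high_estimate = used_bytes * 1.50 # add 50% to the used bytes

print(used_bytes)                 # → 6144000000
print(int(low_estimate), int(high_estimate))
```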
When the server is operating normally, after the upgrade process, some operations might
cause occasional large, temporary increases in the amount of space used by the database.
Continue to monitor the usage of database space to determine whether the server needs
more database space.
For the best efficiency in database operations, anticipate future growth when you set up
space for the database. If you underestimate the amount of space that is needed for the
database and then must add directories later, the database manager might need to perform
more database reorganization, which can consume resources on the system. Estimate
requirements for additional database space based on 600 - 1000 bytes per additional object
stored in the server.
Note: You cannot use raw logical volumes for the database. If you want to reuse space on
the disk where raw logical volumes were located for an earlier version of the server, you
must create file systems on the disk first.
The minimum size of the active log is 2048 MB (2 GB); the maximum is 131,072 MB
(128 GB). The default is 2048 MB. You might want to begin with an active log size of 4 GB to
8 GB. Monitor the space usage and adjust the size of the active log as needed.
Creating a log mirror is optional. The additional space that the log mirror requires is another
factor to consider when deciding whether to create a log mirror.
A full backup of the database causes obsolete archive log files to be pruned, to recover
space. The archive log files that are included in a backup are automatically pruned after two
more full database backups have been completed. Therefore, the archive log should be large
enough to contain the logs generated since the previous two full backups.
If you perform a full backup of the database every day, the archive log must be large enough
to hold the log files for client activity that occurs over two days. Typically 600 - 4000 bytes of
log space are used when an object is stored in the server. Therefore you can estimate a
starting size for the archive log using the following calculation:
objects stored per day x 3000 bytes per object x 2 days
For example:
5,000,000 objects/day x 3000 bytes/object x 2 days = 30,000,000,000 bytes,
or 30 GB
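The same calculation, as a sketch using the planning values above:

```python
# Starting size estimate for the archive log.
objects_per_day = 5_000_000
bytes_per_object = 3000      # midpoint of the typical 600 - 4000 byte range
days = 2                     # logs retained across two daily full backups

archive_log_bytes = objects_per_day * bytes_per_object * days
print(archive_log_bytes)     # → 30000000000 (30 GB)
```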
It is important to maintain adequate space for the archive log directory. If the drive or file
system where the archive log directory is located becomes full and there is no archive failover
log directory, the data remains in the active log directory. This condition can cause the active
log to fill up, which causes the server to stop.
Specifying an archive failover log directory can prevent problems that occur if the archive log
runs out of space. If the drive or file system where the archive failover log directory is located
becomes full, the data remains in the active log directory. This condition can cause the active
log to fill up, which causes the server to stop.
Database configuration
With the utilization of Database Managed Space (DMS) tablespace design, the database
manager controls the storage space. Use the DBDIR/DBFILE parameter with the
DSMSERV [LOAD]FORMAT command to specify up to 128 directories available to DB2,
known as containers. Container is the DB2 term for what Tivoli Storage Manager calls a
database directory. The database volumes are managed by DB2; you are no longer asked to
format them. In “DB and log security in UNIX” on page 47, we discuss this in more detail.
You cannot place the containers on raw logical volumes, and you should make sure each
container is in a separate file system or LUN.
The database defined for Tivoli Storage Manager takes advantage of the AUTOMATIC
STORAGE parameter. Automatic storage is a storage management technique in which storage
for multiple tablespaces is automatically managed at the database level:
Multiple tablespaces automatically draw increments of storage from a “database storage
pool” on demand.
This removes the need to watch for disk shortages in each individual tablespace.
It also removes the need to manually enlarge containers or add stripe sets.
It uses the DMS infrastructure internally, combining the performance benefits of the DMS
infrastructure with manageability benefits of System Managed Space (SMS).
Monitor the file system space available to the database by using the QUERY DBSPACE
command, as shown in Example 5-2.
Example 5-2 QUERY DBSPACE: monitoring the database for available space
tsm: TIRAMISU>q dbspace
Location: g:\tsm\server1\database
Total Size of File System (MB): 59,388.70
Space Used on File System (MB): 59,387.81
Free Space Available (MB): 0.00
QUERY DBSPACE is a new command for the Tivoli Storage Manager V6.1 server; when the
server is not running, use the DSMSERV DISPLAY DBSPACE utility instead (see Example 5-38
on page 105). Table 5-5 explains the fields returned by the command.
Total Size of File System (MB)
  Windows: Total space (in MB) on the drive where the directory is located.
  UNIX and Linux: Total space in the file system where the path is located.
Space Used on File System (MB)
  Windows: Total used space (in MB) on the drive where the directory is located.
  UNIX and Linux: Total used space in the file system where the path is located.
Free Space Available (MB)
  Windows: Space remaining on the drive where the directory is located.
  UNIX and Linux: Space remaining in the file system where the path is located.
  If two or more directories or paths are located on the same drive (Windows) or in the
  same file system (UNIX and Linux), the total free space is divided among those
  directories or paths.
Log configuration
You specify the logs used by the server with the ACTIVELOGDIR and the ARCHLOGDIR
parameters to the DSMSERV [LOAD]FORMAT command. Both parameters are required.
For the active log you can, in addition, specify the optional ACTIVELOGSIZE parameter.
If you do not specify the active log size, it defaults to 2 GB. The active log directory specifies
the directory in which the Tivoli Storage Manager server writes and stores active log files.
Optionally you can specify the ARCHFAILOVERLOGDIR and MIRRORLOGDIR with either of
the format commands. ARCHFAILOVERLOGDIR specifies the directory to be used as an
alternate storage location if the ARCHLOGDIR directory is full. MIRRORLOGDIR specifies
the directory in which the server mirrors the active log (those files in the ACTIVELOGDIR
directory). For both, the same restrictions apply as with the ARCHLOGDIR.
The active log directory and the mirror log directory should be on high-speed, reliable
disk; the archive log directory can be configured on slower disk. The failover archive log
can be on even slower storage; because it is used infrequently, you can even use NFS.
The log file flow is illustrated in Figure 5-3. When log files are full, DB2 closes them
and copies them to the archive log directory; transactions might still be active when a
file is archived. The server continues to copy full log files to the archive log directory
until the directory becomes full; copies then go to the failover archive log directory, if
one is defined. If even the failover archive log directory fills up, for example because of
unexpected workload, the log files remain in the active log directory. This can result in
an out-of-log-space condition, and a server halt if the active log directory also fills up.
The active log is used to store current in-flight transactions for the server. For example, if the
server has 10 backup/archive client sessions performing backups, the transactions used by
those sessions will be represented in the active log and used to track changes to the server
database such as the insert, delete, or update to records for tables within the server
database. The active log needs to be sized large enough to hold the largest concurrent
workload that the server will encounter. Or put another way, it needs to be able to store the
data representing in-flight transactions for the largest concurrent workload that the server can
support.
The active log then continues at a steady state of the defined active log size. As new
transactions are started in response to server activities, the head of the log is driven
forward as the new transaction records are written at the head of the active log. The tail
of the active log is continually truncated as the oldest in-flight transactions complete,
which allows active log volumes to become inactive and eligible to be archived (copied) to
the archive log directory.
The archive log size is not maintained at a steady state or predefined size like the active
log. The archive log stores all inactive log files, based on retention policies that the
Tivoli Storage Manager server manages. These policies are not configurable by you as an
administrator; the archive log files are pruned by DB2 following the pruning policy set up
by Tivoli Storage Manager. This pruning is a function of full database backup cycles. To take
a step back, we need to first understand how and why full database backups are considered
in this processing. When any database backup is performed, the database backup will
contain the actual database pages that are allocated and in use for the database and the
active and archive log information necessary to represent transaction consistency for the
database data stored within that database backup.
The Tivoli Storage Manager V6.1 server only operates in ROLLFORWARD recovery mode.
In order to accommodate this, the archive log files are kept in order to represent all the
transactional changes to the server database from the time of the last FULL database
backup. So, a full database backup cycle in this discussion represents both the time between
one full database backup and the next, and more importantly, it represents all the
transactional changes that were recorded from the time the previous full database backup
was performed until the time the next full database backup is done.
The Tivoli Storage Manager server requires that archive log space representing two full
database backup cycles be retained. In addition to those two cycles, you must consider the
active or current transaction load, that is, the files from the active log that become
inactive and eligible to be moved to the archive log. This increases the archive log space
requirement from two times the space needed to store the transactional data for a single
database backup cycle to a total of up to three times that space.
Given the framework of this discussion, next we provide recommendations on how to size the
active and archive log space assigned to a Tivoli Storage Manager V6.1 server.
Active and archive log sizing when upgrading to Tivoli Storage Manager V6.1
Here we discuss the active and archive log sizing considerations when upgrading from a
Tivoli Storage Manager V5.x server to Tivoli Storage Manager V6. As a starting point, the
active log should be two times the size of the existing Tivoli Storage Manager V5.x recovery
log. So if the V5.x recovery log was 10 GB, an active log size of 20 GB would be appropriate.
To estimate transaction activity on your V5.x server between full database backups, do this:
1. Prior to your next regularly scheduled FULL database backup, issue the command,
SHOW LOGV. Make note of the log head LSN (HeadLsn), which will be in the form
XXXXX.YYY.ZZZZ. Example 5-3 shows the output for one of our test servers, but let us
assume for your production server that the value is 140312.105.1011.
2. Let the full database backup run and then allow the server to operate normally.
3. At the next scheduled FULL database backup, issue the SHOW LOGV command. Again
make note of the log head LSN (HeadLsn); this might now be a value like 158600.88.16.
4. To estimate the transaction activity for this server between full database backups, subtract
the XXXXX values from the noted log head LSNs that were captured from the SHOW
LOGV command. Using our example values, we would use 158600 - 140312. The result
in this example is 18288. This value represents megabytes (MB) so the transaction load
for a single database backup cycle for this example would be approximately 18 GB.
Using the estimated transaction workload for a single full database backup cycle, a
conservative estimate of the space needed for the archive log would then be three times this
value. Using the example values from the previous step, this would be 18 GB * 3 or
approximately 54 GB.
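The LSN arithmetic from the steps above can be sketched as follows. The helper function and its name are illustrative only, not part of the product; it assumes, as described in step 4, that the leading field of the HeadLsn value is in megabytes.

```python
def archive_log_estimate_gb(head_lsn_before: str, head_lsn_after: str) -> float:
    """Estimate archive log space (GB) from two SHOW LOGV head LSNs.

    The leading XXXXX field of an LSN of the form XXXXX.YYY.ZZZZ is in
    megabytes. The difference across one full backup cycle approximates the
    transaction load; a conservative archive log size is three times that load.
    """
    before_mb = int(head_lsn_before.split(".")[0])
    after_mb = int(head_lsn_after.split(".")[0])
    load_gb = (after_mb - before_mb) / 1024
    return 3 * load_gb

# Using the example values from the text (140312... before, 158600... after):
print(round(archive_log_estimate_gb("140312.105.1011", "158600.88.16")))  # → 54
```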
Sizing considerations for the active and archive log when deploying a new V6.1 server
When deploying a new Tivoli Storage Manager V6.1 server, an estimate is needed based on
the expected load on the server in terms of the number of files to be stored nightly. The
number of files stored for a given nightly schedule window is just one way to estimate this.
This estimate needs to be done by the administrator or team implementing this server. This
estimate should consider the total number of files to be backed up, archived, and space
managed in a given night.
Note that for this calculation, 3053 log bytes are needed per file in a given transaction.
This represents the log bytes needed when backing up files from a Windows server where the
file names vary in length from 12 to 120 bytes. The measurement was done for a backup to a
DISK storage pool, because these pools have increased log overhead and use compared to
sequential media storage pools. You might need to consider a value larger than 3053 if the
data being stored has file names longer than those referenced previously.
In this example, the recommended active log size would be 3.5 GB. There are a number of
variations and other considerations, which we discuss next.
Variation 1
If the 300 clients use the client option RESOURCEUTILIZATION set greater than the default,
such that each client session ran with up to a maximum of 3 sessions in parallel, this would
then change the server's in-flight concurrent load from 300 sessions to 900 (as a maximum).
The calculation then becomes: (((900 * 4096) * 3053) / 1 GB) = 10.5 GB.
Variation 2
If a workload of 1000 clients were used with the client option RESOURCEUTILIZATION set
greater than the default, such that each client session ran with up to a maximum of 3 sessions
in parallel, this would then result in a calculation of: (((3000 * 4096) * 3053) / 1GB) = 35 GB.
Variation 3
If the backups were being done using simultaneous write, such that the data was being stored
to two copy pools in addition to the primary storage pool, the log bytes estimate per file should
be increased. If a value of 200 bytes for each copy storage pool is used, then this value is
3453. Using the original calculations, the result would be nearly 4 GB.
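The base example and the three variations all follow the same formula, sketched here as a rough planning aid. The base case assumes the 300 client sessions and the 4096 files-per-transaction value (TXNGROUPMAX) used in this section, and the division uses 1 GB = 1024^3 bytes, which matches the results quoted above.

```python
# Active log sizing: sessions x files-per-transaction x log bytes per file.
TXNGROUPMAX = 4096        # files per transaction (server option)
GB = 1024 ** 3

def active_log_gb(sessions: int, log_bytes_per_file: int = 3053) -> float:
    return sessions * TXNGROUPMAX * log_bytes_per_file / GB

print(round(active_log_gb(300), 1))        # → 3.5  (base example)
print(round(active_log_gb(900), 1))        # → 10.5 (Variation 1)
print(round(active_log_gb(3000), 1))       # → 34.9, about 35 GB (Variation 2)
print(round(active_log_gb(300, 3453), 1))  # → 4.0, nearly 4 GB (Variation 3)
```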
The preceding examples and discussion assume that the client store operations are done in
isolation; that is, migration, deduplication (identify processing), reclamation, expiration,
and other administrative tasks, such as administrative commands or SQL from administrative
clients, are not run concurrently with this client workload. Any of these operations
included in the processing during the time of this client workload would increase the
active log space required.
This also assumes that the client workloads are somewhat homogeneous. Of particular
interest is the duration of transactions. A large number of quickly completing transactions
can cause active log space issues if long-running transactions are also in the mix. If the
mix of client workloads, and the relative amount of time needed for specific transactions
to complete, is somewhat heterogeneous, then increasing the active log size might be
necessary in order to compensate for these timing differences.
In this scenario, migration from the DISK storage pool to a sequential media pool
(DEVTYPE=FILE) uses approximately 110 bytes per file migrated, which results in 10.5 MB
of log space used for each 100,000 files migrated. Using our original example where the
nightly client load in terms of number of files is 1,228,800, if these files then were migrated to
the NEXT pool, then the log space needed for this migration operation (assuming all files
were migrated) would be 129 MB.
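The migration figures can be reproduced with the same kind of sketch; 110 bytes per migrated file is the planning value from the text.

```python
# Log space used by migration from DISK to a sequential (DEVTYPE=FILE) pool.
MIGRATION_LOG_BYTES_PER_FILE = 110   # planning value from the text

def migration_log_mb(files: int) -> float:
    return files * MIGRATION_LOG_BYTES_PER_FILE / 1024 ** 2

print(round(migration_log_mb(100_000), 1))   # → 10.5 MB per 100,000 files
print(round(migration_log_mb(1_228_800)))    # → 129 MB for the nightly example
```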
Continuing on with the log size estimate for a Tivoli Storage Manager V6.1 server, if the
active log is estimated to require 3.5 GB, then the archive log would require 10.5 GB.
Use the information from “Variation 2” on page 60 about sizing the active log for a new Tivoli
Storage Manager V6.1 server installation, if the nightly backup load is 100,000 files for each
of 300 clients (100,000 files is a change rate of 10% on a total number of files of 1,000,000 for
each client). This represents 30,000,000 files for that nightly workload. If those 30,000,000
files represented 60,000,000 deduplicatable extents, the total archive log space required
would be 84 GB.
In this scenario, the active log impact of 60,000,000 extents is based on the
TXNGROUPMAX server options setting. The identify process will operate on aggregates
(groups) of files based on how many files were stored in a given transaction. If the average
number of extents per file is 2 (60,000,000 / 30,000,000) and the number of files in a
transaction is 4096, then the extents per aggregate is 8192, which results in 12 MB of active
log space used per 4096 files having 8192 extents.
The next consideration relative to the active log size needed for IDENTIFY processing is how
many processes are being run. If there are 10 IDENTIFY processes running in parallel, then
the concurrent load on the active log is 12 MB * 10 or 120 MB.
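A sketch of the extent arithmetic above; the 12 MB of active log per aggregate is the planning value quoted in the text, not a computed quantity.

```python
# Active log load from IDENTIFY (deduplication) processing, per the example above.
total_extents = 60_000_000
total_files = 30_000_000
files_per_txn = 4096                 # TXNGROUPMAX

extents_per_file = total_extents // total_files        # average extents per file
extents_per_aggregate = extents_per_file * files_per_txn

log_mb_per_aggregate = 12            # planning value from the text
processes = 10                       # parallel IDENTIFY processes
concurrent_load_mb = log_mb_per_aggregate * processes

print(extents_per_aggregate)         # → 8192
print(concurrent_load_mb)            # → 120
```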
The final consideration for active log impacts from IDENTIFY processing concerns very large
files. For example, consider a client that backs up a single object, perhaps an image
backup of a file system, that is 800 GB in size. Because of the nature of this data, the
object can represent a very high number of extents; for this discussion, assume 1.2 million
extents. Those 1.2 million extents would represent a single transaction for an IDENTIFY
process, requiring an estimated 1.7 GB of active log space. This 1.7 GB of active log space
might be easily accommodated in isolation.
But if much other activity is happening in the active log, such as other IDENTIFY processes
that process only 8192 extents per transaction, the active log might become constrained for
space, because the small transactions intermix with the large transaction used to identify
the extents for the 800 GB single object. If the deduplication-enabled storage pool will
have a mix of data with many uniform, relatively small files and also a small number of
very large, highly deduplicatable objects, we recommend that you plan to increase the
active log size by a factor of two. The issue is that it is not only
the raw space that is needed, but also the timing and duration of the transactions requiring
that space while other activities are concurrently processing. So, if the previous estimates
recommend a 25 GB active log size, then with deduplication in the mix, the active log size
becomes 50 GB and the archive log is then 150 GB.
In conclusion, there are a number of factors to consider when planning for the size of the
Tivoli Storage Manager V6.1 server active and archive logs. The previous examples and
discussion present some basic values that can be used for estimation purposes. Keep in
mind that you might need to consider larger values in your actual environment.
(Figure fragment: restore Backup 2, then roll forward to end of log, using archive log
files 0000013.LOG and 0000014.LOG.)
The answer is that the product is designed to require very little DB2 knowledge on the part
of the administrator. You should not directly change anything within the DB2 subsystem
unless directed to do so by IBM support, or as part of a procedure that is documented in
the Tivoli Storage Manager product manuals.
Note: Use of the DB2 administration tools is not required for the Tivoli Storage Manager
server to work correctly or for the integrated DB2 database to manage itself appropriately.
Regardless of the mode used, the DB2 command line processor provides maximal support
for working with DB2 instances and DB2 databases. Example 5-4 shows the invocation of the
command line processor and the result of submitting a command for the list of active
databases.
Example 5-4 Activating the command line processor and submitting a command
C:\Program Files\Tivoli\TSM\db2\BIN>db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 9.5.2
You can issue database manager commands and SQL statements from the command
prompt. For example:
db2 => connect to sample
db2 => bind sample.bnd
To exit db2 interactive mode, type QUIT at the command prompt. Outside
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.
Active Databases
Before you can submit commands against a Tivoli Storage Manager database, you need to
tell the command line processor to establish a connection with the database by issuing
the db2 connect command; see Example 5-5. After completing your task, submit the
db2 connect reset command to free up resources.
Note that for the db2 select command to work, the fully qualified name in the form
schema.table-name, in this case tsmdb1.nodes, must be used. An alias for the table cannot
be used in place of the actual table. The schema is the user name under which the table or
view was created.
NODENAME PLATFORM
-------------------------------------------------------------- -----------------
OLDSKOOL WinNT
You can use the command line processor to get detailed explanations of a reported
SQLCODE or SQLSTATE. In Example 5-6, we use the SQL1024N and SQLSTATE=08003 from the
previous example. The user response that is reported confirms that we had to connect to
the database.
Explanation:
User response:
sqlcode: -1024
sqlstate: 08003
To verify the current database configuration, the db2 get db cfg command is helpful. You
can familiarize yourself with the result by reviewing Example 5-7. We highlighted some of
the configuration parameters that you should already know about.
Database territory = C
Database code page = 819
Database code set = ISO8859-1
Database country/region code = 1
Database collating sequence = IDENTITY
Alternate collating sequence (ALT_COLLATE) =
Number compatibility = OFF
Varchar2 compatibility = OFF
Database page size = 16384
Backup pending = NO
Database is consistent = NO
Use the db2 list db directory command to list the contents of the system database
directory. If you specify an optional path, the contents of the local database directory are
listed.
Database 1 entry:
As shown in Example 5-9, you can use the db2 get instance command to verify the current
instance. If you need a list of available instances, see “db2ilist: List instances command” on
page 604.
Use the db2 describe table command to list details on a specific table. As shown in
Example 5-10, the command lists the following information about each column:
Column name
Type schema
Type name
Length
Scale
Nulls (yes/no)
14 record(s) selected.
You can use the DB2 profile registry command, db2set, to display, set, or remove DB2
profile variables. db2set is an external environment registry command that supports local
and remote administration, by the DB2 Administration Server, of the DB2 environment
variables stored in the DB2 profile registry.
In Example 5-11 you can see how to use the command to query the current settings.
To query DB2 for the current code release level, use the db2level command, as shown in
Example 5-12 and Example 5-13.
You can use the db2start and db2stop commands to start and stop the DB2 database manager.
However, the preferred method to stop it is a halt command submitted from either the Tivoli
Storage Manager command line or by an administrative user. Use these commands only if
directed to do so by IBM; if you try to stop or start DB2 while the Tivoli Storage Manager
server is up and running, an error is reported.
C:\Program Files\Tivoli\TSM\db2\BIN>db2start
SQL1026N The database manager is already active.
Appendix B, “DB2 and SQL commands” on page 597 lists additional DB2 and DB2 system
commands that you will find to be useful.
If you are interested in more information about the CLP, refer to DB2 Basics: Getting to know
the DB2 UDB command line processor, at:
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0503melnyk/
By default, the Tivoli Storage Manager instance is not configured for access by the DB2
Control Center. In the following figures we explain how to set up the DB2 Control Center so
you can act on the database.
Figure 5-5 shows the initialization panel presented to you with the invocation of the DB2
Control Center.
The DB2 Control Center opens in the object view. You can use the object tree to display and
work with system and database objects.
The object tree displays the relation between objects in a hierarchy. When you expand the
object tree down from a particular object, the objects that reside, or are contained, in
that object are displayed underneath.
To invoke actions on an object in the object tree, right-click to open a pop-up menu of
available actions. Then select a menu choice. A window or notebook opens to guide you
through the steps required to complete the action (Figure 5-7). You will see examples later.
Next you press the expansion selector (+) to expand All Systems, see Figure 5-8.
At this point, the local system Idaho is the only system known to the DB2 Control Center.
You press the expansion selector (+) to expand the IDAHO system branch (Figure 5-9).
Next you press the expansion selector (+) to expand the IDAHO system → Instances branch
(Figure 5-10).
Note: Future versions of the product might no longer create the dummy DB2TSM instance.
Figure 5-15 DB2 Control Center: IDAHO Instances branch action options
The Add Instance window allows you to make the Tivoli Storage Manager server instance
available to the DB2 Control Center. As shown in Figure 5-16, just press Discover.
Figure 5-18 shows that next you are prompted for an Instance Node Name. A unique
nickname is required.
Next click the new IDAHO_S1 (SERVER1) instance you just created as shown in Figure 5-20.
Again you can click Discover, this time to search for available databases as shown in
Figure 5-23.
On the Add Database window, you can specify an Alias and an additional Comment. In our
example, we accept the details and click Apply (see Figure 5-25).
Figure 5-26 DB2 Control Center: cancel search for additional databases
At this point you have completed the task. Instance IDAHO_S1 and the database TSMDB1
are accessible through the DB2 Control Center (see Figure 5-27).
Figure 5-27 DB2 Control Center: TSMDB1 configured for instance IDAHO_S1
You can also manually make the database and the instance available to the DB2 Control
Center. What the DB2 Control Center actually does when you add the server instance as just
described is to define it as a remote node/instance in the local, default instance, using a
command as shown here:
CATALOG LOCAL NODE node-name INSTANCE tsm-instance-name
The effect of both of these commands is that the new Tivoli Storage Manager instance and
the new database are both cataloged in the local, default instance. For an example of this
approach being documented, see “Client ODBC configuration” on page 583.
Alternatively, you can set the db2instance environment variable and then invoke the DB2
Control Center with the db2cc command as shown in Example 5-15.
C:\Program Files\Tivoli\TSM\db2\BIN>db2cc
The first time you will observe a server database backup being performed is right after
installation. You will see the messages reported in Example 5-16.
This initial backup is required by DB2 in order for Tivoli Storage Manager to set the recovery
log processing mode to ROLLFORWARD. At this point, this database backup only contains
the server schema (DDL). This database backup is subsequently deleted because it only
contains the server schema definitions that can be recreated by Tivoli Storage Manager
anyway.
So we submit the SET DBRECOVERY command. See Example 5-18 for the results.
You use the SET DBRECOVERY command to specify the device class to be used for full
automatic backups. You can verify the current setting with the QUERY DB FORMAT=DETAIL
command as shown in Example 5-19.
As you can see in Example 5-19, the output returned by the QUERY DATABASE command
has changed; Table 5-6 gives you the details.
Total Size of File System (MB): Total space in MB in the current storage location. Windows:
total space on the drive where the directory is located. UNIX and Linux: total space in the file
system where the path is located.
Space Used by Database (MB): Space in MB currently allocated and in use in the current
storage location.
Free Space Available (MB): Space in MB not in use but available in the database. Windows:
space remaining on the drive where the directory is located. UNIX and Linux: space
remaining in the file system where the path is located.
Total Buffer Requests: The total number of buffer pool data logical reads and index logical
reads since the last time the database was started or since the database monitor was reset.
Sort Overflows: The total number of sorts that ran out of the sort heap and might have
required disk space for temporary storage.
Lock Escalation: The number of times that locks have been escalated from several row locks
to a table lock.
Package Cache Hit Ratio: A percentage indicating how well the package cache is helping to
avoid reloading packages and sections for static SQL from the system catalogs, and how well
it is helping to avoid recompiling dynamic SQL statements. A high ratio indicates that it is
successful in avoiding these activities.
Last Database Reorganization: The last time that the database manager performed an
automatic reorganization activity.
Full Device Class Name: The name of the device class that is used for full database backups.
Incrementals Since Last Full: The number of incremental backups that were performed since
the last full backup.
Last Complete Backup Date/Time: The date and time of the last full backup.
Example 5-20 documents how the backup still fails. When a BACKUP DB or RESTORE DB
command fails with a message that reports a DB2 SQLCODE or SQLERRMC return code,
you can get a description of the DB2 SQLCODE by completing the following procedure:
1. On a Windows operating system, click Start → All Programs → IBM DB2 →
<DB2 Instance> → Command Line Tools → Command Line Processor to open a DB2
command-line interface. On all other supported platforms, log on to the DB2 instance ID,
open a shell window, and issue the db2 command.
2. Enter the SQLCODE. For example, if the DB2 SQLCODE is -2033, issue the command as
documented in Example 5-21.
Explanation:
106 The specified file is being used by another process. You tried
to read from or write to a file that is currently being used by
another process.
168 Password file is needed, but user is not root. This message is
often generated when the DSMI_DIR environment variable points
to a directory that contains a 32-bit version of the dsmtca
program, yet the DB2 instance is 64-bit, or vice-versa.
106 Ensure that you specified the correct file or directory name,
correct the permissions, or specify a new location.
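If you prefer to look up a code directly, the DB2 command-line processor can display the message text for any SQLCODE; a sketch, assuming the code maps to message SQL2033N:

```shell
# Ask the DB2 command-line processor for the explanation of a message number.
db2 "? SQL2033N"
```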
In general, sqlcode -2033 errors are all setup-related. Because we did not do any
configuration for the database backup to work through the API, we have to do that now. If you
use the Tivoli Storage Manager Server Instance Configuration wizard to create a server
instance, this configuration is done automatically. If you are configuring an instance manually,
you must meet the following requirements for the backup to work:
The Tivoli Storage Manager API is installed on the server machine (done by the COI install).
The Tivoli Storage Manager API has the correct client option settings.
The DSMI_DIR, DSMI_CONFIG, and DSMI_LOG environment variables are set in the DB2
instance process.
DSMI_DIR, DSMI_CONFIG, and DSMI_LOG point to the correct places:
– API executables
– API configuration files
– API log file directory
The correct password is set.
Here we guide you through a manual configuration, so first we need to resolve the DB2
SQLCODE -2033 SQLERRMC 406. SQL error message code 406 requires that the following
issues be resolved:
The DSMI_CONFIG environment variable points to a valid Tivoli Storage Manager options
file.
The instance owner has read access to the dsm.opt file.
The DSMI_CONFIG environment variable is set in the db2profile.
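On UNIX and Linux, a common way to satisfy these requirements is to append export lines to the instance's profile script so that the variables are set in the DB2 instance process; a minimal sketch, in which the installation paths and the instance owner name tsminst1 are assumptions:

```shell
# Hedged sketch: lines appended to the instance profile (for example,
# ~tsminst1/sqllib/userprofile) so that DB2 inherits the TSM API settings.
export DSMI_DIR=/opt/tivoli/tsm/client/api/bin64     # API executables
export DSMI_CONFIG=/home/tsminst1/tsmdbmgr.opt       # API options file
export DSMI_LOG=/home/tsminst1                       # API error log directory
```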
2. We set the DSMI_* API environment-variable configuration for the database instance.
Open a DB2 command window, and on the Windows operating system, click Start → All
Programs → IBM DB2 → <DB2 Instance> → Command Line Tools → Command
Window. This will open a window in the DB2 install directory. From here you can submit
the db2set command, as shown in Example 5-23.
To verify the current setting of the DB2_VENDOR_INI variable, you can submit the
command as shown in Example 5-24.
3. Now we create the tsmdbmgr.opt file in the \program files\tivoli\tsm\server1 directory with
the contents shown in Example 5-25.
The $$_TSMDBMGR_$$ node is a special hidden node for the purpose of the database
backup. You cannot query the node on the server using the QUERY NODE command.
4. Next we activate the changes in DB2. We use the command window as shown before
and submit the commands shown in Example 5-26.
C:\Program Files\Tivoli\TSM\db2\BIN>db2stop
SQL1064N DB2STOP processing was successful.
C:\Program Files\Tivoli\TSM\db2\BIN>db2start
SQL1063N DB2START processing was successful.
It is important to stop and restart DB2 to activate the changes. Failing to do so will result in
the backup not working later.
5. You can use the dsmapipw command to propagate the password to the Tivoli Storage
Manager server as shown in Example 5-28. Change the directory to the server's
db2\adsm directory, for example, to C:\Program Files\tivoli\TSM\db2\adsm, and then
submit the command.
C:\Program Files\tivoli\TSM\db2\adsm>dsmapipw
*************************************************************
* Tivoli Storage Manager *
* API Version = 6.1.0 *
*************************************************************
Enter your current password:TSMDBMGR
6. Finally, we can submit the database backup, so let us try the backup database command
again. Example 5-29 documents the results and shows that we have completed the task.
Note that the configuration for a UNIX server database backup through the API differs slightly
from the example for the Windows platform that we went through here. If you want to
configure for a different environment, refer to the chapter, Preparing the database manager
for backup, available from the Information Center at:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.srv.install.doc/t_srv_prep_dbmgr.html
Remember the SET DBRECOVERY command that we submitted in Example 5-18 on
page 87. If you submit the BACKUP DB command with a different device class, warning
message ANR4976W is reported, reminding you about the DBRECOVERY default device
class, as shown in Example 5-30.
Note: We recommend that you use the GUI post-install configuration, dsmicfgx, or
dsmicfgx.exe, to configure the server for database backup. You can thus avoid some
configuration steps that are complex when done manually.
Summary
You do not need to submit DB2 commands to back up a Tivoli Storage Manager V6.1
database. You can still use the known backup commands for manual database backups, as
you did with earlier versions. However, you need to configure the server to be able to back up
through the API used by DB2 to complete the task.
Backup methodologies
If the Tivoli Storage Manager server database or the recovery log is unusable, the entire
server is unavailable. If a database is lost and cannot be recovered, all of the data managed
by that server is lost. If a storage pool volume is lost and cannot be recovered, the data on the
volume is also lost.
To back up the database and storage pools regularly, you define administrative schedules. If
you lose your database or storage pool volumes, you can use offline utilities provided by IBM
Tivoli Storage Manager to restore your server and data.
With the proprietary database, it has been a widely used practice to configure for regular full
backups followed by a sequence of incremental backups (Figure 5-25). The maximum
number of incremental backups you can run between full backups is 32. Assuming that you
scheduled for weekly full backups, the scenario would look like the one shown in Figure 5-29.
V5 database backup
Now let us compare the V5 style database backup schedule with the same approach adapted
to a Tivoli Storage Manager V6.1 server. Figure 5-30 shows how the space requirements rise
with each incremental backup.
In DB2 terms, the backup methodology introduced for the Tivoli Storage Manager server is a
cumulative incremental backup. An incremental database backup now represents all
database data that has changed since the most recent successful full backup operation. If
you plan to restore to Wednesday’s backup, you only need Sunday’s full backup plus the last
incremental backup for Wednesday. To complete the same task in V5, you need Sunday’s full
backup and the incremental backup volumes from Monday, Tuesday, and Wednesday.
If you are used to the methodology of weekly full database backups with incremental backups
in between, however, you should note the following differences when staying with this
approach:
Only full backups allow for the deletion of archive log volumes.
The incremental backups will not free archive log space, requiring more space in the
archive log directories.
The incremental database backups result in increased volume utilization to include the
additional archive log information.
You define the scheduled database backup as an administrative schedule as you are used to
doing; see Example 5-31 for the exact command.
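Such a schedule might be defined as in the following sketch; the schedule name, device class, and start time are assumptions, and the book's exact command is in Example 5-31.

```shell
# Hedged sketch: a daily administrative schedule that runs a full database backup.
dsmadmc -id=admin -password=secret \
  "define schedule dailydbb type=administrative \
   cmd='backup db devclass=FILEDEV type=full' \
   active=yes starttime=21:00 period=1 perunits=days"
```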
With the foregoing setup, we protect the server by running database backups at least once
per day. Even more frequent backups might be needed if the server handles high numbers of
client transactions. Monitor the activity log for triggered database backups starting, and for
messages indicating that a database backup is required (see Table 5-9 on page 100); this
observation allows for proper scheduling. A best practice is to schedule regular backups of
the server database and verify that they occur as scheduled. We do this in Example 5-32.
With DB2 automatic backup enabled, DB2 can decide to perform database backups based on
a number of different criteria or thresholds, including these:
Has the minimum required number of full backups been performed?
Has the time interval between database backups been exceeded?
How much log space has been consumed since the last database backup?
After evaluating the DB2 automatic database backup capabilities, it was determined that the
database backup processing is not consistent with the typical server administration model
used by Tivoli Storage Manager.
Tivoli Storage Manager triggers full and incremental database backup as a result of the
following criteria:
Log space consumed since the last backup:
The DB2 API db2GetSnapshot() function is used to get the first (firstActiveLogFileNum)
and the last (lastActiveLogFileNum) active log file numbers.
The logSpaceUsedSinceLastBackup is calculated by counting the number of log files
used since the last backup (lastActiveLogFileNum - firstActiveLogFileNum + 1) and
multiplying by the log file size (512 MB).
If this value is greater than the maximum log size (the ACTIVELOGSIZE parameter
configured with the DSMSERV FORMAT or DSMSERV LOADFORMAT command), a full
database backup is started. This represents the same trigger that would be used by the
automatic DB2 backup: “How much log space has been consumed since the last
database backup?”
The following message is reported when this condition is met:
ANR4531I: An automatic full database backup will be started. The last log
number used is <last log used> and the first log number used is <first log used>.
The log file size is <log file size> megabytes. The maximum log file size is
<maximum log file size> megabytes.
Log utilization ratio:
The DB2 API db2GetSnapshot() function is used to get totalActiveLogSpaceUsed and
totalActiveLogSpaceAvailable.
The log utilization ratio is calculated with the following formula:
logUsedRatio = totalActiveLogSpaceUsed / ( totalActiveLogSpaceUsed +
totalActiveLogSpaceAvailable )
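Both trigger computations can be illustrated with a small shell sketch; all of the snapshot numbers below are hypothetical, and the 512 MB log file size is the value stated above.

```shell
# Hypothetical snapshot values, for illustration only.
firstActiveLogFileNum=100
lastActiveLogFileNum=110
logFileSizeMB=512   # log file size used in the calculation above

# Log space consumed since the last backup:
# number of log files used, multiplied by the log file size.
filesUsed=$(( lastActiveLogFileNum - firstActiveLogFileNum + 1 ))
logSpaceUsedSinceLastBackup=$(( filesUsed * logFileSizeMB ))
echo "log space used since last backup: ${logSpaceUsedSinceLastBackup} MB"

# Log utilization ratio, as an integer percentage.
totalActiveLogSpaceUsed=3072
totalActiveLogSpaceAvailable=13312
logUsedRatio=$(( 100 * totalActiveLogSpaceUsed / (totalActiveLogSpaceUsed + totalActiveLogSpaceAvailable) ))
echo "log utilization ratio: ${logUsedRatio}%"
```

With these numbers, the sketch reports 5632 MB used since the last backup and an 18% utilization ratio; a full backup would be triggered when the first figure exceeds ACTIVELOGSIZE, and an incremental backup when the ratio exceeds 80%.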
Use the QUERY LOG F=D command to monitor the log usage; the output has changed from
earlier versions. For details, see Example 5-33. If the server is not running, you can use the
DSMSERV DISPLAY LOG utility; refer to Example 5-40 on page 107 for details.
Note that the above configuration is not recommended; the active log, the archive log, and
the database directories should not be placed on a single disk. Table 5-7 explains the new
fields of the QUERY LOG F=D command.
Total Space (MB): Specifies the maximum size of the active log, in megabytes.
Used Space (MB): Specifies the total amount of active log space that is used in the database,
in megabytes.
Free Space (MB): Specifies the amount of active log space in the database that is not being
used by uncommitted transactions, in megabytes.
Active Log Directory: Specifies the location where active log files are stored. When you
change the active log directory, the server moves all archived logs to the archive log directory
and all active logs to the new active log directory.
Mirror Log Directory: Specifies the location where the mirror for the active log is maintained.
Archive Failover Log Directory: Specifies the location into which the server saves archive logs
if the logs cannot be archived to the archive log directory.
Archive Log Directory: Specifies the location into which the server can archive a log file after
all the transactions that are represented in that log file are completed.
For triggered backups, if you are used to the DBBACKUPTRIGGER option, be aware that this
option is no longer available. Table 5-8 compares the Tivoli Storage Manager V5 database
backup trigger options with the new database trigger functionality.
DEVCLASS (V5: device class used for full database backups). V6.1 equivalent: the device
class configured with the SET DBRECOVERY command. V6.1 triggered backups use the
device class specified with the SET DBRECOVERY command; this is essentially equivalent
behavior to what was previously available in V5.x.
LOGFULLPct (V5: when log utilization reaches this percentage, an automatic database
backup starts). V6.1 equivalent: none. If the log space used since the last backup exceeds
the configured ACTIVELOGSIZE, a full database backup is triggered; an incremental backup
is triggered when the log utilization ratio exceeds 80%.
INCRDEVCLASS (V5: device class used for triggered incremental backups). V6.1
equivalent: none. V6.1 triggered backups use the device class specified with the SET
DBRECOVERY command.
MININTERVAL (V5: the minimum time to elapse between allowing triggered backups). V6.1
equivalent: none. The V6.1 server checks every 10 minutes whether a triggered backup is
required.
MINLOGFREEpct (V5: minimum percentage of log space that must be freed by the automatic
backup before it will be performed). V6.1 equivalent: none. While database backups with
V6.1 do impact log utilization and the DB2 archive log operations, the differences in log
management and overall log capacity mitigate this difference.
ANR0293I The server is performing an online reorganization for the table referenced in the
message.
ANR0294I The online reorganization for the table referenced in the message has ended.
ANR0295I The active log space used exceeds the log utilization threshold.
ANR0297I The log space used since the last database backup exceeds the maximum log file
size.
a. For some configurations, this message can be ignored. See the following Technote for details:
http://www.ibm.com/support/docview.wss?uid=swg21380107
Online table reorganization consumes log space. As a result of the log use caused by
database reorganization, a database backup might become necessary to manage the
available active log space. Example 5-34 shows the messages issued for the
BF.Aggregated.Bitfiles and Backup.Objects tables.
The VOLUMEHISTORY server option lets you specify backup volume history files. Then,
whenever the server updates volume information in the database, it also updates the same
information in the backup files. If you try to back up the database without a valid
VOLUMEHISTORY option configured in the dsmserv.opt file, the backup fails with message
ANR2639E, as shown in Example 5-35.
With Tivoli Storage Manager V6, the log is no longer circular, so the need to free a specific
amount of log space during a database backup is no longer significant. Database backup with
Tivoli Storage Manager V6.1 is oriented more toward protecting the database than was
previously the case.
While the V5 database backup triggers are no longer supported, Tivoli Storage Manager still
triggers automatic backups. Depending on which trigger is met, a full or an incremental
backup is executed.
Figure 5-31 explains the new process flow for a restore of the Tivoli Storage Manager server
database.
In this section we provide an example of a restore scenario and show which requirements to
meet for a successful restore.
Restore prerequisites
To restore your database, the following information is required:
You must have copies of the volume history file and the device configuration file.
You must have copies of, or you must be able to create, the server options file and the
database and recovery log setup information (the output from detailed queries of your
database and recovery log).
The server needs information from the volume history file. Volume history information is
stored in the database, but during a database restore, it is not available from there. It is critical
that you make a copy of your volume history file and save it. The file cannot be recreated.
The database volumes we created with the database backup from Example 5-29 on page 93
are listed by the volume history file as shown in Example 5-36.
There is new information in the volume history file, which is now needed for a restore of the
database:
Database Backup LLA:
This is provided by DB2 and used by the restore process to determine the DB2 backup
time stamp.
Database Backup Home Position:
This is the home position for the tape, used by restore processing to determine where the
database data starts. It is only valid for tape volumes; for a file device class, it is 0.
Database Backup Total Data Bytes:
This is the total number of DB data bytes in this database backup.
Database Backup Total Log Bytes:
This is the total number of bytes for recovery log in this database backup.
Database Backup Log Block Number:
This is the starting block number where the backup recovery log starts. This is only valid
for tape volumes. For file device class, this is -1. Tivoli Storage Manager needs this
information because the database backup and restore are done in two sessions. One
session is for the database data and the other is for the recovery logs.
Important: It is essential to save your volume history file. Without it, you cannot restore
your database.
To ensure the availability of volume history information, it is extremely important to take one
of the following steps:
Store at least one copy of the volume history file offsite or on a disk separate from the
database.
Store a printout of the file offsite.
Store a copy of the file offsite with your database backups and device configuration file.
Store a remote copy of the file, for example, on an NFS-mounted file system.
The VOLUMEHISTORY server option lets you specify a backup of volume history files. Then,
whenever the server updates volume information in the database, it also updates the same
information in the backup files.
You can also back up the volume history information at any time, by entering the backup
volhistory command.
If you do not specify file names, the server backs up the volume history information to all files
specified with the VOLUMEHISTORY server option.
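A sketch of both pieces, assuming a hypothetical file location:

```shell
# Hedged sketch. In dsmserv.opt (the option may be repeated for multiple files):
#   VOLUMEHISTORY c:\tsmdata\volhist.out
# Manual backup of the volume history information at any time:
dsmadmc -id=admin -password=secret "backup volhistory filenames=c:\tsmdata\volhist.out"
```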
To increase the amount of log space available to the server, evaluate the directories and file
systems assigned to the ACTIVELOGDIR, ARCHIVELOGDIR, and ARCHFAILOVERLOGDIR
options. An out-of-log-space condition might occur because the ACTIVELOGDIR location is
full, or because log files in the ACTIVELOGDIR that are no longer active cannot be archived
to the ARCHIVELOGDIR and ARCHFAILOVERLOGDIR locations. If necessary, specify a
larger ARCHIVELOGDIR or ARCHFAILOVERLOGDIR by updating this option in the
dsmserv.opt file and then restarting the server.
We take message ANR0130E as a hint to reorganize our server database and log setup.
Our plan is to extend the log space to allow the server to start again. Then we take a
database backup, add new file systems to the box, and finally restore the database to the
new file systems.
First we free up some space on the G: drive by deleting unused files we were able to identify.
Next we create the new archive log directory under the new target, d:\tsm\server1\archivelog.
We temporarily change the ARCHLOGDirectory option in the dsmserv.opt file to point to the
new archive log location, d:\tsm\server1\archivelog, and start the server to activate that
change, as shown in Example 5-38. We use the DSMSERV DISPLAY DBSPACE utility to
verify the current database setup.
Location: d:\gallium_server1\db
Total Size of File System (MB): 102,398.65
Space Used on File System (MB): 35,141.96
Free Space Available (MB): 67,256.69
Location: g:\tsm\server1\database
Total Size of File System (MB): 59,388.70
Space Used on File System (MB): 31,426.66
Free Space Available (MB): 27,898.03
This start activated our change to the archive log, and an archive log structure was created
under the new location. Now we copy all the old log files to this new location; when this is
completed, we delete the old archive log directory (see Example 5-39).
D:\TSM\server1\archivelog>del /s /q G:\TSM\server1\archivelog\*
Deleted file -
G:\TSM\server1\archivelog\archmeth1\SERVER1\TSMDB1\NODE0000\C0000000\S0000025.LOG
..
lines deleted
..
Deleted file -
G:\TSM\server1\archivelog\archmeth1\SERVER1\TSMDB1\NODE0000\C0000000\S0000047.LOG
Deleted file - G:\TSM\server1\archivelog\RstDbLog\SQLLPATH.TAG
In addition, Example 5-40 documents the log layout that we collect with the DSMSERV
DISPLAY LOG command.
We use the d: drive for the archive log only temporarily; later, we separate the log again from
the d: drive, which we use for file device class volumes. Example 5-41 documents the
backup process that we initiate next.
We create the directories for the database, active log, and archivelog using the commands
documented in Example 5-42.
D:\>md j:\tsm\dbdir2
D:\>md k:\tsm\activelog
D:\>md l:\tsm\archivelog
Again we apply a change to the dsmserv.opt file to now reflect the final target directory for the
archive logs:
ARCHLOGDirectory l:\tsm\archivelog
Before we can start the restore, we create a file for the database directory locations to be
used with the restore command. We enter each location on a separate line. In our scenario
the dbdir.txt file looks as documented in Example 5-43.
The DSMSERV REMOVEDB command we use in Example 5-44 removes the information
about the TSMDB1 database from DB2. This command deletes all user data and log files, as
well as any backup/restore history for the database. If the log files are needed for a
roll-forward recovery after a restore operation, or if the backup history is required to restore
the database, these files should be saved before issuing this command. The REMOVEDB
command uses the DB2 API function sqledrpd; the DB2 command-line equivalent is:
db2 drop db <db_alias>
At the time you submit the command, the database must not be in use.
Example 5-45 documents the restore process. Notice that we had to fully qualify the On
parameter to the restore command for this to work successfully. This parameter specifies a
file listing the directories to which the database will be restored; see Example 5-43 on
page 108 for our definitions.
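An invocation of this kind might look like the following sketch; the instance key name and the decision to roll forward to the most recent backup are assumptions, and the actual processing is documented in Example 5-45.

```shell
# Hedged sketch: restore the database to the directories listed in dbdir.txt,
# rolling forward to the most recent backup (todate=today).
dsmserv -k Server1 restore db todate=today on=dbdir.txt
```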
Now our restore is complete, but did we reach our goal, the relocation of the database and log
volumes? We start the server and use the QUERY DBSPACE and the QUERY LOG
command again to verify this. The result is documented in Example 5-46. The database is
spread over drives J: and K:. The active log is on drive K: and the archive log was relocated to
the L: drive.
Location: i:\tsm\dbdir1
Total Size of File System (MB): 5,114.41
Space Used on File System (MB): 708.98
Free Space Available (MB): 4,405.43
Summary
In this chapter we discussed an example of a database restore scenario. You should now
know how to prepare and complete a database restore. In addition, you now understand that
the V5 approach of adding and deleting database volumes for Tivoli Storage Manager
database relocation no longer works. Relocating server volumes requires a database restore
and results in server downtime. This makes planning at the beginning more important.
Remaining tasks at this point include the configuration of a mirror for the active log or the
definition of an archive log failover directory. However, at this point you should be able to
complete those steps with the information from the administrator’s reference guide and the
manual.
If you are interested in a review of other restore scenarios, refer to 6.3, “Recovery of a V6.1
Tivoli Storage Manager server” on page 128, where we discuss a DRM restore to a new
hardware box.
Location: j:\tsm\dbdir2
Total Size of File System (MB): 5,114.41
Location: i:\tsm\dbdir1
Total Size of File System (MB): 5,114.41
Space Used on File System (MB): 708.98
Free Space Available (MB): 4,405.43
Location: m:\tsmdb\dbdir3
Total Size of File System (MB): 5,114.41
Space Used on File System (MB): 28.22
Free Space Available (MB): 5,086.19
TSM:TIRAMISU>q db f=d
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY DB f=d
Example 5-47 on page 111 documents how we submitted the EXTEND DBSPACE command
and subsequent QUERY DBSPACE and QUERY DB commands to show the directory
assigned to the database. However, as you can see in Example 5-48, the directory is still
empty.
Directory of m:\tsmdb\dbdir3
To further explain this scenario, assume that DB2 manages your database consisting of four
LUNs, 50 GB each, as shown in Figure 5-32. Each LUN is assigned its own volume group on
the host and each volume group has one file system.
With this configuration, DB2 uses separate I/O threads for each directory/file system.
Now if you are running out of database space, you can use the EXTEND DBSPACE
command to add space to the database. In this example, you create two new 50 GB LUNs,
assign them to the volume group, and create a separate file system for each LUN. After you
add the space to your Tivoli Storage Manager server, with the next restart, DB2 runs through
a reorganization of the database as shown in Figure 5-33.
Note: Adding a new database directory after initial load will cause a REORG of the
database. Because this is expensive and disruptive, it should be avoided.
To prevent this from happening, we recommend that you carefully configure your DBSPACE
and, if possible, just extend the existing file systems, because this is transparent to your Tivoli
Storage Manager server and the underlying database.
DBREPORTMODE option
You use the SET DBREPORTMODE command for the V6.1 Tivoli Storage Manager server to
set the database error reporting level. Possible values are:
None:
No database diagnostic reporting is done.
Purpose: If some pervasive symptom is encountered that is flooding the activity log with
messages, disabling reporting by setting this value to NONE can eliminate the “noise”.
Partial (Default):
Report exception cases or items that are likely issues that need to be considered.
Purpose: Report the exception cases, those cases where something unexpected occurs.
Full:
Report all available information when an exception case is encountered.
Purpose: This can be enabled to pursue additional information if a given exception case
does not report enough information initially; it is lighter weight than full trace enablement.
As an example, if a database deadlock is encountered with DBREPORTMODE set to
FULL, additional server transaction and lock information is reported.
Additional database-related information is available through the QUERY STATUS command.
You can verify the current DBREPORTMODE setting as shown in Example 5-49.
Node type = Enterprise Server Edition with local and remote clients
The level parameters specify the type of errors that will be recorded.
Use the UPDATE DBM CFG command to update either of the values:
db2 attach to <instance-name>
db2 update dbm cfg using <parameter-name> <value>
db2 detach
For example, to set the diagnostic level to 4:
db2 update dbm cfg using DIAGLEVEL 4
On platforms supporting the Korn shell, the DB2 team provides a script to archive and
maintain DB2 message logs and diagnostic data. For a complete description, go to the
following URL:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0904db2messagelogs/index.html?S_TACT=105AGX11
Diagnostic information about errors is recorded in the administration notification file. This
information is used for problem determination and is intended for DB2 technical support.
The administration notification file contains text information logged by DB2 as well as DB2
Spatial Extender. It is located in the directory specified by the DIAGPATH database manager
configuration parameter. On Windows NT®, Windows 2000, and Windows XP systems, the
DB2 administration notification file is found in the event log and can be reviewed through the
Windows Event Viewer.
The information that DB2 records in the administration log is determined by the DIAGLEVEL
and NOTIFYLEVEL settings. Use a text editor to view the file on the machine where you
suspect a problem to have occurred. For Windows operating systems, you can use the Event
Viewer to view the administration notification log.
The most recent events recorded are the furthest down the file. Generally, each entry
contains the following parts:
A timestamp.
The location reporting the error. Application identifiers allow you to match up entries
pertaining to an application on the logs of servers and clients.
A diagnostic message (usually beginning with “DIA” or “ADM”) explaining the error.
Any available supporting data, such as SQLCA data structures and pointers to the location
of any extra dump or trap files.
If the database is behaving normally, this type of information is not important and can be
ignored.
db2diag.log
The db2diag.log file is intended for use by DB2 support for troubleshooting purposes and is
available on all platforms supported by DB2. By default, it is found in the directory identified
by the diagpath database manager configuration parameter and is created automatically
when the instance is created.
You can format the db2diag.log using the db2diag utility. When the utility is run without any
option, as we do in Example 5-51, it reminds you to submit the command db2diag -h to
review available formatting and filter options.
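A few commonly useful filters, sketched here as assumptions to be verified against the db2diag -h output on your own system:

```shell
# Hedged sketch: filter the db2diag.log output.
db2diag -gi "level=Severe"   # show only severe entries (case-insensitive match)
db2diag -H 1d                # show entries from the last 24 hours
```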
The returned db2diag output always starts with DB2 installation-specific information and
gives you additional details about the running system; see Example 5-52.
EDUID : 1520
Summary
With the transition to DB2, there are some additional log files that need to be reviewed in case
you are investigating an unwanted symptom or have to do some troubleshooting. You now
know how to adjust reporting levels and diagnostic file path information for your server.
As in previous releases, the Disaster Recovery Manager feature (license) is included with
Tivoli Storage Manager Extended Edition only.
There are a few enhancements in Tivoli Storage Manager V6.1 that affect your disaster
recovery planning:
Active data storage pools
Deduplication storage pools
DB2 database and recovery log enhancements
The Disaster Recovery Manager (DRM) will play a role in the management of your copy pool
volumes, building your recovery plans efficiently, and providing a central point of packaging
recovery information. In this chapter we discuss strategies designed around using Tivoli
Storage Manager’s Disaster Recovery Manager as part of the building blocks in providing
secure disaster recovery capabilities within your company.
When deploying the data deduplication feature, there are additional considerations, which we
present in 6.4, “Data deduplication considerations” on page 137. That section discusses copy
pool protection and how it is best deployed for both performance and recovery.
Within the scope of disaster recovery, there are multiple “tiers of disaster”, which we discuss
further in 6.5, “Seven tiers of disaster recovery solutions” on page 140. More specifically, we
also discuss a concept of data protection for “local” system disasters. These would include
the Tivoli Storage Manager server, library manager, or the connected media, and data related
to these components.
If disaster strikes, specifically related to a local data protection system, understanding the
recovery processes is a critical success factor. Understanding, documenting, and testing are
essential to sound disaster recovery preparedness.
Because Tivoli Storage Manager now utilizes an external database product, a multi-phase
recovery is required. At the time of writing this chapter, there have been no changes to
the DRM plan to adapt to the recovery of the DB2 database.
At the recovery site, use the Tivoli Storage Manager Instance Configuration Wizard or manual
steps that you have recorded in RECOVERY.INSTRUCTIONS.*
A typical scenario would be for ADP volumes to be made available as soon as possible at the
disaster recovery site, so that high priority clients can begin immediate restores, followed by
the availability of copy pool volumes.
Clients restoring directly from ADP volumes can run concurrently with clients restoring from
lower priority copy pools because ADPs have a higher priority. This means that clients
restoring from ADPs will not access their data that exists in copy pools, preventing potential
thrashing and mount point conflicts.
The parameters included are the active data pool names. Separate multiple names with
commas and no intervening spaces; wildcard characters are allowed. The specified names
overwrite any previous settings.
Example 6-1 demonstrates the syntax to add multiple active data storage pool entries.
MOVE DRMEDIA
The MOVE DRMEDIA command has been updated so that Active Data Pool (ADP) media
can be cycled off site/on site according to the underlying server policies and processes. The
command has been updated with the ACTIVEDATASTGPOOL parameter, which allows an
override of the previously enabled SET ACTIVEDATASTGPOOL setting, or a one-time
movement of volumes from the mountable state.
If the ACTIVEDATASTGPOOL parameter is not used, only those ADPs enabled by the SET
command are processed. Multiple ADP names can be entered in a comma-delimited list, as
shown in Example 6-1.
Example 6-1 DRM move data command using the ACTIVEDATAStgpool parameter
MOVE DRMedia * WHEREstate=Mountable ACTIVEDATAStgpool=fileactivepool
ANR0609I MOVE DRMEDIA started as process 13024.
ANR6682I MOVE DRMEDIA command ended: 13024 volumes processed.
QUERY DRMEDIA
The SQL Engine and QUERY DRMEDIA have been updated so that ADP media can be
tracked and managed appropriately.
PREPARE
The PREPARE command has been updated to provide the ability to include ADP volumes in
the scripts, macros, and documentation included in the recovery plan file. The existing
command has been updated with the ACTIVEDATASTGPOOL parameter. This new
parameter allows for the override of previously enabled SET DRM ADPs, or one-time
processing of eligible ADP volumes.
If the ACTIVEDATASTGPOOL parameter is not used, only those ADPs enabled by the SET
command are processed. If no ADPs have been set, only the ADP volumes marked on site at
the time the PREPARE command runs are processed. These volumes are marked
unavailable. Multiple ADP names can be entered in a comma-delimited list, as shown in
Example 6-2.
Example 6-2 Disaster Recovery Manager command to set up active data storage pools
set drmactivedatapool activedatastgpool1,activedatastgpool2,activedatastgpool3
After setting the drmactivedatapool values, then run the PREPARE command, as shown in
Example 6-3 to review the differences in the recovery plan.
The externals of DRM are the same as V5.5 for DB and copy storage pool volumes. For the
internals, copy storage pool volume tracking was untouched, and with regard to DB volume
tracking the following is true:
Tivoli Storage Manager is still responsible for tracking DB backup volumes, not DB2!
Tivoli Storage Manager Volume History is still used to track DB backup volumes.
DB backup volume expiration is the same as V5.5.
AUTO_DEL_REC processing is used for DB2 hygiene, so DB2 keeps minimal records,
ideally just the latest backup series.
The design objective was keeping DB2 backup volume records to a minimum, as DB2 has its
own “Volume History” called recovery history. However, Tivoli Storage Manager does not use
it for tracking DB backup volumes, and maintains use of its own volume history file.
There have been some changes to the volume history file to accommodate DB2 information,
which are required by the dsmserv restore DB command, as shown in Example 6-4.
Example 6-4 Volume history file demonstrating the field differences to accommodate DB2
Volume Type: DBSNAPSHOT
* Location for volume G:\TSM\SERVER1\FILECLASS\44844610.DSS is: ''
Database Backup LLA: FULL_BACKUP.20090612150940.2
Database Backup HLA: \NODE0000\
Volume Name: "G:\TSM\SERVER1\FILECLASS\44844610.DSS"
Backup Series: 2
Backup Op: 0
Volume Seq: 2
Device Class Name: FILE
Database Backup ID: 0 , 3076
Database Backup Home Position: 0
Database Backup Total Data Bytes : 0 , 381800459
Database Backup Total Log Bytes: 0 , 27291659
Database Backup Log Block Number: -1 , -1
Recovery plan file stanza changes, additions, or deletions are annotated in the following list:
SERVER.REQUIREMENTS (changed)
RECOVERY.INSTRUCTIONS.GENERAL
RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.VOLUMES.REQUIRED
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script (changed)
RECOVERY.SCRIPT.NORMAL.MODE script
LOG.VOLUMES (removed)
DB.VOLUMES (removed)
DB.STORAGEPATHS (added)
LOGANDDB.VOLUMES.INSTALL (removed)
LICENSE.REGISTRATION macro
COPYSTGPOOL.VOLUMES.AVAILABLE macro
COPYSTGPOOL.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.REPLACEMENT.CREATE macro
PRIMARY.VOLUMES.REPLACEMENT macro
STGPOOLS.RESTORE macro
VOLUME.HISTORY.FILE
DEVICE.CONFIGURATION.FILE
DSMSERV.OPT.FILE
LICENSE.INFORMATION
SERVER.REQUIREMENTS changes
This stanza now reflects the same information as the output of the query db and query log
admin commands. The changes hold the appropriate DB2 information instead of the V5.5
Tivoli Storage Manager db and log volume information. This is still a useful reference point at the
recovery site for determining the amount of disk space required on a replacement machine
and db and log directory names for pre-creation.
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
For this stanza, the LOGANDDB.VOLUMES.INSTALL.CMD invocation was removed, and
the invocation of the altered dsmserv restore db command was added; however, it is
commented out in the script. It demonstrates how to restore the DB to a new location, without
using the paths recorded in the DB2 backup image.
DB.STORAGEPATHS
The DB.STORAGEPATHS stanza is populated as a result of a DB2 query.
ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE
This stanza has been added to the recovery plan to support active data storage pools, and is
implemented to mark active data storage pool volumes as available for use in recovery, as
shown in Example 6-5.
ACTIVEDATASTGPOOL.VOLUMES.DESTROYED
This stanza has been added to the recovery plan to support active data storage pools, and is
implemented to mark destroyed active data storage pool volumes as unavailable, as shown in
Example 6-6.
RECOVERY.VOLUMES.REQUIRED
This stanza has been updated to provide information about active data storage pool volumes
to include in the recovery process, and is shown in Example 6-7.
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
This stanza has been updated to include the ADP stanza for expansion into macros.
Example 6-8 An example for naming convention and command syntax for DRM instances
set drmplanprefix /home/<instance>/DRM/plans/<hostname>-<instance>
set DRMINSTRP /home/<instance>/DRM/instructions/<hostname>-<instance>
Example 6-9 UNIX DRM stanza exclusions from the V6.1 recovery plan
LOGANDDB.VOLUMES.CREATE script
LOG.VOLUMES
DB.VOLUMES
LOGANDDB.VOLUMES.INSTALL script
The exclusions for the Windows platform from the V6.1 recovery plan file are shown in
Example 6-10.
Example 6-10 Windows DRM stanza exclusions from the V6.1 recovery plan
LOG.VOLUMES
DB.VOLUMES
LOGANDDB.VOLUMES.INSTALL script
These exclusions correspond to an important phase in the rebuilding of the Tivoli Storage
Manager server, which must now be performed manually, prior to invoking the recovery scripts.
Steps 5-10 here are the new steps that are required for a V6.1 DRM based server recovery.
We provide details for the new steps that are platform independent.
Note: The UNIX command line (with no graphical console) cannot run dsmicfgx; the options
are therefore X11 redirection, or the command line using the ‘-i console’ parameter.
The Tivoli Storage Manager disaster recovery instance, which is our UTAH-TSM1 instance,
will be recovered to our Vermont server, as the instance name utah-tsm1. The task details in
the following sections highlight the steps to accomplish this.
On UTAH-TSM2
We follow these steps:
1. def devc dbb_file devt=file maxcap=20G DIR=/tsmstg/dbb
2. set dbrecovery dbb_file
Example 6-11 AIX custom logical volume and JFS2 file systems for TSM2 on the DR system Vermont
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/dbdir5lv 2097152 2096504 1% 4 1% /tsm2/dbdir001
/dev/dbdir6lv 2097152 2096504 1% 4 1% /tsm2/dbdir002
/dev/dbdir7lv 2097152 2096504 1% 4 1% /tsm2/dbdir003
/dev/dbdir8lv 2097152 2096504 1% 4 1% /tsm2/dbdir004
/dev/actlog1lv 9371648 9369888 1% 4 1% /tsm2/actlog
/dev/actlog1mlv 9371648 9369888 1% 4 1% /tsm2/activelogm
/dev/archlog1lv 18808832 18805632 1% 4 1% /tsm2/archlog
2. For this setup, we are configuring the instance ‘tsm2’ as the name of our DR instance
(utah-TSM2). The tasks involve the creation of the group, user ID, and home directory for
our Tivoli Storage Manager instance, as shown in Example 6-12.
3. Next, log in using the user ID and password; you will be prompted to change the password
for the user ID, as shown in Example 6-13.
Example 6-13 Logging into AIX and changing the password for the new tsm2 instance
You must change your password now and login again!
Changing password for "tsm2"
tsm2's Old password:
tsm2's New password:
4. Next, we edit the .profile file to match the one in the existing TSM1 instance, or copy that
file, to maintain environment consistency between the instances. When performing this
step, ensure that any instance-specific path details are altered for the new instance.
5. Change the ownership of the newly created file system mount points. Changing the
ownership of the “tsm2” mounts and directories can be accomplished by issuing the
commands shown in Example 6-14.
# cd /
6. Create a new server options file (dsmserv.opt) by either referencing the recovery plan file,
or copy the existing instance TSM1 server option file, and edit any details required. Refer
to the IBM Tivoli Storage Manager for AIX Installation Guide V6.1, GC23-9781 for more
information regarding dsmserv.opt parameters and settings.
7. Increase the size of the instance directory for TSM2, which in this case is the /home file
system. Each instance requires approximately 420 MB of additional space.
8. The next step is to log out of AIX as the root user, and log in as the instance user ID
(tsm2). Upon completing this step, you will find that an environment for DB2 has been
established, as discussed in the previous steps.
9. Next update the default directory for the database to reflect the instance directory, by
running the db2 update command as shown in Example 6-15.
Example 6-15 Setting the default directory for the database to be the same as the instance directory
$ pwd
/home/tsm2
$ db2 update dbm cfg using dftdbpath /tsm2
DB20000I The UPDATE DATABASE MANAGER CONFIGURATION command completed
successfully.
10. Preparing the DB2 database and recovery logs is our next step. The db2icrt command
updates the instance details within DB2. The command is run as shown in
Example 6-16.
Example 6-16 Command to configure a Tivoli Storage Manager V6.1 server instance
11. Following the successful formatting, the next step is to start the Tivoli Storage
Manager V6.1 server in the foreground. To perform this, we use the new parameters
provided in the dsmserv command, as shown in Example 6-17.
Example 6-17 Starting the Tivoli Storage Manager V6.1 server instance in the foreground
$ /opt/tivoli/tsm/server/bin/dsmserv -u tsm2 -i /home/tsm2
2. Then log out of the instance and log back in, or re-read the ~/.profile file.
3. Next, create a file called tsmdbmgr.opt in the /home/tsm2 directory, and add the lines
shown in Example 6-19.
servername TSMDBMGR_TSM2
commmethod tcpip
tcpserveraddr localhost
tcpport 1500
passwordaccess generate
passworddir /home/tsm2
errorlogname /home/tsm2/tsmdbmgr.log
nodename $$_TSMDBMGR_$$
*************************************************************
* Tivoli Storage Manager *
* API Version = 6.1.0 *
*************************************************************
Enter your current password:
Enter your new password:
Enter your new password again:
This db2 update command establishes the DB2 configuration for the tsmdb1 database. It can
be reviewed for the TSM2 instance by running the db2 get snapshot for database on
TSMDB1 command from the AIX command line, as shown in Example 6-22, to query the
TSMDB1 database for the TSM2 server instance.
Example 6-22 db2 get snapshot for database on TSMDB1 DB2 command
$ db2 get snapshot for database on TSMDB1 |grep tsm2
Database path = /home/tsm2/tsm2/NODE0000/SQL00001/
Automatic storage path = /tsm2/dbdir001
Automatic storage path = /tsm2/dbdir002
Automatic storage path = /tsm2/dbdir003
Automatic storage path = /tsm2/dbdir004
6. Next, stop the running Tivoli Storage Manager TSM2 instance by using the dsmadmc
command or the ISC and Administration Center.
7. Drop the DB2 database tsmdb1 created by the configuration process we just followed.
Use the DSMSERV REMOVEDB command to drop the TSMDB1 database from DB2, as
shown in Example 6-23.
Example 6-23 Dropping the DB2 database created for the DR instance.
$ dsmserv removedb TSMDB1
8. Next, ensure that the active log path and recovery directory are empty prior to the restore,
as shown in Example 6-24.
Example 6-24 Ensure the active log path and recovery directory are empty
$ rm -r /tsmstg/recoveryd/*
$ rm -r /tsm2/actlog/*
$ rm -r /tsm2/dirdb001/*
10. Now, review the output of the restore results, as shown in Example 6-26.
Example 6-27 Output of a recovered AIX Tivoli Storage Manager V6.1 server on a DR instance
Tivoli Storage Manager for AIX
Version 6, Release 1, Level 2.0
TSM:UTAH-TSM1>
As highlighted in the output in Example 6-27, three processes are started for the server
deduplication function, which has been recovered at the DR site.
Summary
In this section we have shown a disaster recovery scenario for a V6.1.2 Tivoli Storage
Manager server instance running on Utah as TSM1, which is the deduplication instance in our
test lab. The instance was recovered on the DR server Vermont, into a newly created
instance called TSM2. An existing TSM1 was already in place there, and the intent of this
exercise was to demonstrate that the DR target can be any system, with any path structure.
How does deduplication affect the use of copy storage pools and disaster recovery (DRM)
planning and management? Tivoli Storage Manager V6 introduced data deduplication that
might have planning or functional considerations relating to how a given server is managed
and, in particular, how it performs disaster recovery management (DRM). Our discussion
examines this in two primary ways. The first is to discuss typical DRM considerations without
data deduplication in the mix. And then, with the DRM foundation set, we discuss the
implications of data deduplication to those DRM activities.
6.4.1 Data life cycle for a Tivoli Storage Manager server and DRM
In the following topics, we describe a general view of the data life cycle within a Tivoli Storage
Manager server after it has been stored (either archived or backed up).
Storage pool backup is performed to one or more copy storage pools. This is done to provide
for on site and off site copies of the data depending upon how an administrator chooses to
manage the server. There are two primary possibilities here.
In the first case, there is a single copy storage pool being used. If this is the case, it provides
a single duplicate copy of the data, and if it is being used for off site protection/safety
purposes, then it is also being rotated or transitioned off site to support that.
In the second case, there are multiple copy storage pools being used for a given set of data.
The general model in this case is that a copy storage pool copy of the data is being used to
keep an on site (local) duplicate copy of the data to protect from media/device failure for the
primary storage pool. The second or other copy storage pools are then used for off site
protection by rotating those volumes to an off site location, in order to protect from a major
disaster that results in the loss of the primary data center or server hardware and devices.
Finally, movement of volumes for off site storage can now be performed. If using the Tivoli
Storage Manager DRM feature, this would be the MOVE DRMEDIA command and related
processing. The idea here is to take the current database backup along with matching
storage pool backup volumes and remove them from the primary location for the purpose of
transporting them to the off site data protection location.
Note that the sequence listed above is not absolute. For example, if simultaneous write is
used when the data is stored, a copy storage pool copy of the data is created in parallel
with the store of the data into the primary storage pool. Alternatively, migration might not be
used in your configured environment; you might simply be storing to a primary location and
then creating copy storage pool copies from that, without any actual storage hierarchy in
place.
If two copy storage pools are being used, one for on site and one for off site, then we also
have populated and managed an on site pool that protects against media/device failure by
having on site and available duplicate copies of the data.
Then as data is expired based on policy retention values, reclaimable space is created on
those copy storage pool volumes (on site and off site). This reclaimable space can then be
reclaimed in one of two ways:
1. If the copy storage pool volumes are on site, meaning that they have not been moved to
an off site location and the server knows that these volumes are locally available, the
volumes will be reclaimed as a normal part of reclamation processing. The volumes will be
reclaimed by moving the available, still referenced, data to other volumes in the same
storage pool based on the percentage reclaimable space setting for that storage pool.
2. If the copy storage pool volumes are off site, meaning that they have been moved to an off
site location and the server knows that these volumes are not locally available, the
volumes will be reclaimed using the server's off site reclamation processing. This
processing will create new representations of these volumes to take off site while all
references to the data on the off site volume will be removed. This off site reclamation
processing avoids data movement or copying of data from an off site location by relying
upon copying data from locally resident primary storage pool locations.
6.4.2 Data life cycle for a Tivoli Storage Manager server and DRM including
deduplicated storage pools
There are a number of operational considerations when using deduplicated primary or copy
storage pools. One key consideration, though, is that deduplication is only allowed for storage
pools using a device class of DEVTYPE=FILE. As such, deduplicated copy storage pools do
not lend themselves to use by DRM for the off site protection of the Tivoli Storage Manager
server.
Note: This approach does not provide for an off site copy of the data or the use of Tivoli
Storage Manager's DRM feature. If the primary product server or data center were
damaged or destroyed, this might result in the loss of data or inability to recover that data.
The recovery time of any of the Seven Tiers of Disaster Recovery solutions is very much
dependent on the following considerations:
Recovery of the IT infrastructure
Recovery time for the data availability
Restoring the operational processes
Restoring the business processes
These Seven Tiers of Disaster Recovery solutions offer a simple methodology of how to
define your current service level, and to identify the target service level and the required
environment to meet your recovery requirements.
As a result there is less data loss when a disaster is declared. The transition from the state of
idle off site storage to becoming the primary recovery source is much faster than Tier 1 and
Tier 2, because the data is physically loaded and ready. This methodology is reliable and very
predictable for recovery times. Automation can be built in, and there is significantly less
manpower required for annual or semi-annual testing.
Normally all the components that make up continuous availability are situated in the same
computer room. The building, therefore, becomes the single point-of-failure. While you must
be prepared to react to a disaster, the solution you select might be more of a recovery
solution than a continuous-availability solution. A recovery solution must then be defined by
making a trade-off among implementation costs, maintenance costs, and the financial impact
of a disaster. These will all be reviewed as a result of performing a business impact analysis
of your business as part of a larger Business Continuity Plan.
For more information about Business Continuity planning, see IBM System Storage Business
Continuity: Part 1 Planning Guide, SG24-6547, and IBM System Storage Business
Continuity: Part 2 Solutions Guide, SG24-6548.
Deduplication is a technique that allows more data to be stored on a given amount of media
than would otherwise be possible. It works by removing duplicates in the stored version of
your data. In order to do that, the deduplication system has to process the data into a slightly
different form. When you need the data back, it can be reprocessed back into the same form
in which it was originally submitted.
Deduplication and compression are closely related, and the two often work in similar ways,
but the size of the working set of data for each is different. Deduplication works against large
data sets, compared to compression (for example, real-world LZW compression often only
has a working data set under 1 MB, compared to deduplication, which is often implemented to
work in the range of 1 TB to 1 PB). With deduplication, the larger the quantity being
deduplicated, the more opportunity exists to find similar patterns in the data, and the better
the deduplication ratio can theoretically be, so a single store of 40 TB would be better than
five separate data stores of 8 TB each.
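The effect of store size can be illustrated with a toy calculation in Python (a hedged sketch with made-up chunk lists, not real workload data): chunks shared across two data sets only deduplicate when both sets land in the same store.

```python
import hashlib

# Count unique chunks across one or more data sets, as a deduplicating
# store would. Chunks shared between sets collapse only when the sets
# share a store.
def unique_chunks(*datasets):
    return len({hashlib.sha1(c).digest() for d in datasets for c in d})

common = [b"shared-pattern-%d" % i for i in range(8)]
set_a = common + [b"a-only-1", b"a-only-2"]
set_b = common + [b"b-only-1", b"b-only-2"]

separate = unique_chunks(set_a) + unique_chunks(set_b)  # two small stores
combined = unique_chunks(set_a, set_b)                  # one large store
print("separate stores keep", separate, "chunks; one store keeps", combined)
```

The two separate stores keep 20 chunks in total, while the single combined store keeps only 12, because the 8 shared patterns are detected.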
Deduplication is effective with many, but not all workloads. It requires that there are
similarities in the data being deduplicated: For example if a single file exists more than once
in the same store, this could be reduced down to one copy plus a pointer for each
deduplicated version (this is often referred to as a “Single Instance Store”). Some other
workloads such as uncompressible and non-repeated media (JPEGs, MPEGs, MP3, or
specialist data such as geo-survey data sets) will not produce significant savings in space
consumed. This is because the data is not compressible, has no repeating segments, and
has no similar segments.
In many situations, deduplication works better than compression against large data sets,
because even with data that is otherwise uncompressible, deduplication offers the potential to
efficiently store duplicates of the same compressed file.
To sum up, deduplication typically allows for more unique data to be stored on a given
amount of media, at the cost of the additional processing on the way into the media (during
writes) and the way out (during reads).
In order to process data quickly, many storage techniques use hash functions. A hash
function is a process that reads some input data (also referred to as a chunk), and returns a
value that can then be used as a way to refer to that data. An example of this is demonstrated
using the AIX csum command. We are able to return hash values for a given file with more
than one algorithm. The real-world application for csum is to check that a file has been
downloaded properly from a given Internet site, provided that the site in question has
published the MD5 checksum, as shown in Example 7-1.
Example 7-1 Hash values of two commonly used functions against the same file
# csum -h MD5 tivoli.tsm.devices.6.1.2.0.bff
05e43d5f73dbb5beb1bf8d370143c2a6 tivoli.tsm.devices.6.1.2.0.bff
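The same idea can be sketched with Python's hashlib module (the byte string below is a placeholder for real file contents, not the actual fileset):

```python
import hashlib

# Compute MD5 and SHA-1 digests of the same content, mirroring what
# csum -h MD5 <file> (and csum -h SHA1 <file>) report on AIX.
data = b"placeholder file contents"

print("MD5 :", hashlib.md5(data).hexdigest())   # 32 hex digits (128 bits)
print("SHA1:", hashlib.sha1(data).hexdigest())  # 40 hex digits (160 bits)
```

Any change to the input bytes produces completely different digests, which is what makes a hash useful as a short reference to a chunk of data.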
A typical method of deduplication is to logically separate the data in a store into manageable
chunks, then produce a hash value for each chunk, and store those hash values in a table.
When new data is taken in (ingested) into the store, the table is then compared with the hash
value of each new chunk coming in, and where there’s a match, only a small pointer to the
first copy of the chunk is truly stored as opposed to the new data itself.
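The chunk-and-hash-table scheme just described can be sketched in a few lines of Python. This is a simplified illustration with a fixed chunk size and names of our own choosing, not the implementation of any particular product:

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split the input into fixed-size chunks, keep one copy of each
    unique chunk keyed by its SHA-1 hash, and record only references."""
    store = {}        # hash -> chunk bytes (each unique chunk stored once)
    references = []   # ordered list of hashes describing the full data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha1(chunk).hexdigest()
        if h not in store:       # new pattern: store the chunk itself
            store[h] = chunk
        references.append(h)     # duplicates cost only a small pointer
    return store, references

def reassemble(store, references):
    return b"".join(store[h] for h in references)

# Ten identical 4 KB blocks: stored once, referenced ten times.
data = b"A" * 4096 * 10
store, refs = deduplicate(data)
assert reassemble(store, refs) == data
print(len(store), "unique chunk(s) for", len(refs), "references")
```

For this input, ten references point at a single stored chunk, which is the mechanism behind the space savings discussed in this chapter.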
Typical chunk sizes could be anywhere in the range of 2 KB to 4 MB, although theoretically
any chunk size could be used. There is a trade-off to be made with chunk size: a smaller
chunk size means a larger hash table, so if we use a chunk which is too small, the size of the
table of hash pointers will be large, and could outweigh the space saved by deduplication. A
larger chunk size means that in order to gain savings, the data must have larger sections of
repeating patterns, so while the hash-pointer table will be small, the deduplication will find
fewer matches.
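The hash-table side of this trade-off is easy to put numbers on. The sketch below assumes a hypothetical 60 bytes per table entry (20 bytes of SHA-1 digest plus 40 bytes of assumed pointer and length metadata; real products differ):

```python
# Rough illustration of the chunk-size trade-off: for the same store,
# a smaller chunk means many more hash-table entries.
SHA1_BYTES = 20
ENTRY_OVERHEAD = 40          # assumed per-entry metadata, for illustration
store_size = 1 << 40         # a 1 TB store

for chunk_size in (2 * 1024, 256 * 1024, 4 * 1024 * 1024):
    entries = store_size // chunk_size
    table_bytes = entries * (SHA1_BYTES + ENTRY_OVERHEAD)
    print(f"{chunk_size:>9} B chunks -> {entries:>12,} entries, "
          f"table ~{table_bytes / (1 << 30):.1f} GiB")
```

Under these assumptions, 2 KB chunks on a 1 TB store need roughly 30 GiB of table, while 4 MB chunks need well under 1 GiB, at the cost of finding far fewer matches.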
The hashes used in deduplication are similar to those used for security products; MD5 and
SHA-1 are both commonly used cryptographic hash algorithms, and both are used in
deduplication products, along with other more specialist customized algorithms.
With any hash, there is a possibility of a collision, which is the situation when two chunks with
different data happen to have the same hash value. This possibility is extremely remote: in
fact the chance of this happening is less likely than the undetected, unrecovered hardware
error rate.
Other methods exist in the deduplication technology area that are not hash based, and so do
not have any logical possibility of collisions. One such method is called HyperFactor; this is
implemented in the IBM ProtecTIER® storage system.
The classic example of a deduplication engine’s prowess is with data like that contained in an
e-mail system. If we send an uncompressed 1 MB attachment to 100 people, the copies
would take up 100 MB on the e-mail server, plus the 1 MB “sent” copy in our sent folder. The
e-mail server would need 101 MB of free space for us to send that e-mail. When we come to
back up the e-mail server, we would separately back up all 101 copies as though unrelated,
using 101 MB of space.
If we were doing this with a deduplicating data store as the target, we would probably
consume less than 1 MB on deduplicated storage, depending on how deduplicatable the
original 1 MB attachment was. If we assume that it had 50% repeating patterns inside, the
store would hold roughly 0.5 MB, for a ratio in the region of 200:1.
Taking a very simple, traditional approach to backup (taking full backups daily) results in a
high proportion of redundancy in the data stored, which traditionally meant lots of tapes. On
tape we might be looking at 20 or more copies of the same files, each containing the same
redundancies. In the example above, we could multiply the 200:1 ratio by nearly 20 in such a
case. Deduplicating that sort of system is a very good idea indeed, from a space-savings
point of view: the ratios achieved would be high, the space savings large.
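The arithmetic behind the e-mail example can be laid out explicitly. All of the figures below are the chapter's illustrative numbers, not measurements:

```python
# Worked numbers for the 1 MB attachment sent to 100 people.
attachment_mb = 1
recipients = 100
copies = recipients + 1                    # 100 delivered + 1 "sent" copy

undeduplicated_mb = copies * attachment_mb # backed up as unrelated files
deduplicated_mb = attachment_mb * 0.5      # assuming 50% internal repetition
ratio = undeduplicated_mb / deduplicated_mb

print(f"{undeduplicated_mb} MB stored traditionally")
print(f"~{deduplicated_mb} MB in a deduplicating store -> about {ratio:.0f}:1")

# With ~20 daily full backups each repeating the same redundancy,
# the effective ratio grows roughly 20-fold.
print(f"effective ratio over 20 fulls: about {ratio * 20:.0f}:1")
```

The 101 MB against roughly 0.5 MB is where the approximately 200:1 ratio quoted above comes from; multiplying by 20 retained full backups pushes the effective ratio into the thousands.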
With Tivoli Storage Manager, IBM has always endeavoured to use storage more intelligently.
The progressive incremental backup method reduces the duplication inherent in backups, so
when we look at equivalent ratios for Tivoli Storage Manager backup data, they do not flatter
the deduplication equipment so much. What we are really seeing here is how much more
efficient Tivoli Storage Manager is at avoiding duplicates in the first place, than the traditional
approach. We have tried to avoid unnecessary duplication, and in some ways Tivoli Storage
Manager is still more efficient: for example, doing full backups every day still requires the
processor, disk and network resources to move all the data to the deduplication system. With
Tivoli Storage Manager and progressive incremental backups, we avoid reading a lot of that
data, so we avoid using the resources. Add subfile backups to the solution, and we only move
the parts of the files that have changed, further reducing the redundancy, before the data ever
gets to the Tivoli Storage Manager Server.
Tivoli Storage Manager has contained a duplication avoidance strategy since its inception as
WDSF in 1990—the progressive incremental backup methodology. This reduces the amount
of duplicates for backup data coming into the server, although in a fairly simple fashion. It only
backs up files that have changed—for example, one can simply change the modification date
of a file and Tivoli Storage Manager will need to back it up again. In terms of effect on stored
data, this is similar to data deduplication at the file level—we are reducing the redundant data
at source by not backing up the same file content twice.
Since Tivoli Storage Manager 4.1, there has been a feature called adaptive subfile backup.
This allows for the blocks of data changed within a file to be sent over the network to the Tivoli
Storage Manager Server, as opposed to all the blocks: essentially like a block-level
incremental backup. As such, it forms another type of duplication avoidance. It has some
limitations—it currently only works with files up to 2 GB, and the reconstruction of the data
during restore causes additional workload on the Tivoli Storage Manager Server over regular
incremental workloads. It is most useful for backups where the client has very limited network
access to the Tivoli Storage Manager Server, such as a branch office or a mobile device.
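The idea behind block-level duplicate avoidance can be sketched in Python. This is a simplified illustration in the spirit of adaptive subfile backup, not the actual client implementation: compare a file's blocks against the previous version's block hashes and "send" only the blocks that changed.

```python
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    """One SHA-1 digest per fixed-size block of the file."""
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Indexes of blocks in `new` whose hash differs from `old`."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

previous = b"A" * BLOCK * 8
current = previous[:BLOCK * 3] + b"B" * BLOCK + previous[BLOCK * 4:]

to_send = changed_blocks(previous, current)
print("blocks to transmit:", to_send)   # only the one modified block
```

Only one of the eight blocks crosses the network, which is why this approach suits clients with very limited bandwidth.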
Tivoli Storage Manager V6.1 is capable of deduplicating data at the server. It performs
deduplication out of band, in Tivoli Storage Manager server storage pools. Deduplication is
only performed on data in FILE (sequential disk) devtype storage pools—it does not
deduplicate DISK (random disk) storage pools, or tape storage pools. In addition, data
deduplication has to be enabled by the Tivoli Storage Manager administrator on each pool
individually, so it is possible to deduplicate those types of data which will benefit most, as
opposed to everything. There is no requirement for it to be enabled for all pools.
If Tivoli Storage Manager V6.1 had implemented deduplication with a fixed chunk size, the
statistical probability of a collision would be on the order of 50% at 2^80 hashes. Because
Tivoli Storage Manager varies its chunk size and uses that as part of the comparison, the
possibility is made even more remote (less than a 50% probability at 2^100 hashes),
whereas a single Tivoli Storage Manager server is currently architecturally capable of storing
2^63 bitfile objects, a number itself unreachable because of limitations in other places (for
example, storage pool space to store the data from so many objects). The probability of a
collision is therefore very remote indeed. The average chunk size is typically about 250 KB,
although Tivoli Storage Manager V6.1 varies this as needed between 2 KB and 4 MB.
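The 50% figure comes from the standard birthday-bound approximation for hash collisions; the sketch below is that textbook formula, not a description of the server's internals. With a b-bit hash and n stored chunks, P(collision) is approximately 1 - exp(-n² / 2^(b+1)):

```python
import math

# Birthday-bound approximation for hash collisions. For SHA-1
# (hash_bits = 160), 2^80 chunks already gives a ~39% chance of at
# least one collision; 50% is reached just above 2^80.
def collision_probability(chunks_power_of_two: int, hash_bits: int) -> float:
    n = 2.0 ** chunks_power_of_two
    space = 2.0 ** hash_bits
    return 1.0 - math.exp(-(n * n) / (2.0 * space))

p = collision_probability(80, 160)
print(f"P(collision) at 2^80 SHA-1 hashes ~ {p:.2f}")
```

Since a single server is bounded far below 2^63 objects, the operating point sits many orders of magnitude to the left of this curve, which is why the collision risk is described as remote.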
Before Tivoli Storage Manager chunks the data at a bit file object level, it calculates an MD5
of all the objects in question, which are then sliced up into chunks. Each chunk has an SHA1
hash associated with it, which is used for the deduplication. The MD5s are there to verify that
objects submitted to the deduplication system are reformed correctly, because the MD5 is
recalculated and compared with the saved one to ensure that returned data is correct.
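The two-level hashing described above can be sketched as follows. This is a hedged illustration with a fixed chunk size and our own function names; the server actually varies its chunk sizes:

```python
import hashlib

CHUNK = 8192

def ingest(obj: bytes):
    """MD5 over the whole object for end-to-end verification, plus a
    SHA-1 digest per chunk for deduplication."""
    whole_md5 = hashlib.md5(obj).hexdigest()
    chunks = [obj[i:i + CHUNK] for i in range(0, len(obj), CHUNK)]
    chunk_sha1s = [hashlib.sha1(c).hexdigest() for c in chunks]
    return whole_md5, chunks, chunk_sha1s

def retrieve(whole_md5, chunks):
    """Reassemble the chunks and recompute the MD5 to confirm that the
    returned object matches what was originally stored."""
    obj = b"".join(chunks)
    assert hashlib.md5(obj).hexdigest() == whole_md5, "corrupt reassembly"
    return obj

obj = bytes(range(256)) * 100
md5, chunks, sha1s = ingest(obj)
assert retrieve(md5, chunks) == obj
print(len(chunks), "chunks verified against object MD5")
```

The per-chunk SHA-1 values drive deduplication, while the saved MD5 is only consulted at retrieval time to prove the object was reformed correctly.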
Table 7-1 shows an overview of data deduplication in Tivoli Storage Manager V6.1.
The answer depends on the size of the Tivoli Storage Manager solution being designed, and
the availability (or cost) of alternative deduplication products. Tivoli Storage Manager V6.1’s
deduplication system is implemented in the server code, and is included as a regular, base
feature of Tivoli Storage Manager Enterprise Edition so there is no additional license cost for
it. As with any software, there is a requirement to supply the correct hardware resources to
make it perform as required. When sizing a system such as this, we have to remember to
allocate processor resources, and also database space to the deduplication effort.
Experienced administrators already know that database expiration was one of the more
processor-intensive activities on a Tivoli Storage Manager server. Expiration is still processor
intensive, albeit less so in Tivoli Storage Manager V6.1, but it is now second to deduplication
in terms of processor cycle consumption. Calculating the MD5 hash for each object and the
SHA-1 hash for each chunk is a processor-intensive activity.
In order to store the hash table that allows deduplication to work, one must also consider
extra Tivoli Storage Manager database and log space because this is where the hashes are
stored after they are calculated. Depending on the amount of data being deduplicated, this
could in some cases double the required space in the database, so it is something that really
requires consideration.
Sizing deduplication
Sizing the deduplication in Tivoli Storage Manager versus externally in a VTL or similar
method is a judgement related to the system performance required.
For environments with many Tivoli Storage Manager Servers, or with larger amounts of data,
we recommend looking at platforms such as the IBM ProtecTIER product as best practice.
Apart from scaling to very large sizes (at the time of writing, a single ProtecTIER cluster
handles 1PB of real (non-deduplicated) storage at access rates over 900 MB/sec), it also
opens the door to deduplication between separate Tivoli Storage Manager Servers,
something Tivoli Storage Manager’s internal deduplication does not currently allow, as well as
other benefits like inline deduplication, IP-based replication of stores and so on.
An additional recommendation is to use storage media with fast access characteristics for
Tivoli Storage Manager deduplication: if deduplication performance is an issue, we
recommend SSD or SAS/FC over SATA disks. The I/O profile seen during deduplication
processing is mostly random reads, so optimizing for random read performance has the best
effect on deduplication.
In order to allow deduplication, it is important that the Tivoli Storage Manager client does not
encrypt the data before it comes to the Tivoli Storage Manager server. Should this happen,
deduplication would not achieve any useful results: on the contrary, it would simply waste
processor resources, IO resources, and database space. For this reason, Tivoli Storage
Manager has been improved so that encrypted files are marked as such in the Tivoli Storage
Manager database, and not processed for deduplication.
Customers who have Tivoli Storage Manager clients that are required to encrypt data (for
example, branch offices in an organization) are recommended to continue to store that data
in the current manner for the moment, using a storage pool without deduplication enabled.
Data can still be encrypted on tape drives supporting those features as they currently are,
unaffected by deduplication.
For customers who use client-side compression (for example, due to limited network
bandwidth), deduplication on the server will not be as effective as it would be if the data were
not compressed, but it still works surprisingly well with most compressed files. The Tivoli
Storage Manager client implements compression using an LZW-based algorithm with a 32 KB
working window, so where the changes are outside this window, there are good possibilities
for the Tivoli Storage Manager server to be able to deduplicate chunks of similar compressed
files.
There are new server options and commands available to control deduplication on Tivoli
Storage Manager V6.1 servers. There is an option available to ensure that only data which is
already backed up into a Tivoli Storage Manager copy storage pool can be deduplicated. If
this is set to “yes” (which is the default), then data will only be processed for deduplication if it
has been safeguarded previously by a storage pool backup.
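The behavior described corresponds to the DEDUPREQUIRESBACKUP server option. The value shown here is a sketch; verify the option against the Administrator's Reference for your server level:

```
setopt deduprequiresbackup yes
```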
Now we must set up a file-type device class. In the example, this is called “dedupe,” although
we could have named it any way we wanted as with any other device class. We then create a
new primary sequential storage pool called “dd”, on our new device class. Note that during
the creation of this, we specify the deduplicate=yes parameter, and the number of identify
processes along with the usual storage pool creation parameters (see Example 7-3).
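The device class and storage pool definitions described can be sketched as follows; the directory path, capacity, and process count are examples, not recommendations:

```
define devclass dedupe devtype=file directory=/tsmdata/dedupe maxcapacity=4g mountlimit=20
define stgpool dd dedupe maxscratch=100 deduplicate=yes identifyprocess=2
```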
We have set up an example with a regular random disk storage pool with 10 GB of space
available, under the default “backuppool” storage pool. We put 3.7 GB of backup data into it,
as shown in Example 7-4.
Now we migrate from “backuppool” to “dd” using the “nextstg” parameter on the backuppool
pointing to our new “dd” pool.
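The migration described can be driven with commands along these lines; setting the migration thresholds to zero to force the pool to drain is an illustrative choice:

```
update stgpool backuppool nextstgpool=dd
update stgpool backuppool highmig=0 lowmig=0
```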
In terms of how deduplication processing works on the Tivoli Storage Manager server, it
looks slightly different compared with other processes. Running the QUERY PROCESS
command on a server where a storage pool has deduplication enabled will always return
some identify processes. From Tivoli Storage Manager V6.1.2 onward, these are called
Identify Duplicates processes.
These processes look for chunks of duplicated data, and there can be from one to twenty of
them per storage pool. If there is no deduplication backlog, the deduplication processes will
show up as idle. The deduplication processes are different because they stay resident—they
do not terminate when they finish work, instead, they go to an “idle” mode until they are
needed again, as shown in Example 7-6.
Example 7-6 Deduplication “Identify” Processes in idle mode, Tivoli Storage Manager V6.1.0
tsm: UTAH-TSM1>q pro
In addition to these new processes, deduplication has some effects on already existing
processes. When Tivoli Storage Manager deduplication runs, it dereferences objects no
longer required by Tivoli Storage Manager in a similar way as expiration does, and in
common with expired objects on sequential media, we must run reclamation in order to
recover the space on those volumes. As part of an administrative schedule or maintenance
plan, we would usually run reclamation after deduplication.
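In a maintenance script, reclamation of the deduplicated pool might therefore follow the identify processing, for example (the threshold and duration values are illustrative):

```
reclaim stgpool dd threshold=60 duration=120
```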
The size of this table is proportional to the number of chunks and objects that Tivoli Storage
Manager is processing. If we have a Tivoli Storage Manager server with a 30 GB DB, and we
deduplicate every object in storage (because all of our objects are on a file-based storage
pool), we might end up with a 70 GB Tivoli Storage Manager database, so it is important to
have available database and log space when turning on deduplication in a live system.
Disabling deduplication does not delete the hash information for objects already chunked, so
there is no way of simply going back to the 30 GB version of our database. In order to do that,
we would have to remove the existing backup data either by expiring it, or by backing up the
data to new nodes and then deleting the relevant file spaces. Even if we delete file spaces
(thus de-referencing the objects in storage), Tivoli Storage Manager will still spend some time
resolving the deduplication hashes in the background. This takes time—for example, it might
take up to a day for changes to filter through on a very busy system.
The no-query restore (NQR) protocol reduces the memory requirement on Tivoli Storage
Manager clients during restore and is optimized for large file systems. In installations with
clients backing up very large file systems, or with backup data spread over many volumes,
performance issues were reported when the NQR technique was used and only very few files
qualified as restore candidates.
The changes made to the NQR restore process, by exploiting the DB2 database, address
these issues.
Phase 1 and phase 2, with the proprietary NQR implementation, might run simultaneously.
If a restore originates from a pre-Tivoli Storage Manager V6.1 storage agent, the database
queries are translated to access Restore.Srvobj. The new NQR process still supports
restartable restores, but information is stored so that restarts occur only at a volume
boundary.
Note: During database migration from Tivoli Storage Manager Version 5 to Version 6,
restartable restores are not migrated.
Next we provide examples from a synthetic test restoring 178 objects from 16-node backups
of 86.655 million 1 KB files, across 230 file storage volumes. The numbers were collected
under lab conditions; numbers for production servers will of course differ.
Again, note that these numbers were collected for a synthetic test. Values for a production
environment will vary depending on server load, object distribution, and other effects that
might impact performance.
8.2 Summary
With the transition to DB2 and the changes made to the NQR process, you get comparable
restore performance without having to configure anything.
Search thread
{
    scan Expiring.Objects table, oldest entry forward
    apply policies to each entry
    if entry eligible for deletion {
        add entry to batch
    }
    if batch full {
        dispatch to Deletion thread
    }
}

Deletion thread
{
    wait for dispatch
    while item in batch {
        delete object
    }
}
One significant change for Tivoli Storage Manager V6.1 and the inventory tables is the
elimination of the Expiring.Objects database table. This table was used to represent objects
that could be expired (deleted) by the expiration process. This table duplicated information
from the base Archive.Objects and Backup.Objects tables. Similarly, the existing expiration
algorithm and processing was tightly coupled to the layout of this table and the organization of
the data within this table.
As an example, the expiration processing relied upon the ordering of records in order to skip
(exclude) records that could not be expired based on current policy settings. The algorithm
and processing of the expiration code itself was coded to exploit this data ordering and some
existing (proprietary) database capability to exclude objects on a fetch records request.
Not only has the dispatch thread logic changed to take advantage of the new server database
schema; you can now also define how many threads to allow for expiration with the
RESOURCE parameter.
>>-EXPIre Inventory--+-------------------+---------------------->
                     '-Quiet--=--+-No--+-'
                                 '-Yes-'

   .-Wait--=--No------.  .-SKipdirs--=--No------.
>--+------------------+--+----------------------+--------------->
   '-Wait--=--+-No--+-'  '-SKipdirs--=--+-No--+-'
              '-Yes-'                   '-Yes-'

   .-Node--=--*---------------.
>--+--------------------------+--------------------------------->
   '-Node--=--node1,node2,...-'

>--+------------------------+----------------------------------->
   '-DOmain--=--domainName--'

   .-REsource--=--4------.
>--+---------------------+-------------------------------------->
   '-REsource--=--value--'

   .-Type--=--ALl---------.
>--+----------------------+------------------------------------->
   '-Type--=--+-ALl-----+-'
              |-ARchive-|
              |-Backup--|
              '-Other---'

>--+-----------------------+----------------------------------><
   '-DUration--=--minutes-'
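For example, an invocation built from the syntax above might look like the following; the node pattern, thread count, and duration are illustrative:

```
expire inventory node=DEPT_A* type=backup resource=8 duration=60
```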
Table 9-1 documents different combinations for the new parameters to the expiration
command, valid and invalid ones, and the expected results. Use the examples as a starting
point to review your current expiration schedules.
Table 9-1 Sample expiration process NODE and DOMAIN parameter combinations
Node value Domain value Result
NODE=DEPT_A* DOMAIN=xxx All nodes matching the pattern DEPT_A* that are
assigned to domain XXX will be processed.
From the Administration Center’s Tivoli Storage Manager view (see Figure 9-3), you want to
invoke expiration for server UTAH_TSM1. Select the target server from the Tivoli Storage
Manager view and click Server Maintenance.
Here you are looking at the expiration task only, so after the wizard’s Welcome panel, you are
guided through the definitions for database backup and storage pool migration before you are
presented with the Expire Stored Data window (see Figure 9-7). In this scenario we specify
the expiration process to run for 20 minutes with 8 threads. After this has completed, click
Next to continue.
For each node, file space, data type, and object type processed, the ANR0165I and ANR0166I
messages are reported, as shown in Example 9-3.
When expiration processes backup sets, as shown in Example 9-4, a different set of messages
is reported: ANR0190I and ANR0191I. The summary records are cut each time an ANR0166I
message is issued.
The output returned by the QUERY PROCESS command has changed; see Example 9-5.
Remember that the number of nodes reported includes in-flight nodes.
The key difference between Tivoli Storage Manager V6.1 and earlier versions of the product
is that the expiration command allows for many variations that did not previously exist, and
as such, the restart tracking model is extended to track restart position information using the
command itself as the identifier for that information. The restart position is only honored if the
expiration commands are exactly the same.
This gives you the ability to have an expiration model that much better fits your requirements.
For example, you can do expiration of archives weekly while performing expiration of backup
data daily. In the event that the expiration of the archive data does not complete in a single
session, the next weekly run picks up where it left off without the intervening expiration of
backup data processes interfering.
As an example, you might have three different expiration commands scheduled throughout
the week. Each of the three commands would have its own restart position, if it was cancelled
prior to completion:
If a command is re-issued and there is a matching restart position, expiration finishes
processing those in-flight nodes and then any other nodes that had not been processed
and that were candidates to be processed for that expiration run.
If restart data exists for a given expiration and the command has not been re-executed for
two weeks (14 days), the “stale” restart data is deleted such that the next time this
expiration command is run, it will start from the beginning.
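The weekly/daily split described above can be implemented with administrative schedules, for example (schedule names, times, and resource values are illustrative):

```
define schedule exp_backup_daily type=administrative active=yes starttime=06:00 period=1 perunits=days cmd="expire inventory type=backup resource=4"
define schedule exp_archive_weekly type=administrative active=yes starttime=07:00 period=1 perunits=weeks cmd="expire inventory type=archive resource=2"
```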
The deletion of the stale expiration restart information happens as a part of expiration
processing itself—at the end of the process while cleaning up the existing process and
information it checks for and deletes any stale restart information.
After all candidate nodes for that expiration have been processed, across however many
restarts that might be, the expiration process will complete and the restart tracking will be
cleaned up because there is no longer a restart position to track.
Example 9-6 shows the ANR4896I message issued in case of an expiration restart. We use the
domain parameter with the expire command to process only nodes for domain
OTHERDOMAIN.
Figure 9-9 explains the scenario we just described. For easier understanding, it assumes that
the expiration payload is the same for all nodes.
Now with all the new expiration parameters, you might wonder what happens if expiration is
trying to expire a single node from multiple processes in parallel. We tried to expire the same
node with two separate expiration commands and different parameters to make the
commands distinct. Example 9-7 shows that the condition is detected and only one process
expires the node’s data.
Example 9-7 Parallel expiration attempt for the same node
ANR2017I Administrator SERVER_CONSOLE issued command: EXPIRE INVENTORY
node=oldskool reso=1
ANR0984I Process 1 for EXPIRE INVENTORY started in the BACKGROUND at 10:26:02.
ANR0811I Inventory client file expiration started as process 1.
ANR2017I Administrator SERVER_CONSOLE issued command: EXPIRE INVENTORY
Then we tried the same procedure by defining a node group and adding the node we planned
to expire to that group. Example 9-8 shows that the condition is detected again; in addition,
you can see that you can invoke expiration on node groups defined to the server.
If you see the ANR4298I message reported in your activity log, it might be because the same
node and type are assigned to different expiration jobs.
For instance, if expiration retries a batch of files (such as 400), the expiration status message
looks like Example 9-9.
If the expiration process retries the same batch five times and still cannot acquire the
necessary locks, the expiration status message reflects this as shown in Example 9-10.
When this situation is encountered, expiration attempts to throttle back the number of files
contained within the deletion transaction batch in an attempt to delete as much as possible
and minimize risk of lock conflicts.
The expiration process is robust against any locking problems that it might encounter and will
retry the operation if a locking condition is met.
9.3 Summary
The Tivoli Storage Manager V6.1 enhancements to the server expiration process integrate
the new database schema and provide:
Improved efficiency, by abandoning the producer/consumer thread model. Each thread
involved examines and deletes its own candidate list independent of the other threads that
might be running. The workload is split up across the available number of threads
designated by the RESOURCE value on the command.
Improved flexibility, by giving you the ability to control for whom expiration is done and
what is expired.
The size of the aggregate depends on the sizes of the client files being stored, and the
number of bytes and files allowed for a single transaction. Two options affect the number of
files and bytes allowed for a single transaction. TXNGROUPMAX, located in the server
options file, affects the number of files allowed. TXNBYTELIMIT, located in the client options
file, affects the number of bytes allowed in the aggregate.
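As a sketch, the two options live in different files; the values shown here are examples only, not tuning advice:

```
* dsmserv.opt (server options file); files per transaction group
TXNGROUPMAX 512

* dsm.opt / dsm.sys (client options file); aggregate size limit in KB
TXNBYTELIMIT 25600
```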
A transaction is the unit of work exchanged between the client and server. The client program
can move multiple files or directories between the client and server before it commits the data
to server storage.
A transaction can contain multiple files or directories. This is called a transaction group. Using
the TXNGROUPMAX server option, you can specify the number of files or directories that are
contained within a transaction group. A larger value for the TXNGROUPMAX option can
affect the performance of client backup, archive, restore, and retrieve operations. You can
use the TXNGROUPMAX option to increase performance when Tivoli Storage Manager
writes to tape. This performance increase can be considerable when a user transfers multiple
small files.
If you increase the value of TXNGROUPMAX by a large amount, you need to monitor the
effects on the recovery log. A larger value can increase utilization of the recovery log, as well
as increase the length of time for a transaction to commit. Also consider the number of
concurrent sessions to be run. It might be possible to run with a higher TXNGROUPMAX
value with a few clients running. However, if there are hundreds of clients running
concurrently, you might need to reduce the TXNGROUPMAX to help manage the recovery
log usage and support this number of concurrent clients. If the performance effects are
severe, they might affect server operations. See “Monitoring the database space” on page 54
for more information.
Based on the previous two examples, five concurrent transactions with a TXNGROUPMAX
setting of 2000 consume significantly more space in the recovery log. This increase in log
space usage also increases the risk of running out of recovery log space.
Table 10-1 shows a comparison of the examples of the preceding TXNGROUPMAX settings.
This example becomes more significant if a given log record takes 100 bytes.
Table 10-1 Example of log bytes that are consumed by five concurrent sessions
TXNGROUPMAX setting   Number of log bytes consumed
TXNGROUPMAX=20        100,000
TXNGROUPMAX=2000      10,000,000
You should evaluate the performance and characteristics of each node before increasing the
TXNGROUPMAX setting. Nodes that have only a few larger objects to transfer do not benefit
as much as nodes that have multiple, smaller objects to transfer. For example, a file server
benefits more from a higher TXNGROUPMAX setting than does a database server that has
one or two large objects. Other node operations can consume the recovery log at a faster
rate. Be careful when increasing the TXNGROUPMAX settings for nodes that often perform
high log-usage operations. The raw or physical performance of the disk drives that are
holding the database and recovery log can become an issue with an increased
TXNGROUPMAX setting. The drives must handle higher transfer rates to handle the
increased load on the recovery log and database.
You can set the TXNGROUPMAX option as a global server option value, or you can set it for
a single node. Refer to the REGISTER NODE command and the server options in the
Administrator’s Reference. For optimal performance, specify a lower TXNGROUPMAX value
(between 4 and 512). Select higher values for individual nodes that can benefit from the
increased transaction size.
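The per-node override can be set at registration time or changed later, for example (node names, password, and values are illustrative):

```
register node filesrv01 secretpw txngroupmax=2048
update node dbsrv01 txngroupmax=256
```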
Do not expect the average number of logical files per physical file to be exactly the number
specified with the TXNGROUPMAX option. More factors are involved, but the numbers
should be reasonably close. The 3571 files from this example are fine.
We now delete the client’s file system on the server and do exactly the same backup,
followed by a migration. The numbers are reported in Example 10-2.
Even with this simple example, the migration already took much longer than with the new
default.
HSM is a data storage system that automatically moves data between high-cost and low-cost
storage media. HSM exists because high-speed storage devices, such as hard disk drives,
are more expensive per byte stored than slower devices, such as optical discs and magnetic
tape drives. While it would be ideal to have all data available on high-speed devices all the
time, this is prohibitively expensive for many organizations. Instead, you can use HSM to
store the bulk of your enterprise's data on slower devices, and then copy data to faster disk
drives only when needed.
With the previous versions of the code, migration jobs are used to control which data is to be
migrated. These jobs contain the information about which files are to be migrated and which
file space on the Tivoli Storage Manager server is used. All files that match the criteria of a
job are migrated, regardless of whether the job migrates only a few files or nearly all files. A
migration job can span multiple volumes, whether or not nested volumes are present. For
example, you can add D:\dir1 and E:\Dir2 to one single migration job; in that case, this single
job migrates files from two different volumes.
Migration jobs can be executed manually in the HSM for Windows GUI or on the command
line by running the dsmclc tool. It is not possible to run two jobs at the same time, because
each HSM for Windows executable can be started only once at a time. Alternatively, any
scheduler can be used to start the dsmclc tool for migration. This provides a kind of
“automated” migration for HSM for Windows, but it does not guarantee a certain amount of
free space in the volume, nor does it generally avoid out-of-space situations. The typical
space usage with this implementation is shown in Figure 11-1.
With automatic threshold migration, the typical space usage is shown in Figure 11-2.
Compare this against Figure 11-1. No scheduled or manual job must be run to achieve this.
The capability to automatically maintain a certain amount of free space in the file system is
similar to the Tivoli Storage Manager for Space Management threshold migration available on
UNIX platforms. However, if you are used to the UNIX methodology of threshold migration, be
aware that the HSM for Windows threshold migration differs from the UNIX implementation.
Note: The HSM for Windows 6.1 client does not support on demand migration or
premigration.
Threshold migration monitors space usage; there is no out-of-space event as on UNIX. And
because the monitoring is done at configurable intervals, such as every 5 minutes, the file
system might run out of space in the meantime and applications can get I/O errors.
11.2.1 Installation
Threshold migration is supported on all environments on which HSM for Windows is
supported.
Figure 11-3 shows the IBM Tivoli Storage Manager HSM Client InstallShield Wizard Custom
Setup. Select the IBM Tivoli Storage Manager Monitor Service and click Next to continue.
The installation itself does not require any planning; you just invoke setup.exe to start the
installation. In order to configure HSM for Windows, you need to contact your Tivoli Storage
Manager server administrator to register the HSM client node and, if installed on a cluster, to
grant proxy authority for the individual nodes. This is not part of the setup, but the setup
displays the required commands.
The IBM Tivoli Storage Manager Monitor Service is installed by default. Upon a successful
installation, you can check with the Windows Task Manager whether the monitor service is
running. Search for hsmmonitor.exe as shown in Figure 11-4.
You can also use the HSM command line client, dsmhsmclc.exe to verify that the service is
running. Example 11-1 shows the output of a dsmhsmclc check command.
On the Threshold Migration Settings panel, select the drive that you want to be monitored for
threshold migration and click Configure. In Figure 11-7 you can see that we specified the
L:\ path.
Table 11-1 explains the configurable options for threshold migration. Only the volume mount
path and the server file space definition are required; all other options come with a default.
You can use the command line client to configure the parameters:
dsmhsmclc -CONFIGUREThresholdmig D: -FILESPace HSMOldskool
All migration methods can run in parallel; any combination of the following is possible:
Job migration: HSM GUI or dsmclc migrate <parameters>
Run migration based on file value (job migration)
File list migration: dsmclc migratelist <parameters>
Run migration based on external input (file list migration)
Threshold migration: hsmmonitor service
Run migration based on capacity (threshold migration)
11.3 Summary
Threshold migration is easy to understand and to configure and integrates seamlessly into
the existing HSM for Windows client. The threshold migration starts when a high threshold
(HT) is reached and continues to run until a low threshold (LT) is reached or until all
migratable files are migrated.
After threshold migration is configured for your system, your file server, for example, will
almost always have free space. You can also use threshold migration in combination with
additional migration jobs, run on a schedule or manually.
With a central database at its core, Active Directory enables administrators to assign policies,
as well as helping them with software deployment and applying critical updates.
The Active Directory directory service stores network resource meta data for an entire domain
and centralized network (see Figure 12-1).
Active Directory is a directory structure used on Microsoft Windows based computers and
servers to store information and data about networks and domains (see Figure 12-2). It is
primarily used for online information; it was originally created in 1996 and first used with
Windows 2000.
Active Directory (sometimes referred to as AD) performs a variety of functions, including
providing information about objects, helping organize those objects for easy retrieval and
access, allowing access by end users and administrators, and allowing the administrator to
set up security for the directory. Active Directory can be defined as a hierarchical structure.
This structure is usually broken up into three main categories: the resources, which might
include hardware such as printers; services for end users, such as Web e-mail servers; and
objects, which are the main functions of the domain and network.
Note: When you delete an object in Active Directory, that object does not disappear
completely. Instead, the object becomes a deleted object, also known as a tombstone.
Active Directory objects can be restored from a Tivoli Storage Manager server.
No additional configuration or setup is required for this feature.
There are certain commonly used and standard Active Directory object types, as well as less
commonly used, or user defined types of objects. We are going to support “link recreation”
only for the following most common and built-in types of Active Directory objects:
User
Group
Organizational Unit
Computer
Printer
GPO
All other object types will be restored as is, without any additional processing.
Table 12-1 shows these options and explains how to use them.
Figure 12-5 Tivoli Storage Manager Client V6.1 - system state backup
Figure 12-6 Tivoli Storage Manager Client V6.1 - Active Directory database directory restore
Figure 12-7 Tivoli Storage Manager Client V6.1 - organizational unit restore
You can use this query option prior to a restore or retrieve operation to obtain information
about the number of files matching the pattern, the number of distinct sequential access
volumes on which the data is stored, the total number of bytes to be restored or retrieved,
and the memory that would be consumed for a classic restore or retrieve.
As an example, we set up the file space in the test lab as shown in Example 13-1.
Example 13-2 shows the QUERYSUMMARY output for a sample restore from that file
system.
..
<lines deleted>
..
Summary Statistics
Total Files Total Dirs Avg. File Size Total Data Memory Est.
----------- ---------- -------------- ---------- ----------
68 2 18.07 KB 1.20 MB 26.62 KB
Part 5 Complementary
products and NDMP
This part of the book covers the Tivoli Storage Manager V6.1 update information for Data
Protection for Mail - Exchange and NDMP.
Note: It is not necessary to upgrade to IBM Tivoli Storage Manager Version 6.1 Server in
order to use this new function.
This option specifies the IP address associated with the interface in which you want the
server to receive all NDMP backup data.
This option affects all subsequent NDMP filer-to-server operations, but does not affect NDMP
control connections, which use the system’s default network interface. The value for this
option is a host name or IPV4 address that is associated with one of the active network
interfaces of the system on which the Tivoli Storage Manager server is running. This interface
must be IPV4 enabled.
SETOPT command
You can update this server option without stopping and restarting the server by using the
SETOPT command. The syntax in the Tivoli Storage Manager server options file is:
NDMPPREFDATAINTERFACE ip_address
Parameters
ip_address
Specify an address in either dotted decimal or host name format. If you specify a dotted
decimal address, it is not verified with a domain name server. If the address is not correct, it
can cause failures when the server attempts to open a socket at the start of an NDMP
filer-to-server backup. Host name format addresses are verified with a domain name server.
There is no default value. If a value is not set, all NDMP operations use the Tivoli Storage
Manager server's network interface for receiving backup data during NDMP filer-to-server
backup operations. To clear the option value, specify the SETOPT command with a null
value, "". For more information, refer to the IBM Tivoli Storage Manager Administrator's Guide
for your platform, found at the Web site:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.
nav.doc/r_pdfs.html
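As a minimal sketch of using this option from the administrative command line (the address 192.0.2.10 is illustrative, not from this environment), the sequence might look like:

```
setopt NDMPPREFDATAINTERFACE 192.0.2.10
query option NDMPPREFDATAINTERFACE
setopt NDMPPREFDATAINTERFACE ""
```

The first command sets the option without stopping and restarting the server, the second verifies the current value, and the third clears the option again with a null value.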
If you get the error message ANR4794E (see Example 14-1) during NAS backup, or your NDMP
backup seems to be very slow, you can verify the cause with an NDMP trace on your filer and
a Tivoli Storage Manager trace on your Tivoli Storage Manager server.
Set up the NDMP trace on the filer and a Tivoli Storage Manager trace on your Tivoli
Storage Manager server to find out what is going wrong (see Example 14-2).
In our case, the trace shows that the TCP address is 0.0.0.0 and the TCP port is 0.
Set up the Tivoli Storage Manager trace on the server as shown in Example 14-3.
Example 14-3 Set up the Tivoli Storage Manager trace on the Server
trace disable *
trace enable spi spid sessremote addmsg
trace begin <pathandfilenamehere>
In this case it is necessary to define the dedicated network address to the server using the
NDMPPREFDATAINTERFACE server option. After you have set that option, the backup
starts without an error, as you can see in the trace (Example 14-4).
Example 14-5 Stop and disable the trace on Tivoli Storage Manager and Filer
on TSM:
trace flush
trace end
trace disable
on the Filer:
ndmpd debug 0
ndmpd debug verbose: 0
ndmpd debug stack trace: false
ndmpd debug screen trace: true
ndmpd debug file trace: true
Example 14-6 Sysstat shows data flow during full and differential backup of the NAS device
NAS1> sysstat -x 1
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
64% 0 0 0 47 1658 1668 20 0 0 >60
47% 0 0 0 67 2763 2476 8 0 0 >60
47% 0 0 0 64 2763 2656 0 0 0 >60
20% 0 0 0 50 2202 2124 0 0 0 >60
33% 0 0 0 37 1667 1564 0 0 0 >60
20% 0 0 0 59 2960 2785 0 0 0 >60
17% 0 0 0 42 1939 1940 8 0 0 >60
6% 0 0 0 26 1149 807 0 0 0 >60
8% 0 0 0 11 553 1008 0 0 0 >60
27% 0 0 0 56 2741 2627 0 0 0 >60
56% 0 0 0 56 2760 1982 104 0 0 >60
32% 0 0 0 32 1928 2084 64 0 0 >60
28% 0 0 0 59 3046 2732 4 0 0 >60
17% 0 0 0 29 1658 1900 8 0 0 >60
40% 0 0 0 61 3315 2940 0 0 0 >60
25% 0 0 0 19 1106 1076 0 0 0 >60
36% 0 0 0 36 2208 2046 8 0 0 >60
15% 0 0 0 0 156 167 0 0 0 >60
15% 0 0 0 11 393 254 0 0 0 >60
10% 0 0 0 0 0 207 0 0 0 >60
Tivoli Storage Manager integrates with the filer to issue the NDMP commands that move the
SnapMirror image from the NetApp filer to a storage target managed by the Tivoli Storage
Manager server, for fast creation of a DR image. An overview of this function is shown in
Figure 14-1.
Nseries
TSM Server
Figure 14-1 SnapMirror to tape support with Filer-to-Server and Filer-to-Tape
SnapMirror to Tape provides an alternative method for backing up very large NetApp file
systems. Because it uses a block-level copy of data, a SnapMirror to Tape backup is faster
than a traditional Network Data Management Protocol (NDMP) full backup and can be used
when NDMP full backups are impractical. However, because this backup method has
limitations, reserve it as a disaster recovery option for copying very large NetApp file systems
to secondary storage. For most NetApp file systems, use the standard NDMP full or
differential backup method, the new SnapDiff API incremental backup, the Snapshot
functions of the filer, or a combination of these.
Using a parameter option on the BACKUP and RESTORE NODE commands, you can back
up and restore file systems using SnapMirror to Tape. There are several limitations and
restrictions on how SnapMirror images can be used. Consider the following guidelines before
you use it as a backup method:
You cannot initiate a SnapMirror to Tape backup or restore operation from the Tivoli
Storage Manager Web client, command-line client, or the Administration Center.
You cannot perform differential backups of SnapMirror images.
You cannot perform a directory-level backup using SnapMirror to Tape, because Tivoli
Storage Manager does not permit a SnapMirror to Tape backup operation on a server
virtual file space.
14.2.1 How to set up, use, and control SnapMirror to Tape for backup
The only difference between a normal NDMP backup and a SnapMirror to Tape backup is
that you specify the additional option, TYPE=SNAPMirror, on the BACKUP NODE and
RESTORE NODE administrative commands. Next we describe the syntax for this option.
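As a minimal sketch (the node name NAS1 and file system /vol/vol1 are illustrative, not from this environment), the administrative commands might look like:

```
backup node NAS1 /vol/vol1 type=snapmirror
restore node NAS1 /vol/vol1 type=snapmirror
```

All other parameters of the BACKUP NODE and RESTORE NODE commands are used exactly as in a normal NDMP backup.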
TYPE
This specifies the backup method used to perform the NDMP backup operation.
The default value for this parameter is BACKUPIMAGE, and it should be used to perform a
standard NDMP base or differential backup. Other image types represent backup methods
that might be specific to a particular file server.
BACKUPImage
This specifies that the file system should be backed up using an NDMP dump operation. This
is the default method for performing an NDMP backup.
The BACKUPIMAGE type operation supports full and differential backups, file-level restore
processing, and directory-level backup.
SNAPMirror
This specifies that the file system should be copied to a Tivoli Storage Manager storage pool
using the Network Appliance™ SnapMirror to Tape function.
SnapMirror images are block-level full backup images of a file system.
Typically, a SnapMirror backup takes significantly less time to perform than a traditional
NDMP full file system backup. However, there are limitations and restrictions on how
SnapMirror images can be used. The SnapMirror to Tape function is intended to be used as a
disaster recovery option for copying very large Network Appliance file systems to secondary
storage.
For most Network Appliance file systems, use the standard NDMP full or differential backup
method. See the IBM Tivoli Storage Manager Administrator’s Guide for your platform for
limitations on using SnapMirror images as a backup method. The Tivoli publications can be
found at the Web site:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.
nav.doc/r_pdfs.html
An example of the TYPE option to start the SnapMirror to tape backup is shown in
Example 14-7.
ANR0986I Process 16 for NAS SNAPMIRROR BACKUP running in the BACKGROUND processed
1 items for a total of 6,344,704 bytes with a completion state of SUCCESS.
You will find the result of this task in several logs, queries, and tables. An example of the
Tivoli Storage Manager server NAS backup query is shown in Example 14-8.
Included in the Filer logs is the snapmirror log (see Example 14-10), which is always created
during normal NDMP backup and SnapMirror to tape backup.
14.3 The snapdiff option for NFS data stored on NetApp filers
You have several choices for backing up the file systems on your NAS filer. One of them is
the traditional incremental backup. For this, you mount the NFS or CIFS share on a Windows
or UNIX system where the Backup/Archive client is running.
The problem with backups done this way is that they take too long, because every object
must be compared to determine what has changed and needs to be backed up. For mounted
file systems you cannot use journal-based backup.
You retain the ability to restore files at the file level and to use the traditional Tivoli Storage
Manager storage hierarchy. This function is only available for N series or NetApp filers
running ONTAP 7.3.
File-level restore is limited to 7-bit ASCII characters in file and directory names. Global
character set support requires an update to ONTAP and to the Tivoli Storage Manager client.
The function is available with the Tivoli Storage Manager Client 6.1 for Windows and AIX
running against a Tivoli Storage Manager Server Version 5.x or 6.1 (see Figure 14-2).
The snapdiff option is for backing up NAS/N-Series file server volumes that are NFS or CIFS
attached.
Use this option with an incremental backup of a NAS filer volume instead of a simple
incremental or incremental with snapshotroot whenever the NAS filer is running ONTAP V7.3
or later, for performance reasons. Do not use the snapdiff and snapshotroot options together.
The first time that you perform an incremental backup with this option, a snapshot is created
(the base snapshot) and a traditional incremental backup is performed using this snapshot as
the source. The name of the snapshot that is created is recorded in the Tivoli Storage
Manager database.
The second time an incremental backup is run with this option, a newer snapshot is either
created or an existing one is used to find the differences between these two snapshots. This
second snapshot is called the diffsnapshot. Tivoli Storage Manager then incrementally backs
up the files reported as changed by snapdiff to the Tivoli Storage Manager server. The file
space selected for snapdiff processing must be mapped or mounted to the root of the volume.
You cannot use the snapdiff option for any file space that is not mounted or mapped to the
root of the volume. After backing up data using the snapdiff option, the snapshot that was
used as the base snapshot is deleted from the .snapshot directory. Tivoli Storage Manager
does not delete the snapshot if it was not created by Tivoli Storage Manager. You can also
perform a snapdiff incremental backup with the -DiffSnapShot=Latest option.
For NAS and N-Series filers running ONTAP 7.3 or later, you can use the snapdiff option
when performing a full volume incremental backup. Using this option reduces memory usage
and speeds up the processing. However, similar to using the incremental-by-date method,
the following considerations and situations apply:
A file is excluded due to an exclude rule in the include-exclude file. Tivoli Storage Manager
performs a backup of the current Snapshot with that exclude rule in effect. This happens
when you have not made changes to the file, but you have removed the rule that excluded
the file. NetApp will not detect this include-exclude change because it only detects file
changes between two Snapshots.
If you have added an include statement to the option file, that include option will not take
effect unless NetApp detects that the file has changed. This is because Tivoli Storage
Manager does not inspect each file on the volume during backup.
You have used the dsmc delete backup command to explicitly delete a file from the Tivoli
Storage Manager inventory. NetApp will not detect that a file has been manually deleted
from Tivoli Storage Manager. Therefore, the file remains unprotected in Tivoli Storage
Manager storage until it is changed on the volume and the change is detected by NetApp
signalling Tivoli Storage Manager to back it up again.
Policy changes such as changing the policy from mode=modified to mode=absolute are
not detected.
The entire file space is deleted from the Tivoli Storage Manager inventory. This causes
the snapdiff option to create a new Snapshot to use as the source, and a full incremental
backup will be performed.
Tivoli Storage Manager does not control what constitutes a changed object; that is
controlled by NetApp.
For more information, see IBM Tivoli Storage Manager for Windows Backup-Archive Clients
Version 6.1, SC23-9792 and IBM Tivoli Storage Manager for UNIX and Linux Backup-Archive
Clients 6.1, SC23-9791.
Subsequent incremental backups with the snapdiff option follow these steps:
1. The name of the previous Snapshot is retrieved from Tivoli Storage Manager server.
2. Tivoli Storage Manager client creates a new Snapshot version. You can use the
diffsnapshot option to use the most recent externally created Snapshot.
3. The Snapshot Differencing API compares previous and new Snapshot versions and
reports file and directory differences to the Tivoli Storage Manager client.
4. The Tivoli Storage Manager client backs up files identified in the report.
5. The new Snapshot name is stored in the Tivoli Storage Manager server for use in the next
incremental backup.
6. The Tivoli Storage Manager client deletes the previous Snapshot version, unless you have
used the diffsnapshot option to use the most recent externally created Snapshot.
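In terms of client commands, the sequence above is driven by the incremental command with the snapdiff option; a minimal sketch (the file space name /unix01 is taken from the later examples):

```
dsmc incremental -snapdiff=yes /unix01
dsmc incremental -snapdiff=yes -diffsnapshot=latest /unix01
```

The second form uses the most recent externally created Snapshot as the diffsnapshot instead of creating a new one.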
Figure 14-3 shows how the Tivoli Storage Manager Client interacts with the Snapshot
Differencing API.
Figure 14-3 How Tivoli Storage Manager Client interacts with Snapshot Differencing API
To enable Snapshot Difference processing, set up a user ID and password on the Tivoli
Storage Manager client. First use the dsmc set password command to establish a user ID
and password. The user ID and password must have administrative authority, such as
administrator, or equivalent. Use the administrator authority level when you map or mount the
file server volume. See Example 14-11, which shows how to set up a password.
Example 14-11 How to set up a password with dsmc with UNC name
tsm> set password -type=filer sim1 administrator
Please enter password for user id "administrator@sim1": *****
Re-enter the password for verification:*****
ANS0302I Successfully done.
The result will be stored in the Windows Registry (see Figure 14-4).
For AIX, you use the same command, but name resolution of the IP address must be
possible, so check /etc/hosts. Example 14-13 shows how to set up the password for the AIX
Backup/Archive client.
You can find the reason for the error by using a Tivoli Storage Manager client trace. To do
this, put the trace options in the dsm.opt client options file as shown in Example 14-16.
Example 14-16 AIX BA-Client Option File dsm.opt
Tracefile /tmp/tracefile.out
Tracemax 2048
Tracesegsize 256
Traceflags enter exit general snapshot hci hci_detail diskmap diskmap_detail hdw
hdw_detail
The output in the trace file as a result of entering the failed incremental command is shown in
Example 14-17.
Example 14-17 Trace-File output
06/04/09 18:53:21.764 : snapcommon.cpp ( 281): Entering nsGetNasVolumeInfo(): with: inputPath: </unix01>.
06/04/09 18:53:21.765 : PsDiskMapper.cpp (3531): dmMapNasVolume: statvfs() for </unix01>. vfs_num <19>. type<nfs3>. fsid<7>
06/04/09 18:53:21.765 : PsDiskMapper.cpp (3312): psCollectMountTableInfo: DevId:<19> NFS Mount point:</unix01> NFS Volume:</vol/unixvol01> NFS Host Name:<192.168.111.190> NFS Mount Options: <>:
06/04/09 18:53:21.769 : PsDiskMapper.cpp (3795): psGetHostName(): gethostbyaddr() failed. hostname: <192.168.111.190>. Error: <1>.
06/04/09 18:53:21.769 : PsDiskMapper.cpp (3626): dmMapNasVolume(): psGetHostName() failed. hostname: <192.168.111.190>. Error: <6201>.
For the related options to the incremental command, see Tivoli Storage Manager for
Windows Backup-Archive Clients Version 6.1, SC23-9792 and Tivoli Storage Manager for
UNIX and Linux Backup-Archive Clients 6.1, SC23-9791.
Example 14-18 Incremental Backup with snapdiff option using the Command Line Interface (dsmc)
tsm> inc -snapdiff=yes /unix01
The first time, a full incremental backup is taken to establish a base Snapshot. To verify the
Snapshots on your filer, enter the snap list command on your filer interface as shown in
Example 14-19.
As described earlier, the second time an incremental backup is run with this option, a newer
Snapshot is either created or an existing one is used as the diffsnapshot, and Tivoli Storage
Manager incrementally backs up only the files that NetApp reports as changed. The statistics
at the end of the backup document that no objects were inspected (see Example 14-20).
If you are monitoring this backup on your filer, you will find that a Snapshot is created and,
after successful completion, the previous one is deleted.
When we look for existing Snapshots after the backup with NetApp Snapshot Difference, we
see the Snapshot created and referenced by Tivoli Storage Manager, as shown in
Example 14-21.
Example 14-21 list snapshots after backup with NetApp Snapshot Difference
sim1*> snap list winvol01
Volume winvol01
working...
Figure 14-5 Select the mounted file system for incremental backup
The next step is to select the backup method: use the new Incremental (snapshot difference)
function instead of the traditional incremental (see Figure 14-6).
When you click the Backup button, another window asks whether you want to create a new
Snapshot or use an existing one that you created manually on the filer beforehand. In our
case we select Create (see Figure 14-7).
After a successful backup, you get statistics showing that 11 new files were backed up, but
no files were inspected. This is because the list of changed files comes from NetApp
Snapshot Difference (see Figure 14-8).
Figure 14-8 Detailed statistics report from backup with NetApp Snapshot Difference
Example 14-22 shows how you enter the command on your filer and what result you will get:
Note: Backup of mixed-style volumes is not supported with the Tivoli Storage Manager
backup-archive client, even if it appears to work.
In this case, the password was set using the host name, but we are trying to run the backup
against the IP address of the filer. This is a dependency you must consider. The problem
might not appear on AIX systems where the IP address can be resolved.
Note: We could not use the Tivoli Storage Manager Web Client GUI, because mapped
drives are not visible under the Network node in the Backup Tree Window of the Tivoli
Storage Manager Web client GUI when connecting to Tivoli Storage Manager clients on
Windows XP and Windows 2003. For more information, refer to the Technote, Unable to
view/back up network drives using Tivoli Storage Manager Web client on Windows:
http://www-01.ibm.com/support/docview.wss?uid=swg21385371
Note: The performance data contained in this document was measured in a controlled
environment. Results obtained in other operating environments can vary significantly
depending on factors such as system workload and configuration. Accordingly, this data
does not constitute a performance guarantee or warranty.
NetApp Snapshot Difference was implemented for customers with so-called “Big Fat Filers”
housing millions of files, whose backup windows had become unacceptable. Using
incremental backup by NetApp Snapshot Difference, the Tivoli Storage Manager client does
not need to crawl the file space looking for changed files; instead, it queries the ONTAP 7.3
OS on the filer for the files that have changed since the last -snapdiff or -diffsnapshot backup.
Test environment
Our test environment consisted of the following components:
Filer: N5300 with Data ONTAP Release 7.3
Tivoli Storage Manager Server: 5.5.1.0 on a 2-way 3.4 GHz, 4 GB Windows Server 2003
Tivoli Storage Manager Client: 6.1.0.0 connected to local Tivoli Storage Manager Server
Storage Pool: File pool on fibre-attached DS8K
Small Workload: 8 thousand 100 KB files
Large Workload: 1.2 million 10 KB files
Huge Workload: 12 million 10 KB files
Test conclusions
Here we discuss our test conclusions:
In the Huge Workload test, when 10% data changed, using NetApp Snapshot Difference
improved throughput from 199 KB/sec to 537 KB/sec (170% improvement).
As an example of a reduced backup window, NetApp reduced the backup window of this
test from almost 17 hours to about 6.25 hours (37% of the time).
The improvement is quite significant when only a small percentage of the data changes.
Specific results will vary depending on the hardware configuration. If the percentage of
changed data is lower than 10%, an even greater improvement can be expected. Refer to:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.
nav.doc/r_pdfs.html
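The percentages quoted in the conclusions above can be checked with simple arithmetic; this small shell sketch recomputes them from the measured values:

```shell
#!/bin/sh
# Throughput: 199 KB/sec before, 537 KB/sec with NetApp Snapshot Difference.
improvement=$(awk 'BEGIN { printf "%.0f", (537 - 199) / 199 * 100 }')
echo "Throughput improvement: ${improvement}%"

# Backup window: almost 17 hours before, about 6.25 hours after.
ratio=$(awk 'BEGIN { printf "%.0f", 6.25 / 17 * 100 }')
echo "New window is ${ratio}% of the old one"
```

Running it confirms the roughly 170% throughput improvement and that the new backup window is about 37% of the old one.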
Tivoli Storage Manager for Mail 6.1 updates the Data Protection for Microsoft Exchange
component to include Individual Mailbox Restore (IMR) through the GUI.
In this chapter we explain how you can use this Mailbox Restore feature to perform individual
mailbox and item-level recovery operations in Microsoft Exchange Server 2003 or Microsoft
Exchange Server 2007 environments using Data Protection for Exchange backups.
In the following topics, we highlight the hardware and software requirements for Data
Protection for Microsoft Exchange V6.1.
15.1.1 Data Protection for Microsoft Exchange V6.1 on Windows for x86
Data Protection for Microsoft Exchange on Windows for x86 requires the following hardware
and software:
An Intel® Pentium® 166 or later processor (or equivalent), with at least 20 MB of available
disk space and 96 MB of RAM, is required.
One of the following operating system options is required:
– Windows Server 2003 with SP2, or later: Standard, Enterprise, or Data Center editions
– Windows Server 2003 R2 with SP2, or later: Standard, Enterprise, or Data Center
editions
– Windows Server 2008 Standard, Enterprise, or Data Center without Hyper-V editions
Note: Microsoft Cluster Server (MSCS) and Veritas Cluster Server (VCS) are supported.
Refer to the User's Guide for details on MSCS and VCS configuration. Running in a
Microsoft Virtual Server 2005 R2 SP1 or later x86 guest is supported.
15.1.3 Data Protection for Microsoft Exchange V6.1 on Windows for x64
Data Protection for Microsoft Exchange on Windows for x64 requires the following hardware
and software:
An Intel EM64T, AMD Opteron, or equivalent x64 processor with at least 20 MB of
available disk space and 96 MB of RAM is required.
Note: Microsoft Cluster Server (MSCS) and Veritas Cluster Server (VCS) are supported.
Refer to the User's Guide for details on MSCS and VCS configurations.
You cannot restore Data Protection for Exchange Version 1 backups with later versions of
Data Protection for Exchange (including 6.1). You must retain Data Protection for Exchange
Version 1 for as long as you maintain Version 1 backups.
Chapter 15. IBM Tivoli Storage Manager Data Protection for Mail: Exchange 6.1 237
Copy Backup (Proprietary only):
A copy backup is similar to a full backup, except that transaction log files are not deleted
after the backup, and the backup does not affect the full/incremental backup sequencing.
Database Copy Backup (Proprietary only):
A database copy backup backs up only the specified database and its associated
transaction logs.
Offers full Tivoli Storage Manager integration:
– Full automation and handling of Exchange recovery
– Simple command-line interface – tdpexcc restoremailbox <mailbox>
– Simple GUI - GUI panel allows for easy user selection
– Globalization and localization
Can recover deleted or relocated users
Can run recovery from original server or alternate server
Maintains mailbox history (which backups contain which mailboxes)
Provides Active Directory-based security
Supports multiple restore destinations:
– Original location
– Alternate mailbox and folder
– Outlook data (.PST) file
Restores multiple object types:
– Messages
– Calendar entries
– Contacts
– Notes, tasks
– User folders
Has user-selectable restore granularity:
– Multiple mailboxes
– Single mailbox
– Multiple messages or contacts
– Individual message or contact
Offers advanced filtering capability based on:
– Subject
– Sender
– Message date/time
– Attachments
– Other text, such as message body or folder name
We launch the Mailbox Restore window from the Data Protection interface as shown in
Figure 15-1. In this scenario, we need to restore two mailboxes: “Elton John” and “The First
Storage Group”. We use the Mailbox list box and the Add button to place the restore
requests into the list on the right.
On the Mailbox Restore window, you can set filters and also change the destination location
for the Restore. After setting all parameters, click Restore.
Examples of Mailbox restore are as follows:
– Restore a user’s mailbox that was accidentally deleted.
– Restore a user’s mailbox as it existed on December 31, 2007.
– Restore Andy Pettite’s “HGH” mailbox folder as it existed on 12/31/2007.
– Restore all messages received from “Roger Clemens” on 1/18/2008.
15.2.4 Tivoli Storage Manager Mailbox Restore limitations
The following restrictions apply to the Mailbox Restore function:
PST files must be non-Unicode and limited to 2 GB.
There is no support for public folders.
Use the command-line interface when you must use the optional mailboxoriglocation
parameter to specify the server, the storage group, and the database where the mailbox was
located at the time of backup. The following additional command-line parameters are
required for this recovery:
server-name: Name of Exchange Server where the mailbox resided at the time of backup
sg-name: Name of storage group where the mailbox resided at the time of backup
dbname: Name of database where the mailbox resided at the time of backup
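As an illustrative sketch only (the mailbox name is taken from the earlier scenario, and the exact option spelling may differ on your level; check the Data Protection for Exchange User's Guide), such a command-line restore might look like:

```
tdpexcc restoremailbox "Elton John" /mailboxoriglocation=server-name,sg-name,dbname
```

Here server-name, sg-name, and dbname are placeholders for the three values described above, not literal parameters.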
To further understand the changes in the upgrade process, refer to the Tivoli Storage
Manager Server Upgrade Guide, SC23-9554.
For the changes in the installation process, refer to the Tivoli Storage Manager Installation
Guide for your particular platform as listed in “Related publications” on page 627.
Our intent here is to give you an understanding of what the requirements are, to put into
perspective the resources, time, and effort that are required to install or upgrade to Tivoli
Storage Manager V6.1 using the DB2 database.
We begin with a discussion of strategy and of what the upgrade involves in terms of the
resources, time, and effort required. The basic things to consider include the upgrade
process itself, which is resource intensive, as well as a number of planning considerations.
In certain ways, this upgrade process is similar to previous upgrades, but because of the
time and resources required, it can become complicated, and careful planning is the
solution.
One consideration is the movement of data from the original V5 server database to the V6.1
database. This process uses a large percentage of the system’s processor capacity and
requires a high amount of I/O activity. You have options for how to perform this task: across
a network connection or by using storage media.
In your planning, consider testing the upgrade on non-production systems. Testing gives you
information about how long the upgrade of the server database will take, which will help you
to plan for the time that the server will be unavailable. Some databases might take much
longer than others to upgrade.
Testing also gives you more information about the size of the new database compared to the
original, giving you more precise information about database storage needs.
If you have multiple servers, consider upgrading one server first, to get experience with how
the upgrade process will work for your data. Use the results of the first upgrade to plan for
upgrading the remaining servers.
16.2.1 What you can and cannot do with Tivoli Storage Manager V6.1
Next we list some examples of what you can and cannot do in Tivoli Storage Manager V6.1.
This list is subject to change and we suggest that you go to the Tivoli Storage Manager Wiki
for the latest updates:
http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Home
Chapter 16. Installation and upgrade planning for Tivoli Storage Manager V6.1 247
16.2.2 Upgrade considerations
For Windows Tivoli Storage Manager servers: if the server being upgraded contains multiple
NICs, it might be necessary to disable all of them except one in order to use the Database
Upgrade wizard. The NICs can be re-enabled after the database upgrade has completed.
If you have a shared library configuration, you must upgrade the server that is the library
manager first, and then upgrade the library clients. Library clients must be at level 5.4 or
higher for compatibility with a V6.1 Tivoli Storage Manager server.
As noted earlier, testing gives you information about how long the upgrade will take and
about the size of the new database compared to the original; the results feed into your
production planning requirements. If you have multiple servers, upgrading one server first
builds a process unique to your environment, whose results you can use to plan for
upgrading the remaining servers.
If you are considering a consolidation of your Tivoli Storage Manager servers, this process
needs to be tested. After the initial upgrade, all other consolidation activities essentially use
the server EXPORT command, which can result in extended durations for larger nodes.
Where a large amount of archive data resides, consider exporting backup data only and
leaving the archive data on the existing server until it is convenient to migrate it later, with
less impact on the overall environment.
16.3 Preparation
To prepare for the installation or upgrade, you must first review a few sections and consider
developing a structured plan for this activity. The subsections immediately following provide
platform-specific links for both installation and upgrade, and a few other useful components
that you must review prior to starting your tasks.
Installation platforms
For installation platforms, see the following links.
AIX
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/r_srv_aix_sysreq_inst.html
HP-UX
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/r_srv_hp_sysreq_inst.html
Linux
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/r_srv_lnx_sysreq_inst.html
Solaris
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/r_srv_sun_sysreq_inst.html
Windows
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/r_srv_wnt_sysreq_inst.html
Upgrade platforms
For upgrade platforms, see the following links.
AIX
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.upgrd.d
oc/r_srv_upgrd_aix_sysreq.html
HP-UX
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.upgrd.d
oc/r_srv_upgrd_hp_sysreq.html
Linux
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.upgrd.d
oc/r_srv_upgrd_lnx_sysreq.html
Solaris
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.upgrd.d
oc/r_srv_upgrd_sun_sysreq.html
Windows
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.upgrd.d
oc/r_srv_upgrd_win_sysreq.html
Table 16-1 Tivoli Storage Manager Client and Storage Agent compatibility
If you have a Tivoli Storage Manager       It is compatible and supported with these Tivoli
client at this level:                      Storage Manager Server/Storage Agent levels:
Tivoli Storage Manager Version 5.5         Versions 6.1, 5.5, and 5.4
Tivoli Storage Manager Version 5.4         Versions 6.1, 5.5, and 5.4
Tivoli Storage Manager Version 6.1         Versions 6.1, 5.5, and 5.4
Note: V5.4 clients do not include the special 5.3.6-level clients (Windows 2000, Solaris 8,
and Linux x86 RHEL 3).
Note: Do not underestimate the log requirements when planning the capacity of the log
files. If any of the active, archive, or secondary archive logs fill up, the server will eventually
halt.
We recommend defining the database and recovery log directories on separate physical
volumes or file systems. Ideally, use multiple directories for database space and spread them
across as many physical devices or logical unit numbers (LUNs) as there are directories. For
a production Tivoli Storage Manager V6.1 server, we recommend at least four database
directories (disks); the appropriate number depends on the database size and can grow to as
many as 128 database directories for the DB2 database as required.
You can add DB2 directories to an existing installation with the new EXTEND DBSPACE
command; a restart of the server is required for the change to become active. An important
point is that adding a new database directory causes a REORG after the initial load, which
should be avoided if possible because a REORG is expensive and disruptive. One method
that does not cause a REORG is to extend the existing file systems (on UNIX) or disks (on
Windows) when more database space is needed: adding the physical disk and then extending
the file systems (and subsequently the directories holding the DB2 database) does not trigger
a REORG.
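As a sketch, the two approaches might look like the following (the directory path, file
system name, and administrative credentials are hypothetical; a server restart is still
required after EXTEND DBSPACE, and the REORG consideration applies as described above):

```
# Add a new database directory from an administrative client
dsmadmc -id=admin -password=xxxxx "extend dbspace /tsmdb/dbdir05"

# REORG-free alternative: grow a file system that already holds a
# database directory (AIX example; file system name is hypothetical)
chfs -a size=+50G /tsmdb/dbdir01
```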
To review the V6.1 product capacity planning information, and access product planning
sheets, go to the following URL:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.install
.doc/t_srv_plan_capacity.html
With V6.1 the recovery logs are no longer owned by Tivoli Storage Manager; the database
and logs are owned by DB2. There are four different logs in DB2 that we describe next.
We recommend that you begin with an ACTIVE log size two times the V5 maximum size
(13 GB x 2 = 26 GB), monitor the space usage, and adjust the size of the ACTIVE log as
needed. Ensure that the ACTIVE log always has enough space. Because you should have a
log mirror on a separate disk, account for it in your planning.
Note: Even a 128 GB ACTIVE log can fill up in an extreme situation. We recommend that
you define the ACTIVE log no greater than 120 GB so you can extend it if necessary.
The initial directory for ACTIVE logs is determined by the ActiveLogDir parameter (on
dsmserv format / loadformat); it can be changed later in dsmserv.opt. If you change the
ActiveLogDir parameter, the Tivoli Storage Manager server must be restarted for the change
to take effect. Before making any changes to the active log directory path, perform a full
database backup.
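For example, the log directories might be specified at format time like this (all paths and
the log size are hypothetical, and the parameter spellings follow the names used in this
chapter; check the Installation Guide for the exact dsmserv format syntax):

```
dsmserv format dbdir=/tsmdb/dbdir01,/tsmdb/dbdir02 \
    activelogsize=26624 \
    activelogdir=/tsmlog/active \
    archlogdir=/tsmlog/archive \
    mirrorlogdir=/tsmlog/mirror
```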
The ACTIVE log operates in roll-forward mode only and uses sequential I/O. In V6.1, the
ACTIVE log is the part of the log that always contains the most recent log records (in-flight
transaction data). When the ACTIVE log is full, DB2 copies the ACTIVE log files to the
ARCHIVE log directory. If the ARCHIVE log directory is full, ACTIVE log files cannot be
copied over to it; files that cannot be copied cannot be deleted, which leads to an
out-of-space condition in the ACTIVE log, and no new ACTIVE log files can be created.
Transactions can still be active when a log file is archived, but the active log file cannot be
deleted until all transactions within it are either committed or aborted. This also applies to
transactions that flow through an active log file.
Plan on having up to three full backups' worth of space for ARCHIVE logs, or plan on
performing backups more often. The ARCHIVE logs also use sequential I/O, and they are
required in your configuration. The initial directory for ARCHIVE logs is determined by the
ArchiveLogDir parameter (on dsmserv format / loadformat) and can be changed later in the
dsmserv.opt file. Changing the ArchiveLogDir directory requires the Tivoli Storage Manager
server to be restarted.
Log files older than two full backups ago are removed after a database backup. If the
ARCHIVE log directory becomes full and no failover ARCHIVE log location has been
specified, Tivoli Storage Manager keeps logs in the ActiveLogDir location and creates new
ones there. If that location also fills, Tivoli Storage Manager halts.
The size of the ARCHIVE log depends on the number of objects stored by client nodes over
the period of time between full backups of the database. A full backup of the database causes
obsolete ARCHIVE log files to be pruned, to recover space. The ARCHIVE log files that are
included in a backup are automatically pruned after two more full database backups have
been completed. Therefore, the ARCHIVE log should be large enough to contain the logs
generated since the previous two full backups. If you perform a full backup of the database
every day, the ARCHIVE log must be large enough to hold the log files for client activity that
occurs over two days. You can run several database backups on the same day to keep the
ARCHIVE logs from filling if you choose to do so.
The initial directory for ACTIVE log mirrors is determined by the MirrorLogDir parameter (on
dsmserv format / loadformat) and can be changed later in dsmserv.opt. If you change the
MirrorLogDir parameter, the Tivoli Storage Manager server must be restarted for the change
to take effect.
The mirror log files are created as 512 MB files. If the mirror log directory becomes full, a
message is issued and the server continues to operate; it does not fail.
The use of the failover archive log is optional, but highly recommended. Consider using a
large NFS mount point or large SATA disks for it; the I/O access is sequential. You can set it
with the ArchFailOverLogDir parameter (on dsmserv format / loadformat), or add it later in
the dsmserv.opt file. Log files in this location are removed after a database backup.
Changing the ArchFailOverLogDir directory requires the Tivoli Storage Manager server to be
restarted before it takes effect.
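As a sketch, the failover location can be added as a server option (the path is hypothetical
and the option keyword follows the parameter name used in this chapter; verify the exact
spelling in the server reference before using it):

```
* dsmserv.opt fragment: failover location for the archive log;
* restart the server for the change to take effect
ARCHFAILOVERLOGDIR /nfs/tsm/failoverlog
```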
Attention: It is extremely important that the directories for the recovery logs do not fill up.
You must determine and configure the DB2 database and recovery log space that is
required before starting the install or upgrade process. You need unique, empty directories
(separate disk mount points are preferable) for the following components of the V6.1 server:
The database:
– Multiple disks or mount points recommended
The recovery log:
– Active log
– Archive log
– Optional: Log mirror for the active log
– Optional: Secondary archive logs (failover location for archive log)
The ARCHIVE log size depends on how much activity you have and how often you back up
the database. As ACTIVE logs fill up, they are copied to the archive directory. They remain
there until the database is backed up, at which time the logs are appended to the database
backup; they are erased when the database backup completes. In V6.1.1, two days of logs
are held due to a DB2 issue, which is currently being worked on.
If the ACTIVE log is too small to handle the workload, it will fill up. If the ARCHIVE log
directories are too small to contain two days worth of logs, they will fill up. If the ARCHIVE log
directories fill up, the ACTIVE log will eventually fill up. If the ACTIVE log fills up for any of
these reasons, the server will halt.
Place the database and the ACTIVE log on fast, reliable storage, with high availability
characteristics. Ideally, use multiple directories for database space and locate them across as
many physical devices or logical unit numbers (LUNs) as there are directories. Place the
database and recovery log directories on separate physical volumes or file systems.
The ACTIVE log must be on high-speed, reliable disk. The ARCHIVE log can be on slower
disks. The failover archive log can be on even slower disks, assuming that it is used
infrequently (you can even use a network file server (NFS) for the failover archive logs).
To maintain database integrity, ensure that the storage hardware can withstand failures such
as power outages and controller failure. You can improve database performance by using
hardware that provides a fast, nonvolatile write cache for both the database and logs.
If there is an error writing to either the primary log or the log mirror, the failing path is
marked as bad and a message is written to the log. Writes continue to the remaining good
log volume until the current log volume is filled. When DB2 needs to open the next log file,
the failed path is retested and reused if it is OK. If an error occurs in the remaining good
path, Tivoli Storage Manager halts.
Ultimately, with the cost of disk storage continuing to drop, do not hesitate to over-configure
your database and log structure. The investment will pay dividends as compared to an
under-configured environment. Balancing your IO is a critical area, especially for heavily
loaded servers.
One of the things that can be done now is to query the database directly without the need to
go through the Tivoli Storage Manager administrator command line. This approach does
provide you some additional options.
Note: The DB2 database should be treated as read-only. Do not make any changes to the
schema, configuration, or content.
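A minimal sketch of such a direct, read-only query (the instance owner ID tsminst1 matches
the example used later in this chapter; the table name is an illustrative assumption):

```
# As the DB2 instance owner, connect to the server database (always TSMDB1)
# and run a read-only query; do not update anything
db2 connect to TSMDB1
db2 "select count(*) from tsmdb1.nodes"
db2 connect reset
```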
There are several reasons for converting to the DB2 database. The Tivoli Storage Manager
V5.5 database implementation is reaching its limits in terms of size, performance, and
function. Many of you are experiencing the impact of reaching those limits, creating multiple
instances of the Tivoli Storage Manager database to handle your workload.
The goal is performance equivalent to Tivoli Storage Manager V5.5; that is, the overall
throughput for a representative set of operations should be comparable to that with the
proprietary database. This is the first implementation of the Tivoli Storage Manager database
using DB2, and as such there might be some side effects of using the new database. Some
operations will run faster, some might run a little slower, and some might simply work
differently than they have in the past, though the goal is to make the change as transparent
as possible.
In Tivoli Storage Manager V6.1 there is a significant increase in real memory utilization. The
recommended memory size is approximately four times the previous recommended values;
for instance, in the past this was 2 GB per instance on AIX, and now our recommendation is
8 GB per instance. This is not a minimum, nor is it a requirement; it is simply our
recommendation for a normal workload.
Another change to keep in mind is growth in the database itself. Not only will the overall size
of the database increase, but DB2 might also use additional disk space for temporary work,
depending on your workload. There are two places where we have seen significant growth in
the database: during the insertion of database entries in the upgrade process, and during the
execution of certain queries.
The database recovery log space requirement also increases with this upgrade. DB2
manages this log space, and you will need space for both an active log and for archive logs.
The DB2 database operates in what V5 referred to as “roll-forward” mode. There is no
support for circular logging.
We briefly describe the upgrade process here and then go into it in more detail later in this
material. Simply stated, the upgrade process prepares the existing V5 database, extracts its
contents, then inserts that extracted data into a newly created DB2 database.
The fall-back plan is basically the same as it has been in the past, but perhaps a little more
complex because of the addition of the DB2 installation. You will need to reinstall your
previous release of Tivoli Storage Manager and restore its database. There are no changes
to your existing storage pools, so the normal precautions for protecting previously backed-up
data are sufficient, such as disabling migration and reclamation.
Database creation
Estimate the extract and insert processes to run at between 5 GB/hr and 10 GB/hr. This rate
is what has currently been experienced for the upgrade process; however, it assumes a
“normal” Tivoli Storage Manager workload and a pristine Tivoli Storage Manager database.
The number of objects has an impact on insert performance; the extract, however, is purely
sequential block reads.
This 5 GB/hr to 10 GB/hr rate is based on the amount of space that is actually used by the V5
database, not the allocated space. Your environment might produce different results. Testing
upgrade operations in your environment is especially important for Tivoli Storage Manager
servers that support essential systems.
Workload type
Consider the type of workload that the server has handled. A workload that consists of large
numbers of small files, or files with very long file names, can cause a relatively longer
upgrade time. Examples of such configurations are Content Manager systems and large file
servers, which tend to manage more objects per GB, so they might be considerably slower
than the GB/hr estimate suggests.
Estimate the upgrade time to help plan for the amount of time that the server will be
unavailable. The time that is required to complete the upgrade depends on multiple factors.
The Tivoli Storage Manager V5 server is not available for use while data is being extracted
from the database. The network method for the data movement overlaps the extraction time
with the insertion time. Using the network method might help reduce the total time required for
the upgrade because of the overlap.
Note: If the Tivoli Storage Manager server is V5.5.x or later, the existing server can be
restarted after running the upgrade utility. If the server is earlier than V5.5.x, the database
must be restored before the server can be restarted.
The backup of the server database requires as much space as is used by your Tivoli Storage
Manager V5 database. Store the backup on whatever form of sequential media is convenient
for you, either tape or disk.
Additional space requirements depend on the method that you choose for moving the data
from the Tivoli Storage Manager V5 database:
Media method
You need media to store the data that will be extracted from the Tivoli Storage Manager V5
database. The media can be tape, or disk space that is defined as a sequential-access disk
device class. The space required for the extracted data is the same as the used space in your
database. If your database is safely backed up, and you are certain that you no longer need
to run the Tivoli Storage Manager V5 server, after you extract the data you can optionally
release the space used by the Tivoli Storage Manager V5 database and recovery log.
Network method
You must have the working copy of the Tivoli Storage Manager V5 database and recovery log
on the Tivoli Storage Manager V5 system. If you are working with a copy of the database that
was created for testing the upgrade process, you need enough space to hold the total
allocated size of the database; you can use the minimum size for a V5 recovery log.
You need unique, empty directories for the following items for the upgraded server:
The database
The recovery log:
– Active log
– Archive log
– Optional: Active log mirror
– Optional: Secondary archive log (archive failover log)
The instance directory for the server:
The instance directory is a directory that will contain files specifically for this server
instance (the server options file and other server-specific files). Locate the database and
the active log on fast, reliable storage, with high availability characteristics. Ideally, use
multiple directories for database space and locate them across as many physical devices
or logical unit numbers (LUNs) as there are directories.
Place the database and recovery log directories on separate physical volumes or file
systems. To maintain database integrity, ensure that the storage hardware can withstand
failures such as power outages and controller failure. You can improve database
performance by using hardware that provides a fast, nonvolatile write cache for both the
database and logs.
The amount of storage space for the database is managed automatically. The database
space can be spread across up to 128 directories. After you specify the directories for the
database, the server uses the disk space available to those directories as required.
Plan for 33 - 50% more than the space that is used by the Tivoli Storage Manager V5
database. Do not include allocated but unused space for the Tivoli Storage Manager V5
database in the estimate. Some databases can grow temporarily during the upgrade process;
consider providing up to 80% more than the space that is used by the V5 database.
Consider testing the upgrade of the database to get a more accurate estimate. Not all
databases will grow as much as the suggested 33 - 50% increase in space.
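As a rough sketch in shell arithmetic (integer GB; the 80 GB used-space figure is the same
one used in the sample worksheet later in this chapter):

```shell
used_gb=80                        # space actually used by the V5 database
low=$(( used_gb * 133 / 100 ))    # plan for at least +33%
high=$(( used_gb * 150 / 100 ))   # up to +50%
temp=$(( used_gb * 180 / 100 ))   # up to +80% temporary growth during the upgrade
echo "Plan for ${low}-${high} GB; allow up to ${temp} GB during the upgrade"
```

This reproduces the 106 - 120 GB estimate shown in the worksheet for an 80 GB used V5
database.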
When the server is operating normally, after the upgrade process, some operations might
cause occasional large, temporary increases in the amount of space used by the database.
Continue to monitor the usage of database space to determine whether the server needs
more database space.
Future growth
For the best efficiency in database operations, anticipate future growth when you set up
space for the database. If you underestimate the amount of space that is needed for the
database and then must add directories later, the database manager might need to perform
more database reorganization, which can consume resources on the system. Estimate
requirements for additional database space based on 600 - 1000 bytes per additional object
stored in the server. For more information about estimating database space requirements,
see the Administrator’s Guide for your specific platform listed in the “Other publications” on
page 627.
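For example, assuming a hypothetical 100 million additional stored objects, the
600 - 1000 bytes-per-object rule gives:

```shell
new_objects=100000000                 # hypothetical: 100 million additional objects
low_bytes=$(( new_objects * 600 ))    # lower bound, 600 bytes per object
high_bytes=$(( new_objects * 1000 ))  # upper bound, 1000 bytes per object
# report in decimal GB (1 GB = 10^9 bytes)
echo "Additional database space: $(( low_bytes / 1000000000 )) - $(( high_bytes / 1000000000 )) GB"
```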
Visit the product support site for the latest information and recommendations:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
For more information about estimating recovery log space requirements, see the
Administrator’s Guide for your specific platform listed in the “Other publications” on page 627.
Active log
The default, minimum size of 2 GB is large enough to complete the upgrade process. The
maximum size of the active log is 128 GB.
When you begin normal operations with the server after the upgrade, you might need to
increase the size of the active log. The required size depends on the amount of concurrent
activity that the server handles. A large number of concurrent client sessions might require a
larger active log.
For simple backup and archive activity with no data deduplication, 26 GB for the active log is
more than adequate. If you use data deduplication, and if you deduplicate very large objects
(for example, image backups), use an active log size that is 20% of the database size.
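A minimal sketch of this sizing rule (the 500 GB database size is hypothetical; the 120 GB
ceiling follows the earlier recommendation to leave room for extension below the 128 GB
maximum):

```shell
db_gb=500                                # hypothetical database size, deduplication in use
active_log_gb=$(( db_gb * 20 / 100 ))    # 20% of the database size
if [ "$active_log_gb" -gt 120 ]; then    # cap so the log can still be extended
  active_log_gb=120
fi
echo "Suggested active log size: ${active_log_gb} GB"
```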
Archive log
The size required depends on the number of objects stored by client nodes over the period of
time between full backups of the database. Remember that a full backup of the database
causes obsolete archive log files to be pruned, to recover space. The archive log files that are
included in a backup are automatically pruned after two more full database backups have
been completed.
If you perform a full backup of the database every day, the archive log must be large enough
to hold the log files for client activity that occurs over two days. Typically 600 - 4000 bytes of
log space are used when an object is stored in the server. Therefore, you can estimate a
starting size for the archive log using the following calculation:
objects stored per day x 3000 bytes per object x 2 days
For example:
5,000,000 objects/day x 3000 bytes/object x 2 days = 30,000,000,000 bytes, or 30 GB
Archive failover log (secondary log)
If the archive log becomes full, the failover archive log is used. Specifying a failover archive
log is useful only if you locate it on a different physical drive or file system than the archive
log.
Specifying a failover directory can prevent problems that occur if the archive log runs out of
space. If the drive or file system where the archive log directory is located becomes full and
either there is no archive failover log directory or it also is full, the log files that are ready to be
moved to the archive log instead remain in the active log directory. If the active log becomes
full, the server stops.
The directory for the archive failover log can be a remote directory if local disk space is
limited. Using a remote directory might be slower than a local disk or directory, but because
the directory is used only if the archive log becomes full, the performance is not as important
as for the other logs.
In addition to the space required for the upgraded server itself, some additional disk space is
needed for the upgrade process. For example, if you are upgrading the server on the same
system where it is currently located, you need enough space for two copies of the database
during the upgrade process.
The space requirements for the upgraded Tivoli Storage Manager V6.1 server depend on the
size of the Tivoli Storage Manager V5 database and other factors, as discussed previously in
this chapter.
The space requirements for the upgrade process depend on how you move the data from the
Tivoli Storage Manager V5 database to the new database. You can move the data to the new
database using the media method or the network method, with the following requirements:
The media method requires sequential media. The sequential media can be tape or
sequential disk device class (FILE device type).
The network method requires a network connection between systems, if you are
upgrading on a new system.
Table 16-3 on page 261 shows basic tips for estimating each item, for each of the main
scenarios. For details about sizing the Tivoli Storage Manager V6.1 database and recovery
log, see “Space requirements for the V6 server system” on page 30.
Table 16-4 on page 262 shows a sample filled-in work sheet for a 100-GB, Tivoli Storage
Manager V5 database that has 80% space utilization, with the assumption that the database
increases by 33 - 50% when upgraded.
The size of the Tivoli Storage Manager V6.1 database after the upgrade completes differs
from database to database. If Content Manager is being used, or the database contains
many objects with long file names, the space requirement will be larger. In one case during
the beta, the space requirement for a Content Manager system was twice the original size.
Table 16-3 contains tips for estimating space requirements. The guidance applies equally to
all four upgrade scenarios.

Item                                Media              Estimate
V5 database: final backup copy      Sequential media   Space that is used by the V5 database
                                                       (based on % utilization)
V6.1 database: estimated size       Disk               Disk space that is used by the V5
                                                       database plus 33 - 50% more
V6.1 active log directory           Disk               2 GB during the upgrade process; a
                                                       higher value might be needed for
                                                       normal use
V6.1 active log mirror (optional)   Disk               If used, same size as the active log
V6.1 archive log directory          Disk               Estimate based on client activity and
                                                       database backup frequency
Table 16-4 shows a sample filled-in work sheet for a 100 GB, V5 database that has 80%
space utilization, with the assumption that the database increases by 33 - 50% when
upgraded.
Item                                  Media              Values (per scenario)
V5 database: final backup copy        Sequential media   80 GB in all four scenarios
V5 database: extracted data           Sequential media   80 GB / 0 / 80 GB / 0, depending
                                                         on scenario
V6.1 database: estimated size         Disk               106 - 120 GB in all four scenarios
V6.1 database: first backup           Sequential media   106 - 120 GB in all four scenarios
Total disk space required during      Disk               307 - 320 GB (315 - 328 GB) in all
the upgrade process                                      four scenarios
Total sequential media required       Sequential media   267 - 280 GB or 187 - 200 GB,
during the upgrade process                               depending on scenario
Total disk space for the V6.1         Disk               195 - 208 GB (203 - 216 GB) in all
server after upgrade and cleanup                         four scenarios
The totals include the database, the active log, and the archive log.
Except for the database extraction and insertion processes, the upgrade process is similar to
performing disaster recovery for a server. The server’s critical files (such as the server option
file, and device configuration file) must be available, and devices used for storage pools must
be made available to the upgraded server.
structure, the validity of the data is checked against constraints that are enforced in the
new database. The upgrade tools automatically correct some errors in the database.
Other errors might need to be corrected manually.
If you are using the wizard, you are guided to perform the upgrade steps in the correct
order. If you are performing the upgrade manually using utilities from a command line,
follow the procedure carefully.
6. Verify the upgrade by performing basic operations and querying information about the
system to confirm that all information transferred correctly. Review the information that
compares the methods for performing the upgrade, and the descriptions of the upgrade
scenarios, to help you decide how to perform the upgrade process for your servers.
Instance user ID
The instance user ID is used as the basis for other names related to the server instance. The
instance user ID is also called the instance owner.
For example: tsminst1
The instance user ID is the user ID that must have ownership or read/write access
authority to all directories that you create for the database, the recovery log, and storage
pools that are FILE device type.
Database name
The database name is always TSMDB1, for every server instance. This name cannot be
changed.
Server name
The server name is an internal name for Tivoli Storage Manager, and is used for operations
that involve communication among multiple Tivoli Storage Manager servers. Examples
include server-to-server communication and library sharing. The server name is also used
when you add the server to the Administration Center so that it can be managed using that
interface. Use a unique name for each server. For easy identification in the Administration
Center (or from a QUERY SERVER command), use a name that reflects the location or
purpose of the server.
For example: tsminst1
If you have more than one server on the system and you use the instance configuration
wizard, you can use the default name for only one of the servers. You must enter a unique
name for each server. For example:
Liam_SERVER1
Leon_SERVER2
16.7 Performance
For best performance, use multiple LUNs that map to multiple independent disks, or that map
to RAID arrays with a large stripe size (for example, 128 KB). Use a different file system on
each LUN. Table 16-6 shows an example of LUN usage.
If the disk storage is supplied by a virtualization device (high-end storage controller, or a SAN
virtualization device), ensure that none of the virtual LUNs are on the same physical disk
drive. Ensure that the directories in use are on different physical disk drives within the
virtualization device.
16.8 Upgrading an existing system versus a new system
Upgrading to the Tivoli Storage Manager V6.1 server on an existing system requires that the
system be unavailable for production use during installation and while the data is moved into
the new database. Moving the server to a new system when upgrading to V6.1 gives you
more flexibility in how to perform the upgrade, but with some additional costs.
Table 16-7 lists items to consider when deciding how to perform the upgrade for a server.
Software
– Existing system: Software on the system must meet requirements for V6.1. The V6.1
server cannot coexist with other versions on the same system.
– New system: Software on the new system must meet requirements for V6.1. Software on
the original V5 system must meet requirements for the upgrade utilities (upgrade utilities
requirements are the same as for a V5.5 server).

V5 server availability
– Existing system: All V5 server instances on the system are unavailable after the V6.1
server program is installed. Data managed by a server instance cannot be accessed until
the upgrade process is complete for that server instance. To revert to using the V5
server, you must reinstall the same level of the V5 server program as before, and restore
the V5 database from a backup that was made before the upgrade process.
– New system: You can stage the upgrade of multiple servers, because the V5 server
program can be left on the original system. A V5.5 server on the original system can be
restarted after the database extract completes. A V5.3 or V5.4 server on the original
system can be restarted, but its database must be restored first (using the database
backup that was made before the upgrade process).

Database movement method
– Existing system: The database can be moved with a local-host network connection, or
can be moved by using disk or external media.
– New system: You must have either a network connection between the existing and the
new systems, or a device and media available to store the extracted database.

Storage devices and storage pools
– Existing system: Existing attached devices can be used. You must change ownership or
permissions for all disk space that is used for storage pools (device types of FILE or
DISK). The user ID that you will create to be the owner of the upgraded server instance
must be given ownership or read/write permission to the disk space for storage pools.
– New system: The new system must have access to all storage that is used by the
original system. Definitions for devices such as FILE device types might need to be
changed after the upgrade. You must change ownership or permissions for all disk space
that is used for storage pools (device types of FILE or DISK). The user ID that you will
create to be the owner of the upgraded server instance must be given ownership or
read/write permission to the disk space for storage pools.

Client and storage agent connections
– Existing system: No changes are necessary.
– New system: The network address on clients and storage agents must be updated after
the upgrade, or network changes made so that the new system has the same address as
the original system.
To move the database, you must install the upgrade utilities package on the system where
the original server database is located. The utilities package is available from the FTP
downloads site for the Tivoli Storage Manager product. Installing the upgrade utilities
package is a separate task from installing the Tivoli Storage Manager V6.1 server.
You can move the database in one of two ways: the media method, or the network method.
Media method
You can extract data from the original database to media, and later load the data into the new
database. The new database can be located either on the same system or a different system.
Upgrading to an existing system (in place) using external media
This is a good method to choose if you are not upgrading to a new system, however you must
review the new hardware requirements. In this particular option, you might have both the
Tivoli Storage Manager V5 and Tivoli Storage Manager V6.1 server to use the same disk
storage space. This is depicted in Figure 16-1.
Figure 16-3 Export and import used for a staged Tivoli Storage Manager V5 to V6 node migration
Network method
The network method reduces the amount of storage that is required, because no disk or tape is needed to hold the data unloaded from the Tivoli Storage Manager V5 database. With either method, the original server cannot be running in production mode while the data is being extracted.
Upgrading to an existing system (in place) using the network
You can simultaneously extract data from the original database and load the data into the
new database. In this scenario, the new database can be located on the same system and
connected using the network. See Figure 16-4.
Figure 16-6 Export and importing using server to server communication for a staged migration
DSMUPGRD standalone utility
The upgrade utilities prepare and extract data from a Version 5.3, 5.4, or 5.5 server database for insertion into an empty Tivoli Storage Manager V6.1 server database. The DSMUPGRD utilities run against the original database: they upgrade the server database version to V5.5, and perform some cleanup to prepare for the extraction process.
The DSMUPGRD utility is built from the same code as the V5.5.2 server. The difference is that its function is limited to upgrade-related tasks; as such, it cannot be used to run a normal server. Because it shares the server code, the decision to function as an upgrade utility or as a server is based on the executable name.
The following major upgrade tasks are discussed in further detail in this chapter:
PREPAREDB
EXTRACTDB
QUERYDB
Emergency LOG and DB extension
UPDATE (for Windows Registry maintenance)
PREPAREDB
The PREPAREDB command prepares a V5.x database for upgrade to V6.1, and is the required second step in the upgrade process. The first step is to back up the database, in case you need to roll the database back after the utility upgrades it (for example, from V5.4 to V5.5.2). PREPAREDB upgrades the database version to V5.5, as a dsmserv upgradedb operation does.
The utility checks for some known database problems, such as the file space conversion status (USS file spaces). The tool stops if it detects any USS file spaces that are not converted. Currently it does not run any database repair utilities, but that might change.
When the database problem check is finished, PREPAREDB backs up device configuration information to the configured devconfig files and, on Windows, backs up the server instance’s registry entries.
Note: Tivoli Storage Manager V6.1 does not support upgrading a database that contains NAS backups with tables of contents (TOCs) or backup sets. This restriction is planned to be removed in V6.1.2.
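The first two steps can be sketched as a dry run. The device class name here is an invented example; the backup db command is issued from an administrative client session, and dsmupgrd preparedb is run on the V5 server system.

```shell
# Dry-run sketch: print the commands instead of executing them, because a
# real TSM server is not assumed to be present.
run() { echo "WOULD RUN: $*"; }
run dsmadmc "backup db devclass=DBBACK type=full"   # step 1: back up the V5 database
run dsmupgrd preparedb                              # step 2: prepare it for upgrade
```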
EXTRACTDB overview
The EXTRACTDB command extracts data from the V5.5 database, using sequential block reads. It is the V5 portion of the upgrade utility: it extracts the V5 database objects for later use by the INSERTDB tool.
You can use the utility either to simultaneously extract and insert the data into a new
database over a network, or to extract the data to media for later insertion into a new
database. The data extraction operation can be run with multiple processes.
EXTRACTDB (media)
The EXTRACTDB command extracts data from the V5.5 database and writes extracted data
to sequential media.
Optional PARMs:
VOLumenames=volume list
SCRatch=Yes|No
The utility saves the volume list and the devclass name in the manifest file, and uses the devclass definition from the database, not from the devconfig file.
EXTRACTDB (network)
This form sends the extracted data over a network session, which uses server-to-server communications and treats the target server as a V6.1 version of itself. The utility is specific to the V6.1 upgrade: it skips tables that are not pertinent to the V6.1 upgrade (the V5 database contains some redundant tables), and it manipulates the data to ease inserts into the V6.1 database, thus positioning the data for the insertion process. The command requires minimal authorization, with hard-coded source and target server names:
$UPGRADESOURCE$, $UPGRADETARGET$
Required PARMs
HLAddress=ipaddress (can be “localhost”)
LLAddress=portnumber
EXTRACTDB (general)
General PARMs:
EVentbasedused=Yes|Never
This parameter indicates whether event-based archive retention was ever used. YES results in the extraction of extra expiration information from the database. Do not specify NEVER if there is any doubt. The parameter has no effect on new V5.3.6, V5.4.2, or V5.5.0 servers, or if “REPAIR EXPIRATION * TYPE=ARCHIVE EVENTBASEUSED=FIX FIX=YES” was run.
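The two EXTRACTDB forms can be illustrated by assembling command lines from the parameters above. The device class, manifest path, address, and port are invented examples, not values from the book.

```shell
# Build illustrative command strings for the media and network forms of
# EXTRACTDB (parameter names from the text; values are hypothetical).
MEDIA_CMD="dsmupgrd extractdb devclass=FILECLASS manifest=/tsm/manifest.txt"
NET_CMD="dsmupgrd extractdb hladdress=localhost lladdress=1500 eventbasedused=yes"
echo "$MEDIA_CMD"
echo "$NET_CMD"
```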
Undocumented PARMs:
MAXStream=n (currently disabled, default=1)
MAXPRocess=n (n from 1 to 20, default=4)
EXTEND DB | LOG
This is an emergency database and log extension, which behaves the same as DSMSERV EXTEND DB/LOG in the V5.5 server code. It is used when the PREPAREDB database upgrade runs out of database or log space, and it extends the database or log even if the database version is lower than V5.5.
UPDATE
This utility re-creates “backup” copies of registry entries. It is required if you are upgrading on the same system, the V6 instance directories differ from the V5 instance directories, and the V6 LOADFORMAT was run before PREPAREDB. This utility is run from the V5.x instance directory.
Syntax
DSMUPGRD [-k key_name] UPDATE
16.8.3 Tivoli Storage Manager V6.1 upgrade utilities
In this section we discuss the Tivoli Storage Manager V6.1 portion of the upgrade, which uses standalone commands within the Tivoli Storage Manager V6.1 DSMSERV program:
DSMSERV LOADFORMAT
DSMSERV INSERTDB
LOADFORMAT
This utility formats a new Tivoli Storage Manager V6.1 database for the upgrade’s use, with syntax and usage identical to DSMSERV FORMAT. The utility creates the database and logs and defines tables, but does not insert any default values into the V6.1 database.
INSERTDB
This utility reads data from media or from a network session and inserts this data into a V6.1 database; it has explicit knowledge of the Tivoli Storage Manager V5.5 database schema. The utility maps data from the V5.5 schema to the V6.1 schema, validating the data before inserting it into the database. In some situations, INSERTDB corrects data before inserting it, and it logs invalid rows that cannot be corrected.
INSERTDB (media)
This utility reads extracted data from sequential media. It reads the volume list and the devclass name from the manifest, and uses the devclass definition from the devconfig file. The utility requires the DEVCONFIG option to be specified in dsmserv.opt, and requires a copy of the devconfig file from the source server.
Required PARMs
MANifest=filename
Optional PARMs
DEVclass=device class name
INSERTDB (network)
This utility reads extracted data from the network session, which is sequential in nature. It also initializes the server and then waits for a connection.
Required PARMs
None
Optional PARMs
SESSWait=n (#minutes to wait, default=60)
ANR1336I INSERTDB: Ready for connections from the source server
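The two INSERTDB forms can be illustrated in the same way. The manifest path and wait time are invented examples; the parameter names come from the text above.

```shell
# Build illustrative command strings for the media and network forms of
# DSMSERV INSERTDB (hypothetical values).
MEDIA_INSERT="dsmserv insertdb manifest=/tsm/manifest.txt"   # media form
NET_INSERT="dsmserv insertdb sesswait=120"                   # network form: wait up to 120 minutes
echo "$MEDIA_INSERT"
echo "$NET_INSERT"
```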
INSERTDB (general)
Other PARMs:
PREVIEW=Yes|No
With PREVIEW=Yes, the utility validates the database schema and performs all INSERTDB actions except the insert into the database. A preview run is almost as fast as EXTRACTDB; it checks the data for errors and provides insight into the amount of data to be transferred. This new parameter is targeted for V6.1.2.
In the database update phase, DSMSERV INSERTDB merges information from two
source tables into a single target table. The merge is performed as a single, long running
DB2 UPDATE operation. The UPDATE operation does not provide status until it
completes, which is why the ANR1525I message repeatedly shows 0 entries updated for
such an extended period of time.
INSERTDB merges multiple sets of tables during the database update phase. After each set of tables is merged, the ANR1525I message changes to reflect the progress up to that point. However, each set of tables can take a considerable amount of time to merge, during which the status remains the same. This is not a cause for alarm; rather, it is an indication that INSERTDB is still alive and continuing to function.
When INSERTDB enters the database update phase, most of the remaining work will be
done by DB2. Unfortunately, only indirect methods are available to tell if it is making
progress. One such method is to use a system monitor such as topas on AIX to confirm
that the DB2 db2sysc process is operating. Consuming CPU cycles and performing I/O to
the database volumes are both good indications that the update phase is progressing.
Select the scenario that you are interested in from Table 16-8. The scenarios are presented in
overview form in this section, to summarize the steps that are performed in each case.
In the following sections, we provide the detailed procedures for each scenario.
Scenario 1, upgrading the server (new system, media method): new system; media method
Scenario 2, upgrading the server (new system, network method): new system; network method
Scenario 3, upgrading the server (same system, media method): same system as the original server; media method
16.9.1 Scenario 1: New system, media method
In this scenario, some upgrade tasks are performed on the original system and some on the
new system. The database is extracted to media and later inserted into the V6.1 database.
You can use the wizard, or perform the upgrade manually using the utilities. The wizard offers a guided approach to the upgrade of a server. We strongly recommend using the wizard; by using it, you can avoid some configuration steps that are complex when done manually.
16.9.4 Upgrading using the wizard
Here we provide an overview of the process to upgrade to Tivoli Storage Manager V6.1 on a
new system using the network method wizard (see Figure 16-8).
Figure 16-8 Upgrade to V6.1 on a new system using the network wizard method
You can use the wizard, or perform the upgrade by manually using the utilities. The wizard
offers a guided approach to the upgrade of a server. By using the wizard, you can avoid some
configuration steps that are complex when done manually.
Figure 16-9 Upgrade to 6.1 on a same system using the media wizard method
The following steps are a summary of the procedure for this scenario:
1. Perform all preparation tasks, which includes performing a database backup.
2. Install the upgrade utilities package (DSMUPGRD) on the system. The utilities package
must be installed whether you are using the upgrade wizard or performing the upgrade
with utilities.
3. Prepare the Tivoli Storage Manager V5 database using the DSMUPGRD PREPAREDB
utility.
4. Uninstall the Tivoli Storage Manager V5 server code.
5. Install the Tivoli Storage Manager V6.1 server code on the system.
6. Create the directories for the Tivoli Storage Manager V6.1 database and logs, and the
user ID that will own the server instance.
7. Start the upgrade wizard to configure the new server and upgrade the Tivoli Storage
Manager V5 database. With the wizard, you complete the following tasks:
a. Extract the V5 database to external media.
b. Create and format an empty database to receive the data.
c. Insert the data from the media to which it was extracted.
d. Configure the system for database backup.
8. Complete the post-installation tasks, including backing up the database and verifying the
database contents.
You can use the wizard, or perform the upgrade by manually using the utilities. The wizard
offers a guided approach to the upgrade of a server. By using the wizard, you can avoid some
configuration steps that are complex when done manually.
The following steps are a summary of the procedure for this scenario:
1. Perform all preparation tasks, which includes performing a database backup.
2. Install the upgrade utilities package (DSMUPGRD) on the system. The utilities package
must be installed whether you are using the upgrade wizard or performing the upgrade
with utilities.
3. Prepare the Tivoli Storage Manager V5 database using the DSMUPGRD PREPAREDB
utility.
4. Uninstall the Tivoli Storage Manager V5 server code.
5. Install the Tivoli Storage Manager V6.1 server code on the system.
6. Create the directories for the Tivoli Storage Manager V6.1 database and logs, and the
user ID that will own the server instance.
7. Start the upgrade wizard to configure the new server and upgrade the Tivoli Storage
Manager V5 database. With the wizard, you complete the following tasks:
a. Create and format an empty database to receive the data.
b. Move the data from the Tivoli Storage Manager V5 database to the Tivoli Storage
Manager V6.1 database.
c. Configure the system for database backup.
8. Complete the post-installation tasks, including backing up the database and verifying the
database contents.
Attention: The hybrid upgrade-migration method has not been tested by IBM. The method
involves the management of export data and timing-specific considerations that, if not
understood and carefully planned, might result in the loss of data. Specifically, data might
not be populated or transferred to the V6 target server.
Restrictions:
IBM System Storage Archive Manager (also known as Tivoli Storage Manager for Data Retention) users must not use this method. Tivoli Storage Manager servers with retention protection enabled do not allow import operations. Therefore, the method described in this publication cannot be used with System Storage Archive Manager or Tivoli Storage Manager for Data Retention servers.
Tivoli Storage Manager V5.x users with data residing on CENTERA devices must not
use this method. Importing data from a CENTERA device class is not supported.
However, files being imported can be stored on a CENTERA storage device.
The Upgrade Guide methods (called standard methods in this document) work well. The methods are safe; for example, there is no inherent risk of data loss or of database objects being orphaned. However, the time to migrate a database from V5.x to V6 might require a
production Tivoli Storage Manager server to be down for many hours or days. The length of
downtime depends on a number of factors, including:
The size of the database
The performance of the disk system containing the old and new database
You might be able to shorten the server downtime of the standard methods, using this hybrid
upgrade-migration method.
Before deciding to use the hybrid upgrade-migration method, estimate the time it takes to
migrate the data using one of the standard methods versus the time to migrate using a hybrid
upgrade-migration method. The time to extract and insert data from a database is dependent
on the performance of the storage system on which the old and new databases reside. It is
possible that the standard method applicable to your situation might complete in less time
than the hybrid upgrade-migration method.
You can estimate the times of the different methods in these ways:
Measure the time that it takes to extract the database (during a time when it is acceptable
for the server to be down), and then insert the data into a test V6 server.
Measure the time to export a sample of node data, and then import the data to the test V6
server.
Here is another method for estimating how long the upgrade process can take:
1. Restore a copy of the database to a test system.
2. Complete the upgrade process in the test environment using the restored copy of the
production server.
Your estimate of the time to migrate, using the Hybrid Upgrade method, is a function of the
exact steps of your plan.
For purposes of this testing, the storage devices (hierarchy) are not needed. However,
differences in hardware (processor, I/O performance to disk, and so on) might affect the time
measured for the upgrade process in the test environment. Use the same or equivalent
hardware to achieve results that most accurately represent what can be achieved for your
production Tivoli Storage Manager server upgrade.
Restrictions
To employ the hybrid upgrade-migration method, your operational situation for particular
Tivoli Storage Manager server instances must accommodate the following restrictions.
Important: If you are unable to accommodate these restrictions, do not use this method
because it can cause problems, including possible loss of data.
This method can be used only with the Upgrade Guide Scenarios 3 and 4, that is, upgrading the V5.x server to V6 on a new system using either the media or the network method.
Operational restrictions
The following restrictions apply to, or change, the operation of your Tivoli Storage Manager server for the duration of the migration when this method is used. We refer to this set of restrictions in the discussions that follow:
You should disable data migration between storage pools to keep the object pointers in
the V6 server database synchronized with the actual location of the objects. You can
disable migration for all storage pools by using the undocumented server option
NOMIGRRECL. Enable the NOMIGRRECL option by adding it to the server options file.
The advantage of using NOMIGRRECL is that it turns off all migration and reclamation at once, instead of requiring commands to disable migration and reclamation for the various storage pools. The disadvantage is that, because the option is undocumented, no additional information about it is available.
If you want to use a documented method instead, disable migration by setting the
migration threshold to 100% on all primary storage pools with the following command:
update stgpool <stgpools-name> HIGHMIG=100
Disable reclamation for all storage pools using tape device classes and FILE type device
class to prevent orphaned objects in the V6 server database. As with migration, you can
disable reclamation by using the NOMIGRRECL server option.
If you want to use a documented method instead, reclamation can be disabled with the
command:
update stgpool <stgpools-name> REClaim=100
Disable database expiration processing from the point in time that data is extracted from
the V5.x server database until the V6 server is put into production. The reason for this
restriction is that the V6 database, built from the insert process, still has references to
client objects in storage pools of device class DISK. Expiration processing on the V5.x
server after the extraction would allow the V5.x server to reuse the space on DISK storage
pools. The V6 server would then have orphan objects in the database. If you use the
EXPINTERVAL server option to automatically expire data, set the option to
EXPINTERVAL 0. Alternatively, if you use scheduled administrative commands to expire
data, disable or delete that schedule. If you use an external automation or scheduling tool
to expire data, identify and stop that tool.
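The migration, reclamation, and expiration restrictions above can be collected into a short set of changes. The storage pool names below are placeholders, and NOMIGRRECL is the undocumented option mentioned earlier; this is an illustrative sketch, not a prescribed configuration:

```
* dsmserv.opt additions on the V5.x server (and later on the V6 server)
NOMIGRRECL
EXPINTERVAL 0

* or, the documented per-pool commands, issued from an administrative client
update stgpool DISKPOOL highmig=100
update stgpool TAPEPOOL reclaim=100
```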
Do not move data between storage pools with the move data or move nodedata
commands, either manually or automatically as part of scripts or administrative schedules.
This restriction is necessary to prevent orphaned objects in the V6 server database.
Either disable backup storage pool operations, or audit the volumes used for storage pool
backups on the V6 server after it goes into production. Be aware that the storage-pool
backup volumes created after the extractdb operation will not be usable after the V6
server is put into production.
– Suspend the use of active-data pools, which were created with the COPY ACTIVEDATA command. Changes to active-data pools during the interim period can cause the V6 server database to be out of synchronization with the actual storage pools.
– Do not make changes to policies or existing administrator ID definitions, or register any
new administrator IDs on the V5.x server.
Variations 2 and 3 might shorten the length of time that your Tivoli Storage Manager server is
out of production compared to variation 1, but the implementation of 2 or 3 is more complex.
To track or distinguish tape volumes used for the export process, consider defining a new
device class to be used with the EXPORT command.
Variation 1
Follow the upgrade steps documented in Tivoli Storage Manager V6 Upgrade Guide, SC23-9554 (“Scenario 3: Upgrading the server manually using utilities” or “Scenario 4: Upgrading the server: new system, network method”), up to and including extracting the database with the extractdb command. However, instead of stopping the Tivoli Storage Manager V5.x server after the database data is extracted, put that server back into operation.
If you used the NOMIGRRECL option to disable migration and reclamation, set that option in
the V6 server options file as well.
In parallel with the continued operation of the V5.x server, the remaining steps of upgrading
the server manually using utilities are performed. When the database backup is complete, the
V6 server is almost ready to be put into operation.
However, there might now be client data that was stored on the V5.x server while the insertdb process was completing on the V6 server. This new data must be migrated, which is accomplished by exporting the new client (that is, node) data. The data must be exported to media that can be used on the V6 server. Example 16-1 shows an export node command.
Note: If data shredding is in use with storage pools from which node data is to be
exported, you will need to use the additional allowshreddable=yes parameter.
The data exported from the V5.x server is imported into the V6 server using the import node
command in Example 16-2.
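Commands of roughly the following shape illustrate the export and import. The node name, device class, and volume names are hypothetical, and allowshreddable=yes is needed only when data shredding is in use, as noted above:

```
export node NODE1 filedata=all devclass=EXPTAPE scratch=yes allowshreddable=yes
import node NODE1 filedata=all devclass=EXPTAPE volumenames=EXP001,EXP002
```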
Variation 2
The basic improvement with Variations 2 and 3 over Variation 1 is to decrease the time for
export and import during server cut over.
In Variation 1, the production Tivoli Storage Manager server is down during the extractdb
process. The server is also down during the server cut over, which occurs after the insertdb
is completed. The cut over period includes time to:
1. Export all the new node data from the V5.x server.
2. Switch the attachment of the storage pool devices.
3. Import all the new node data to the V6 server.
The length of time to export data from the V5.x server and import the data to the V6 server
might be significant and undesirably long. This variation describes ways to decrease
downtime resulting from export and import, but the overall downtime might not be better than
variation 1. You must determine which variation is best for your environment.
In this variation, you perform multiple exports and one import (that is, an import of multiple exports in one operation) at the time of cut over. These multiple exports are incremental node data exports. This way, during the cut-over period, a smaller amount of node data must be exported: only the node data ingested since the last incremental export:
1. Generate the incremental exports at regular intervals, for example, 4 or 8 hours, using the
EXPORT command fromdate and fromtime, and todate and totime parameters. The
parameters must be carefully specified so that the export increments are contiguous and
not overlapping.
2. When you are ready to put the V6 server into production, export the final node data.
3. Shut down the V5.x server.
4. Connect the storage devices (containing all the storage pools) to the V6 server, and start
the V6 server.
5. Import all the incremental exports.
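Step 1 might look like the following hypothetical command, run every 4 hours. The date, time, and device class values are invented examples; each increment must start exactly where the previous one ended so that the increments are contiguous and not overlapping:

```
export node * filedata=all devclass=EXPTAPE scratch=yes
  fromdate=12/01/2009 fromtime=08:00:00 todate=12/01/2009 totime=11:59:59
```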
The amount of time to perform the import operation (done at the point of production cut-over
to the V6 server) might be decreased further by having to import only the last incremental
export. However, this means that the other incremental exports must be imported while the
V5.x server is still in operation.
The data imported earlier goes to these new volumes. After the V6 server is put into
production, the client data on these volumes can be moved to the original storage-pool
volumes. When the data move is complete, those volumes can be deleted.
If you choose to take this approach, disable normal server operations and availability on the
V6 server when it is started. Taking these steps minimizes operational difficulties and error
messages when you use the V6 server before it is put into production, when performing the
early imports. The disabling of normal server operations can be done immediately after
starting the V6 server. However, it is easier and decreases risk of unwanted activities if you
disable the server operations before the V6 server is started. This can be accomplished by
disabling the functions on the V5.x server before the database extraction is performed.
As indicated in “Operational restrictions” on page 284, some of these operations should
remain disabled on the V5.x server even after the extractdb operation is performed.
Table 16-9 gives a list of the server operations to consider disabling, and the commands or
options to accomplish that.
Storage pool reclamation (set for all storage pools) — storage pool parameter: reclaim=100 (not required if the NOMIGRRECL option is used) — Yes
16.10 Testing
Tivoli Storage Manager V6.1 has many new features, which have contributed to the need for
additional planning and testing. Some of the most significant feature changes include:
Deduplication
Disk structure for DB2, active logs and archive logs, storage pool volumes
Disaster recovery using the Disaster Recovery Manager feature
Reporting and monitoring
There are many additional changes to Tivoli Storage Manager V6.1, as described in
Chapter 4, “Commands, utilities, and option changes” on page 31.
In addition, there are many methods of upgrading and moving data during the upgrade.
Therefore, understanding all of your options and the cost of each is important (monetary and
downtime costs might vary based on each scenario).
To test with a copy of production data, or to test the upgrade process, you can use the
upgrade utilities to create a test server. Follow the normal upgrade procedure, with these
additional considerations:
To avoid affecting your original production server, you must install the V6.1 server on a
different system. Different versions of the server cannot be run on a system at the same
time. In addition:
– You will need to provide a scaled down copy of your V5.x server in the test scenario.
– Ensure that the storage devices for your production server are not available to the test
server. If the test server can detect the devices that your production server uses, it
might start operations such as issuing resets on tape drives or unloading tapes.
If you prefer not to add a second test server, then you must upgrade your current production server to at least V5.5.2. The DSMUPGRD utility can then complete the upgrade to your test V6.1 server, leaving your production server untouched.
Always back up your Tivoli Storage Manager database prior to any upgrade activity.
If you do not want to upgrade your production server, nor add a test V5 server, then
consider using an extractdb of the production database to the test server using either
media or the network. The advantage of extracting the database to media is that you can
repeatedly load the test database without stopping your production server each time.
For example, if your tape drives are connected in a storage area network (SAN), you
might need to change the zones in your SAN to prevent the test server from detecting the
devices.
For testing, you can use one of the following methods to use a backup copy of the
database. The methods are given in outline form. See the detailed procedures for
instructions for each step.
Tip: If upgrading using media, ensure that the device class is valid on the test system. For
example, if you will be using a FILE device class for the extraction step, ensure that the
path for the device class is valid on the test system. The path that is in the server database
for the device class must be correct. If necessary, start the server and update the path. If
you will be using a tape device class for the extraction step, ensure that the device names
for the library and drives are correct.
4. From this point, you can use the detailed procedures in one of the following sections to
complete your test:
– Tivoli Storage Manager Planning Guide, Chapter 4, “Scenario 1: Same system, media
method,” located at the following URL:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.u
pgrd.doc/t_srv_upgrd_s1_ssmm.html
– Tivoli Storage Manager Planning Guide, Chapter 5, “Scenario 2: Same system,
network method,” located at the following URL:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.u
pgrd.doc/t_srv_upgrd_s2_ssnm.html
Continue through the end of the procedures in Scenario 3. If you use the command-line
instructions, skip the steps for preparing the database and extracting the data to media.
At the time of writing this chapter, there are no plans to support the Operational Reporting tool for V6.1 servers; the intention is that the reporting and monitoring tool will be the replacement. Rebuilding your Tivoli Storage Manager support processes will take time, which is why we provide this unofficial workaround in this book.
This workaround allows Tivoli Storage Manager administrators to continue working with the Windows Management Console (Operational Reporting) interface for a period of time, until the transition to the new Reporting and Monitoring configuration can be implemented in production.
The workaround is provided “as-is,” and there are no plans to alter or enhance the existing Windows Operational Reporting function in V5.5 to support V6.1 servers:
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/patches/server/NT/
5.5.2.1/
The Management Console must be installed on a separate Windows machine or VM to monitor the V6.1 servers. After installing the Management Console V5.5.2.x, install the newly adjusted .xml files by copying in the replacement default_mon_eng.xml_6100.xml and default_rep_eng_6100.xml files, saving the original default_rep_eng.xml and default_mon_eng.xml as part of the process, as shown in Figure 16-11.
Figure 16-11 Files to alter in the console folder, for the Operational Reporting tool adjustment
These XML templates are referenced after you have defined the Tivoli Storage Manager V6.1 server computer to the ORF. Then create the Daily Report and Hourly Monitor, open the properties of each, and instead of the default XML templates, select the replacement ones (copy them to the console directory first).
When attempting to add and configure a new Tivoli Storage Manager V6.1 instance, using
Operational Reporting V5.5.2.1, you might receive some of the errors shown in Figure 16-12.
Figure 16-12 Operational Reporting errors when referencing a Tivoli Storage Manager V6.1 instance
Following is a portion of the process flow for one of the upgrade methods (upgrade to a new
system using the network) after Tivoli Storage Manager V6.1 installation is completed on an
AIX operating system:
1. First, set up the environment variables for the dsmupgrd utilities.
2. Next, run the dsmupgrd preparedb command on the source Tivoli Storage Manager 5.x
system, then check for errors.
3. Create the user ID, groups, instance directories, and database and log files for the instance.
4. Log in and reset the password for the instance user ID.
5. For all the directories that were created, ensure that the access permissions are set
correctly.
6. Change the access permissions for the storage disk pools so that the instance ID can
write to them.
7. Create the DB2 instance using the db2icrt command (as root).
8. Next, log off and then log in to the instance ID.
9. Format the new database using dsmserv loadformat, and check for error messages.
10. Start the insert process on the target server (dsmserv insertdb), and wait for
message ANR1336I, indicating that the source server can be started.
11. When ANR1336I is issued, start the source server extraction (dsmupgrd extractdb).
12. Monitor for completion, and then check for error messages.
13. If all completes correctly, configure the database backup for Tivoli Storage Manager V6.
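The steps above can be sketched as a dry-run script. The instance user name is a hypothetical example, the ellipses stand for the options your environment requires, and each line is only echoed, not executed:

```shell
#!/bin/sh
# Dry-run sketch of the manual network-method upgrade sequence.
# The instance user (tsm1) is hypothetical; each echoed line is a
# command you would run at the indicated step, not executed here.
INST_USER=tsm1

upgrade_plan() {
    echo "dsmupgrd preparedb                          # step 2, on the V5 source"
    echo "db2icrt -a SERVER -u $INST_USER $INST_USER  # step 7, as root"
    echo "dsmserv loadformat ...                      # step 9, format new DB"
    echo "dsmserv insertdb ...                        # step 10, target server"
    echo "dsmupgrd extractdb ...                      # step 11, after ANR1336I"
}
upgrade_plan
```

In a real upgrade, each of these commands is run interactively at its step so that errors can be checked before continuing.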
Instead of performing the foregoing steps manually, you can simply use the database
upgrade wizard. The wizards are installed with the product and perform all of these steps
(and more). At the completion of the wizard, the Tivoli Storage Manager server is started and
ready for use. The wizard configures the database backup but does not actually perform the
backup. You must still specify which device class is to be used for the backups of the
database and set up the schedules of the database backups.
Note: Only two steps are not completed for DB backup by the wizards:
a device class must be created (if needed) for use with DB backups, and that
device class must then be specified on the SET DBRECOVERY command.
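As a sketch, those two remaining steps could look like the following administrative commands; the device class name DBBACKS and the directory are hypothetical, and the script only echoes the commands:

```shell
#!/bin/sh
# Sketch of the two DB-backup steps the wizard does not perform.
# DBBACKS and /tsm1/dbbackup are hypothetical names for this example.
db_backup_setup_cmds() {
    echo "define devclass DBBACKS devtype=file directory=/tsm1/dbbackup"
    echo "set dbrecovery DBBACKS"
}
# In a real session, these lines would be issued through an
# administrative client, for example:
#   db_backup_setup_cmds | dsmadmc -id=admin -password=xxx -itemcommit
db_backup_setup_cmds
```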
16.13 An upgrade test
We took a database backup of a V5.5 production database and restored that into a VMware
server onto a new file system to test the upgrade to V6.1. When the restore was finished, we
completed the following preparation steps.
Note: These steps assume that the upgrade utilities package is already located on the
system where Tivoli Storage Manager V5 server is installed.
This provides a slight performance boost during the extract portion of the DB upgrade
process. If you have SSAM, use EVENTBASEDUSED=YES for the DB upgrade. If you are
not sure whether event-based retention has ever been used, take the default, which is
EVENTBASEDUSED=YES.
Note: To improve the upgrade time, we considered Tivoli Storage Manager database
reorganization as an option. However, reorganization involves an unload/load of its own and
requires downtime, so in the end you will not save time by doing this.
Applications such as CDP, Content Manager, and Space Manager assume that Tivoli
Storage Manager server is always available.
Customer databases might need to back up archive logs hourly.
Preparing space for the upgrade process could include these activities:
– Determine the amount and type of space that is required for the upgrade process
before beginning the process.
– Verify that the system has the amount of space that was estimated in the planning step.
Use the planning work sheet that you filled in with your information. Refer to “Space
requirements” on page 257.
This command fixes a problem that might exist in older Tivoli Storage Manager databases. If
the problem does not exist in your database, the command completes quickly. If the problem
exists in your database, the command might take some time to run.
Important: Do not skip this step. If your database has the problem and you do not run this
command now, the DSMUPGRD PREPAREDB utility fails when you run it. You must then
restart the V5 server and run the CONVERT USSFILESPACE command before continuing
with the upgrade process.
In case you need to revert to the earlier version after the upgrade to V6.1, review the
steps for reverting to the earlier version of the server in the section “Reverting from V6.1
to the previous V5 server version” in IBM Tivoli Storage Manager: Server Upgrade Guide,
SC23-9554. The results of the reversion will be better if you understand the steps and
prepare for the possibility now.
Make the following adjustments to settings on your server and clients. These adjustments
must be done to make it possible for you to revert to the original server after the upgrade,
if problems occur:
– For each sequential-access storage pool, set the REUSEDELAY parameter to the
number of days during which you want to be able to revert to the original server, if that
becomes necessary. For example, if you want to be able to revert to the original server
for up to 30 days after upgrading to V6.1, set the REUSEDELAY parameter to 31 days.
– For each copy storage pool, set the RECLAIM parameter to 100 (meaning 100%).
– If you typically use a DELETE VOLHISTORY command to delete database backups,
ensure that the command does not delete database backups for at least the same
number of days that you set for the REUSEDELAY period for sequential-access
storage pools.
– For important clients that use the server, check that the value for the schedlogretention
client option is set to retain the client schedule log for a long enough time. Update the
option for clients if needed.
The entries in the client schedule log might be useful if the server must revert to the
original version. If the retention period for the schedule log is too short, the schedule
log information might be deleted too soon.
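The storage pool adjustments above can be sketched as administrative commands. The pool names TAPEPOOL and COPYPOOL and the 31-day window are examples only, and the script is a dry run that echoes one command of each kind:

```shell
#!/bin/sh
# Dry-run sketch of the revert-preparation settings described above.
# TAPEPOOL (sequential-access) and COPYPOOL (copy) are hypothetical
# pool names; repeat each command for every pool of that type.
REUSE_DAYS=31

revert_prep_cmds() {
    echo "update stgpool TAPEPOOL reusedelay=$REUSE_DAYS"  # each sequential pool
    echo "update stgpool COPYPOOL reclaim=100"             # each copy pool
}
revert_prep_cmds
```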
In preparation for the upgrade, prevent activity on the server by disabling new sessions.
Cancel any existing sessions. The commands in the following procedure are Tivoli
Storage Manager administrative commands.
To prevent all clients, storage agents, and other servers from starting new sessions with
the server, use the commands:
disable sessions client
disable sessions server
Prevent administrative activity from any user ID other than the administrator ID that is
being used to perform the upgrade preparation. Lock out other administrator IDs if
necessary:
lock admin administrator_name
Check whether any sessions exist, and notify the users that the server is going to be
stopped. To check for existing sessions, use the command:
query session
Cancel sessions that are still running. Use the command:
cancel session
Back up storage pools and the server database:
Immediately before upgrading the server, back up primary storage pools to copy storage
pools, and perform a full database backup:
– Back up primary storage pools to copy storage pools using the BACKUP STGPOOL
command. If you have been performing regular backups of the storage pools, this step
backs up only the data that was added to the primary storage pools since they were
last backed up.
– Back up the database using the following command. Use either a full or snapshot
backup type.
backup db type=type devclass=device_class_name
The device class that you specify must exist and have volumes that are available to it.
For example, to perform a snapshot backup of your database to the TAPECLASS
device class using scratch volumes, enter:
backup db type=dbsnapshot devclass=tapeclass
– After all sessions and processes are stopped, determine whether any tapes are
mounted. Dismount any tapes that are mounted. Use the commands:
query mount
dismount volume volume_name
– Stop the server using the command:
halt
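Taken together, the quiesce-and-backup sequence can be sketched as a dry-run script. The administrator name and pool names are hypothetical, and each line is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of quiescing and backing up the V5 server before the
# upgrade. The administrator (operator1) and the pool names are
# hypothetical; each echoed line is an administrative command.
quiesce_cmds() {
    echo "disable sessions client"
    echo "disable sessions server"
    echo "lock admin operator1"
    echo "query session"
    echo "backup stgpool BACKUPPOOL COPYPOOL"
    echo "backup db type=dbsnapshot devclass=tapeclass"
    echo "query mount"
    echo "halt"
}
quiesce_cmds
```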
Installing the upgrade utilities:
You must install the upgrade utilities on the system where the V5 server is located. The
installation package for the utilities must be downloaded from a Web site. You need an
upgrade version that is greater than or equal to the level of the Tivoli Storage Manager
server you are upgrading.
– First perform all preparation tasks on the original (source) system. Preparation
includes: for each sequential-access storage pool, set the reuse delay to the number of
days during which you want to be able to revert to the original server, if that becomes
necessary; for each copy storage pool, set the RECLAIM parameter to 100; back up and
make copies of the device configuration, volume history, and server options files; and
back up the storage pools and server database, creating a summary of the database
contents. You can use the DSMUPGRD utility for this.
– Log on to the source server with an administrator ID, install the upgrade utilities on the
system where the V5 server is located, and run the executable package. The default
location for the installation of the utilities is based on the location where the V5 server
was last installed. The package to install is available for download from the FTP
downloads site. The upgrade utilities are used to prepare and extract the database
from the original server.
Notes:
When you use the upgrade utilities and if you have multiple servers running on
the system, you must use the -k option to specify the name of the Windows
registry key from which to retrieve information about the server being upgraded.
Do not install the utilities in the same directory as the original server that is to be
upgraded; this is not allowed.
The utilities package must be installed whether you are using the upgrade wizard
or performing the upgrade manually with utilities.
For example, if the V5 server was installed using the default path, C:\Program
Files\Tivoli\TSM\server, create an upgrade folder and install the upgrade utilities in
the path C:\Program Files\Tivoli\TSM\upgrade. After the upgrade utilities are
installed, continue with installing the V6.1 server on the target server.
– Log on to the target system as an administrator and change to the directory where you
placed the executable file. In the next step, the files are extracted to the current
directory. Ensure that the file is in the directory where you want the extracted files to be
located.
Note: You need an upgrade version that is greater than or equal to the level of the Tivoli
Storage Manager server that you are upgrading.
Performance tips depend on the method that you choose for moving the data from the V5
database:
Media method:
– If you are extracting the data to tape, use a high-speed tape device.
– If you are extracting the data to disk, use a disk device or LUN that is different than the
device in use for the V5 database and recovery log.
– If both the V5 database and the destination for the extracted data are on a virtualization
device (high-end storage controller, or a SAN virtualization device), ensure that the two
virtual LUNs are not on the same physical disk drive. Ensure that the space in use for
the V5 database and the destination for the extracted data are on different physical
disk drives within the virtualization device.
– If it is not possible to provide different LUNs for the V5 database and the extraction
destination, the extraction process will perform more slowly. The slower speed of
extraction might be acceptable, depending on the size of the database and your
requirements for the upgrade.
Network method:
– Use a high-speed link if you are extracting the data to a different system.
– For upgrading a database larger than 2 to 3 GB, use at least a 1 Gb Ethernet network.
– If you are extracting the database on the same system, no external network
connections are required.
16.14.2 Performance tips for inserting data into the V6.1 database
The process for inserting the V5 extracted data into the V6.1 database is the longest-running
part of an upgrade process, and is the most sensitive to the configuration of the system. On a
system that meets the minimum requirements, the insertion process will run, but performance
might be slow. For better performance, set up the system as described in these tips.
Processors:
The insertion process is designed to exploit multiple processors or cores.
The insertion process will typically perform better on a system with a relatively small
number of fast processors than on a system with more but slower processors.
Disk storage:
The insertion process is designed to exploit high-bandwidth disk storage subsystems. The
speed of the process is highly dependent on the disk storage that is used.
For best performance, use multiple LUNs that map to multiple independent disks, or that
map to RAID arrays with a large stripe size (for example, 128 KB). Use a different file
system on each LUN. Table 16-10 shows an example of good usage of LUNs.
Note: If the disk storage is supplied by a virtualization device (high-end storage controller,
or a SAN virtualization device), ensure that none of the virtual LUNs are on the same
physical disk drive. Ensure that the directories in use are on different physical disk drives
within the virtualization device.
Part 7. Installation, customization, and upgrade of Tivoli Storage Manager V6.1 Server and Client
This part of the book covers installation and upgrade information for the Tivoli Storage
Manager V6.1 server and client.
To further understand the changes in the installation process, refer to Chapter 16, “Installation
and upgrade planning for Tivoli Storage Manager V6.1” on page 245.
Hardware
Table 17-1 describes the hardware requirements for your AIX system. For further details, refer
to the IBM Tivoli Storage Manager for AIX Installation Guide, Capacity Planning, GC23-9781 for
assistance with your disk planning.
Table 17-1 Hardware requirements for Tivoli Storage Manager v6.1 for AIX
Type of hardware Hardware requirements
Table 17-2 describes the minimum software requirements for Tivoli Storage Manager v6.1
running on an AIX system.
Table 17-2 Software requirements for Tivoli Storage Manager V6 for AIX
Type of software Minimum software requirements
Operating system AIX 5.3 running in a 64-bit kernel environment with the following
additional requirements for DB2:
AIX 5.3 Technology Level (TL) 6 and Service Pack (SP) 2 plus
the fix for APAR IZ03063
Minimum C++ runtime level with the xlC.rte 9.0.0.8 and
xlC.aix50.rte 9.0.0.8 filesets. These filesets are included in the
June 2008 cumulative fix package for IBM C++ Runtime
Environment Components for AIX.
AIX 6.1 running in a 64-bit kernel environment requires the following
filesets for DB2:
Minimum C++ runtime level with the xlC.rte 9.0.0.8 and
xlC.aix61.rte 9.0.0.8 filesets. These filesets are included in the
June 2008 cumulative fix package for IBM C++ Runtime
Environment Components for AIX.
Drivers If you have an IBM 3570, IBM 3590, or IBM Ultrium tape library or
drive, install the most current device driver before you install Tivoli
Storage Manager 6.1. You can find the device drivers at:
ftp://ftp.software.ibm.com/storage/devdrvr/
Important: Log in as the root user. If you do not log in as root, certain key Tivoli Storage
Manager functions will not work properly.
If you downloaded the executable file from Passport Advantage®, complete the following
steps:
1. First, change to the directory where you placed the executable file.
2. Then change the file permissions by entering the following command (see Example 17-1):
chmod a+x 6.1.0.0-TIV-TSMALL-platform.bin
where platform denotes the architecture on which Tivoli Storage Manager is to be installed.
Example 17-1 AIX command line example for the chmod command
chmod a+x 6.1.2.0-TIV-TSMALL-AIX.bin
5. Disk consumption at this point for the COI directory is shown in Example 17-3, presented
in 1024 KB blocks.
6. Disk consumption for the DE directory is shown in Example 17-4, presented in 1024 KB
blocks.
Example 17-4 Disk consumption of the DE directory, following the extraction of the install package
# du -ks DE
125324 DE
Example 17-5 demonstrates the error that will occur if the Installation Wizard is invoked on a
system that has no graphic output capabilities.
Launching installer...
Stack Trace:
java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which
requires it.
at
java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:196)
at java.awt.Window.<init>(Window.java:346)
at java.awt.Frame.<init>(Frame.java:452)
at java.awt.Frame.<init>(Frame.java:417)
at javax.swing.JFrame.<init>(JFrame.java:180)
at com.zerog.ia.installer.LifeCycleManager.g(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.h(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.a(DashoA10*..)
at com.zerog.ia.installer.Main.main(DashoA10*..)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
Note: If you have connected using telnet, and the system you are installing on has a
graphics console, you will not see this error, nor the graphical installation panels.
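One way to avoid the HeadlessException is to test for an X display before launching the installer and fall back to console mode when none is available. This is a sketch, not part of the product; the installer path is an example:

```shell
#!/bin/sh
# Sketch: choose the graphical wizard or the console installer based on
# whether the DISPLAY environment variable is set. Only the command
# string is printed here; a real script would exec it.
choose_install_mode() {
    if [ -n "$DISPLAY" ]; then
        echo "./install.bin"             # graphical wizard
    else
        echo "./install.bin -i console"  # text-mode fallback
    fi
}

unset DISPLAY
choose_install_mode   # prints: ./install.bin -i console
```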
Example 17-6 AIX Tivoli Storage Manager command line console for installation
# ./install.bin -i console
Launching installer...
===============================================================================
Choose Locale...
----------------
1- Deutsch
->2- English
3- Español
4- Français
5- Italiano
6- Português (Brasil)
3. Following this selection, the next panel that is presented is the installation Welcome panel
as shown in Example 17-8.
===============================================================================
Tivoli Storage Manager Install
------------------------------
Welcome
Tivoli Storage Manager 6.1
Licensed Materials - Property of IBM Corp. (c) IBM Corporation and other(s)
1993, 2008. All rights reserved. US Government Users Restricted Rights --
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
It is strongly recommended that you quit all programs before continuing with
this installation.
4. Next, after pressing <enter>, you are presented with the license agreement panel, which
also identifies options for traversing the text, as seen in Example 17-9.
===============================================================================
ISC password::*********
===============================================================================
*Verify password::*********
===============================================================================
Please Wait
..
completed: 1 ; total: 19
completed: 19 ; total: 19
Completed.
Completed.
Pre-Installation Summary
------------------------
Product Name:
Tivoli Storage Manager
Install Folder:
/opt/tivoli/tsm
Components:
TSM Server,DB2 9.5,TSM Client API,TSM License,eWAS,ISC,TSM Administration
Center
9. After the completion of the installation, the summary panel and exit prompt are presented,
as shown in Example 17-12.
Example 17-12 Tivoli Storage Manager v6.1 successfully installed summary panel
Installing...
-------------
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================================
Installation Complete
---------------------
TSM Server
DB2 9.5
TSM Client API
TSM License
eWAS
ISC
TSM Administration Center
Log in as root user or administrator and open the local new-instance wizard,
dsmicfgx, located in the server installation directory.
Log on to a Version 6.1 Tivoli Storage Manager Administration Center and
start the Create New Instance wizard.
Configure the new instance manually. See the Tivoli Storage Manager
Information Center, or the Installation Guide.
For more information about any of these tasks, see the Tivoli Storage Manager
Information Center at:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6
To export the graphic display of one UNIX machine to another machine, an X11
server must be used; Cygwin is a freely available solution.
The Cygwin software, available from http://www.cygwin.com, can be used to export the
Tivoli Storage Manager GUI (or any other graphic) from the remote machine. In
the following instructions, we give an example of how to export the display from an AIX
machine to a Windows XP machine:
1. First, install Cygwin.
Go to http://www.cygwin.com and retrieve setup.exe to install Cygwin. After you run
setup.exe, you have to select which packages to install. You need to install the X11
packages, and under the net subsection you might want to include the ssh components. In
Example 17-13 we have included a list of minimum packages, based on the most recent
level of Cygwin at the time this was written.
2. Next, we create a separate logical volume, 2.4 GB in size, for the target directory
/opt/tivoli/tsm. This will hold all of the base installation software.
3. Next, start the installation steps by issuing the command shown in Example 17-15 from
the installation source directory.
Launching installer..
Now, at this point the X11 graphics redirection begins to the Windows system, as shown in
Figure 17-2.
6. Next, we select the components that we want to install (server, licenses, and
Administration Center). There is no default, so you must make a selection; otherwise you
will receive an error message and be returned to the components page. Our selections are
shown in Figure 17-4.
7. Because we selected the Administration Center component, we are prompted for a user
name and password, as seen in Figure 17-5. We will use these later to log onto the
Integrated Solutions Console and Administration Center.
8. After selecting Next on the Administration Center panel, the pre-installation summary
panel appears, as shown in Figure 17-6.
9. After reviewing, we select the Install option to continue. Then, at the end of the
installation, a message is displayed on the summary page that Tivoli Storage Manager
successfully installed, and a summary is provided. If there were any errors during the
installation, the summary page lists the errors and directs you to an error log file. Fix the
errors before continuing. The installation log is stored in the following location:
/var/tivoli/tsm/log.txt
17.5.1 Preparing the AIX server for Tivoli Storage Manager instances
Tivoli Storage Manager will use default locations; however, we recommend preconfiguring
your disk structure first, possibly considering a naming convention that facilitates multiple
instances. In AIX, this might involve volume groups, logical volumes, and mount points that
reflect an instance naming convention.
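As a sketch of that preconfiguration, the per-instance mount points used later by dsmserv format can be created up front. The /tsm1 root and the directory names mirror the convention shown later in Example 17-17; BASE is parameterized so the sketch can be tried safely outside the root file system:

```shell
#!/bin/sh
# Sketch: create the per-instance directory tree for a tsm1 instance.
# The names follow the tsm1 naming convention from Example 17-17; in a
# real setup each would be a separate logical volume mount point.
BASE=${BASE:-./tsm1}

make_instance_dirs() {
    for d in dbdir001 dbdir002 dbdir003 dbdir004 \
             activelog activelogm archlog archlogf; do
        mkdir -p "$BASE/$d"
    done
}
make_instance_dirs
```

After the directories exist (and before dsmserv format is run), ownership would be changed to the instance user, as shown later with the chown command in Example 17-20.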
Table 17-3 Disk paths required and the test environment setup
Columns: AIX volume group | AIX logical volume | Naming convention | Default size (MB) |
Test system sizing (MB) | Separate disk volume if possible
The file systems created for this AIX system are shown in Example 17-17. This demonstrates
one of many ways to name and mount volumes for use with a tsm1 instance.
Example 17-17 AIX file systems, including all the custom logical volume and JFS2 created
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 655360 289784 56% 11942 25% /
/dev/hd2 4587520 333488 93% 44675 49% /usr
/dev/hd9var 262144 103120 61% 4767 29% /var
/dev/hd3 655360 115128 83% 4035 23% /tmp
/dev/hd1 917504 916456 1% 14 1% /home
/dev/hd11admin 262144 261416 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 1703936 1143584 33% 3153 3% /opt
/dev/download_lv 5242880 1144808 79% 17 1% /download
/dev/codelv 52428800 47126688 11% 2296 1% /code
/dev/tsmbinlv 4980736 460856 91% 26700 33% /opt/tivoli/tsm
/dev/tsmbinlv 4980736 394256 93% 26714 37% /opt/tivoli/tsm
/dev/dbdir001lv 4194304 3963168 6% 44 1% /tsm1/dbdir001
/dev/dbdir002lv 4194304 3963152 6% 44 1% /tsm1/dbdir002
/dev/dbdir003lv 4194304 3963144 6% 44 1% /tsm1/dbdir003
/dev/dbdir004lv 4194304 3963152 6% 44 1% /tsm1/dbdir004
/dev/actlog 8519680 1168120 87% 14 1% /tsm1/activelog
/dev/actlogm 8519680 8495760 1% 12 1% /tsm1/activelogm
/dev/archlog 16777216 9427808 44% 14 1% /tsm1/archlog
/dev/archlogf 8388608 8386600 1% 12 1% /tsm1/archlogf
Next, log in using the user ID and password; you will be prompted to change the
password for that user ID, as shown in Example 17-19.
Example 17-19 Logging in to AIX and changing the password for the new TSM1 instance
You must change your password now and login again!
Changing password for "tsm2"
tsm1's Old password:
tsm1's New password:
Change the ownership of the newly created file system mount points
Changing the ownership of the tsm1 mounts and directories can be accomplished by issuing
the commands shown in Example 17-20.
Example 17-20 AIX chown command and review of owner and group settings
# cd /tsm1
# pwd
/tsm1
# chown -R tsm1.tsmsrvrs *
# ls -l
total 0
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:33 activelog
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:41 activelogm
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:33 archlog
drwxr-xr-x 5 tsm1 tsmsrvrs 256 Jun 01 12:33 archlogf
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:30 dbdir001
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:30 dbdir002
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:30 dbdir003
drwxr-xr-x 4 tsm1 tsmsrvrs 256 Jun 01 12:30 dbdir004
Example 17-21 The command to run to configure a Tivoli Storage Manager V6.1 server instance
# /opt/tivoli/tsm/db2/instance/db2icrt -a SERVER -u tsm1 tsm1
DBI1070I Program db2icrt completed successfully.
Example 17-22 Space consumed by the creation of a Tivoli Storage Manager server instance
# pwd
/home/tsm1
# ls -l
total 168
-rwxr----- 1 tsm1 tsmsrvrs 415 May 26 19:30 .profile
-rw------- 1 root system 124 May 26 19:40 .sh_history
-rw------- 1 tsm1 tsmsrvrs 39 May 26 19:24 .vi_history
-rw-r--r-- 1 tsm1 tsmsrvrs 69607 May 26 19:23 dsmserv.opt
drwxrwsr-t 18 tsm1 tsmsrvrs 4096 May 26 19:30 sqllib
# du -ks sqllib
300588 sqllib
2. db2icrt also establishes the database instance within DB2, and the directory and file
hierarchy to manage it, as shown in Example 17-23.
Example 17-23 DB2 files to manage the new Tivoli Storage Manager instance
# ls -Rl tsm1
total 0
drwxrwxr-x 4 tsm1 tsmsrvrs 256 May 26 20:22 NODE0000
tsm1/NODE0000:
total 8
drwxr-x--- 4 tsm1 tsmsrvrs 4096 Jun 14 00:59 SQL00001
drwxrwxr-x 2 tsm1 tsmsrvrs 256 May 26 20:22 sqldbdir
tsm1/NODE0000/SQL00001:
total 13160
-rw-r----- 1 tsm1 tsmsrvrs 1770 Jun 13 18:56 DB2TSCHG.HIS
-rw------- 1 tsm1 tsmsrvrs 1280 Jun 11 21:15 SQLBP.1
-rw------- 1 tsm1 tsmsrvrs 1280 Jun 11 21:15 SQLBP.2
-rw------- 1 tsm1 tsmsrvrs 4096 May 26 20:22 SQLDBCON
-rw------- 1 tsm1 tsmsrvrs 16384 Jun 13 19:16 SQLDBCONF
-rw-r----- 1 tsm1 tsmsrvrs 9 Jun 13 18:58 SQLINSLK
-rw------- 1 tsm1 tsmsrvrs 24576 Jun 13 19:04 SQLOGCTL.LFH.1
-rw------- 1 tsm1 tsmsrvrs 24576 Jun 13 19:04 SQLOGCTL.LFH.2
drwxr-x--- 2 tsm1 tsmsrvrs 256 May 26 20:27 SQLOGDIR
-rw------- 1 tsm1 tsmsrvrs 8192 Jun 13 19:04 SQLOGMIR.LFH
tsm1/NODE0000/SQL00001/SQLOGDIR:
total 0
tsm1/NODE0000/SQL00001/db2event:
total 0
drwxr-x--- 2 tsm1 tsmsrvrs 256 Jun 12 01:42 db2detaildeadlock
tsm1/NODE0000/SQL00001/db2event/db2detaildeadlock:
total 4808
-rw-r--r-- 1 tsm1 tsmsrvrs 2096516 Jun 12 01:42 00000000.evt
-rw-r--r-- 1 tsm1 tsmsrvrs 352369 Jun 13 19:04 00000001.evt
-rw-r----- 1 tsm1 tsmsrvrs 39 Jun 12 01:42 db2event.ctl
tsm1/NODE0000/sqldbdir:
total 24
-rw-rw-r-- 1 tsm1 tsmsrvrs 1512 May 26 20:30 sqldbbak
-rw-rw-r-- 1 tsm1 tsmsrvrs 1512 May 26 20:30 sqldbdir
-rw-rw-r-- 1 tsm1 tsmsrvrs 540 May 26 20:23 sqldbins
3. The next step is to log out of AIX as the root user, and log in as the instance user ID
(tsm1). Upon completing this step, you will find that an environment for DB2 has been
established, as discussed in the previous steps.
4. In the next step we update the default directory for the database to reflect the instance
directory, by running the db2 update command as shown in Example 17-24.
Example 17-24 Setting the default directory for the database to be the same as the instance
directory.
$ pwd
/home/tsm1
$ db2 update dbm cfg using dftdbpath /tsm1
DB20000I The UPDATE DATABASE MANAGER CONFIGURATION command completed
successfully.
5. This db2 update command establishes the DB2 configuration for the tsmdb1 database,
which can be reviewed by running db2 => get snapshot for database on TSMDB1, as shown
in Example 17-25.
Database Snapshot
6. Next, format the files for the database and logs, using the Tivoli Storage Manager server
command dsmserv format as shown in Example 17-26.
Example 17-26 dsmserv format command syntax for setting up a server instance in V6.1
$ dsmserv format dbdir=/tsm1/dbdir activelogsize=8192
activelogdir=/tsm1/active_log archlogdir=/tsm1/archive_log
archfailoverlogdir=/tsm1/archive_failover_log
mirrorlogdir=/tsm1/active_mirror_log
7. The successful output of the dsmserv format command is shown in Example 17-27.
Example 17-27 Output of the dsmserv format command in Tivoli Storage Manager V6.1
ANR7800I DSMSERV generated at 16:29:44 on Mar 13 2009.
8. Following the successful formatting, the next step is to start the Tivoli Storage Manager
V6.1 server in the foreground. To perform this step, we use the new parameters provided
in the dsmserv command, as shown in Example 17-28. Refer to Table 4-5 on page 36 for
information about the DSMSERV command.
Example 17-28 Starting the Tivoli Storage Manager V6.1 server instance in the foreground
$ /opt/tivoli/tsm/server/bin/dsmserv -u tsm1 -i /home/tsm1
9. The foreground output of the Tivoli Storage Manager V6.1 startup appears similar to that
of previous releases, with the exception of Database Manager startup, as shown in
Example 17-29.
Example 17-29 Tivoli Storage Manager server instance startup output on AIX
ANR7800I DSMSERV generated at 16:29:44 on Mar 13 2009.
10. The startup time for a new server instance is approximately 8 minutes, as shown by using
the AIX time command and allowing for halt time, as shown in Example 17-30.
Example 17-30 Output of the AIX time command: V6.1 startup and immediate halt time
real 7m58.24s
user 0m4.02s
sys 0m2.27s
2. Then log out of, and back into, the instance, or source the profile with the command
. ~/.profile, as shown in Example 17-32, to re-read the profile.
Example 17-32 Re-reading the instance user profile
$ . ~/.profile
3. Next, create a file called tsmdbmgr.opt in the /home/tsm1 directory and add the
lines shown in Example 17-33.
servername TSMDBMGR_TSM1
commmethod tcpip
tcpserveraddr localhost
tcpport 1500
passwordaccess generate
passworddir /home/tsm1
errorlogname /home/tsm1/tsmdbmgr.log
nodename $$_TSMDBMGR_$$
6. Next, set the API password for the instance TSM1, using the password (current and new)
as TSMDBMGR, as root user, as shown in Example 17-36.
7. Finally, we can submit the database backup; issue the backup database command.
Example 17-37 documents the results, and shows that we successfully completed the
task.
To reference the product manual for this process, refer to the following site.
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.
srv.install.doc/t_srv_prep_dbmgr.html
Registering an Administrator
During this type of installation, there was no prompt for any Administrator ID or password.
1. After starting the server instance in the foreground, we must register an administrative
user and password, using reg admin admin admin1, and then grant authority using the
command grant auth admin classes=system as a system user, as shown in
Example 17-38.
2. Following the administrator registration, we establish the automatic startup of the ISC/AC
by running the script: /opt/tivoli/tsm/AC/products/tsm/bin/setTSMUnixLinks.sh,
which will update the /etc/inittab with the following details, shown in Example 17-39.
3. Then, for the time being, we manually start the ISC/AC server process (Example 17-40).
5. In the following step, we connect remotely to the ISC/AC through HTTPS, using the
connection address https://9.12.5.12:9043/ibm/console, as shown in Figure 17-7.
7. When you are logged in, you can view the Integrated Solutions Console. At this point
you can log out, because this step is verified as working correctly.
Logout is shown in Figure 17-9.
It is generally not necessary to change this setting on a system that is dedicated to a single
Tivoli Storage Manager server. If there are other applications that require significant amounts
of memory on a system, changing this setting to an appropriate amount reduces paging and
improves system performance. For systems with multiple Tivoli Storage Manager servers,
changing this setting for each server is recommended. For example, this could be set to 25%
for each of three servers on a system. Each server could also have a different value for this
setting, as appropriate for the workload on that server.
4. Next, type in the connection details for the new TSM1 instance, as shown in Figure 17-13.
Figure 17-13 Adding a server connection for server instance within the Integrated Solutions Console
Figure 17-14 Summary page after adding a server connection in the Integrated Solutions Console
6. Finally, for test and review of the server connection, click the TSM1 server link, as shown
in Figure 17-15.
Figure 17-15 V6.1 Manage Servers panel in the Integrated Solutions Console interface
8. Lastly, you can observe the initial database size, with the preformatted log size, which was
specified during the DSMSERV FORMAT stage of the installation, as shown in Figure 17-17.
Figure 17-17 Database and recoverylog size shown in Tivoli Storage Manager V6.1
Example 17-42 Entry into the /etc/inittab to startup Tivoli Storage Manager V6.1 instance
automatically
tsm1:2:once:/opt/tivoli/tsm/server/bin/rc.dsmserv -u tsm1 -i /home/tsm1 -q
>/dev/console 2>&1
Example 17-43 Entry into the /etc/inittab to startup V6.1 Admin Center service on AIX
IBMTSM:2:once:"/opt/tivoli/tsm"/AC/products/tsm/bin/rc.IBMTSM start
>/dev/console 2>&1
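A quick way to confirm that both startup entries made it into /etc/inittab is to grep for their identifiers (on AIX itself, lsitab tsm1 and lsitab IBMTSM query the real file). This portable sketch checks a sample file containing the two entries shown above.

```shell
# Verify that the two V6.1 startup entries exist in an inittab-style
# file. A sample file stands in for the real /etc/inittab.
INITTAB="$(mktemp)"
cat > "$INITTAB" <<'EOF'
tsm1:2:once:/opt/tivoli/tsm/server/bin/rc.dsmserv -u tsm1 -i /home/tsm1 -q >/dev/console 2>&1
IBMTSM:2:once:"/opt/tivoli/tsm"/AC/products/tsm/bin/rc.IBMTSM start >/dev/console 2>&1
EOF

for id in tsm1 IBMTSM; do
    if grep -q "^$id:2:once:" "$INITTAB"; then
        echo "$id: startup entry present"
    else
        echo "$id: startup entry MISSING"
    fi
done
```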
Launching installer...
4. After clicking OK, the introduction panel is displayed as shown in Figure 17-19.
5. Click Next, and the instance user ID and password are requested. This panel with our
response is shown in Figure 17-20.
6. Click Next, then continue by supplying the path of the instance files (Figure 17-21).
Note: If you have not updated the permissions for your instance directories
(chown -R tsm1.tsmsrvrs /tsm1), you will receive an error, after which you then click
OK, fix the permissions, and click Next to retry the operation.
Note: You must have your file system slightly larger than the 8 GB required for the
recovery log, or this step might return an error.
After downloading the latest software, uncompress and expand it into a temporary directory,
as shown in Example 17-45.
The resulting output of the installation is shown in Example 17-47. Much of the text has been
removed to reduce the volume of output captured.
Example 17-47 AIX smitty update_all command example for applying V6.1.2 update
COMMAND STATUS
Text removed....
+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
tivoli.tsm.server.msg.en_US 6.1.2.0 USR APPLY SUCCESS
tivoli.tsm.server.license.c 6.1.2.0 USR APPLY SUCCESS
tivoli.tsm.server 6.1.2.0 USR APPLY SUCCESS
tivoli.tsm.server 6.1.2.0 ROOT APPLY SUCCESS
tivoli.tsm.server.license.r 6.1.2.0 USR APPLY SUCCESS
Database Snapshot
Node number = 0
Memory Pool Type = Backup/Restore/Util Heap
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 213581824
Node number = 0
Memory Pool Type = Package Cache Heap
Current size (bytes) = 6881280
High water mark (bytes) = 8454144
Configured size (bytes) = 7864320
Node number = 0
Memory Pool Type = Catalog Cache Heap
Current size (bytes) = 1048576
High water mark (bytes) = 1048576
Configured size (bytes) = 4294967296
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = 4
Current size (bytes) = 101580800
High water mark (bytes) = 247398400
Configured size (bytes) = 4294967296
Node number = 0
Memory Pool Type = Buffer Pool Heap
Node number = 0
Memory Pool Type = Shared Sort Heap
Current size (bytes) = 65536
High water mark (bytes) = 1048576
Configured size (bytes) = 36831232
Node number = 0
Memory Pool Type = Lock Manager Heap
Current size (bytes) = 11534336
High water mark (bytes) = 13762560
Configured size (bytes) = 13828096
Node number = 0
Memory Pool Type = Database Heap
Current size (bytes) = 37093376
High water mark (bytes) = 37093376
Configured size (bytes) = 50266112
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 557
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Applications Shared Heap
Current size (bytes) = 1835008
High water mark (bytes) = 1835008
Configured size (bytes) = 81920000
17.7.2 Logs
This section lists logs that are useful for product status and problem determination:
The zip file containing all logs is located in:
/var/tivoli/tsm/logs.zip
The main log file is located in:
<install_location>/coi/plan/MachinePlan_localhost/logs/MachinePlan_localhost_[INSTALL_0414_22.35].log
The DB2 logs are located in:
<install_location>/coi/plan/MachinePlan_localhost/00002_DB2_9.5/DB2_9.5.log
<install_location>/coi/plan/tmp
In Tivoli Storage Manager V5.5, an extend log command would be required, provided that
you were not already at the 13 GB limit. In V6.1, however, there is a different recovery log
process, and understanding it helps to correct this current system-down condition.
There are four log functions with our new Tivoli Storage Manager and DB2 environment, so
which one might be the root cause in this situation? The space situation is shown in
Example 17-50.
Example 17-50 Vermont-tsm1 file system state after the log exhaustion error
$ df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/tsmbinlv 2490368 194964 93% 26719 37% /opt/tivoli/tsm
/dev/dbdir1lv 2097152 293824 86% 44 1% /tsm1/dbdir1
/dev/dbdir2lv 2097152 293812 86% 44 1% /tsm1/dbdir2
/dev/dbdir3lv 2097152 293816 86% 44 1% /tsm1/dbdir3
/dev/dbdir4lv 2097152 293812 86% 44 1% /tsm1/dbdir4
/dev/actlog 4259840 504060 87% 14 1% /tsm1/activelog
/dev/actlogm 4259840 504060 87% 12 1% /tsm1/activelogm
/dev/archlog 8388608 40180 100% 22 1% /tsm1/archlog
/dev/archlogf 4194304 495100 89% 24 1% /tsm1/archlogf
As seen, none of the file systems that hold logs have 512 MB of available space, thus all
would be considered full by DB2. So, to which one do we need to add file system space in
order to successfully restart Tivoli Storage Manager and then perform a full database backup?
The answer is the archlog (archive log) file system, because the recovery log size was set
during the installation process and is essentially fixed at this point. If we add space to the
archive log, which is the “overflow” log location, then when we start up Tivoli Storage
Manager and DB2, we have room to process additional transactions long enough to
complete the full database backup.
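The free-space check described above can be sketched with awk over the captured df -k output. The 512 MB threshold is 524288 KB; the column positions assume the AIX df -k layout shown in Example 17-50.

```shell
# Flag log file systems whose free space (column 3 of AIX `df -k`)
# is below the 512 MB (524288 KB) that DB2 needs. The sample data
# is the captured output from Example 17-50.
DF_SAMPLE='/dev/actlog 4259840 504060 87% 14 1% /tsm1/activelog
/dev/actlogm 4259840 504060 87% 12 1% /tsm1/activelogm
/dev/archlog 8388608 40180 100% 22 1% /tsm1/archlog
/dev/archlogf 4194304 495100 89% 24 1% /tsm1/archlogf'

echo "$DF_SAMPLE" | awk '$3 < 524288 { print $7 " has only " $3 " KB free" }'
```

In this sample, all four log file systems fall below the threshold, which matches the conclusion in the text that DB2 considers them all full.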
Example 17-51 Using the AIX chfs -a command to expand the JFS2 file system
# chfs -a size=19777216 /tsm1/archlog
Filesystem size changed to 19791872
Now, we have more than 512 MB of space available on the /tsm1/archlog file system, and we
restart the Tivoli Storage Manager V6.1 server in the foreground as before, then configure for
a database backup to file, which will clear the logs. It will take two full backups to clear out the
archive logs. Prior to the startup in the foreground, we add three statements into the
/home/tsm1/dsmserv.opt file to ensure reclamation, expire inventory, and client schedules
(see Example 17-52).
Example 17-52 dsmserv.opt additions to ensure no database and recovery log activity will run
EXPINTERVAL 0
NOMIGRRECL
DISABLESCHEDS YES
Now, we are ready to start the foreground Tivoli Storage Manager server V6.1, as shown in
Example 17-53.
Example 17-53 Foreground V6.1 startup after correcting the log exhaustion problem
TSM:VERMONT-TSM1>
define devc dbb_file devt=file dir=/code/dbb_backups
ANR2017I Administrator SERVER_CONSOLE issued command: DEFINE DEVCLASS dbb_file
devt=file dir=/code/dbb_backups
ANR2203I Device class DBB_FILE defined.
TSM:VERMONT-TSM1>
set dbrecovery dbb_file
ANR2017I Administrator SERVER_CONSOLE issued command: SET DBRECOVERY dbb_file
ANR2782I SET DBRECOVERY completed successfully and device class for automatic
DB backup is set to DBB_FILE.
TSM:VERMONT-TSM1>
ba db type=full devc=dbb_file scratch=yes
ANR2017I Administrator SERVER_CONSOLE issued command: BACKUP DB type=full
devc=dbb_file scratch=yes
ANR4559I Backup DB is in progress.
ANR0984I Process 2 for DATABASE BACKUP started in the BACKGROUND at 11:47:17.
ANR2280I Full database backup started as process 2.
TSM:VERMONT-TSM1>
ANR0406I Session 1 started for node $$_TSMDBMGR_$$ (DB2/AIX64) (Tcp/Ip
loopback(32834)).
ANR8340I FILE volume /code/dbb_backups/45437242.DBV mounted.
ANR0511I Session 1 opened output volume /code/dbb_backups/45437242.DBV.
ANR1360I Output volume /code/dbb_backups/45437242.DBV opened (sequence number 1).
ANR8341I End-of-volume reached for FILE volume /code/dbb_backups/45437242.DBV.
ANR1362I Output volume /code/dbb_backups/45437242.DBV closed (full).
ANR0514I Session 1 closed volume /code/dbb_backups/45437242.DBV.
ANR8340I FILE volume /code/dbb_backups/45437299.DBV mounted.
ANR0511I Session 1 opened output volume /code/dbb_backups/45437299.DBV.
ANR1360I Output volume /code/dbb_backups/45437299.DBV opened (sequence number 2).
We have now worked around our exhausted log issue, with a few points that we would like to
further emphasize:
1. Ensure that your backup methodology and frequency align with your rate of new and
update activity.
2. Ensure that you position the database and logging file systems on technology that allows
simple expansion.
3. Plan for archive log and archive failover growth space.
4. Beyond these directly related points, always ensure that your volume history file is backed
up following your database backups.
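For point 4, the server provides the BACKUP VOLHISTORY command (and BACKUP DEVCONFIG for the device configuration). The file names below are illustrative examples, not values from this scenario:

```
backup volhistory filenames=/tsm1/config/volhist.out
backup devconfig filenames=/tsm1/config/devconfig.out
```

Alternatively, the VOLUMEHISTORY and DEVCONFIG server options can be set so that these files are maintained automatically.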
If you have not already done so, stop and take the time to read this publication. You will gain
a better understanding of this product and be more successful if you take the time to plan and
design your total solution before you begin the installation.
Our intent here is to give you an understanding of what the requirements are to install Tivoli
Storage Manager V6.1 using the DB2 database.
We first cover what you get in this release, as well as what you do not get that you might have
been expecting. Then we go through the steps in preparation, regarding items that you need
to think about when you do the installation.
Let us now move forward to discuss the differences in the Tivoli Storage Manager V6.1
installation.
Restriction: You cannot install and run the Version 6.1 server on a system that already
has DB2 installed on it, whether DB2 was installed by itself or as part of some other
application. The Version 6.1 server requires the installation and use of the DB2 version
that is packaged with the Version 6.1 server. No other version of DB2 can exist on the
system.
Users who are experienced DB2 administrators can choose to perform advanced SQL
queries and use DB2 tools to monitor the database.
However, do not use DB2 tools to change DB2 configuration settings from those that are
preset by Tivoli Storage Manager, or alter the DB2 environment for Tivoli Storage
Manager in other ways, such as with other products. The Tivoli Storage Manager V6.1 server
has been built and tested extensively using the Data Definition Language (DDL) and
database configuration that Tivoli Storage Manager deploys.
Check that the system memory meets the server requirements. If you plan to run multiple
instances of the V6.1 server on the system, each Tivoli Storage Manager instance requires
the memory listed for one server. Multiply the memory for one server by the number of Tivoli
Storage Manager instances planned for the system.
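As a simple sketch of this sizing rule (the 12 GB per-server figure below is an assumed example, not a value from the requirements table):

```shell
# Memory sizing sketch: multiply the per-server requirement by the
# number of planned instances. PER_SERVER_GB=12 is an assumed
# example value, not a figure from this chapter.
PER_SERVER_GB=12
INSTANCES=3
TOTAL_GB=$((PER_SERVER_GB * INSTANCES))
echo "Plan for at least ${TOTAL_GB} GB of system memory"   # 36 GB
```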
Table 18-1 Hardware requirements for Tivoli Storage Manager V6.1 for Windows
Type of hardware Hardware requirements
Disk space At least 3 GB of free disk storage (for a default installation). Plan for
more space for the database logs:
200 MB partition size in the C:\ drive
200 MB temporary directory space
300 MB in the Tivoli Storage Manager instance directory
Additional disk space is required for database and log files.
The server is installed in the drive you select, and the database and
logs can be installed in another drive.
18.2.2 Software
Table 18-2 describes the minimum software requirements for Tivoli Storage Manager V6.1
running on a Windows system.
Note: For 32-bit Windows, we strongly recommend that you migrate to 64-bit Windows.
Table 18-2 Software requirements for Tivoli Storage Manager V6 for Windows
Type of software Minimum software requirements
Web browser A Web browser to log in and use the console. The Web browser can
be installed on the same or a separate system. The following
browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser
does not support the server code page, the windows might be
unreadable. If your browser meets these requirements but does not
correctly display a Tivoli Storage Manager Web-based interface,
consider trying a different browser.
Detailed planning information can be found in “Database space requirements” on page 258
and Chapter 5, “IBM Tivoli Storage Manager database” on page 41.
The fix packs for Tivoli Storage Manager are available on the ftp site:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/
After you have extracted the package, the structure consists of the Composite Offering
Installer, Deployment Engine, PostFailureTask.xml, README.htm, and install.exe files.
You can install the following components with Tivoli Storage Manager Version 6.1:
Tivoli Storage Manager server
Tivoli Storage Manager server languages
Tivoli Storage Manager licenses
Tivoli Storage Manager device driver
Tivoli Storage Manager storage agent
Tivoli Storage Manager Administration Center
Tivoli Storage Manager reporting and monitoring
It is advisable to install Tivoli Storage Manager Administration Center and Tivoli Storage
Manager reporting and monitoring on the same server.
Licenses (required): This component includes support for all Tivoli Storage Manager
licensed features. After you install this package, you must configure the licenses that you
have purchased. Refer to the chapter on managing server operations in the Administrator’s
Guide. Refer to “Other publications” on page 627 for the exact book title and order number
for your platform.
Device driver (optional): This component extends Tivoli Storage Manager media
management capability. The Tivoli Storage Manager device driver is generally preferred for
use with the Tivoli Storage Manager server. It is required for use with automated library
devices and optical disk devices, unless you are using Windows Removable Storage
Manager to manage the media. Refer to the chapter on adding devices in the Administrator’s
Guide. A list of devices supported by this driver is available from the Tivoli Storage Manager
Web site:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
Tivoli Storage Manager fix packs are available on the ftp site:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/
Note: We highly recommend that you read the readme.first file to see the changes in the
Tivoli Storage Manager version that you are about to install, the hardware and software
prerequisites, and any additional installation steps that might be needed.
Using the Tivoli Storage Manager installation software, you can install the following
components:
Tivoli Storage Manager server
Tivoli Storage Manager server languages
Tivoli Storage Manager license
Tivoli Storage Manager device driver
Tivoli Storage Manager storage agent
Tivoli Storage Manager Administration Center
Note: The Tivoli Storage Manager client Application Programming Interface (API) is
automatically installed when you select the server component.
You can use the following steps to install the Tivoli Storage Manager components:
1. If you are installing the products using the Tivoli Storage Manager DVD, complete the
following steps:
a. Log on as an administrator. Insert the Tivoli Storage Manager server DVD.
b. If autorun is on, the DVD browser window opens.
c. If autorun is off, use Windows Explorer to go to the DVD drive, double-click the DVD,
and then double-click install.exe.
d. To access Windows Explorer, go to Start → Programs → Accessories or right-click
the Start button. The Tivoli Storage Manager server DVD browser window opens.
2. If you downloaded the executable file from Passport Advantage, complete the following
steps:
a. Change to the directory where you placed the executable file.
b. Either double-click the following executable file or enter the following command from
the Windows command line to extract the installation files:
- 6.1.0.0-TIV-TSMALL-platform.exe
Where platform denotes the operating system.
The extracted files will go into your current directory.
There are three basic ways to install Tivoli Storage Manager. We strongly recommend the
use of the Wizard, which will do all the work for you:
Installation wizard: See 18.6, “Installation wizard installation” on page 360.
Command-line console wizard: See “Command-line console wizard” on page 370.
Silent mode: See “Silent mode installation” on page 374.
Here we select the components to install. There is no default, so you must make a selection
or you will receive an error message and be returned to the components page. If you select
the Administration Center component, you are prompted for a user name and password. You
will use these later to log onto the Integrated Solutions Console and Administration Center.
Note: If you previously installed a server, ensure that you select the same directory when
you install a language pack, license, or device driver. If you previously installed a storage
agent, ensure that you select the same directory if you return to install a device driver. A
server and a storage agent cannot be installed on the same workstation.
DB2 Version 9.5 is installed during the Tivoli Storage Manager server installation. You
are prompted to create and confirm a password. Defaults are provided for the DB2 user
name and database name.
11.Here we continue to either configure a new server instance or upgrade an existing server
instance. Choose one of the following methods:
a. To configure a new server instance:
• Log in as an administrator and open the local new-instance wizard, dsmicfgx,
located in the server installation directory.
• Log in to a Version 6.1 Tivoli Storage Manager Administration Center and start the
Create New Instance wizard. Configure the new instance manually. See the Tivoli
Storage Manager Information Center, or the IBM Tivoli Storage Manager for
Windows Installation Guide V6.1, GC23-9785.
b. To upgrade an existing server instance:
• Log in as an administrator and start the upgrade wizard, dsmupgdx.exe file, located
in the server installation directory.
• You can also upgrade a server manually. See the Tivoli Storage Manager
Information Center, or the IBM Tivoli Storage Manager Server Upgrade Guide,
SC23-9554.
Next, we show you a basic Tivoli Storage Manager default installation and configuration and
provide images of the Console Mode wizard interface.
From the directory where you have downloaded the install package, submit install.exe -i
console to start the installation. Example 18-1 shows the starting panel where we have to
choose the appropriate locale for the installation.
Choose Locale...
----------------
1- Deutsch
2- English
3- Español
4- Français
5- Italiano
6- Português (Brasil)
Example 18-2 shows the Welcome panel. Press Enter to move to the next panel.
===============================================================================
Tivoli Storage Manager Install
Licensed Materials - Property of IBM Corp. (c) IBM Corporation and other(s)
1993, 2008. All rights reserved. US Government Users Restricted Rights --
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
It is strongly recommended that you quit all programs before continuing with
this installation.
The next step is to review the Software License Agreement. You must accept the license
agreement to proceed. Enter 1, as shown in Example 18-3.
Here we decide to install the Tivoli Storage Manager Server and the License components in
the default folder. Example 18-4 shows the related steps.
Example 18-4 Windows Command Line Installation: Install Folder and Component Selection
Choose Install Folder
---------------------
Where would you like to install?
Example 18-5 shows the parameters to specify during database server instance creation. We
choose to use the default user ID, db2user1, and the default database name of DB2. For the
password, use either the Tivoli Storage Manager DB2 administrative-user ID and password,
or a new user ID and password that you create now.
Example 18-5 Windows Command Line Installation: Create database server instance
DB2 Enterprise Server Edition
Enter the following information to create a DB2 database for your server instance.
Use either the Tivoli Storage Manager DB2 administrative-user ID and password, or
a new user ID and password that you create now.
===============================================================================
DB2 Password::*********
===============================================================================
Verify Password::*********
Example 18-6 shows the messages submitted during deployment engine initialization. For
easier readability, we removed some lines. After the engine is initialized, we press Enter to
move on with the installation.
Next you will see a message indicating that the install is progressing and finally complete (see
Example 18-7). You are provided a summary of the components that were installed.
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================================
Installation Complete
---------------------
Log in as root user or administrator and open the local new-instance wizard,
dsmicfgx, located in the server installation directory.
Log on to a Version 6.1 Tivoli Storage Manager Administration Center and start the
Create New Instance wizard.
Configure the new instance manually. See the Tivoli Storage Manager Information
Center, or the Installation Guide.
You can now continue, and either configure a new server instance or upgrade an existing
server instance:
1. To configure a new server instance, choose one of the following methods:
a. Log in as an administrator and open the local new-instance wizard, dsmicfgx.exe,
located in the server installation directory.
b. Log on to a Version 6.1 Tivoli Storage Manager Administration Center and start the
Create New Instance wizard.
c. Configure the new instance manually. See the Tivoli Storage Manager Information
Center, or the Installation Guide.
2. To upgrade an existing server instance, log in as root user or administrator and start the
upgrade wizard, dsmupgdx, located in the server installation directory.
You can also upgrade a server manually. See the Tivoli Storage Manager Information Center,
or the Tivoli Storage Manager Server Upgrade Guide, SC23-9554. For more information
about any of these tasks, see the Tivoli Storage Manager Information Center at:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6
Note: After you start the silent installation, it immediately closes the foreground window
and runs in the background. To receive a return code from the silent installation, run it
using a batch script.
Note: It is important to use quotation marks if you have spaces in the path name.
Batch script
To run the silent installation so that you can see the progress of the installation, create a batch
script by completing the following steps:
1. Create a file and name it install.bat. The file name must end with .bat, not .bat.txt.
2. Choose an installation option (with or without a response file) and enter the command into
the install.bat file and save it.
3. Open a command prompt to run the batch file. Issue the command: install.bat
4. After the installation is complete, issue the following command to retrieve the return code:
echo %ERRORLEVEL%
Example 18-12 shows the install.bat file that we created with the required information.
Next we show you a basic Tivoli Storage Manager default installation and configuration and
provide images of the Silent Mode interface, using the install.bat file.
When you use the echo %ERRORLEVEL% command, you get the return code and are
returned to the command prompt, as shown in Example 18-13.
C:\TSM_images\6.1_Server>echo %ERRORLEVEL%
0
C:\TSM_images\6.1_Server>
Example 18-13 shows that we got return code 0 and the silent installation was successful. For
return codes see “Debugging techniques” on page 402.
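For comparison, the same return-code retrieval pattern exists in POSIX shell, where the special parameter $? plays the role of %ERRORLEVEL%. This is a minimal illustrative sketch, not part of the Windows procedure; a trivial command stands in for install.bat.

```shell
# Run a command and capture its exit code, analogous to checking
# echo %ERRORLEVEL% after install.bat in a Windows batch script.
sh -c 'exit 0'      # stand-in for the installer invocation
rc=$?
echo "return code: $rc"   # prints: return code: 0
```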
First we need to create the directories that the Tivoli Storage Manager server instance needs
for the database and recovery logs, using the mkdir command. You need unique, empty
directories for each of the items listed in Example 18-14 on page 376. Create the database
directories, the active log directory, and the archive log directory on separate physical volumes.
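The directory creation can be sketched as follows. The directory names mirror those used in the AIX scenario earlier in this chapter, and a temporary root directory stands in for the real volume mount points so that the sketch is self-contained.

```shell
# Create unique, empty directories for the database, active log,
# and archive log. ROOT stands in for the real volume mount points.
ROOT="${ROOT:-$(mktemp -d)}"
for d in dbdir1 dbdir2 dbdir3 dbdir4 activelog archlog; do
    mkdir -p "$ROOT/$d"
done
ls "$ROOT"
```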
Either use the configuration wizard to configure the Tivoli Storage Manager instance or
configure the instance manually.
After you install Tivoli Storage Manager Version 6.1, one of the options for configuring Tivoli
Storage Manager is to use the configuration wizard on your local system. By using the wizard,
you can avoid some configuration steps that are complex when done manually. Start the
wizard on the system where you installed the Tivoli Storage Manager Version 6.1 server.
Follow the instructions to complete the configuration. The wizard can be stopped and
restarted, but the server will not be operational until the entire configuration process is
complete.
3. The instance user ID is the ID that is used by the database manager to read and write the
database and log files. This user ID must have write permission to all directories
containing database and log files.
The instance user ID is not necessarily the same user ID that you will use to run the Tivoli
Storage Manager server. The user ID must already exist on the system, and must not be
disabled or locked.
The primary group of the specified user will become the administrative group of the
database. Any other users in this group can manage the database (including starting and
stopping the database manager). If you want to restrict this access, you should create a
separate group for the instance user ID, so that only the instance user ID can manage this
database.
4. To validate the user ID and password, a connection will be made to the local system using
the SSH, RSH, REXEC, or SMB protocol. You must enable one of these protocols to allow
the wizard to proceed. SMB is the interface used by File and Print Sharing (also known as
CIFS). In order to use the SMB protocol, you must make sure that File and Print Sharing is
enabled, and that port 445 is not blocked by your firewall.
Figure 18-13 IBM Tivoli Storage Manager Instance Configuration Wizard - Instance User ID
Figure 18-14 IBM Tivoli Storage Manager Instance Configuration Wizard - Instance Directory
Figure 18-15 IBM Tivoli Storage Manager Instance Configuration Wizard - Database Directories file
The Tivoli Storage Manager database is stored in a series of directories managed by the
database manager. To improve data throughput, specify a large number of directories to
allow the database manager to spread the workload over multiple disks.
Figure 18-16 IBM Tivoli Storage Manager Instance Configuration Wizard - Database Directory
10.Now we specify the directories for the database and log volumes:
– The active log directory contains all transactions currently in progress on the server.
If the system crashes or the server stops, the active log is all that is required to restart
the server. The active log is broken up into files. After all transactions in a log file are
completed, the log file is copied to an archive log directory.
– If a log file cannot be copied to the primary archive log directory, it is copied to the
ArchiveLogFailover log directory, if specified. If a log file cannot be copied to either log
directory, it remains in the active directory.
– If the active log fills up, transactions will fail. Therefore, ensure that the archive log
directories are online with sufficient space to hold the log files. Logs in the archive log
directories can be copied to another location, but these logs must be returned to the
archive directory to perform a database restore operation.
– The mirror log directory contains the same contents as the active log directory, and is
used for redundancy in case of disk failure. If your active log directory resides on a disk
that is already mirrored or has other RAID protection, you might not need to specify a
mirror log directory.
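The four log locations described above correspond to server options in dsmserv.opt. As a hedged sketch, the option names below are the V6.1 server options, and the directory paths are taken from the AIX scenario earlier in this chapter for illustration:

```
ACTIVELOGDIRECTORY /tsm1/activelog
ARCHLOGDIRECTORY /tsm1/archlog
ARCHFAILOVERLOGDIRECTORY /tsm1/archlogf
MIRRORLOGDIRECTORY /tsm1/activelogm
```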
Figure 18-17 IBM Tivoli Storage Manager Instance Configuration Wizard- Recovery Log Directories
Figure 18-18 IBM Tivoli Storage Manager Instance Configuration Wizard - Server Information
The administrator name must be 1-64 characters and must contain only alphanumeric
characters. The administrator name is not case sensitive.
Figure 18-19 IBM Tivoli Storage Manager Instance Configuration Wizard - Administrator Credentials
Figure 18-20 IBM Tivoli Storage Manager Instance Configuration Wizard - Server Communication
Figure 18-21 IBM Tivoli Storage Manager Instance Configuration Wizard - Configuration Summary
Figure 18-22 IBM Tivoli Storage Manager Instance Configuration Wizard - Configure Instance
Note: This initial backup is required by DB2 in order for Tivoli Storage Manager to set
the recovery log processing mode to ROLLFORWARD. At this point, this database
backup only contains the server schema Data Definition Language (DDL). This
database backup is performed to a file in the local file system. This database backup is
subsequently deleted by Tivoli Storage Manager because it only contains the server
schema definitions, which can be recreated by Tivoli Storage Manager anyway.
After completing the installation and configuration of the Tivoli Storage Manager server,
we recommend that you perform a FULL database backup. This database backup
and any subsequent database backups will be tracked in the server volume history, as
expected, and used as part of the server Disaster Recovery Manager (DRM)
processing, and so on.
Figure 18-23 IBM Tivoli Storage Manager Instance Configuration Wizard - database backup
Figure 18-24 IBM Tivoli Storage Manager Instance Configuration Wizard - configuration complete
Figure 18-25 IBM Tivoli Storage Manager Instance Configuration Wizard - Configuration Successful
Important: Before you run the db2icrt command, ensure that the user and the instance
directory of the user exist. If there is no instance directory, you must create it.
The instance directory stores the following files for the server instance:
The server options file, dsmserv.opt
The dsmserv.v6lock file
Device configuration file, if the DEVCONFIG server option does not specify a fully
qualified name
Volume history file, if the VOLUMEHISTORY server option does not specify a fully
qualified name
Volumes for DEVTYPE=FILE storage pools, if the directory for the device class is not fully
specified, or not fully qualified
User exits
Trace output (if not fully qualified)
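As an illustration of the preparation described above, the following sketch shows creating an instance directory and then creating the DB2 instance. The path, instance name, and user name are hypothetical placeholders, and the exact db2icrt syntax varies by platform, so verify against the DB2 documentation for your system:

```
rem Create the instance directory before running db2icrt (path is an example)
mkdir d:\tsm\server1

rem Create the DB2 instance for the Tivoli Storage Manager server.
rem "server1" and the instance user "tsminst1" are placeholders.
db2icrt -u tsminst1 server1
```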
After you have completed setting up server communications, you are ready to initialize the
database. No other server activity is allowed while the database and recovery log are being
initialized.
Do not place the directories on file systems that might run out of space. If certain directories
(for example, the archive log) become unavailable or full, the server stops. See the IBM Tivoli
Storage Manager for Windows Administrator's Guide V6.1, SC23-9773 for more details.
Note: The installation program creates a set of registry keys. One of these keys points to
the directory where a default server, named SERVER1, is created. To install an additional
server, create a new directory and use the DSMSERV FORMAT utility, with the -k
parameter, from that directory. That directory becomes the location of the server. The
registry tracks the installed servers.
Issue the DSMSERV FORMAT command to start the format of the database (see Example 18-16).
The output from the DSMSERV FORMAT command is shown in Example 18-17.
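As a hedged sketch of what such a format command looks like (the paths shown here are placeholders, not the directories used in this installation; see the Administrator's Guide for the full parameter list):

```
dsmserv -k server1 format dbdir=d:\tsmdb001
    activelogdirectory=d:\tsmlog archlogdirectory=d:\tsmarchlog
```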
Complete the following steps before issuing either the BACKUP DB or the RESTORE DB
command.
In the following commands, the examples use d:\server1 for the database instance and
C:\Program Files\Tivoli\TSM\server1 for the Tivoli Storage Manager instance directory.
1. Here we create the tsmdbmgr.env file in the C:\Program Files\Tivoli\TSM\tsminst1
directory with the contents in Example 18-18.
DSMI_CONFIG=C:\PROGRA~1\Tivoli\TSM\tsminst1\tsmdbmgr.opt
DSMI_LOG=C:\PROGRA~1\Tivoli\TSM\tsminst1
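The tsmdbmgr.opt file that DSMI_CONFIG points to is a small API options file. An illustrative (not verbatim) version follows; the server name and node name shown are the conventional values used for the database-manager connection, but confirm them against your installation:

```
* tsmdbmgr.opt (illustrative contents)
SERVERNAME     TSMDBMGR_TSMINST1
COMMMETHOD     TCPIP
TCPSERVERADDR  localhost
TCPPORT        1500
PASSWORDACCESS GENERATE
NODENAME       $$_TSMDBMGR_$$
```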
5. Issue the commands in Example 18-21 to set the API environment variable.
Here are the results of the commands. Note that we have updated the password in the
registry (see Example 18-22).
Example 18-22
TSM Windows NT Client Service Configuration Utility
Command Line Interface - Version 6, Release 1, Level 0.0 1027FB
(C) Copyright IBM Corporation, 1990, 2008, All Rights Reserved.
Last Updated Oct 28 2008
TSM Api Version 6.1.0
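The registry password update mentioned above is typically performed with the backup-archive client's dsmcutil utility. A hedged sketch follows; the password is a placeholder, and the directory must match where your backup-archive client is installed:

```
cd "C:\Program Files\Tivoli\TSM\baclient"
dsmcutil updatepw /node:$$_TSMDBMGR_$$ /password:yourpassword /validate:no
```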
Remember that starting the server is an operating system-level operation and has certain
restrictions. If you do not have the permissions to use the dsmserv program, you cannot start
it. If you do not have authority to read and write files in the instance directory, you cannot start
that instance of the server.
Note: If you receive a Windows error 216 message when you try to start the server, it is a
result of using a 64-bit package on 32-bit Windows. Retrieve the 32-bit Windows package
and reinstall Tivoli Storage Manager.
You can start the server either as a Windows service (using services.msc from the
Start menu → Run) or with the DSMSERV utility.
To start the server from the C:\Program Files\Tivoli\TSM\server directory, enter the
command dsmserv -k server_instance, where server_instance is the name of your server
instance. Server1 is the default for the first instance of the Tivoli Storage Manager server on a
system. Example 18-23 shows the command to start our instance named tsminst1.
Note: If you start the Tivoli Storage Manager server as a service, after you stop it, the
database service continues to run.
After you have completed formatting the database and log, you are ready to create a
Windows service for your server instance.
1. Change to the C:\Program Files\Tivoli\TSM\console directory, or if you installed Tivoli
Storage Manager in a different location, go to the console subdirectory in your main
installation directory. An executable (install.exe) in this directory installs the Tivoli
Storage Manager server as a Windows service.
Registering licenses
Immediately register any Tivoli Storage Manager licensed functions that you purchase so you
do not lose any data after you start server operations, such as backing up your data.
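From an administrative command line, registration takes the license certificate file for each purchased function. The file names below are examples; which files you register depends on the offerings you have licensed:

```
register license file=tsmbasic.lic
register license file=tsmee.lic
```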
To manage the system memory that is used by each server, use the DBMEMPERCENT
server option to limit the percentage of system memory that can be used by the database
manager of each server. Refer to the IBM Tivoli Storage Manager for Windows
Administrator's Guide V6.1, SC23-9773 for more information about DBMEMPERCENT.
If all servers are equally important, use the same value for each server. If one server is a
production server and other servers are test servers, set the value for the production server to
a higher value than the test servers.
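For example, dsmserv.opt fragments reflecting this advice might look as follows; the percentages are purely illustrative, not tuned recommendations:

```
* dsmserv.opt on the production server instance
DBMEMPERCENT 50

* dsmserv.opt on each test server instance
DBMEMPERCENT 20
```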
When you start the server, the new options go into effect. If you modify any server options
after starting the server, you must stop and restart the server to activate the updated options.
Use the Server Options utility that is available from the Tivoli Storage Manager Console to
view and specify server communications options. This utility is available from the Service
Information view in the server tree. By default, the server uses the TCP/IP, Named Pipes, and
HTTP communication methods.
If you start the server console and see warning messages that a protocol could not be used
by the server, either the protocol is not installed or the settings do not match the Windows
protocol settings. For a client to use a protocol that is enabled on the server, the client options
file must contain corresponding values for communication options. From the Server Options
utility, you can view the values for each protocol. For more information about server options,
see the IBM Tivoli Storage Manager for Windows Administrator's Guide V6.1, SC23-9773.
TCPPORT
The server TCP/IP port address. The default value is 1500.
TCPWINDOWSIZE
Specifies the size of the TCP/IP buffer that is used when sending or receiving data. The
window size that is used in a session is the smaller of the server and client window sizes.
Larger window sizes use additional memory but can improve performance. To use the
default window size for the operating system, specify zero.
TCPNODELAY
Specifies whether the server sends small messages immediately or lets TCP/IP buffer
them. Sending small messages can improve throughput but increases the number of
packets sent over the network. Specify YES to send small messages or NO to let TCP/IP
buffer them. The default is YES.
TCPADMINPORT
Specifies the port number on which the server TCP/IP communication driver is to wait for
requests other than client sessions. The default value is 1500.
SSLTCPPORT
(SSL-only) Specifies the Secure Sockets Layer (SSL) port number on which the server
TCP/IP communication driver waits for requests for SSL-enabled sessions for the
command-line backup-archive client and the command-line administrative client.
SSLTCPADMINPORT
Specifies the port address on which the server TCP/IP communication driver waits for
requests for SSL-enabled sessions for the command-line administrative client.
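Taken together, a dsmserv.opt fragment exercising these options might look like the following. The values shown are the stated defaults, except TCPWINDOWSIZE 0, which requests the operating system default window size:

```
COMMMETHOD    TCPIP
TCPPORT       1500
TCPADMINPORT  1500
TCPWINDOWSIZE 0
TCPNODELAY    YES
```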
In this example, SHMPORT specifies the TCP/IP port address of a server when using shared
memory. Use the SHMPORT option to specify a different TCP/IP port. The default port
address is 1510. A shared memory setting is shown in Example 18-25.
For details about configuring SNMP for use with Tivoli Storage Manager, see the
Administrator’s Guide for your platform. The subagent communicates with the SNMP daemon,
which in turn communicates with a management application. The SNMP daemon must support
the DPI® protocol. The subagent process is separate from the Tivoli Storage Manager server
process, but the subagent gets its information from a server options file. When the SNMP
management application is enabled, it can get information and messages from servers.
Use the list of SNMP DPI options in Example 18-26 as an example of an SNMP setting. You
must specify the COMMMETHOD option. For details about the other options, see the IBM
Tivoli Storage Manager for Windows Administrator's Guide V6.1, SC23-9773.
Monitor the active log by querying the database log to ensure that its size is correct for the
workload that is handled by the server instance. When the server workload is at its typical
expected level and the space used by the active log is 80% of the space available to the
active log directory, you should increase the amount of log space.
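One way to watch this from the administrative command-line client is to query the log periodically and compare used space against total space. This is an illustrative session; the administrator ID and password are hypothetical:

```
dsmadmc -id=admin -password=secret "query log format=detailed"
```

If the used space reported stays above roughly 80% of the available space, consider increasing the active log size.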
Whether you need to increase the space depends on the types of transactions in the server’s
workload, because transaction characteristics affect how the active log space is used.
The number and size of files in backup operations can affect the space usage in the active
log, as follows:
Clients such as file servers that back up large numbers of small files can cause large
numbers of transactions that complete during a short period of time. The transactions
might use a large amount of space in the active log, but for a short period of time.
Clients such as a mail server or a database server that back up large chunks of data in
few transactions can cause small numbers of transactions that take a long time to
complete. The transactions might use a small amount of space in the active log, but for a
long period of time.
Backup operations that occur over relatively slower connections cause transactions that take
a longer time to complete. The transactions use space in the active log for a longer period of
time. If the server is handling transactions with a wide variety of characteristics, the space
that is used for the active log might go up and down by a large amount over time. For such a
server, you might need to ensure that the active log typically has a smaller percentage of its
space used. The extra space allows the active log to grow for transactions that take a very
long time to complete, for example.
Problem determination
If the Deployment Engine is not working properly and is causing Tivoli Storage Manager
installations to fail, follow these instructions:
1. Check log.txt for the following return code entry: SI_UP_TO_DATE
– Windows: (Install directory)\log.txt
– UNIX: /var/tivoli/tsm/log.txt
2. If there is a problem with the Deployment Engine, ensure that the following files do not
exist:
– Windows:
(Install directory)\_uninst\plan\inventory\inventoryCheck.properties
– UNIX:
/opt/tivoli/tsm/_uninst/plan/inventory/inventoryCheck.properties
The Tivoli Storage Manager clients work in conjunction with the Tivoli Storage Manager
server. Set up your Tivoli Storage Manager server so that clients can obtain backup and
archive services from it. Refer to the server publications to install and configure a Tivoli Storage Manager
server:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.
nav.doc/c_product_overview.htm
Chapter 19. Tivoli Storage Manager V6.1 Backup-Archive Client update and installation changes 409
Availability of 64-bit binaries:
The client packages for Linux on POWER, Linux zSeries, and one of the AIX clients
contain 64-bit binaries.
Enhanced help facilities:
The command-line client help command is enhanced so that you can specify the
command, option, or message on which you want help information.
In the graphical user interface, message boxes are enhanced with a button that you can
click to see detailed message information.
use the Version 5.3 or earlier IBM Tivoli Storage Manager Backup-Archive Client, you will
be unable to start the Tivoli Storage Manager Backup-Archive Client Scheduler service or
client acceptor daemon service.
The method for processing system state data changed in Tivoli Storage Manager Version
5.5 such that system state (and system service) backup from prior clients is supported but
no longer recommended. When you use the Tivoli Storage Manager Version 5.5 client,
you will generate new system state backups using the new methods. You cannot perform
the following operations:
– Generating a backup set with system state data. If you use the system state data
backed up with the Tivoli Storage Manager Version 5.5 client to generate a backup set,
you must be connected to a Tivoli Storage Manager Version 5.3.6 or later, Version
5.4.1 or later, or Version 5.5.0 server.
– Restoring system state and system services file spaces that were backed up by a Tivoli
Storage Manager Version 5.4.x or earlier client.
– Using a Tivoli Storage Manager Client prior to Version 6.1 to restore system state
backed up by a Tivoli Storage Manager Client Version 6.1 or above.
– Restoring the system state in certain situations: The Windows client can be regressed
from Tivoli Storage Manager Version 5.5 to Version 5.4 without any impact, except that
system state backed up by the Version 5.5 client cannot be restored by the Version 5.4
client. If the system state has not yet been backed up by the Version 5.5 client and is
still at the Version 5.4 level, the Version 5.4 client can restore it; once the Version 5.5
client has backed up the system state, the Version 5.4 client can no longer restore it.
– Specifying systemservices in the domain statement (for example, domain
systemservices).
– Using the backup systemservices command.
– Using the restore systemservices command in normal production or recovery
scenarios. Instead, you should use restore systemstate <service name> to restore a
particular system service.
– Using the query systemservices command.
– Using the show systemservices command.
Important: If you do not follow the migration instructions properly, you might have two file
spaces, one Unicode and one non-Unicode, with different file space identifiers (fsID) for
the same client volume. In this case, the Tivoli Storage Manager client uses the
non-Unicode file space as the default file space for backup and restore operations.
19.2.4 Additional migration information
This section provides additional information that you need to know when migrating your
Tivoli Storage Manager client.
When you install the Web client, you must install the Web-client language files that
correspond to those languages you want to use. The Windows GUI has been migrated to a
Java application, and it is the default application. The non-Java Windows native GUI is
installed as the dsmmfc.exe file in the installation directory, but it has not been updated with
the new Tivoli Storage Manager Version 6.1 features.
To view the non-English online help from the Web Client applet, you must install the language
versions of the help files on the agent, the system where the Tivoli Storage Manager
Backup-Archive client was installed. If the language versions are not installed or are not
available, the online help will be displayed in English. A command-line administrative client is
available on all UNIX, Linux, and Windows client platforms. See the client_message.chg file
in the client package for a list of new and changed messages since the previous Tivoli
Storage Manager release.
Volume Shadow Copy Service (VSS) is also supported for OFS and online image operations.
You can enable VSS by setting the snapshotproviderfs and snapshotproviderimage options in
the dsm.opt file. If you use VSS, you do not need to install LVSA. Use the Setup wizard to
select NONE, VSS, or LVSA for each of the OFS and online image functions. If LVSA is
selected and it is not already installed on your system, it will be installed.
If you are migrating from a previous version of the Tivoli Storage Manager client where you
were using the LVSA for OFS or online image, and you decide during the installation to
continue to use the LVSA, then you do not need to explicitly set the snapshotproviderfs or
snapshotproviderimage options. Because you do not need to set these options, it is easier to
install the new client on a large number of systems, because the dsm.opt file will not need to
be updated to continue to use the OFS or online image functions.
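If you do choose VSS explicitly, the corresponding dsm.opt entries are simply the two options named above, for example:

```
* dsm.opt - use VSS as the snapshot provider for open file support
* and online image backup (illustrative)
snapshotproviderfs    VSS
snapshotproviderimage VSS
```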
For supported combinations of tape drive and tape library, refer to:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
The Tivoli Storage Manager Windows client is included on the desktop client installation DVD
in the setup directory structure.
You can install the clients using any of the following methods:
Install directly from the DVD.
Create client images to install.
Transfer installable files from the DVD to a target workstation. You can copy all of the
clients to your server workstation so that client workstations can get the files from the
x:\tsmcli directory. The following sample command is for Windows:
xcopy h:\setup\* x:\ /s
Note: All of the examples in this chapter use the h drive as the DVD or mounted drive.
Substitute h with the DVD drive of your system.
Language packs
The Tivoli Storage Manager client now makes use of language packs for non-English
language support. Each supported language has its own installation package that must be
installed in order to use Tivoli Storage Manager in a supported, non-English language. The
Tivoli Storage Manager client is a prerequisite for installing a Tivoli Storage Manager Client
Language Pack.
Additional considerations
Here are some more considerations regarding the various clients:
The Backup-Archive Client, the API, and the Web Client are interdependent. If you select
the Backup-Archive Client, you must also select the API. Similarly, if you select the Web
client, you must also select the Backup-Archive Client and the API.
The Backup-Archive Client component includes the client scheduler files.
The installer displays the exact amount of disk space that is required for each program
feature. Ensure that there is enough disk space on the destination drive for the files you
choose to install. The installation program will not install to a destination drive with
insufficient disk space.
Figure 19-1 Installation of the Tivoli Storage Manager Client Version 6.1.0.0 Main Menu
Figure 19-2 Install Products window of the Tivoli Storage Manager Client Version 6.1.0.0
3. The next window is the Choose Setup Language window, as shown in Figure 19-3. After
selecting your preferred language, click OK to continue.
Figure 19-3 Installation of the Tivoli Storage Manager Client Version 6.1.0.0
4. The InstallShield Wizard opens as shown in Figure 19-4.
Figure 19-4 InstallShield Wizard of the Tivoli Storage Manager Client Version 6.1.0.0
5. The next window allows you to specify the destination folder (see Figure 19-5). You can
accept the default or click Change in order to specify your installation destination.
Click Next to continue.
Figure 19-6 Setup Type for installation of the Tivoli Storage Manager Client
8. The Custom Setup window is displayed as shown in Figure 19-7. You can select the
available features and click Next to continue.
Figure 19-7 Installation of the Tivoli Storage Manager Client Version 6.1.0.0
9. After specifying the install options, the Install Wizard is ready to install the Tivoli Storage
Manager Client as shown in Figure 19-8. You can either click Install to begin the Client
install, Back to go to previous panels and modify your selections, or Cancel to exit the
InstallShield Wizard. Here we have chosen to click Install.
Figure 19-8 Install Wizard is ready to begin the Tivoli Storage Manager Client installation
Figure 19-9 Installation of the Tivoli Storage Manager Client completion window
11. You will then be prompted to restart your system after the Tivoli Storage Manager Client
installation, as shown in Figure 19-10.
Figure 19-11 Client Options File Configuration Wizard
2. You first choose whether to Create a new options file, Upgrade my options file, or Import
from an existing options file. Because this is a new client installation, the Upgrade my
options file option is greyed out. We select Create a new options file and click
Next to continue, as shown in Figure 19-12.
Figure 19-12 Select how to proceed with the Client options file
4. The TSM Client/Server Communications window is shown in Figure 19-14. One of the
most important purposes of the options file is to specify the communication protocol
necessary to establish communications between your backup client and the backup
server. For our installation, we select TCP/IP. Click Next to continue.
5. After specifying TCP/IP, we need to enter the server name or the IP address of the Tivoli
Storage Manager server, as shown in Figure 19-15. We specified the Tivoli Storage
Manager server name and accepted the default port number. Click Next to continue.
Figure 19-15 Tivoli Storage Manager server name or IP address and communication port
6. Figure 19-16 shows the recommended Include/Exclude list. You can select the entire list
or specific files. When you have made your choice, click Next to continue.
Figure 19-16 Installation of the Tivoli Storage Manager Client Version 6.1.0.0
Figure 19-17 Installation of the Tivoli Storage Manager Client Version 6.1.0.0
8. The Domain for Backup window is displayed. Here you select whether to back up all local
file systems or only specific drives. You also specify the type of backup to perform (see
Figure 19-18).
9. After you have made your choices for the Client Options file, you are given the option to go
back and make changes, or to apply the changes and save the TSM Client Options to disk
(see Figure 19-19). Click Apply to continue.
10.When the TSM Client Options file has been saved, you will be presented with the window
in Figure 19-20. Click Finish to close the wizard.
Figure 19-21 Log in to the Tivoli Storage Manager Client Version 6.1.0.0
12. Figure 19-22 shows the options you are presented with, based on the Tivoli Storage
Manager Client Options file.
428 Tivoli Storage Manager V6.1 Technical Guide
Part 8
Tivoli Storage Manager Monitoring and Reporting is designed as a standalone package that
can install everything on one system (however, this system must be separate from your Tivoli
Storage Manager V6.1 servers). The code for the new system is a separate installable
package; it is not built into the Tivoli Storage Manager Server code.
Overall, the Tivoli Storage Manager V6.1 Monitoring and Reporting components are as
follows:
Tivoli Storage Manager Administration Center (Health Monitor): This component is used to
see the health status of a Tivoli Storage Manager Server. This feature does not need the
Monitoring and Reporting package to be installed in order to work. It connects to Tivoli
Storage Manager Server by a Tivoli Storage Manager administrative user connection.
Tivoli Storage Manager Administration Center (Reporting): This is a basic set of reports
that can be run directly on the Tivoli Storage Manager Server. This feature does not need
the Monitoring and Reporting package to be installed in order to work. It connects to Tivoli
Storage Manager Server by a Tivoli Storage Manager administrative user connection.
Tivoli Storage Manager Monitoring: This new feature consists of real-time reporting from
Tivoli Storage Manager. From a user perspective, it is available from the Tivoli Enterprise
Portal (TEP) component of IBM Tivoli Monitoring (ITM). This feature needs the Monitoring
and Reporting package to be installed in order to be available. It connects to Tivoli
Storage Manager Server by a Tivoli Storage Manager administrative user.
Tivoli Storage Manager Reporting: This new feature consists of reports that are run
against a database of events captured by the Tivoli Storage Manager Monitoring
component. The reports are shown in a separate part of the ISC from Tivoli Storage
Manager, called “Tivoli Common Reporting”, and the reports are run against a database
installed for the purpose, called Tivoli Data Warehouse (TDW). This feature needs the
Monitoring and Reporting package to be installed in order to be available, and depends on
Monitoring being installed and working. It does not connect directly to a Tivoli Storage
Manager Server; instead it extracts data from Tivoli Storage Manager Monitoring.
This feature is most useful on standalone Tivoli Storage Manager Servers without the
optional Monitoring and Reporting server installed, because it provides a basic usage report
and a separate basic security report. It works by connecting directly to the Tivoli Storage
Manager Server through a Tivoli Storage Manager administrative connection; it does not
require any other infrastructure.
Chapter 20. Monitoring and reporting in Tivoli Storage Manager V6.1 433
20.1.3 Tivoli Storage Manager Monitoring and Reporting
Tivoli Storage Manager V6.1 Monitoring and Reporting is a new package that consists of
existing and new IBM products. All of them are new to the Tivoli Storage Manager product,
although most have been used in other Tivoli products.
The principal components of the new Tivoli Storage Manager Monitoring feature are:
IBM DB2
IBM Tivoli Monitoring (ITM)
IBM Tivoli Enterprise Portal (TEP)
– Tivoli Storage Manager/TEP Workspaces
For Tivoli Storage Manager Reporting, we also have the following features:
IBM Tivoli Data Warehouse (TDW)
Tivoli Common Reporting (TCR)
– Tivoli Storage Manager/TCR report definitions
– Business Intelligence and Reporting Tools (BIRT):
Allows users to create customized reports for use with TCR (optional)
Tivoli Storage Manager Integrated Solution Console and Administration Center (ISC)
– This hosts the TCR component, which reports on data in the TDW.
Monitoring and Reporting uses the standard Tivoli Storage Manager Deployment Engine for
installation, so while this list looks large, the package is actually straightforward to install. The
basic components (DB2, ITM, TEP, TDW) are included in one installation package for your
convenience. The additional BIRT package is separate, and BIRT is only required if you want
to create customized reports: it is not mandatory if the built-in reports are sufficient, or if you
are downloading preconfigured reports from another source (for example, from Tivoli OPAL).
Setup options
There are a couple of options when setting up Monitoring and Reporting:
For customers with an already existing licensed ITM setup, Tivoli Storage Manager can
make use of this, provided it is at level 6.2 FP1.
For customers with no existing ITM setup, Tivoli Storage Manager provides a complete
version, which is usable for monitoring Tivoli Storage Manager only. With this option, the
Monitoring and Reporting system must comply with the requirements listed on the
planning section of the Tivoli Storage Manager documentation: principally that the
Monitoring and Reporting system has a server of its own. Although it is not supported to
install Monitoring and Reporting on the same system as the Tivoli Storage Manager 6.1
server, using a separate VM or LPAR would be fine.
Do I need to write my own Monitors or Reports?
No, we deliver pre-configured Monitors and predefined Reports.
Note: The Tivoli Storage Manager V6.1 server for the AIX, HP-UX, Linux, and Sun
Solaris platforms now allows other products that deploy and use DB2 to be installed on
the same machine. The Tivoli Storage Manager V6.1 product publications originally
listed this as being a restriction. After further evaluation, this restriction is removed, and
a future revision to the publications will reflect this.
Note for Windows systems: The current restriction that is documented in the Tivoli
Storage Manager publications for the installation of other DB2 versions (or applications
that deploy and use DB2) on the same Windows system as a Tivoli Storage Manager
V6 server is still in place and is not affected by the support discussed here. Similar
support for Windows systems is currently being evaluated. Assuming that the testing
and evaluation do not uncover any issues to prevent it, the target is to remove this
restriction by year end 2009.
20.3 Installing the Monitoring and Reporting feature
Before you can use the new Monitoring and Reporting feature, you have to install it. This
process currently requires a separate server for installation. While it can be installed on Linux,
AIX or Windows, for this demonstration example, we have deployed this feature on the
Windows platform.
Monitoring and Reporting uses the Deployment Engine with a common user interface to the
rest of the Tivoli Storage Manager V6.1 installers. The package for Monitoring and Reporting
is separate from the Tivoli Storage Manager Server package.
Figure 20-3 Tivoli Storage Manager for Reporting and Monitoring installer Welcome panel
3. The next window is the DB2 Enterprise Server Edition, where you provide the information
required to create the DB2 database (see Figure 20-5). Enter the DB2 Administrator
password and click Next to continue.
5. Figure 20-7 shows the Database Access Setup panel. Enter the password for the TEPS
and ITMUser user ID. Note it, and click Next to continue.
Figure 20-8 ISC User name and Password for Administration Center credentials
7. Figure 20-9 is the Choose Install Folder window, where you specify the location of the
Install destination folder. Accept the default or enter the installation directory for your
server. Click Next to continue.
9. Figure 20-11 shows the installation progress.
Figure 20-11 Installing Tivoli Storage Manager for Monitoring and Reporting
In order to generate customized reports for Tivoli Storage Manager, we use the BIRT editor to
generate a report template. When this template is run inside Tivoli Storage Manager, it
produces a report with data relevant to that timeframe (for example, the last 24 hours is a
common timeframe). Although the template does not change from day to day, the resulting
reports do. Each day the report reflects the activity of the previous day. We only use the
BIRT designer to produce new or modified templates when we want to change the data in a
report (for example, to insert a new graph so that the graph appears in the reports every day
from then on).
Downloading
Downloading BIRT is fairly simple, given a Web browser with Internet access.
Download the All-In-One package from the BIRT Web site, which is available from:
http://download.eclipse.org/birt/downloads/
The actual package that we use in the Windows example is approximately 220 MB and is
v2.5.0 of BIRT, although previous versions are also known to have worked. A redirect link to
the code package is:
http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/relea
se/galileo/R/eclipse-reporting-galileo-win32.zip
When the file is downloaded, simply unpack it to the location of your choice. After this is done,
set the Java Virtual Machine location for BIRT to use. There are a couple of different ways to
set this (for example, adding the java bin directory of the JVM to the PATH for the system).
An ideal way to do this under Windows is to create a shortcut to eclipse.exe, then modify the
“properties” of the shortcut, setting the -vm parameter of Eclipse. By setting the JVM this
way, you do not affect anything else on the system.
Note: BIRT is a Java tool, but it does not ship with a Java Virtual Machine (JVM). Without
a JVM, it cannot run. The IBM JVM is installed along with the Monitoring and Reporting
package from the Tivoli Storage Manager installation, so we can use that to run the BIRT
code.
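As an illustration, the shortcut's Target field (or an equivalent command line) could look like the following. Both paths are assumptions for this example: adjust them to where you unpacked Eclipse and where the IBM JVM from the Monitoring and Reporting package was installed.

```
rem Illustrative shortcut target: launch Eclipse/BIRT with an explicit JVM,
rem leaving the system PATH untouched. Both paths are examples only.
C:\eclipse\eclipse.exe -vm "C:\Program Files\Tivoli\TSM\jre\bin\javaw.exe"
```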
The Administration Center Web interface provides an easy way to manage multiple server
instances from a single browser window. It has been available since Tivoli Storage Manager
V5.3, and is hosted within the Integrated Solution Console (ISC) framework. The ISC is a
general framework, supporting multiple modules that serve different purposes. The
Administration Center module enables you to manage and monitor your Tivoli Storage
Manager environment specifically.
This design shift was intuitive for new administrators, but a challenge for existing, more
experienced administrators, who struggled to locate tasks within the GUI hierarchy. This
operational gap was handled by extending the life of the ADMIN or WEBADMIN GUI, which
was frozen at 5.2.x, with its support extended until V5.5 for new installations.
This release of the new ISC and Administration Center marks the first significant update in
functionality since it was first introduced in V5.3, and applies some much anticipated
improvements.
Figure 21-1 Down level error when Tivoli Storage Manager server is V6.1 and the ISC/AC is lower
Coexistence involves running the Administration Center V6.1 on the same machine at the
same time as you run the previous version. To support this coexistence, you need to provide
non-default port assignments. For upgrade scenarios involving the possibility of rolling back
to the previous version, you can choose to have the same port definitions and run either one
version or the other.
If your disk space permits, having the two versions of the Administration Center coexist is the
recommended upgrade strategy (using different IP ports). It lets users have a functioning
Administration Center during the time that it takes for the upgrade to complete. It also ensures
that the configuration of the previous Administration Center is still accessible during the
upgrade procedure.
Upgrade does not uninstall the previous version, which is still functional. After the upgrade
completes successfully, you can uninstall the previous Administration Center using the
documented process.
There has been significant change in the technologies on which both the Integrated
Solutions Console V7.1 and the Administration Center V6.1 are built. As a consequence, you
must manually complete the upgrade of both packages by collecting the configuration
information and re-creating it in the new configuration.
When upgrading from an earlier version of the Administration Center to Version 6.1, you must
define your Integrated Solutions Console user IDs to the new Administration Center. In
addition, you must provide credentials for each of the Tivoli Storage Manager servers.
When installing the Administration Center Version 6.1, note the following considerations:
ISC user IDs are not recreated in the new Administration Center.
The Tivoli Storage Manager server's database file and tsmservers.xml are copied from the
earlier Administration Center, if located. The file format is compatible between versions.
Tivoli Storage Manager server credentials are not recreated in the new Administration
Center, thus you must manually duplicate the user configuration of your earlier
Administration Center. Ensure that you:
– Obtain the information about users and server credentials from the earlier
Administration Center.
– Define each ISC user previously defined to the earlier Administration Center.
– Define to each ISC user its set of Tivoli Storage Manager server connections.
– Uninstall the earlier Administration Center.
This panel indicates a successful installation of the Integrated Solutions Console. To learn
about console updates:
Start the ISC.
Click Help in the ISC banner.
In the Help navigation tree, click Console Updates.
Further discussion of the ISC updates can be found in 21.4, “Integrated Solutions
Console changes” on page 487.
This version also copies the server processes and sessions section from the server
properties notebook to the Health Monitor.
The reason for some of these changes is that server health details for V6.1 and higher
servers are very different from those of V5.5 and earlier servers, so a new design was needed.
This updated Health Monitor model is now aware of the server platform.
Figure 21-3 Choosing View Health Details for the vermont-tsm1 server instance
By selecting View Health Details, you see details for the schedule, database and recovery log,
current activities (such as sessions, processes, and activity log messages), and storage pool
status. All of these are shown in Figure 21-4.
Figure 21-4 Health monitor panel highlighting multiple areas for review
Figure 21-6 Error message detail displayed using the hot link supplied in the Health Monitor portlet
21.3.10 Reporting
Reporting has a new feature that allows you to create a .CSV file and download it directly
to your connecting system:
1. The path to find this function is shown in Figure 21-8.
Figure 21-12 Selecting Manage Servers → Select Action → Create Server Instance process
Figure 21-13 Selecting Next to accept the default in the Create Server Instance process
3. In the next panel we provide the address and System Administrator user ID and
password. In our lab, we fill in the blank fields for the root login of the AIX Tivoli Storage
Manager server Utah, as shown in Figure 21-14.
Figure 21-14 Filling in the ip_address, admin ID and password in the Create Server Instance process.
Example 21-1 Setting up a user ID and password in the AIX server prior to the instance creation
# mkuser -a id=1003 pgrp=tsmsrvrs home=/home/tsm2 tsm2
# passwd tsm2
Changing password for "tsm2"
tsm2's New password:
Enter the new password again:
5. Next, log in using the user ID and password; you are prompted to change the
password for that user ID, as shown in Example 21-2.
Example 21-2 Logging into AIX and changing the password for TSM2 new instance creation
You must change your password now and login again!
Changing password for "tsm2"
tsm2's Old password:
tsm2's New password:
6. Fill in the details for the instance in the Configure Instance User ID panel. On AIX
(and other UNIX platforms), the user ID, group, and directory structure must already be
created on the target Tivoli Storage Manager server, as shown in Figure 21-15. Click Next
to continue.
Figure 21-15 Inputting the instance ID and password for the Create Server Instance process.
7. Input the instance path and database paths in the Instance Directories panel, as shown in
Figure 21-16. Click Next to continue.
8. In the next panel, you continue entering the instance directories. Notice that the
shaded (colored) areas indicate mandatory fields, and the fields with no shading
are optional (see Figure 21-17). After entering the required information, click Next to
continue.
Figure 21-17 Input for the instance directory panels for the Create Server Instance process
Figure 21-18 Server information panel input for the Create Server Instance process
10.Next you specify a Tivoli Storage Manager administrator to create when configuring the
new server instance as shown in Figure 21-19. After specifying the required information,
click Next to continue.
12.The next panel is a summary of the information you specified in the previous panels (see
Figure 21-21). Review the information, and if you are satisfied with it, click Next to create
the new server instance.
Figure 21-22 Error shown due to insufficient space during the Create Server Instance process
14.We then increase the size of the instance file system to allow at least 400 MB of free
space, and repeat the previous scenario from Figure 21-12 on page 460
through Figure 21-21 on page 465.
15.Following this correction, the configuration starts, and then completes successfully, as
shown here in Figure 21-23. When the instance is created, click OK.
Figure 21-23 Completion panel for the Create Server Instance process
Figure 21-24 View of the completed instance under the Managed Server listing
For V6.1, a global view is added. The Client Nodes and Backup Sets tab will have three
portlets; the bottom portlet will be the backup sets collection. This feature for configuration of
backup sets has been added to the Client Nodes menu of the Administration Center.
In the following series of screen captures, we will add a client node, then create a backup set:
1. From the ISC console, select the Client Nodes and Backupsets option under the Tivoli
Storage Manager section, as shown in Figure 21-25.
3. The second configuration panel opened is the Client Node Groups panel, as shown in
Figure 21-27.
4. Then, the third panel is for backup sets configuration, as shown in Figure 21-28, which we
discuss in greater detail in the section “Backup sets” on page 471.
6. Then, select the Tivoli Storage Manager server instance the client will be associated with,
and the new group name. Provide an optional Description as shown in Figure 21-30.
Figure 21-30 Client Node Group configuration panel, choosing the server instance for the group
After filling out the input fields, click OK. The resulting panel is displayed as shown in
Figure 21-31.
Next, we move on to the creation of the client node. From the Select Action drop-down menu,
click Create a Client Node as shown in Figure 21-32.
7. Next, we fill in the fields required for our node configuration, as shown in Figure 21-33.
Click OK to continue.
After clicking OK, the ALL Client Nodes tab shows the completed task with the new client
node listed (see Figure 21-34).
Clicking the arrow next to the Client Node name will expand the section, displaying more
detailed information. This design has been implemented throughout the new Administration
Center interface.
Figure 21-36 demonstrates behavior changes within the wizard panels in this release. If
you right-click the newly created client Riley_E, you now have the ability to take action directly
from the object. In other areas of the Administration Center, you still have to select the object
requiring update, then use the Select Action drop-down menu.
Figure 21-36 Options now available using right-click within the Administration Center client node panel
Backup sets
The backup set wizard has two additional panels, one to allow the point-in-time and data type
information to be entered (the shredding choice was moved to this panel); and another for
TOC selection. The TOC panel will only be visited if the data type selection includes file.
For entry in the wizard off the new Nodes and Backup Sets page, there will be some
additional changes in navigation. After the General panel, the user will be allowed to select
the members; the first panel is a choice of nodes or node groups, the second panel is a table
of nodes or node groups, depending on the first panel selection. If the selection result is a
single node, then the navigation will go to the existing file space selection panels, but if more
than one node was selected, the navigation will proceed to the volume selection panel.
Figure 21-37 shows the flow of the backup set wizard navigation panels, which include many
changes.
Referencing the flow chart in Figure 21-37, notice the entry from the Backup set tab where
we create a backup for a number of client nodes and client node groups. There is a path
change when only a single node (either one node from the nodes table, or one node group
with only a single node) is selected. When multiple nodes are selected, all file spaces will be
backed up, using the BOTH flag for Unicode.
The Administration Center uses the terminology of Collection for a group of backup sets
that were generated with a single generate backup set command. These backup sets will
be displayed as a single entity (a single row), and a hyperlink or row action can be used to
display the details for the collection, which switches to another panel with a display of the
common information, as well as a row for each individual backup set in the collection.
As a best practice, the Administration Center does not allow different retention dates to be set
for a backup set in a collection; it only updates the retention for all backup sets in the
collection. The media for a backup set collection is treated as a single entity, so the volumes
do not expire in Tivoli Storage Manager until the backup set with the longest retention
expires. If there are multiple retention periods in a collection, the Administration Center
displays the longest on the collection page, and on the details page it displays a message
with the range of retention periods.
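As a minimal sketch of this rule (the retention values below are illustrative, not taken from the Administration Center), the retention shown for a collection is simply the maximum over its backup sets:

```shell
#!/bin/sh
# Sketch: the collection panel displays the longest retention period in the
# collection, because the volumes are retained until the longest-retained
# backup set expires. The values below are illustrative.
retentions="30 60 90"   # retention periods (days) of the backup sets
longest=0
for r in $retentions; do
    if [ "$r" -gt "$longest" ]; then
        longest=$r
    fi
done
echo "Retention displayed for the collection: $longest days"
```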
The backup set page in the Client node notebook will include an indication if the backup set is
part of a collection. The modify backup set panel launched from the client node notebook will
not allow any changes for backup sets that are part of a collection. The information on the
page will include instructions to go to the Client Nodes and Backup Set page to modify this
backup set.
The retention period and expiration date displayed on the collection and details panels will be
based on the longest retention period in the collection, because this is how long the volumes
will be retained.
The collections panel populates the table when a server is selected and the update table
button is clicked. The currently displayed server name appears above the table.
The details panel will have a table with rows for node/data type combinations. Because all the
backup sets in the collection have the same name, creation date, volumes and description,
these items are displayed above the table with the node names.
The details panel retention/expiration information will be displayed above the table as well.
A message will be displayed if all backup sets do not have the same retention period. The
individual retention periods will only be visible from the client node notebook backup set tab.
The details panel table will have at least one row for each node that has a backup set in the
selected collection. If a node has multiple data types in the collection, then there will be
multiple table rows. For example, a node with an Image and File backup set with this backup
set name will have two rows in the table.
The Generate TOC command causes the entire backup set to be read, and in many cases
takes longer to execute than the Administration Center time-out (see the section regarding
increasing the ISC time-out for V6.1). The server command does not have a background
process option, so the normal Administration Center handling of giving a process number is
not available. To handle this in the Administration Center, the command is issued, the
results are checked every ½ second for three seconds, and then the command is “abandoned”
by the Administration Center to run on the server. This allows the Administration Center to
report any immediate results or command failures. When the command is “abandoned”, a
message is displayed for the user to view the activity log for the results of the processing.
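This poll-then-abandon handling can be sketched as follows. Here check_result is a hypothetical stand-in for the Administration Center's internal check, not a real API; in this sketch it never finds a result, so the command is abandoned after three seconds:

```shell
#!/bin/sh
# Sketch of the poll-then-abandon handling for Generate TOC.
# check_result is a hypothetical stand-in: it would return 0 once the
# server has replied. Here it always reports "no result yet".
check_result() { return 1; }

status=abandoned
polls=0
while [ "$polls" -lt 6 ]; do    # check every half second, for three seconds
    if check_result; then
        status=completed        # immediate results can be reported
        break
    fi
    sleep 0.5
    polls=$((polls + 1))
done
echo "Command $status: see the server activity log for processing results"
```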
Breadcrumb navigation
The new backup set panels, and the existing backup set contents panel will have breadcrumb
navigation. Instead of having a close button on the bottom left corner, the upper left corner of
the panel will show the path. The user will be able to “go back” one or two panels by clicking
directly on the hyperlink for the desired panel.
The Display content action has a confirmation message, indicating that this is a long
running command and might time out. This is new for Tivoli Storage Manager V6.1 because
this command causes the backup set to be read, which can take a long time to process. In
most cases, it will time out before the response from the server is received.
Table 21-1 shows the task paths to access the maintenance scripts shown previously in
Figure 21-37 on page 472.
Table 21-1 Server script task path for script creation and maintenance
Task Path
Figure 21-38 Selecting the Server Maintenance task in the Administration Center
2. In the next panel you will see a list of the servers that you have added to the console.
Select the server for which you want to add a maintenance script. From the Select Action
drop-down menu, select Create Maintenance Script as shown in Figure 21-39.
3. The next panel is the Welcome panel. Click Next to continue as shown in Figure 21-40.
Figure 21-41 Creation of a maintenance script wizard panel for predefined script selection
5. We now proceed to the Backup Server Database panel, where we are
presented with a backup scheme to protect the Tivoli Storage Manager V6.1 database.
We select the correct device class, and the default of six daily incrementals followed by a
weekly full backup, as shown in Figure 21-42. Click Next to continue.
6. The wizard moves on to DRM processing as shown in Figure 21-43. Here you can define
the library volumes to be moved. Click Next to continue.
Figure 21-43 Library volumes to be moved by DRM within the server maintenance script wizard
Figure 21-44 Library volumes and DRM to state transition within the server maintenance script wizard
Figure 21-45 Recovery plan configuration panel within the server maintenance script wizard
9. In the next panel, you choose the appropriate migration details as shown in Figure 21-46.
Click Next to continue.
10.In the next panel, you specify the expiration process that removes backup and archive
data from server storage. After specifying the expiration process parameters as shown in
Figure 21-47, click Next to continue.
Figure 21-47 Expiration duration configuration panel within the server maintenance script wizard
Figure 21-48 Reclamation detail configuration in the server maintenance script wizard
13.The Summary page is presented as shown in Figure 21-50. The options that you specified
in the wizard are summarized. Click Finish to continue.
Figure 21-51 Generated script as a result of using the maintenance creation wizard
2. You are presented with general information about the maintenance script as shown in
Figure 21-53. In the left hand pane, click Back Up Database to continue.
Figure 21-54 Backup database selection panel using the server maintenance wizard
4. Following these actions, we return to the command line and compare the script
section that has been altered. First, the incremental database backup lines are shown in
Figure 21-55.
Figure 21-55 Database backup lines originally created in the maintenance script
5. Now, we view the newest update to the database backup lines in the server maintenance
script as shown in Figure 21-56.
In summary, our discussion of maintenance scripts and panel flows demonstrates some
significant improvements in the Tivoli Storage Manager Administration Center feature for
V6.1.
Figure 21-57 A second login attempt using the same user ID for the ISC is blocked
Because the ISC now enforces security, and can handle a single sign-on for other configured
applications, we highly recommend that you configure a separate user ID for each
administrative user.
Clicking the Vermont-TSM1 server link, we can edit the existing key (see Figure 21-59).
Figure 21-59 Clicking the server link, the exact credential details can be updated
The Edit Credential Storage Key panel allows the ISC user to alter the Tivoli Storage
Manager server access credentials by changing the user ID and password for the selected
key as shown in Figure 21-60.
From the Credential Store panel, you can delete an old Tivoli Storage Manager instance
which is no longer required, as shown in Figure 21-61.
Figure 21-61 Old Tivoli Storage Manager server link is deleted using the Credential Store panel
Console administrators use Manage Global Refresh to configure portlet refresh settings for all
users of the console. Here are some of the tasks for which this feature is used:
Giving permission to console users to edit their own portlet refresh options.
Configuring default refresh settings for console modules. Administrators can set values for
refresh mode, refresh interval, and show timer settings. These settings become the
default values for the User Configure Portlet Refresh.
Setting the minimum refresh interval for each console module. Use this setting to prevent
the performance impacts of too many calls to the server to refresh content (see the
Minimum Refresh Interval description).
In this example, we are changing the tape library refresh from 600 to 1200 seconds, which
reduces the number of queries to all of the AC-configured servers. This is shown in
Figure 21-63.
Figure 21-63 Changing the global refresh for the tape libraries
Frequently asked questions about the Tivoli Storage Manager Administration Center can be
reviewed at the URL:
http://www-01.ibm.com/support/docview.wss?rs=663&tcss=Newsletter&uid=swg21193419
A quick reference chart showing the Tivoli Storage Manager Administration Center Wizard
reference can be reviewed at the URL:
http://www-01.ibm.com/support/docview.wss?rs=663&tcss=Newsletter&uid=swg21193327
A reference to the creation of Tivoli Storage Manager objects using the Administration Center
can be reviewed at the URL:
http://www-01.ibm.com/support/docview.wss?rs=663&tcss=Newsletter&uid=swg21193326
Path changes
In V5.3 through V5.5, the Integrated Solutions Console default location was:
[ISC root]\Tivoli\dsm\bin\ (Windows)
[ISC root]/Tivoli/dsm/bin/ (UNIX and Linux)
Example 21-3 demonstrates updating the default time-out from 30 to 120 minutes.
Example 21-3 Using the supportUtil.sh in AIX to change the ISC default time-out setting.
# ./supportUtil.sh
9. Exit
Enter Selection: 3
Enter Selection: 1
The session timeout setting determines how long a session can be idle before it
times out. After a timeout occurs the user must log in again. The default timeout
setting is 30 minutes. The minimum timeout setting is 10 minutes. To cancel this
operation enter an empty value.
Session timeout successfully updated. Restart ISC for changes to take effect.
Enter Selection: 99
9. Exit
To further understand the changes in the upgrade process, refer to the Tivoli Storage
Manager Server Upgrade Guide, SC23-9554.
One consideration is the movement of data from an original Tivoli Storage Manager V5
server database to the Tivoli Storage Manager V6.1 database. This process uses a large
percentage of a system’s processor capacity and requires a high amount of I/O activity. You
have options regarding how to perform this task, whether across a network connection or
utilizing storage media.
In your planning, consider testing the upgrade on non-production systems. Testing gives you
information about how long the upgrade of the server database will take, which will help you
to plan for the time that the server will be unavailable. Some databases might take much
longer than others to upgrade.
Testing also gives you more information about the size of the new database compared to the
original, giving you more precise information about database storage needs.
If you have multiple servers, consider upgrading one server first, to get experience with how
the upgrade process will work for your data. Use the results of the first upgrade to plan for
upgrading the remaining servers.
Except for the database extraction and insertion processes, the upgrade process is similar to
performing disaster recovery for a server. The server’s critical files (such as the server option
file, and device configuration file) must be available, and devices used for storage pools must
be made available to the upgraded server.
Important: At this point you should read Chapter 16, “Installation and upgrade planning for
Tivoli Storage Manager V6.1” on page 245 before going any further. It contains information
that you need for a successful upgrade to Tivoli Storage Manager V6.1.
The DSMUPGRD PREPAREDB utility upgrades a server database to the V5.5 level, and
performs some cleanup to prepare for the extraction process.
Important: After a V5.3 or V5.4 server database is upgraded to V5.5, the database can no
longer be used by a V5.3 or V5.4 server. If you do not want the database on your
production server to be upgraded, you can restore the database backup on another
system, then upgrade that copy of the database.
The DSMUPGRD EXTRACTDB utility extracts the data from a server database. You can use
the utility to either simultaneously extract and insert the data into a new database over a
network, or extract the data to media for later insertion into a new database. The data
extraction operation can be run with multiple processes.
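As a rough sketch of the media method, the utilities could be invoked on the V5 system along the following lines. The device class name, manifest path, and output redirection targets are assumptions for illustration; verify the exact parameters against the Tivoli Storage Manager Server Upgrade Guide, SC23-9554:

```
dsmupgrd preparedb >prepare.out 2>&1
dsmupgrd extractdb devclass=FILEDEV manifest=/upgrade/manifest.txt >extract.out 2>&1
```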
If a problem occurs during the database preparation or extraction, the DSMUPGRD EXTEND
DB and DSMUPGRD EXTEND LOG utilities are available to make more space available for
the upgrade process.
Refer to “Details of the database upgrade process” on page 271 for detailed information
about the DSMUPGRD upgrade utilities.
V5.3 or V5.4 servers might be running on platforms that are not supported by the upgrade
utilities. Therefore, you might need to update your system before you begin the upgrade
procedure. Use the information in Table 22-1 to determine whether you are using one of the
operating system versions that must be upgraded.
Tip: It is not necessary to upgrade a V5.3 or V5.4 server to V5.5 level before upgrading to
V6.1 level.
Table 22-1 Operating system levels that must be upgraded before using the upgrade utilities

AIX
Level that must be upgraded: IBM AIX 5L™ V5.1 (32 or 64 bit); AIX V5.2 (32 or 64 bit)
Level required by the upgrade utilities: AIX V5.3 (64 bit only); AIX V6.1 (64 bit only)

Linux on Power
Level that must be upgraded: Red Hat Enterprise Linux 3 (supported on POWER5™
processors only); SUSE Linux Enterprise Server 8/UnitedLinux 1.0 (supported only on
processors prior to POWER5); Miracle Linux 4.0 or Asianux 2.0; GNU C libraries 2.2.5-108
Level required by the upgrade utilities: Red Hat Enterprise Linux 4; Red Hat Enterprise
Linux 5; SUSE Linux Enterprise Server 9 and 10; Asianux 2.0 - Red Flag DC 5.0 and
Haansoft Linux 2006, or Asianux 3.0; V2.3.3 or later of the GNU C libraries that are
installed on the target system

Linux x86
Level that must be upgraded: Red Hat Enterprise Linux 3 (AS, WS, ES); SUSE Linux
Enterprise Server (SLES) 8 / UnitedLinux 1.0; V2.2.5-213 of the GNU C libraries
Level required by the upgrade utilities: Red Hat Enterprise Linux 4; Red Hat Enterprise
Linux 5; SUSE Linux Enterprise Server 9 and 10; Asianux 2.0 - Red Flag DC 5.0, Miracle
Linux 4.0, and Haansoft Linux 2006, or Asianux 3.0; V2.3.3 or later of the GNU C libraries
that are installed on the target system

Linux x86_64
Level that must be upgraded: Red Hat Enterprise Linux 3; Red Flag Advanced Server 4.1;
SUSE LINUX Enterprise Server 8; V2.2.5-213 of the GNU C libraries
Level required by the upgrade utilities: Red Hat Enterprise Linux 4; Red Hat Enterprise
Linux 5; SUSE Linux Enterprise Server 9 and 10; Asianux 2.0 - Red Flag DC 5.0, Miracle
Linux 4.0, and Haansoft Linux 2006, or Asianux 3.0; V2.3.3 or later of the GNU C libraries
installed on the target machine

Linux zSeries
Level that must be upgraded: SUSE Linux Enterprise Server 8 / UnitedLinux 1.0;
Version 2.2.5-108 of the GNU C libraries
Level required by the upgrade utilities: Red Hat Enterprise Linux 4; Red Hat Enterprise
Linux 5; SUSE Linux Enterprise Server 9 and 10; 2.3.3 or later of the GNU C libraries that
are installed on the target system
Some platforms that were supported for earlier versions of the server are not supported for
V6.1. If the server that you want to upgrade is running on one of these platforms, you cannot
upgrade your server to V6.1 on the same platform. You must install your V6.1 server on a
system that is a specific supported platform, depending on the original platform (see
Table 22-2).
Hardware requirements
For information about estimating the total disk space that is required, see 22.3.3, “Estimating
total space requirements for upgrade process and server” on page 514.
Software requirements
Table 22-4 describes the minimum software requirements.
Operating System: AIX 5.3 running in a 64-bit kernel environment with the following
additional requirements for DB2:
AIX 5.3 Technology Level (TL) 6 and Service Pack (SP) 2 plus the
fix for APAR IZ03063
Minimum C++ runtime level with the xlC.rte 9.0.0.8 and
xlC.aix50.rte 9.0.0.8 filesets. These filesets are included in the
June 2008 cumulative fix package for IBM C++ Runtime
Environment Components for AIX.
AIX 6.1 running in a 64-bit kernel environment requires the following
filesets for DB2:
Minimum C++ runtime level with the xlC.rte 9.0.0.8 and
xlC.aix61.rte.9.0.0.8 filesets. These filesets are included in the
June 2008 cumulative fix package for IBM C++ Runtime
Environment Components for AIX.
Web browser: A Web browser to retrieve an online installation package. The following
browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser does
not support the server code page, the windows might be unreadable.
If your browser meets these requirements but does not correctly
display a Tivoli Storage Manager Web-based interface, consider using
a different browser.
Drivers: If you have an IBM 3570, IBM 3590, or IBM Ultrium tape library or
drive, install the most current device driver before you install Tivoli
Storage Manager 6.1. You can locate the device drivers at:
ftp://ftp.software.ibm.com/storage/devdrvr/
You cannot run a V6.1 server on a PA-RISC system that is running the HP-UX operating system.
If the server that you want to upgrade is running on this platform, you cannot upgrade your
server to V6.1 on the same platform. You must install your V6.1 server on an Itanium system
that is running the HP-UX operating system, and then use the network or media method to
upgrade your V5 server to that system.
Hardware requirements
Table 22-5 describes the minimum hardware requirements. For information about estimating
the total disk space that is required, see 22.3.3, “Estimating total space requirements for
upgrade process and server” on page 514.
System resources such as semaphores and kernel values might require special configuration
and tuning.
Devices and Drivers: A DVD device that is available for the installation process, if you
are installing from DVD media.
The most current device driver. This must be installed before you
install Tivoli Storage Manager.
You can locate the device drivers at:
ftp://ftp.software.ibm.com/storage/devdrvr/
Some platforms that were supported for earlier versions of the server are not supported
for V6.1:
Linux running on an Itanium system (IA64)
Linux running on a 32-bit x86 system
If the server that you want to upgrade is running on one of these platforms, you cannot
upgrade your server to V6.1 on the same platform. You must install your V6.1 server on an
x86_64 system that is running the Linux operating system, and then use the network or media
method to upgrade your V5 server to that system.
Hardware requirements
Table 22-7 describes the minimum hardware requirements. For information about estimating
the total disk space that is required, see “Estimating total space requirements for upgrade
process and server” on page 514.
Hardware: An IBM Linux on POWER system, such as one of the systems listed on the
following Linux for Power server solution Web site:
http://www-03.ibm.com/systems/power/software/linux/about/inde
x.html
Software requirements
Table 22-8 describes the minimum software requirements.
Operating System: The Tivoli Storage Manager server on Linux on Power (ppc64
architecture) requires one of the following operating systems:
Red Hat Enterprise Linux 5, Update 3 or later
SUSE Linux Enterprise Server 10, Update 2 or later
Asianux 3.0
Web browser: A Web browser to obtain the Linux installation packages. The following
browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser does
not support the server code page, the windows might be unreadable.
If your browser meets these requirements but does not correctly
display a Tivoli Storage Manager Web-based interface, consider using
a different browser.
For information about estimating the total disk space that is required, see “Estimating total
space requirements for upgrade process and server” on page 514.
Software requirements
Table 22-10 describes the minimum software requirements.
Operating system: The Tivoli Storage Manager server on Linux x86_64 requires one of
the following operating systems:
Red Hat Enterprise Linux 5
SUSE Linux Enterprise Server 10
Asianux 3.0
Libraries: For Linux x86_64, the GNU C libraries, Version 2.3.3-98.38 or later, installed on the Tivoli Storage Manager system.
Web browser: A Web browser to obtain the Linux installation packages. The following browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser does not support the server code page, the windows might be unreadable. If your browser meets these requirements but does not correctly display a Tivoli Storage Manager Web-based interface, consider using a different browser.
Hardware requirements
Table 22-11 describes the minimum hardware requirements. For information about estimating
the total disk space that is required, see “Estimating total space requirements for upgrade
process and server” on page 514.
Hardware: An IBM Linux on System z® 900, System z 800, or System z 990 server with either native logical partitions (LPARs) or VM guests. You can use 64-bit LPARs and VM guests; the storage agent uses 64-bit LPARs and VM guests to perform LAN-free operations.
Disk space: The following minimum values for disk space:
5 MB for the /var directory
4 GB for the /opt directory
190 MB for the /tmp directory
300 MB for the /usr directory
Additional disk space might be required for database and log files. The
size of the database depends on the number of client files to be stored
and the method by which the server manages them.
Software requirements
Table 22-12 describes the minimum software requirements.
Operating system: The Tivoli Storage Manager server on Linux on System z (s390x 64-bit
architecture) requires one of the following operating systems:
Red Hat Enterprise Linux 5, Update 3 or later
SUSE Linux Enterprise Server 10, Update 2 or later
Web browser: A Web browser to obtain the Linux installation packages. The following browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser does
not support the server code page, the windows might be unreadable.
If your browser meets these requirements but does not correctly
display a Tivoli Storage Manager Web-based interface, consider using
a different browser.
Hardware requirements
Table 22-13 describes the minimum hardware requirements.
For information about estimating the total disk space that is required, see “Estimating total
space requirements for upgrade process and server” on page 514.
Disk space: The following list shows the minimum disk space for Sun UltraSPARC-based processors (sun4u and sun4v architecture) and for x86_64-based processors (AMD64 or EM64T architecture) for the respective directories and logs:
5 MB for the /var directory
2 GB for the /opt directory
310 MB for the /tmp directory
300 MB for the /usr directory
Additional disk space might be required for database and log files. The
size of the database depends on the number of client files to be stored
and the method by which the server manages them.
Devices and drivers: If you have an IBM 3570, 3590, or Ultrium tape library or drive, install the most current device driver before you install Tivoli Storage Manager Version 6.1. You can locate the device drivers at:
ftp://ftp.software.ibm.com/storage/devdrvr/
You cannot run a V6.1 server on an Itanium system (IA64) that is running the Windows
operating system. If the server that you want to upgrade is running on this platform, you
cannot upgrade your server to V6.1 on the same platform. You must install your V6.1 server
on an x86_64 system that is running the Windows operating system, and then use the
network or media method to upgrade your V5 server to that system.
Hardware requirements
Table 22-15 describes the minimum hardware requirements.
For information about estimating the total disk space that is required, see “Estimating total space requirements for upgrade process and server” on page 514.
Web browser: A Web browser to log in and use the console. The Web browser can be installed on the same system or a separate system. The following browsers are supported:
Microsoft Internet Explorer 6.0 SP1
Microsoft Internet Explorer 7.0
FireFox 1.5
FireFox 2.0
FireFox 3.0
Mozilla 1.7.8
Your browser must support the server code page. If your browser does
not support the server code page, the windows might be unreadable.
If your browser meets these requirements but does not correctly
display a Tivoli Storage Manager Web-based interface, consider trying
a different browser.
System functions: The Windows system functions, such as Device Manager, are supported on the 64-bit Tivoli Storage Manager Console. Normal Windows system functions are available for both the 32-bit and 64-bit server through the Manage Computer function of the Windows system.
The backup of the server database requires as much space as is used by your V5 database. Store the backup on whichever form of sequential media is convenient for you, either tape or disk.
Additional space requirements depend on the method that you choose for moving the data
from the V5 database:
Media method
You need media to store the data that will be extracted from the V5 database. The media can
be tape, or disk space that is defined as a sequential-access disk device class. The space
required for the extracted data is the same as the used space in your database. If your
database is safely backed up, and you are certain that you no longer need to run the V5
server, after you extract the data you can optionally release the space used by the V5
database and recovery log.
Network method
You must have the working copy of the V5 database and recovery log on the V5 system. If you are working with a copy of the database that was created for testing the upgrade process, you need enough space to hold the total allocated size of the database; you can use the minimum size for a V5 recovery log.
Related tasks
See 22.3.3, “Estimating total space requirements for upgrade process and server” on
page 514.
22.3.2 Space requirements for the Tivoli Storage Manager V6 server system
Before beginning the upgrade process, plan for the space that is required for the database
and recovery log.
You need unique, empty directories for the following items for the upgraded server:
The database
The recovery log:
– Active log
– Archive log
– Optional: Log mirror for the active log
– Optional: Secondary archive logs (failover location for archive log)
The instance directory for the server, which is a directory that will contain files specifically
for this server instance (the server options file and other server-specific files).
Locate the database and recovery log directories on separate physical volumes or file systems. Ideally, use multiple directories for database space (we recommend at least four) and locate them across as many physical devices or logical unit numbers (LUNs) as there are directories.
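The directory layout described above can be sketched as follows. This is only an illustration, assuming four database directories; TSM_ROOT and the directory names are hypothetical, and in production each directory would reside on its own file system or LUN.

```shell
# Hedged sketch: create four empty database directories. TSM_ROOT and
# the directory names are illustrative assumptions; in production,
# each directory would sit on a separate file system or LUN.
TSM_ROOT=${TSM_ROOT:-/tmp/tsminst1}
for d in dbdir001 dbdir002 dbdir003 dbdir004; do
    mkdir -p "$TSM_ROOT/$d"
done
ls -d "$TSM_ROOT"/dbdir*
```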
Plan for 33 - 50% more than the space that is used by the V5 database. (Do not include
allocated but unused space for the V5 database in the estimate.) Some databases can grow
temporarily during the upgrade process; consider providing up to 80% more than the space
that is used by the V5 database.
Space estimates
Estimate the amount of space that the database will require by completing the following
steps:
1. Use the QUERY DB FORMAT=DETAILED command to determine the number of used
database pages in your V5 database.
2. Multiply the number of used database pages by 4096 to get the number of used bytes.
3. Add 33 - 50% to the used bytes to estimate the database space requirements.
Consider testing the upgrade of the database to get a more accurate estimate. Not all
databases will grow as much as the suggested 33 - 50% increase in space.
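The three estimation steps above can be sketched with simple shell arithmetic. The used-page count below is an illustrative assumption, not output from a real server; on a real system it would come from the QUERY DB FORMAT=DETAILED command.

```shell
# Hedged sketch of the database space estimate. The used-page count is
# an illustrative assumption; a real value comes from
# QUERY DB FORMAT=DETAILED on the V5 server.
used_pages=5000000
used_bytes=$((used_pages * 4096))          # step 2: pages to bytes
low_estimate=$((used_bytes * 133 / 100))   # step 3: add 33%
high_estimate=$((used_bytes * 150 / 100))  # step 3: add 50%
echo "Used space:      $used_bytes bytes"
echo "Estimated range: $low_estimate - $high_estimate bytes"
```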
When the server is operating normally, after the upgrade process, some operations might
cause occasional large, temporary increases in the amount of space used by the database.
Continue to monitor the usage of database space to determine whether the server needs
more database space.
For the best efficiency in database operations, anticipate future growth when you set up
space for the database. If you underestimate the amount of space that is needed for the
database and then must add directories later, the database manager might need to perform
more database reorganization, which can consume resources on the system. Estimate
requirements for additional database space based on 600 - 1000 bytes per additional object
stored in the server.
Restriction: You cannot use raw logical volumes for the database. If you want to reuse
space on the disk where raw logical volumes were located for an earlier version of the
server, you must create file systems on the disk first.
Visit the support site for the latest information and recommendations.
Active log
The minimum size of 2 GB is large enough to complete the upgrade process. When you
begin normal operations with the server, you might need to increase the size of the active log.
The required size depends on the amount of concurrent activity that the server handles. A
large number of concurrent client sessions might require a larger active log. For example, the
server might need an 8 GB or larger active log.
Archive log
The size required depends on the number of objects stored by client nodes over the period of
time between full backups of the database.
Remember that a full backup of the database causes obsolete archive log files to be pruned,
to recover space. The archive log files that are included in a backup are automatically pruned
after two more full database backups have been completed.
If you perform a full backup of the database every day, the archive log must be large enough
to hold the log files for client activity that occurs over two days. Typically 600 - 4000 bytes of
log space are used when an object is stored in the server. Therefore you can estimate a
starting size for the archive log using the following calculation:
objects stored per day x 3000 bytes per object x 2 days
For example:
5,000,000 objects/day x 3000 bytes/object x 2 days = 30,000,000,000 bytes,
or 30 GB
22.3.3 Estimating total space requirements for upgrade process and server
In addition to the space required for the upgraded server itself, some additional disk space is
needed for the upgrade process. For example, if you are upgrading the server on the same
system where it is currently located, you need enough space for two copies of the database
during the upgrade process.
Table 16-3 on page 261 shows basic tips for estimating each item, for each of the main
scenarios. Select the scenario then read down the column.
Table 16-4 on page 262 shows a sample filled-in work sheet for a 100-GB, V5 database that
has 80% space utilization, with the assumption that the database increases by 33 - 50%
when upgraded.
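The sample worksheet numbers can be reproduced with the same arithmetic, as a rough sketch assuming the 100-GB, 80%-utilized database described above:

```shell
# Hedged sketch of the sample worksheet: a 100 GB V5 database at 80%
# space utilization, assumed to grow 33-50% during the upgrade.
allocated_gb=100
used_gb=$((allocated_gb * 80 / 100))   # 80 GB actually in use
low_gb=$((used_gb * 133 / 100))        # +33%
high_gb=$((used_gb * 150 / 100))       # +50%
echo "Plan roughly ${low_gb}-${high_gb} GB for the V6 database"
```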
The network method for the data movement overlaps the extraction time with the insertion
time. Using the network method might help reduce the total time required for the upgrade
because of the overlap.
In benchmark environments in IBM labs, upgrade operations have achieved 5 - 10 GB per hour, depending on the configuration. Your environment might produce different results.
Performance tips depend on the method that you choose for moving the data from the V5
database:
Media method:
– If you are extracting the data to tape, use a high-speed tape device.
– If you are extracting the data to disk, use a disk device or LUN that is different from the device in use for the V5 database and recovery log.
– If you are performing a V5 unload and then a load to a test system, the test system disk should be optimized for the highest possible I/O rate.
If both the V5 database and the destination for the extracted data are on a virtualization
device (high-end storage controller, or a SAN virtualization device), ensure that the two
virtual LUNs are not on the same physical disk drive. Ensure that the space in use for the
V5 database and the destination for the extracted data are on different physical disk drives
within the virtualization device.
22.4.2 Performance tips for inserting data into the V6.1 database
The process for inserting the V5 extracted data into the V6.1 database is the longest-running
part of an upgrade process, and is the most sensitive to the configuration of the system. On a
system that meets the minimum requirements, the insertion process will run, but performance might be slow. For better performance, set up the system as described in the following tips:
Processors:
The insertion process is designed to exploit multiple processors or cores (multi-threaded).
The insertion process typically performs better on a system with a relatively small number of fast processors than on a system with a larger number of slower processors. Much of the activity is related to I/O interrupts, so faster processors are utilized more effectively.
Disk storage:
The insertion process is designed to exploit high-bandwidth disk storage subsystems. The
speed of the process is highly dependent on the disk storage that is used for the source
and target disk.
For best performance, use multiple LUNs that map to multiple independent disks, or that
map to RAID arrays with a large stripe size (for example, 128 KB). Use a different file
system on each LUN.
Table 22-17 shows an example of good usage of LUNs.
LUN 1: Active log
LUN 2: Archive log
LUN 7: Extracted V5 database (needed only if the media method is used to extract the V5 database to a sequential disk device class)
If the disk storage is supplied by a virtualization device (high-end storage controller, or a SAN
virtualization device), ensure that none of the virtual LUNs are on the same physical disk
drive. Ensure that the directories in use are on different physical disk drives within the
virtualization device.
This book focuses on installing the server itself. For information about installing other
components, see the Installation Guide for your operating system.
You can use the upgrade wizard, or manually use the upgrade utilities to upgrade the servers:
If you use the upgrade wizard, run the wizard once for each server instance.
You can upgrade multiple servers at the same time. Each time that you start the upgrade
wizard, you work with a single server, but you can start the wizard in multiple windows at
the same time.
If you use the upgrade utilities manually from a command line, repeat the procedure for
upgrading each server instance.
You can begin running one upgraded server instance while other server instances are still
being upgraded.
If servers are using shared libraries, upgrade the server that is the library manager first. Then
upgrade the servers that are library clients.
If you are moving a library manager or library clients to new systems for the upgrade to V6.1,
consider moving the servers to the new systems before upgrading the servers. By moving the
servers first, you can reestablish connectivity to all servers and devices before the upgrade.
Then upgrade the library manager, followed by upgrading the library clients.
If you have storage agents at earlier versions, upgrade them to V5.5 before upgrading the
server to V6.1. Verify that LAN-free data movement works as expected before upgrading the
server.
For the most recent information about supported levels of storage agents, go to the Web site:
http://www.ibm.com/support/docview.wss?uid=swg21302789
22.6 Testing
Tivoli Storage Manager 6.1 has many new features, which have contributed to new hardware
and software requirements. There are some major server enhancements that require additional planning considerations, including:
Integrated Solutions Console and Administration Center
Reporting and Monitoring
Deduplication
Disk structure for DB2, active logs and archive logs, storage pool volumes
Disaster recovery using the Disaster Recovery Manager feature
Adapting an existing V5.x Tivoli Storage Manager server infrastructure to V6.1
In addition, there are many methods of upgrading, so understanding all your options and the
cost of each is important (actual monetary and downtime costs might vary based on each
scenario).
For considerations when building test cases for upgrade scenarios, go to “Testing” on
page 288.
The planning for what equipment is needed (such as hardware platform, size of processor, and network connectivity) should all have been done before starting the upgrade to IBM
Tivoli Storage Manager V6.1. You should have read Chapter 16, “Installation and upgrade
planning for Tivoli Storage Manager V6.1” on page 245 and Chapter 22, “Upgrading to Tivoli
Storage Manager V6.1” on page 497.
In this chapter, we cover the steps needed to upgrade to V6.1 and walk through an example showing, step by step, how we upgraded a V5.5 server to V6.1 using the network model and the wizard. The example is based on upgrading a fairly simple Tivoli Storage Manager configuration, but you will be able to see how the issues covered earlier can impact the upgrade strategy.
For the test upgrade we have selected Scenario 2 from Table 23-1. In this scenario, some
upgrade tasks are performed on the original system and some on the new system. The data
is extracted from the original server database and sent over the network connection to be
inserted into the new server database.
You can use the wizard, or perform the upgrade by manually using the utilities. The wizard
offers a guided approach to the upgrade of a server. We strongly recommend using the
wizard; you can avoid some configuration steps that are complex when done manually.
Scenario 3 for upgrading the server: same system as the original server, media method
Scenario 4 for upgrading the server: same system as the original server, network method
Note: These steps assume that the upgrade utilities package is already installed on the
system where the V5 server is installed.
However, if you have SSAM, use EVENTBASEDUSED=YES for the database upgrade. If you are not sure whether event-based retention has ever been used, take the default, which is EVENTBASEDUSED=YES.
Applications such as CDP, Content Manager, and Space Manager assume that the Tivoli
Storage Manager server is always available.
Customer databases might need to back up archive logs hourly.
Preparing space for the upgrade process might involve:
– Determining the amount and type of space that is required for the upgrade process
before beginning the process.
– Verifying that the system has the amount of space that was estimated in the planning
step. Use the planning work sheet that you filled in with your information. Refer to
“Space requirements” on page 257.
This command fixes a problem that might exist in older Tivoli Storage Manager databases. If
the problem does not exist in your database, the command completes quickly. If the problem
exists in your database, the command might take some time to run.
Important: Do not skip this step. If your database has the problem and you do not run this
command now, the DSMUPGRD PREPAREDB utility fails when you run it. You must then
restart the V5 server and run the CONVERT USSFILESPACE command before continuing
with the upgrade process.
Review the steps for reverting to the earlier version of the server in the section, “Reverting
from V6.1 to the previous V5 server version” in the IBM Tivoli Storage Manager: Server
Upgrade Guide, SC23-9554. If for some reason you need to revert to the earlier version
after the upgrade to V6.1, the results of the reversion will be better if you understand the
steps and prepare for the possibility now.
Make the following adjustments to settings on your server and clients. These adjustments
must be done to make it possible for you to revert to the original server after the upgrade,
if problems occur:
– For each sequential-access storage pool, set the REUSEDELAY parameter to the
number of days during which you want to be able to revert to the original server, if that
becomes necessary. For example, if you want to be able to revert to the original server
for up to 30 days after upgrading to V6.1, set the REUSEDELAY parameter to 31 days.
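From an administrative command line, the REUSEDELAY adjustment might look like the following sketch. The storage pool name TAPEPOOL is a hypothetical example; repeat the command for each of your sequential-access storage pools.

```
update stgpool TAPEPOOL reusedelay=31
```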
Notes:
When you use the upgrade utilities and if you have multiple servers running on
the system, you must use the -k option to specify the name of the Windows
registry key from which to retrieve information about the server being upgraded.
Do not install the utilities in the same directory as the original server that is to be upgraded; this is not allowed.
The utilities package must be installed whether you are using the upgrade wizard
or performing the upgrade manually with utilities.
For example, if the V5 server was installed using the default path:
C:\Program Files\Tivoli\TSM\server
Create an upgrade folder and install the upgrade utilities in the path:
C:\Program Files\Tivoli\TSM\upgrade
After the upgrade utilities are installed, continue with installing the V6.1 server on the
target server.
– Log on to the target system as an administrator and change to the directory where you
placed the executable file. In the next step, the files are extracted to the current
directory. Ensure that the file is in the directory where you want the extracted files to be
located.
Either double-click the executable file, or enter the following command on the
command line to extract the installation files:
6.1.0.0-TIV-TSMALL-platform.exe
The files are extracted to the current directory.
Install the V6.1 server code on the new system. After the installation is done, check the installation logs in the path:
C:\Program Files\Tivoli\TSM
– Create the directories for the V6.1 database and logs, and the user ID that will own the
server instance. We choose TSM1 as the DB2 user ID.
Note: You need an upgrade version that is greater than or equal to the level of the Tivoli
Storage Manager server that you are upgrading.
Example 23-1 shows the commands we used to create the directories that the Tivoli Storage
Manager server instance needs for database and recovery logs.
Tip: When you use the upgrade utilities, if you have multiple servers running on the
system, you must use the -k option to specify the name of the Windows registry key
from which to retrieve information about the server being upgraded. The default value
for the option is SERVER1. Use the -o option with the DSMUPGRD command to
specify the location of the server options file.
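As a sketch only, a multi-server invocation might combine the two options as shown below. The registry key name server2 and the options file path are hypothetical, and the exact option syntax can differ by release, so verify it against the Server Upgrade Guide before use.

```
dsmupgrd preparedb -k server2 -o "C:\Program Files\Tivoli\TSM\server2\dsmserv.opt"
```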
4. On the source system, run the install.exe, then run the upgrade wizard tool,
dsmupgdx.exe in the c:\Program Files\Tivoli\TSM\server\ directory. The V6.1 server
upgrade window will open.
We choose the appropriate language and click OK to continue (see Figure 23-1).
Figure 23-4 Tivoli Storage Manager V6.1 server upgrade - Select Upgrade
Figure 23-5 Tivoli Storage Manager 6.1 server upgrade prepare database for upgrade
Figure 23-6 Tivoli Storage Manager 6.1 server upgrade - Select Upgrade
Figure 23-7 Tivoli Storage Manager 6.1 server upgrade - Verify Server Selection
Note: If the upgrade wizard shows a non-zero return code, scroll up and check the messages displayed in the panel, or check the output file for the operation. If a success message is present, even if it wraps across lines, the operation did complete successfully.
Figure 23-8 Tivoli Storage Manager 6.1 server upgrade preparation phase
Figure 23-11 Tivoli Storage Manager 6.1 server upgrade - Configure the new server instance
Figure 23-12 Tivoli Storage Manager 6.1 server upgrade - Modify Disk Configuration
Figure 23-13 Tivoli Storage Manager 6.1 server upgrade - Instance User ID
Figure 23-14 Tivoli Storage Manager 6.1 server upgrade - Instance Directory
Figure 23-15 Tivoli Storage Manager 6.1 server upgrade - Database Directories
Figure 23-16 Tivoli Storage Manager 6.1 server upgrade - Recovery Log Directories
Figure 23-17 Tivoli Storage Manager 6.1 server upgrade - Configuration Summary
Note: During the server installation format processing for V6.1 (dsmserv format or dsmserv loadformat), the server performs a backup of the database, as indicated by the messages:
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
This initial backup is required by DB2 so that Tivoli Storage Manager can set the recovery log processing mode to ROLLFORWARD. At this point, the database backup contains only the server schema (DDL). The backup is written to a file in the local file system and is subsequently deleted by Tivoli Storage Manager, because it contains only the server schema definitions, which Tivoli Storage Manager can recreate.
After completing the installation and configuration of the Tivoli Storage Manager server,
we recommend that you perform a FULL database backup. This database backup and any
subsequent database backups will be tracked in the server volume history, as expected,
and used as part of the server disaster recovery manager (DRM) processing.
Figure 23-18 Tivoli Storage Manager 6.1 server upgrade - Configure Instance
Figure 23-19 Tivoli Storage Manager 6.1 server upgrade - insert data into the new server instance
Figure 23-20 Tivoli Storage Manager 6.1 server upgrade - Load New Database
Figure 23-21 Tivoli Storage Manager 6.1 server upgrade - Load New Database
Figure 23-23 Tivoli Storage Manager 6.1 server upgrade - upgrade messages
27. You can view the full text of a message from the Tivoli Storage Manager Information Center at:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp
To display a message, enter the message ID in the Search field and click Go.
The insertion process completed with return code 405. For the INSERTDB operation, the message ANR1395I indicates success.
Figure 23-24 is the Upgrade Complete window. Click Done and review the installation logs.
Figure 23-24 Tivoli Storage Manager 6.1 server upgrade - Upgrade Complete
After the upgrade process is done, we start the Tivoli Storage Manager V6.1 server on the new system. We start the server in the foreground with DSMSERV and look for error messages that would indicate options defined in the V5.5 test database that are not supported by the Tivoli Storage Manager V6.1 server.
Important: After the upgrade to V6.1 is complete, conditions might require you to temporarily revert to the previous version of the server. Successfully reverting to the previous version is possible only if you performed all of the preparation steps. To understand why this is important, review the procedure for reverting an upgraded server to its previous version in 23.8, “How to rollback to V5 if needed or restart the process” on page 560.
Methods
The following methods are used to back up the Tivoli Storage Manager V6.1 database:
Full backup:
This is typically done through a Tivoli Storage Manager administrative schedule. You can also use a server-to-server device class for the backup.
Incremental backup:
This is not quite the same as a Tivoli Storage Manager 5.x incremental backup. You can also use a server-to-server device class for the backup.
Tivoli Storage Manager database snapshot:
This is typically done through a Tivoli Storage Manager administrative schedule, and it does not clear out archive logs. You can also use a server-to-server device class for the backup.
The point to emphasize here is that a FILE device class versus a TAPE device class uses a different number of volumes, and that the file extensions for a database backup versus a snapshot backup are different. Also, be sure to check your file includes and excludes on the system if you are doing FILE device class database backups.
Note: You must add new exclude statements into the dsm.opt files to make sure that you
do not back up the database backups.
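An exclude entry of this kind might look like the following dsm.opt sketch. The directory path is a hypothetical example; substitute the directory that holds your FILE device class database backup volumes.

```
* Hedged dsm.opt sketch; the path below is an illustrative assumption.
* Keep the client from backing up FILE device class database backups:
EXCLUDE.DIR "e:\tsmdbbackup"
```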
The following commands show examples for getting summary information for some specific
types of objects:
File spaces:
select node_name, count(*) as “Number of Filespaces” from filespaces group by
node_name order by 2
Note: The procedure is different for moving archive logs to a new location. You should
specify a new directory for the archive logs in the dsmserv.opt file on the archivelogdir
parameter. In this case, however, when Tivoli Storage Manager is restarted, the
existing archive logs are not moved but rather are just kept in the old location. New
archive logs are written to the new location. The archive logs are then cleared when a
database backup is done, and the logs are older than two full backups ago.
You can directly access the DB2 database, but this should be done only in read-only mode. The Tivoli Storage Manager DB2 database is exposed but not documented; the schema is proprietary and might change in the future.
Check the compatibility of third-party reporting tools to determine whether they work with Tivoli Storage Manager V6.
SQL example
Example 23-5 shows a select command that creates temporary workspace within DB2.
Example 23-5 Select command that creates temporary workspace within DB2
select node_name,count(*) as Number_of_Objects, sum(file_size) as Bytes_of_WKLD
from contents where node_name='61SOURCE18' and filespace_id=1 and file_name like
'\WKLD%' group by node_name
2. If your source server is at Tivoli Storage Manager 5.4 when dsmupgrd preparedb is done: You must restore your database from backups before restarting your server, and you must re-install Tivoli Storage Manager 5.4 from the installation media if you are using an in-place upgrade method.
3. If your source server is at Tivoli Storage Manager 5.5.0 - 5.5.2 when dsmupgrd preparedb is done: You do not need to restore your database from backups before restarting your server, but you must re-install Tivoli Storage Manager 5.5 from the installation media if you are using an in-place upgrade method.
Note: This might change at level 5.5.3. Review the readme files to see if this applies.
More detailed instructions for fallback are given in the database upgrade section of the manual IBM Tivoli Storage Manager Server Upgrade Guide, SC23-9554. This discussion covers only the requirements for the database restore in order to go back to V5; see the section “Reverting from V6.1 to the previous V5 Server Version”.
If you are using the extract to media method for upgrade and have completed the extract, you
can restart the upgrade from the INSERTDB step after cleaning up directories and
reformatting the database as shown in Example 23-7.
23.9 Debugging
In cases where you need to investigate log messages, there are several files where
information is available, as described next.
Descriptions of the various InstallAnywhere exit error codes can be found in 18.14, “Gathering logs” on page 404.
Example 23-8 Macro with a series of select statements used to compare the database before and after the upgrade
select node_name, count(*) as "Number of Filespaces" from filespaces group by
node_name order by 2
select platform_name, count(*) as "Number of Nodes" from nodes group by
platform_name
select node_name,sum(num_files) as "Number of Backup Files" from occupancy where
type='Bkup' group by node_name
select node_name,sum(num_files) as "Number of Archive Files" from occupancy where
type='Arch' group by node_name
select count(*) as "Number of Management Classes" from mgmtclasses
select count(*) as "Number of Server Scripts" from script_names
select count(*) as "Number of Storage Pools" from stgpools
After running these select statements in a macro, the output of our Tivoli Storage Manager
V5.5.2 server is as shown in Example 23-9.
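The comparison idea behind this macro — capture object counts before the upgrade and diff them against the V6.1 server afterwards — can be sketched against a mock schema. The tables and data below are illustrative only, not the actual Tivoli Storage Manager database definitions:

```python
import sqlite3

# Mock of two of the tables the macro queries; column names follow the
# select statements above, but the schema itself is invented for this sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
create table filespaces (node_name text, filespace_name text);
create table nodes (node_name text, platform_name text);
insert into filespaces values ('61SOURCE18','/home'), ('61SOURCE18','/opt'),
                              ('CAPITOLA','\\WKLD1');
insert into nodes values ('61SOURCE18','AIX'), ('CAPITOLA','WinNT');
""")

# Same shape as the first statement of the macro
fs_counts = con.execute(
    "select node_name, count(*) as nfs from filespaces "
    "group by node_name order by 2").fetchall()
print(fs_counts)   # [('CAPITOLA', 1), ('61SOURCE18', 2)]

# Snapshots like these, taken before EXTRACTDB and after INSERTDB,
# can be compared programmatically instead of by eye.
platforms = dict(con.execute(
    "select platform_name, count(*) from nodes group by platform_name"))
print(platforms)
```

Saving each snapshot to a file and diffing the two files catches any count that changed during the upgrade.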
After collecting our comparison data, we proceed to perform the manual upgrade steps, as
shown in the following process flow:
1. Extract the data from the V5.5.2 server as shown in Example 23-10.
# cd /usr/tivoli/tsm/upgrade/bin
# /usr/tivoli/tsm/upgrade/bin/dsmupgrd preparedb
ANR7800I DSMSERV generated at 12:32:53 on Mar 11 2009.
2. Prepare the disk for the database and logs on the UNIX system.
3. Issue the DSMSERV LOADFORMAT command, as shown in Example 23-11.
Example 23-11 Preparing Tivoli Storage Manager Server instance directories and DB2 environment
$ dsmserv -u tsm2 -i /home/tsm2 loadformat
dbdir=/tsm2/dbdir001,/tsm2/dbdir002,/tsm2/dbdir003,/tsm2/dbdir004
activelogsize=8192 activelogdir=/tsm2/actlog archlogdir=/tsm2/archlog
mirrorlogdir=/tsm2/activelogm
After the insertion has completed, we start the Tivoli Storage Manager server instance and
review the content to ensure that all our nodes and object pointers have been transferred.
The V6.1 Tivoli Storage Manager server macro output for comparison after the database
upgrade is shown in Example 23-13.
Example 23-13 Select commands run on V6.1 Tivoli Storage Manager Server after insertdb completed
ANS8000I Server command: 'select node_name, count(*) as "Number of Filespaces"
from filespaces group by node_name order by 2'
Example 23-14 Query content against the volume which holds the NAS TOC pointers and data
tsm: RS6000>q vol stgp=TOCPOOL
23.11.2 Summary
This short review demonstrates that in Tivoli Storage Manager V6.1.2, the NAS TOC
capabilities have been enabled and are functioning as they should. It also demonstrates the
process for upgrading from the PREPAREDB phase, through EXTRACTDB, and into the
INSERTDB phase.
Part 10 Appendixes
This part of the book contains several appendixes of reference information.
$ db2set DB2COMM=tcpip
$ db2set -all
[i] DB2_SKIPINSERTED=ON
[i] DB2_KEEPTABLELOCK=ON
[i] DB2_EVALUNCOMMITTED=ON
[i] DB2_SKIPDELETED=ON
[i] DB2COMM=tcpip
[i] DB2_PARALLEL_IO=*
[g] DB2SYSTEM=utah
[g] DB2INSTDEF=tsm1
2. To activate the changes, you need to stop and start DB2 using the db2stop and
db2start commands. Trying to stop DB2 fails while your Tivoli Storage Manager server is
up and running; Example A-2 shows the SQL1025N message being returned.
4. Because you now know that the service name is not configured, use the db2 update dbm
command as shown in Example A-4. The service name that you configure is DB2_TSM1.
5. Again, if you try to stop and restart DB2 now, you will receive a message (in this case
SQL5043N) indicating the configuration is not complete. The exact message is shown in
Example A-5.
6. So far we have not configured the TCP service port, so next you need to open the
/etc/services file and add the service details. On a Windows machine, you configure the
service port in the %SYSTEMROOT%\system32\drivers\etc\services file. We decided
to configure the DB2_TSM1 service to use port 50000. Just place the statement next to
the ports already configured for DB2 usage (do not use any of those ports for ODBC). See
Example A-6 for the details.
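The entry added to the services file uses the standard `service-name port/protocol` layout. As a small illustrative check (not one of the book's examples), such a line can be parsed like this, using the DB2_TSM1 name and port 50000 chosen above:

```python
# A services-file line in the format we add for the instance; the trailing
# comment is optional and ignored by the resolver.
line = "DB2_TSM1        50000/tcp        # DB2 port for TSM server instance 1"

name, portproto = line.split("#")[0].split()  # drop comment, split on whitespace
port, proto = portproto.split("/")
print(name, port, proto)   # DB2_TSM1 50000 tcp
```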
Then search for “IBM DB2 Driver for ODBC and CLI”. This package comes with a complete
set of DB2 client applications and includes a Control Center that allows you to manage DB2
databases through a graphical interface.
3. The InstallShield Wizard prepares the installation; the progress window shown in
Figure A-3 on page 577 is displayed.
4. The InstallShield Wizard progress is followed by the Windows Installer notification
window; see Figure A-4.
6. After you read the license agreement, you select to accept the license terms as shown in
Figure A-6. Click Next to continue.
8. Figure A-8 shows that you can create a response file during installation. With this
installation, select Install IBM Data Server Client on this computer and click Next to
continue.
10.By default, operating system security will be enabled with the package installation. As
shown in Figure A-10, you could disable this option, but here we stay with the default
again. Click Next to continue.
The installation window, see Figure A-12, informs you about the installation progress.
13.On the final window, Figure A-14, you can select to install additional products; here you
see the IBM Database Add-Ins for Visual Studio 2005. Click Finish to complete the
installation.
2. You can immediately verify if the configuration was successful by using the db2 connect
command, as shown in Example A-9.
You have now completed the ODBC configuration for the client and target machines. After
you have completed both catalog commands in Example A-8 on page 583, you can access
the database using the Control Center that comes with the IBM DB2 Driver for ODBC and CLI.
4. Now we just need a small Perl script that connects to the database and reads some
information out of it. We save the sample script shown in Example A-11 as
dbaccess.pl. After connecting to the database, the script reads the available nodes
and their platforms from the nodes table and writes them to standard output. More
examples, including code for the C programming language or PHP interpreters, are
available in the samples subdirectory of the DB2 installation.
use DBI;

my $database = "utahdb1";
my $user = "tsm1";
my $password = "<valid password>";
my $schema = "tsmdb1";

# Connect and read the nodes table (the connect/prepare lines here are
# reconstructed; the full script is shown in Example A-11)
my $dbh = DBI->connect("dbi:DB2:$database", $user, $password) or die $DBI::errstr;
my $sth = $dbh->prepare("select node_name, platform_name from $schema.nodes");
$sth->execute();
print "@$_\n" while $_ = $sth->fetchrow_arrayref();
exit; # Done
The script connects to the database and returns the nodes and their platforms.
Here the task is to connect to the Tivoli Storage Manager database using Calc, the
OpenOffice spreadsheet module. Follow these steps:
1. On your Windows system, click Start → All Programs → OpenOffice.org 3.0 →
OpenOffice.org Calc. An untitled spreadsheet opens; select File → New →
Database (see Figure A-16).
3. To set up the ODBC connection shown in Figure A-18, click Browse to search for existing
databases.
5. Figure A-20 does not yet list the DB2 ODBC driver as an available data source; click Add.
6. In the Create New Data Source window, shown in Figure A-21, select the IBM DB2 ODBC
DRIVER and click Finish.
8. From the Data Source Administrator window, Figure A-23, select the data source you just
created, UTAHDB1. Click OK to proceed.
9. Again select the UTAHDB1 data source; see Figure A-24. Click OK to continue.
11.When asked for authentication details, provide the user ID and make the Password
required field active. The user authentication window, Figure A-26, allows you to test the
connection immediately. We do not do this, so click Next to continue.
13.A dialog appears that allows you to save the database you just configured; see
Figure A-28. In this example you save it to the folder tsmtools.
15.By default, the SYSCAT tables branch is expanded; see Figure A-30. Click the collapse
selector (-).
You have now completed the tasks that allow you to access the Tivoli Storage Manager
database through OpenOffice.
In addition, we discuss changes that you might need to make to existing SQL commands in
order for your commands to work.
Note: The Tivoli Storage Manager configuration wizard creates the instance used by the
server and database. After a server is installed and configured, the db2icrt command
would not typically be used.
We recommend using these db2osconf values as the minimum settings for the Tivoli Storage
Manager server environment; it can also be beneficial to exceed the recommended values.
Give careful consideration to this information and to the kernel settings that are
recommended to be changed, because these settings can significantly affect normal
operations and how well the server performs in this environment. Not following the kernel
setting recommendations could result in an unstable or underperforming Tivoli Storage
Manager server.
See “HP-UX and Sun Solaris systems recommendations” on page 47 for the actual
recommendations. Example 23-15 shows how you can use the command to verify the
current settings.
Set the DSMI API environment variable configuration for the database instance:
db2set -i server1 DB2_VENDOR_INI=d:\tsmserver1\tsmdbmgr.env
The server configuration wizard typically takes care of any cataloging needed for using the
server database. This command would only need to be run manually after a server has been
configured and is running, if something in the environment changes or is damaged.
Example 23-16 shows the usage of the command and the messages returned.
When you initialize a new database, the AUTOCONFIGURE command is issued by default.
Note: When the instance and database directories are created by the DB2 database
manager, the permissions are accurate and should not be changed.
Example B-1 shows how you can invoke the create database command.
Note: The database manager and database configuration parameters are typically set and
managed directly by DB2. They are listed here for informational purposes and a means to
view the existing settings. Changing these settings should only be done through the use of
Tivoli Storage Manager server commands or procedures. Changing these settings might
be recommended by IBM service or through service bulletins such as APARs or Technical
Guidance documents (Technotes). These settings should not be changed manually and
should only be changed at the direction of IBM.
See Example 5-50 on page 116 for sample output collected with the get database manager
configuration command. To verify the database configuration and settings (log mode,
maintenance settings, and so on), you can use the additional show detail parameter:
db2 get db config for tsmdb1 show detail
Note: The database manager and database configuration parameters are typically set and
managed directly by DB2. They are listed here for informational purposes and a means to
view the existing settings. Changing these settings should only be done through the use of
Tivoli Storage Manager server commands or procedures. Changing these settings might
be recommended by IBM service or through service bulletins such as APARs or Technical
Guidance documents (technotes). These settings should not be changed manually and
should only be changed at the direction of IBM.
Example B-2 shows the output collected with the DB2 get snapshot for dbm command.
Node name =
Node number = 0
Memory Pool Type = Other Memory
Current size (bytes) = 14614528
High water mark (bytes) = 15007744
Configured size (bytes) = 33488896
Node number = 0
Memory Pool Type = FCMBP Heap
Current size (bytes) = 786432
High water mark (bytes) = 786432
Configured size (bytes) = 917504
Node number = 0
Memory Pool Type = Database Monitor Heap
Current size (bytes) = 327680
High water mark (bytes) = 327680
Configured size (bytes) = 327680
Tivoli Storage Manager monitors the state of the database using the health snapshot and
other mechanisms that are provided by DB2. There might be cases where the health
snapshot or other DB2 documentation indicates that an item or database resource might be
in an alert state, indicating that action should be considered to remedy the situation. Tivoli
Storage Manager monitors the condition and takes action as appropriate. Not all declared
alerts by the DB2 database are acted on.
For a table, this utility should be called when the table has had many updates, or after
reorganizing the table. For a statistical view, this utility should be called when changes to
underlying tables have substantially affected the rows returned by the view. The view must
have been previously enabled for use in query optimization using the ALTER VIEW
command.
The Tivoli Storage Manager server has a monitoring and tuning algorithm that evaluates the
workload and the changes made against the server's tables, and invokes RUNSTATS as
needed to update the statistics for a table. If issues arise with how this monitoring and tuning
algorithm works, IBM might recommend manually performing RUNSTATS for one or more
tables.
db2start command
The db2start command starts the current database manager instance background
processes on a single database partition or on all the database partitions defined in a
multi-partitioned database environment.
The Tivoli Storage Manager server starts and stops the instance and database whenever the
server starts and halts. While the server is running, a db2start is not needed or
recommended. Similarly, while the server is running, stopping the database might adversely
affect the server including causing current workloads and activity to fail or possibly causing
the server to crash. It is important to allow the Tivoli Storage Manager server to manage the
starting and stopping of the instance and database. See “ODBC target machine
configuration” on page 574 for sample usage of the command.
This command can also be used to drop a database partition from the db2nodes.cfg file
(partitioned database environments only). This command is not valid on a client.
The Tivoli Storage Manager server starts and stops the instance and database whenever the
server starts and halts. While the server is running, a db2start is not needed or
recommended. Similarly, while the server is running, stopping the database might adversely
affect the server including causing current workloads and activity to fail or possibly causing
the server to crash. It is important to allow the Tivoli Storage Manager server to manage the
starting and stopping of the instance and database.
When running commands against DB2 databases, you should always make sure to run them
against the correct Tivoli Storage Manager server instance. The db2ilist command, as
shown in Example B-4, provides that information. If you want to query the current instance,
use the db2 get instance command. See Example B-4 for details.
You can use this command and other information available directly from the Tivoli Storage
Manager server to diagnose memory and performance related issues.
The following call returns database and instance memory values and repeats every 10
seconds:
db2mtrk -i -d -v -r 10
Logs:
Current Log Number 2
Pages Written 1605
Method 1 Archive Status n/a
Method 1 Next Log to Archive 2
Method 1 First Failure n/a
Method 2 Archive Status n/a
Method 2 Next Log to Archive n/a
Method 2 First Failure n/a
Example B-6 shows a sample invocation of the command; the parameters used translate to:
-d database_name or -database database_name
-c or -connect
-s or -system_detail
-g or -get_dump
After the command is started, the db2support Welcome panel is presented, as shown in
Example B-7. Press Enter to complete the documentation collection.
_______ D B 2 S u p p o r t ______
NOTES:
1. By default, this program will not capture any user data from tables or
logs to protect the security of your data.
2. For best results, run this program using an ID having SYSADM authority.
3. On Windows systems you should run this utility from a db2 command
session.
4. Data collected from this program will be from the machine where this
program runs. In a client-server environment, database-related
information will be from the machine where the database resides via an
instance attachment or connection to the database.
You can easily transfer the db2support.zip file to your IBM support contact for review. For
complete documentation of the command, go to the following URL:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.
luw.admin.cmd.doc/doc/r0004503.html
You would run this command preceding a db2 get dbm cfg command, for example.
00001 USERSPACE1
----------------------------------------------------------------------------
Comment: DROP TABLE
Start Time: 20090618082638
End Time: 20090618082638
Status: A
----------------------------------------------------------------------------
EID: 466
00001 USERSPACE1
EID: 467
Logs:
Current Log Number 2
Pages Written 1605
Method 1 Archive Status n/a
Method 1 Next Log to Archive 2
Method 1 First Failure n/a
Method 2 Archive Status n/a
Method 2 Next Log to Archive n/a
Method 2 First Failure n/a
Note: The database manager and database configuration parameters are typically set and
managed directly by DB2. They are listed here for informational purposes and a means to
view the existing settings. Changing these settings should only be done through the use of
Tivoli Storage Manager server commands or procedures. Changing these settings might
be recommended by IBM service or through service bulletins such as APARs or Technical
Guidance documents (technotes). These settings should not be changed manually and
should only be changed at the direction of IBM.
See Example 5-50 on page 116 for sample output collected with the get database manager
configuration command. To verify the database configuration and settings (log mode,
maintenance settings, and so on), you can use the additional show detail parameter:
db2 get db config for tsmdb1 show detail
Note: The database manager and database configuration parameters are typically set and
managed directly by DB2. They are listed here for informational purposes and a means to
view the existing settings. Changing these settings should only be done through the use of
Tivoli Storage Manager server commands or procedures. Changing these settings might
be recommended by IBM service or through service bulletins such as APARs or Technical
Guidance documents (technotes). These settings should not be changed manually and
should only be changed at the direction of IBM.
Tivoli Storage Manager monitors the state of the database using the health snapshot and
other mechanisms that are provided by DB2. There might be cases where the health
snapshot or other DB2 documentation indicates that an item or database resource might be
in an alert state, indicating that action should be considered to remedy the situation. Tivoli
Storage Manager monitors the condition and takes action as appropriate. Not all declared
alerts by the DB2 database are acted on.
Example B-2 on page 600 shows the output collected with the command:
db2 get snapshot for dbm
Runstats command
The runstats command updates statistics about the characteristics of a table and associated
indexes or statistical views. These characteristics include number of records, number of
pages, and average record length. The optimizer uses these statistics when determining
access paths to the data including the most efficient means to process the data and whether
or not to exploit an index and such for the operation.
For a table, this utility should be called when the table has had many updates, or after
reorganizing the table. For a statistical view, this utility should be called when changes to
underlying tables have substantially affected the rows returned by the view. The view must
have been previously enabled for use in query optimization using the ALTER VIEW
command.
The Tivoli Storage Manager server has a monitoring and tuning algorithm that evaluates the
workload and the changes made against the server's tables, and invokes RUNSTATS as
needed to update the statistics for a table. If issues arise with how this monitoring and tuning
algorithm works, IBM might recommend manually performing RUNSTATS for one or more
tables.
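RUNSTATS itself is a DB2 utility, but the underlying idea — refreshing optimizer statistics after heavy table churn — exists in most engines. As an analogy only (SQLite's ANALYZE, not the DB2 command), the following sketch shows statistics being collected into a catalog table that the optimizer later reads:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (k integer, v text)")
con.execute("create index t_k on t(k)")
con.executemany("insert into t values (?, 'x')", [(i,) for i in range(100)])

# Refresh the statistics, as RUNSTATS would for a DB2 table and its indexes
con.execute("analyze")

# The collected figures land in the sqlite_stat1 catalog table, much as
# RUNSTATS updates the DB2 system catalog statistics.
stats = con.execute(
    "select idx, stat from sqlite_stat1 where tbl='t'").fetchall()
print(stats)
```

In both engines, stale statistics are what make the optimizer pick poor access paths, which is why the server refreshes them automatically.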
Replace use of the LIKE predicate with the IN predicate, as follows and as customized in
Example B-11.
select * from volumeusage where volume_name in (select distinct volume_name from
volumeusage where node_name='node1')
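The rewritten statement is standard SQL, so its behavior is easy to verify on any engine. A sketch with SQLite standing in for the server database (the volumeusage rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table volumeusage (node_name text, volume_name text)")
con.executemany("insert into volumeusage values (?, ?)", [
    ("node1", "VOL01"),   # VOL01 is used by node1 ...
    ("node2", "VOL01"),   # ... and shared with node2
    ("node2", "VOL02"),   # VOL02 is used only by node2
])

# The IN-subquery form: every usage record on any volume that node1 touches
rows = con.execute(
    "select * from volumeusage where volume_name in "
    "(select distinct volume_name from volumeusage where node_name='node1')"
).fetchall()
print(rows)   # both VOL01 rows appear; node2's VOL02 row does not
```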
NODE_NAME: CAPITOLA
COPY_TYPE: BACKUP
FILESPACE_NAME: \1000000KB_of_10KB
STGPOOL_NAME: FILEPOOL
VOLUME_NAME: D:\TSM\SERVER1\FILECLASS\0000005A.BFS
FILESPACE_ID: 3
NODE_NAME: CAPITOLA
COPY_TYPE: BACKUP
FILESPACE_NAME: \1000000KB_of_10KB
STGPOOL_NAME: FILEPOOL
VOLUME_NAME: D:\TSM\SERVER1\FILECLASS\0000005B.BFS
FILESPACE_ID: 3
While the V5 servers did return a result, this is not valid SQL syntax. The select statement
in Example B-13 documents the correct syntax using the timestampdiff() function. The
timestampdiff() function takes a numeric expression as its first argument, the interval code;
we use the number 2, which translates to a seconds interval. Microseconds would be 1,
minutes 4, hours 8, and so on. For a complete description, refer to the DB2 reference
manuals.
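What timestampdiff(2, ...) computes can be illustrated with ordinary Python date arithmetic (a sketch of the interval codes, not DB2 code; the timestamps are invented, and DB2's result for the larger intervals is an estimate based on an assumed 30-day month):

```python
from datetime import datetime

start = datetime(2009, 6, 18, 8, 26, 38)
end = datetime(2009, 6, 18, 8, 31, 8)

elapsed = end - start
seconds = int(elapsed.total_seconds())  # interval code 2: seconds
minutes = seconds // 60                 # interval code 4: minutes
hours = seconds // 3600                 # interval code 8: hours
print(seconds, minutes, hours)          # 270 4 0
```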
However, the SYSCAT.COLUMNS and SYSCAT.TABLES catalog tables now include all
database objects that are known to the server, including some objects that cannot be
accessed through the SELECT command. You receive an error message if a SELECT
command includes an attempt to access one of these objects.
You can assign names to columns that are retrieved from multiple tables so that a
conditional statement can be run against the results you want from the SELECT command.
Example B-14 shows a sample command that joins tables and labels columns.
ENTITY: CAPITOLA
ACTIVITY: BACKUP
SUM_BYTES: 27219109316
SUM_TIME: 8400.000000
SUM_AFFECTED: 524236
SUM_FAILED: 0
SUM_MEDIAW: 16
In previous releases, the SHARED field was blank (null) for the DISK device class. In V6.1,
the SHARED field contains the value NO as shown in Example B-15. The SHARED field does
not apply to the DISK device class, and the value NO can be ignored.
Example B-15 Changed results of the SELECT command for the DISK device class
tsm: TIRAMISU>select devclass_name, access_strategy, shared from devclasses where
access_strategy like 'Random'
For example, if you are writing scripts for automation and need to strip out the additional
spaces, you can use the RTRIM scalar function as shown in Example B-17.
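The effect of RTRIM is easy to demonstrate; the sketch below uses SQLite, whose rtrim() scalar function treats trailing blanks the same way (the device class value is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table devclasses (devclass_name text)")
# Fixed-width query output often carries trailing blanks into scripts
con.execute("insert into devclasses values ('DISK            ')")

padded, = con.execute("select devclass_name from devclasses").fetchone()
trimmed, = con.execute("select rtrim(devclass_name) from devclasses").fetchone()
print(repr(padded), repr(trimmed))   # the trimmed value has no trailing blanks
```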
Any subsequent attempt to start the server fails with the ANR0130E out of log space
condition as shown in Example C-2.
The question now is how to recover from this condition, because the V5 DUMP/LOAD
utilities are not available with the Tivoli Storage Manager V6 server and no database backup
is available for restore.
We recover by taking database backups using DB2 commands. Next we describe the
procedure.
E:\>dir
Volume in drive E is MoreSpace
Volume Serial Number is 34F1-BF7B
Directory of E:\
C:\Program Files\Tivoli\TSM\db2\BIN>db2start
SQL1063N DB2START processing was successful.
Directory of D:\tsm\tsmactivelog
Example C-7 shows the copies under the new archive log path, renamed to the next volumes
in sequence.
Directory of E:\temp_activelog
Note: The database backups we take here are temporary and will be deleted. The purpose
is to prune the archive logs.
Before we start the backup process, we create a directory on the E: drive to temporarily
hold the database backups, as shown in Example C-8.
E:\>dir
Volume in drive E is MoreSpace
Volume Serial Number is 34F1-BF7B
Directory of E:\
Now we start two consecutive database backups as shown in Example C-9. After the second
backup, the archive log directory and original active log directory are empty of log files.
C:\Program Files\Tivoli\TSM\db2\BIN>db2start
SQL1063N DB2START processing was successful.
After the second backup, the archive log directory and the original active log directory are
emptied of log files. The active log files are stored under the new active log directory path
we defined above. Keep the backup image timestamps so that you can later prune the
images from DB2.
Example C-10 shows the empty original log directory after the second database backup. The
archive log directory is also empty at this point in time.
Directory of D:\tsm\tsmactivelog
C:\Program Files\Tivoli\TSM\db2\BIN>db2start
SQL1063N DB2START processing was successful.
C:\Program Files\Tivoli\TSM\db2\BIN>db2stop
SQL1064N DB2STOP processing was successful.
We take two more full backups, this time from inside the Tivoli Storage Manager server.
Example C-15 shows how we take two database backups while the server is disabled for
incoming client sessions.
At this point we have almost completed the recovery procedure. As a last step, we delete
the second database backup that was taken using the DB2 backup db command.
Example C-16 shows how we submit the prune command for the backup image with
timestamp 20091031090213.
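DB2 backup image timestamps use the compact yyyymmddhhmmss form, so the image to prune can be identified by decoding the value. A small sketch (plain Python, using the timestamp from our example):

```python
from datetime import datetime

ts = "20091031090213"  # backup image timestamp from the prune command above
taken = datetime.strptime(ts, "%Y%m%d%H%M%S")
print(taken.isoformat())   # 2009-10-31T09:02:13
```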
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM Tivoli Storage Manager Server Upgrade Guide, SC23-9554
IBM Tivoli Storage Manager for Windows Backup-Archive Clients Version 6.1, SC23-9792
IBM Tivoli Storage Manager for UNIX and Linux Backup-Archive Clients 6.1, SC23-9791
IBM Tivoli Storage Manager for AIX Installation Guide V6.1, GC23-9781
IBM Tivoli Storage Manager for AIX Administrator's Guide V6.1, SC23-9769
IBM Tivoli Storage Manager for AIX Administrator's Reference V6.1, SC23-9775
IBM Tivoli Storage Manager for SAN for AIX Storage Agent User's Guide V6.1,
SC23-9797
IBM Tivoli Storage Manager for HP-UX Installation Guide V6.1, GC23-9782
IBM Tivoli Storage Manager for HP-UX Administrator's Guide V6.1, SC23-9770
IBM Tivoli Storage Manager for HP-UX Administrator's Reference V6.1, SC23-9776
IBM Tivoli Storage Manager for SAN for HP-UX Storage Agent User's Guide V6.1,
SC23-9798
IBM Tivoli Storage Manager for Sun Solaris Installation Guide V6.1, GC23-9784
IBM Tivoli Storage Manager for Sun Solaris Administrator's Guide V6.1, SC23-9772
IBM Tivoli Storage Manager for Sun Solaris Administrator's Reference V6.1, SC23-9778
IBM Tivoli Storage Manager for SAN for Sun Solaris Storage Agent User's Guide V6.1,
SC23-9800
IBM Tivoli Storage Manager for Linux Installation Guide V6.1, GC23-9783
IBM Tivoli Storage Manager for Linux Administrator's Guide V6.1, SC23-9771
IBM Tivoli Storage Manager for Linux Administrator's Reference V6.1, SC23-9777
IBM Tivoli Storage Manager for SAN for Linux Storage Agent User's Guide V6.1,
SC23-9799
Online resources
These Web sites are also relevant as further information sources:
IBM product announcement letters:
http://www-01.ibm.com/common/ssi/index.wss
Additional IBM Tivoli Storage Manager V6.1 products:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appnam
e=iSource&supplier=897&letternum=ENUS209-088
Tivoli Storage Manager V6.1 documentation:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp
UNIX client installation media:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
TSM V6.1 product and capacity planning information:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.srv.inst
all.doc/t_srv_plan_capacity.html
TSM V6.1 product support site:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
Cygwin site:
http://www.cygwin.org
Index 633
db2 connect 65, 583 dbdir.txt file 108
db2 connect reset 65 DBMEMPERCENT 34
db2 describe table 69 DBMEMPERCENT option 46, 398
db2 get db cfg 66 DBMEMPERCENT parameter 328
db2 get instance 69 DBPAGESHADOW option 39
db2 list db directory 68 DBPAGESHADOWFILE option 39
db2 select 65 DBREPORTMODE setting 114
db2 update dbm 575 dd storagepool 153
detach 609 ddtrace utility 27
Get database manager configuration 600, 611 deduplicated copy storage pool 139
get database manager configuration 116 deduplicated storage pools 138
get snapshot 600, 612 deduplication 137, 146, 154
GRANT 602 commands 151
list history 609, 612 dd storagepool 153
runstats 603 disabling 154
set db2instance 604 encrypted files 151
stop dbm 603 hash functions 146
DB2 configuration hyperfactor 147
query 343 log sizing 61
DB2 database 41 ratios 147
archive log 348 Single Instance Store 146
backup 323 space considerations 154
Cached copies 50 workloads 146
log function 348 default server 394
transition 44 DEFINE
DB2 database backup 132 DBBACKUPTRIGGER 37
DB2 Driver for ODBC 583 DBCOPY 37
DB2 instance DBVOLUME 37
environment variable 85 LOGVOLUME 37
DB2 options DELETE VOLHISTORY command 296, 525
db2instance 85 Deployment Engine
DB2 password 365 Initialization 372
DB2 system commands remove 403
db2cc 85 DEVCLASSES table 615
db2cmd 598, 604, 610 DEVCONFIG option 274
db2icrt 598 DFS links 412
db2ilist 604 diagnostic information 114, 118
db2level 70 diagpath
db2mtrk 604 configuration parameter 119
db2osconf 47, 598 DIAGPATH parameter 117
db2set 70, 574, 599 diffsnapshot 221
db2start 70, 574, 603 diffsnapshot option 408
db2stop 70, 574 DiffSnapShot=Latest option 221
Db2 system commands disable reclamation 284
db2pd 605 DISABLENQR YES option 158
db2 update command 131, 320 disabling sessions 296, 525
DB2 UPDATE DB command 619 disaster
DB2 version 500 preparation 43
DB2_VENDOR_INI variable 91 recovery 43
DB2COMM setting 574 disaster recovery 216
db2diag utility 118 data deduplication 122
db2diag.log 119 Disaster Recovery Manager 5
db2diag.log file 117, 575 license 121
db2icrt command 293, 319, 393 disaster recovery plan 126
db2osconf system utility 598 Disaster Recovery solution 142
db2set command 91 DISK device class 615
db2stop command 119 Disk Structure 317
db2support.zip 608 dismount tapes 298, 526
DB2TSM 365 DMS tablespace 53
DBBACKUPTRIGGER command 98 DPI protocol 400
Index 635
GENERATE BACKUPSET command 474 restriction 358
get health snapshot command 602 setup wizard 416
grant auth admin 325 install folder 418
InstallAnywhere 357
InstallAnywhere platform 293
H installation 245
hash functions 146 installation log 316
Health Monitor 432, 454 installation wizard 360
multi-threaded model 455 installer program
help dsmicfgx.exe 378
non-English online 414 instance configuration
Help command 410 database backup 390
HP-UX instance creation 462
passthru driver 27 instance directory 258, 264, 393, 542
HP-UX systems instance startup 322
upgrade 504 instance user ID 379
HSM instance wizard
introduction 186 database directory 383
threshold migration 187 Integrated Solution Console 449
HSM for Windows Integrated Solutions Console 487
migration jobs 186 introduction 146
V5.4 enhancements ipcs - l command 48
hyperfactor 147 ISC 449
default location 492
I default timing 492
IBM Autonomic Deployment Engine 366 enhancements in V7.1 487
IBM Tivoli Monitoring 432 multitasking 487
IBM Tivoli Storage Manager partial refresh 487
Client enhancements, additions, and changes 8 preserving scroll location 487
disaster preparation and recovery 43 single sign-on 487
new features overview 7, 10 startup pages 487
overview 4, 42, 122 upgrade 451
product components 4 upgrade considerations 451
product positioning 5 url login 452
Server enhancements, additions and changes 7 ISC/AC
IBM Tivoli Storage Manager Client. see Client remote connection 326
IBM Tivoli Storage Manager for ERP 15 ISC/AC server process 325
IBM Tivoli Storage Manager for Mail 14 Itanium system 504, 510
IBM Tivoli Storage Manager for Space Management see
Tivoli Storage Manager for Space Management J
IBM Tivoli Storage Manager HSM for Windows see HSM JFS2 file system 349
for Windows JVM 448
IBM Tivoli Storage Manager see Tivoli Storage Manager
IBM Tivoli Storage Manager Server. see Server
IDENTIFY DUPLICATES command 32 K
Identify Duplicates processes 153 kernel parameter 47
identify processes 153 kernel parameters 48
Incremental backup 556
incremental database backup 98
index_keyseq 614 L
index_order column 614 LABEL LIBVOLUME operation 24
Individual Mailbox Restore 238 language packs 416
limitations 239 library clients 517
restoremailbox parameter 242 library managers
initial database size 332 upgrade 517
INSERTDB Linux on System z
ANR1525I message 275 upgrade 508
insertdb 562 Linux system
INSERTDB utility 274, 570 upgrade 505
install Local disaster recovery 128
Index 637
package names 357
PA-RISC system 504
Passport Advantage 305
passthru device driver 27
   device configuration 27
passthru driver 27
Payloads 357
performance
   extraction process 299
   TXNGROUPMAX 179
perl
   sample script 585
perl script 584
planning
   ACTIVE log mirror 253
   ARCHIVE FAILOVER log 253
   database capacity 251
   database migration 246
   performance 265
ppm install DBI command 584
PREPARE command 124
PREPAREDB command 272
primary archive log 383
processor architectures 412
product positioning 5
product support site 502
proprietary database 42
ProtecTIER 150
prune command 624

Q
q script f=d 484
Qtree security 231
query adobjects command 409
query archive command 410
query backup command 410
QUERY DB FORMAT=DETAILED 258
QUERY DBSPACE 55, 110
QUERY DBSPACE command 32
QUERY DRMEDIA 124
QUERY DRMEDIA command 123
QUERY LIBRARY command 25
QUERY LOG 110
QUERY LOG command 37, 99
query mount 526
query NASBACKUP * 570
QUERY NODE command 91
query process 526
QUERY PROCESS command 171
query san command 23
QUERY SERVER command 264
query session command 296, 525
QUERY SQLSESSION 38
QUERY STATUS command 114
query stgpool command 154
Query SystemState command 200
QUERYSUMMARY option 206, 410
QUERYSUMMARY output 206

R
raw logical volumes 52, 259
RECLAIM parameter 298, 527
RECLAIMDELAY option, server option
   RECLAIMDELAY 26
RECLAIMPERIOD 26
RECLAIMPERIOD option, server option
   RECLAIMPERIOD 26
reclamation
   disable 284
recovery log 45
   ROLLFORWARD 390
   space requirements 259
recovery log process 348
recovery log space 514
   naming 265
recovery logs 251
recovery mode 58
recovery plan 127
recovery site scenario 129
Redbooks Web site 629
   Contact us xx
REDUCE DB 38
REDUCE LOG 38
REGISTER LICENSE command 398, 555
REGISTER NODE command 179
registry keys 394
RELABELSCRATCH parameter 24
rename system object 413
REORG database 251
Reporting
   Administration Center 433
   Deployment Engine 434
Reporting and Monitoring
   frequently asked questions 435
Reporting feature
   components 434
reporting monitoring feature 250
Reporting package 432
RESET
   DBMAXUTILIZATION 38
   LOGCONSUMPTION 38
   LOGMAXUTILIZATION 38
RESET BUFPOOL 38
restore adobjects command 410
RESTORE DB command 395
restore systemservices command 412
restore systemstate 412
restore vm 410
restoremailbox parameter 242
rollback 560
rollback upgrade 560
ROLLFORWARD 390
rollforward operation 62
ROLLFORWARD recovery 58
rollforward recovery 62
rollforward utility 62
RTRIM function 616
rvoptsetencryptiondisabled option 410
SnapMirror image 215
snapmirror log 219
SNAPMirror parameter 217
SnapMirror restore
   unlike geometry 220
SnapMirror to Tape 215
   restore 219
Snapshot 556
snapshot difference
   performance 233
snapshot operations 16
snapshotproviderfs option 414
snapshotproviderimage option 414
snapshotroot option 221
snmp daemon 400
SNMP subagent 400
space 257
   Active log 514
   future growth 513
   server setup tips 513
   upgrade process 514
   V5 server 512
space considerations
   deduplication 154
space estimates
   database 513
space requirements 257
   active log 259
   active log mirror 259
   archive failover log 260
   archive log 259
   recovery log 259
   tables 261
   TSM V6 server 257
   upgrade 511
      space requirements 512
   work sheet 263
SPACETRIGGER commands 35
SQL queries 559
SQL1025N message 574
SQL1063N message 576
SQL5043N message 575
stagingdirectory option 411
stape driver 28
storage agents
   upgrade 518
storage pool
   verify 179
storage pool commands 35
storage pools 284
   DEVTYPE=FILE 393
Sun Solaris
   upgrade 509
SYSSTAT function 215
system memory 355
system requirements 248
system state backup 412
System Storage Archive Manager 282

T
tape drive 414
tape library 414
TCP/IP 386
   SHMPORT option 400
   SSLTCPADMINPORT 399
   SSLTCPPORT 399
   TCPADMINPORT 399
   TCPNODELAY 399
   TCPPORT 399
   TCPWINDOWSIZE 399
TCP/IP communication 424
TCP/IP options 399
tcpport 465
tdpexcc command 242
telnet session 333
test upgrade 288
The Deployment Engine 357
threshold migration 187
   command line client 193
   environments 188
   options 192
   summary 194
   tasks 194
THROUGHPUTDATATHRESHOLD option 179
THROUGHPUTTIMETHRESHOLD option 179
timeline 6
timestampdiff function 614
Tivoli Common Reporting 432, 434
Tivoli Event Portal 432
Tivoli Storage Manager
   Administration Center 449
   client V5.3 enhancements 9
   components
   development timeline 6
   Extended Edition 5
   for products 13
   overview
   release timeline 6
   Space Management 188
   version compatibility 6
Tivoli Storage Manager Extended Edition 5
Tivoli Storage Manager for ERP 15
Tivoli Storage Manager for Mail 14, 235
Tivoli Storage Manager for Space Management
Tivoli Storage Manager for Storage Area Networks 10
Tivoli Storage Manager HSM for Windows, see HSM for Windows
Tivoli Storage Manager V6 Upgrade Guide 283
transaction activity 58
transaction group 178
triggered backups 98
TSM
   ARCHIVE log 252
   proprietary database 42
   recovery logs 251
   relabel volumes 24
TSM Client
   install 407
TSM Copy Services
upgrading ISC 451
User Account Control 378

V
V6.1 database
   extraction process 300
virtual machine backup 408
vmbackdir option 411
vmbacknodelete option 411
vmbackuptype option 411
volume history
   backup volhistory 297, 526
volume history file 101, 103
Volume Shadow Copy Service 16, 414
VOLUMEHISTORY option 104
VOLUMEHISTORY server option 101
VTL
   RELABELSCRATCH 24
VTL devices 22

W
Web-client language files 414
Windows install
   216 message 397
   batch script 375
   Client install 420
   communication 399
   components 358, 363
   configuration 376
   configure 369
   DB2 considerations 354
   DB2 password 365
   debugging 402
   default server 394
   disk space 355
   first to know 354
   initialize server 394
   installation folder 363
   installation log 369
   instance user ID 379
   log gathering 404
   planning 354
   register license 398
   registry keys 394
   server instance 376
   server stop 397
   silent mode 374
   software 355
   system memory 355
   system requirements 354
   TCP/IP options 399
   user account control 378
Windows Registry 411
Windows service
   server instance 397
Windows systems
   install server 354
   upgrade 510
wizard
   media method, upgrade
   media wizard 276
   network method 278
   wizard tool 529

X
X11 client 306
X11 environment 311
X11 pre-configured 307
X11 redirection 333
X11 remote setup 311
xterm & command 312
Learn the new features and function in Tivoli Storage Manager V6.1

Detailed installation, upgrade, and customization provided

Monitoring and reporting enhancement examples

This IBM Redbooks publication provides details of changes, updates, and new functions in IBM Tivoli Storage Manager Version 6.1. We cover all the new functions of Tivoli Storage Manager that have become available since the publication of IBM Tivoli Storage Manager Version 5.4 and Version 5.5 Technical Guide, SG24-7447.

This book is for customers, consultants, IBM Business Partners, and IBM and Tivoli staff who are familiar with earlier releases of Tivoli Storage Manager and who want to understand what is new in Version 6.1. Because we target an experienced audience, we use certain shortcuts to commands and concepts of Tivoli Storage Manager. If you want to learn more about Tivoli Storage Manager functionality, see IBM Tivoli Storage Management Concepts, SG24-4877, and IBM Tivoli Storage Manager Implementation Guide, SG24-5416.

This publication should be used in conjunction with the manuals and readme files provided with the products and is not intended to replace any information contained therein.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.