Install and Configure Tape Library Attached to the IBM Tivoli Storage Manager Server
    Introduction
    Objective
    Install and Configure Tape Library Attached to the IBM Tivoli Storage Manager Server
    Prepare Tape Cartridges for Use in an IBM Tivoli Storage Manager Tape Library
    Checking in Volumes
    Create Scratch Tapes by Using the LABEL LIBV Command
    Checking Out Volumes
    Auditing a Library
Scheduler
    Introduction
    Objectives
    Overview of Schedules
    Central Scheduling Mode - Client Polling
    Central Scheduling Mode - Server Prompted
    Selecting Schedule Methods
    Additional Scheduling Options
    Additional Client Polling Options
    Additional Server Prompted Options
    Identify Tasks/Actions that can be Scheduled for Clients
    Consistent Client Return Codes
    Managing Client/Server Sessions
Client Configuration
    Introduction
    Objectives
    Identify the Types of Clients
    Using the Command Line
    Using the Web Client
    Using the GUI
    Administrative Control of Access
    Configuring Client Access to Server
    Define Include/Exclude Option
    Include/Exclude Processing
Policy Management
    Introduction
    Objectives
    Policy Management
    Policy Specification
    Copy Group Attributes
    Policy Set
    Command Line to Define Policy Set
    Validating a Policy Set
    Management Class
    How Files Are Bound to a Management Class
Privilege Classes
    System Privileges
    Storage Privileges
    Unrestricted Storage Privileges
    Restricted Storage Privilege
    Operator Privilege
    Policy Privileges
    Analyst Privilege
This unit provides an introduction to IBM Tivoli Storage Manager and describes the major
functions and features that are currently available.
Objectives
Describe how IBM Tivoli Storage Manager serves as a tool for data management and protection.
Identify the major components of the IBM Tivoli Storage Manager solution.
Identify the IBM Tivoli Data Protection products that are part of the IBM Tivoli Storage Manager solution.
Today's storage management needs go beyond traditional backup and recovery solutions.
Data is the currency of today's e-business economy, and planning to store this data must
encompass data reliability, solution scalability, disaster planning and recovery, and the impact
on the overall infrastructure as well as on individual mission-critical applications.
IBM Tivoli Storage Manager (TSM) is a storage management application built for the
enterprise. TSM provides an enterprise solution for data protection, disaster recovery, space
management, and record retention. TSM facilitates flexible and scalable storage management
policies to support complicated business needs for storage management and disaster
recovery. Most importantly, TSM automates storage management tasks, eliminating labor-
and cost-intensive manual procedures for backup, archive, and recovery.
TSM protects and manages data on more than 30 operating platforms. The TSM server
application is supported on over 10 platforms, and it supports hundreds of disk, tape, and
optical storage devices. The TSM server software provides built-in device drivers for directly
connecting more than 300 different device types from every major manufacturer. All common
LAN, WAN, and SAN infrastructures are also supported by TSM.
IBM Tivoli Storage Manager protects your organization's data from hardware failures and
other errors by storing backup and archive copies of data on offsite storage. It scales to
protect hundreds of computers, from laptops to mainframes, running a dozen operating
systems and connected via the Internet, WANs, or LANs. Storage Manager's centralized
Web-based management, smart data-move and store techniques, and comprehensive
policy-based automation work together to minimize data protection administration costs and
the impact on both computers and networks. Optional modules allow business-critical
applications that must run 24x365 to use Storage Manager's centralized data protection with
no interruption to their service.
Figure 3
IBM Tivoli Storage Manager for ERP, specifically designed and optimized for the SAP R/3
environment, provides automated data protection, reduces the CPU performance impact of
data backups and restores on the R/3 server, and greatly reduces the administrator workload
necessary to meet data protection requirements. Tivoli Storage Manager for ERP builds on
the set of database administration functions integrated with R/3 for database control and
administration.
IBM Tivoli Storage Manager for Hardware improves the data protection of your business-
critical databases and ERP applications that require 24x365 availability. This software module
helps IBM Tivoli Storage Manager and its other data protection modules to perform high-
efficiency data backups and archives of your most business-critical applications while
eliminating nearly all performance impact on database or ERP servers.
IBM Tivoli Storage Manager for Mail is a software module for IBM Tivoli Storage Manager
that automates the data protection of e-mail servers running either Lotus Domino or
Microsoft Exchange. This module utilizes the application program interfaces (APIs) provided
by e-mail application vendors to perform online hot backups without shutting down the e-
mail server and improves data-restore performance.
IBM Tivoli Storage Manager for Application Servers is a software module that works with
IBM Tivoli Storage Manager to better protect the infrastructure and application data and
improve the availability of WebSphere Application Servers. It works with the WebSphere
Application Server software to provide an applet GUI to do reproducible, automated online
backup of a WebSphere Application Server environment, including the WebSphere
administration database (DB2 Universal Database), configuration data, and deployed
application program files.
IBM Tivoli Storage Manager for Space Management frees administrators and users from
manual file system pruning tasks, and defers the need to purchase additional disk storage, by
automatically and transparently migrating rarely accessed files to server storage, while the
most frequently used files remain in the local file system.
The IBM Tivoli Storage Manager for Storage Area Networks extension allows SAN-
connected Storage Manager Servers and Storage Manager Client computers to make
maximum use of their direct network connection to storage. This software extension allows
both servers and client computers to make the bulk of their backup/restore and
archive/retrieve data transfers over the SAN instead of the LAN, either directly to tape or to
the Storage Manager Disk storage pool. This ability greatly reduces the performance impact
of data protection on the LAN while also reducing CPU utilization on both client and server.
The specially designed IBM Tivoli Storage Manager database retains information about all
client system and user files, business policies, disaster recovery, and the scheduling of client
and administrative tasks. This database retains information called metadata, which means
data that describes data. The flexibility of the IBM Tivoli Storage Manager database enables
customers to define storage management policies around business needs for individual
clients or groups of clients. Client data attributes such as storage destination, number of
versions, and retention period can be assigned at the individual file level and stored in the
database.
The IBM Tivoli Storage Manager database also ensures reliable storage management
processes. To maintain data integrity, the database uses a recovery log to roll back any
changes made if a storage transaction is interrupted before it completes. This is known as a
two-phase commit. Also, both the IBM Tivoli Storage Manager Database and recovery log can
be mirrored for availability, providing automatic volume switching after a media failure. In the
unlikely event of an IBM Tivoli Storage Manager Database recovery, operators can restore the
database to the exact point of a failure by rolling the recovery log forward after restoring from
the latest database backup.
Full + Incremental
Figure
Full + Differential
Figure
During the initial client backup, IBM Tivoli Storage Manager backs up
all eligible files, creating a full backup. Subsequently, files are backed up again
only if they are new or have changed since the last backup. IBM Tivoli Storage
Manager maintains a pointer in its database to the latest version of each file for
each client, eliminating the need for another full backup to consolidate the files
into a single image.
Figure
Saves time and disk space by backing up only new files and modified files. The
progressive backup feature uses its own relational database to track data
wherever it is stored, delivering direct one-step file restore. This eliminates the
need for base-plus-incremental tapes, commonly used for restore procedures in
other storage management products.
The reorganization of the physical storage media to store each client's data
One of the most important concepts in IBM Tivoli Storage Manager data
management is the difference between an active backup version and an inactive
backup version.
Assume a new file is created on your workstation. The next time you run a
backup operation (say, Monday at 9 p.m.), IBM Tivoli Storage Manager server
stores this file. This copy of the file is known as the ACTIVE version. When you
run an incremental backup again (say, Tuesday at 9 p.m.), IBM Tivoli Storage
Manager uses this ACTIVE version already stored to check back with your
workstation to determine whether the file has changed since the last backup. If it
has, it is backed up again. This version now becomes the ACTIVE version and
the copy from Monday becomes an INACTIVE version. The most recent backed-up
version of the file is always the ACTIVE version, as long as it still exists on the
original client. IBM Tivoli Storage Manager will keep storing a new ACTIVE
version and inactivating the previous active version, up to the limit of the total
number of versions defined to be retained in the management class. Once this
limit is exceeded, the oldest INACTIVE version is deleted from IBM Tivoli
Storage Manager storage and will no longer be able to be restored.
IBM Tivoli Storage Manager controls the retention of its ACTIVE and INACTIVE
versions of a file that exist on a client machine by using two criteria defined in the
Management Class:
_ How many versions: The parameter that controls the number of backup
versions is called VEREXISTS. This may be set to a specific number or to
NOLIMIT.
_ How long to keep: The RETEXTRA parameter controls how much time must
elapse before an INACTIVE file version is considered expired. This parameter
controls how long to retain all remaining inactive files and may be set at a
specific number of days or to NOLIMIT, meaning they will never be expired.
Important: An ACTIVE file version is never expired. Even if you never change
a particular file after the first incremental backup, IBM Tivoli Storage Manager
will keep this file version indefinitely.
For a file deleted on a client machine, IBM Tivoli Storage Manager uses different
criteria:
How many files: The parameter that controls the number of inactive backup
versions is called VERDELETED. This number is normally less than or equal
to the number you have for VEREXISTS.
How long to keep files: The RETEXTRA parameter controls how much time
must elapse before an INACTIVE file version is considered expired. This
parameter controls how long to retain all remaining inactive files except for the
last one and may be set at a specific number of days or to NOLIMIT, meaning
they will never be expired.
How long to retain the last file: The RETONLY parameter controls the last
inactive copy of a file. As files get expired by RETEXTRA, you can configure
IBM Tivoli Storage Manager to manage the last inactive copy differently, so
that you can keep that file for a longer period of time. It may be set at a
specific number of days or to NOLIMIT, meaning they will never be expired.
Typically, configure RETONLY to be the same value as or longer than
RETEXTRA, because it functions as a grace period before expiring the file.
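The version and retention parameters described above are set in the backup copy group. A hedged sketch of the command follows; the STANDARD domain, policy set, and management class names and the BACKUPPOOL destination are assumptions for illustration:

```
define copygroup standard standard standard type=backup destination=backuppool verexists=4 verdeleted=2 retextra=30 retonly=60
```

With these values, up to four versions of an existing file are kept, two versions of a deleted file, inactive versions for 30 days, and the last inactive version of a deleted file for 60 days.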
The figure shows an example of a file (file1) that was first backed up on January
1 and then on January 5, 15, 20, and 22. On January 22, the backup procedure
had four versions of the same file (the January 22 copy being the active and most
recent version, and the January 1 copy being expired due to VEREXISTS limits).
When file1 is deleted on the client on January 23 and expiration runs, all file1
versions become inactive and are then managed according to the VERDELETED,
RETEXTRA, and RETONLY settings.
Retention
The retention period of a file version is the length of time in which that file is
maintained by IBM Tivoli Storage Manager and accordingly is available to be
restored to the client. When a file version is no longer retained, then it is expired
from the IBM Tivoli Storage Manager database. A file version is expired either
because it is superseded by version control (VEREXISTS, VERDELETED) or it is
older than the retention period (RETEXTRA, RETONLY). Retention only applies
to INACTIVE files because ACTIVE files are never expired. The retention period
is measured from the time when the file version becomes inactive.
In our example, Figure 6-19 shows a scenario in which the last inactive backup
copy of file1 will be kept up to March 9th, 2000.
The two main categories of devices supported for storage pools are random
access and sequential devices.
Random access devices refer to magnetic disks.
Sequential devices usually refer to tape devices and optical devices.
IBM Tivoli Storage Manager enables you to configure storage pools to provide
the best combination of performance throughput and data permanence. In most
cases, keeping client data on tape or optical media is a requirement. However,
making the backups direct to tape may not give the best performance, especially
where there are many clients to back up concurrently, and many small files are
being backed up.
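One common way to get both throughput and tape permanence is a storage hierarchy in which clients back up to disk first and the data later migrates to tape. A minimal sketch, assuming hypothetical pool names and an LTO device class named ltoclass:

```
define stgpool tapedata ltoclass maxscratch=50
define stgpool diskdata disk nextstgpool=tapedata highmig=70 lowmig=30
```

Migration from DISKDATA to TAPEDATA starts when the disk pool reaches 70 percent utilization and stops at 30 percent.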
COLLOCATION: keeps each client node's data together on the fewest possible tape
volumes, which reduces the number of tape mounts needed during a restore.
TAPE RECLAMATION: consolidates the remaining valid data from fragmented tape
volumes onto fewer tapes, returning the emptied tapes to the scratch pool.
Library sharing enables multiple Tivoli Storage Manager servers to use the same tape
library and drives. This improves tape hardware asset utilization and recovery
performance.
Supports high-speed client data recovery directly from a tape or CD-ROM. This
minimizes recovery time by eliminating the use of network and central services
resources.
SAN technology provides an alternative path for data movement between the
IBM Tivoli Storage Manager client and the server. Shared storage resources
(disk, tape) are accessible to both the client and the server through the SAN.
Data movement is off-loaded from the LAN and from the server processor and
allows for greater scalability. LAN-free backups decrease the load on the LAN by
introducing a Storage Agent. The Storage Agent can be thought of as a small
IBM Tivoli Storage Manager server (without a database or recovery log) that is
installed and run on the IBM Tivoli Storage Manager client machine. The Storage
Agent handles the communication with the IBM Tivoli Storage Manager server
over the LAN but sends the data directly to SAN attached tape devices, relieving
the IBM Tivoli Storage Manager server from the actual I/O transfer. A LAN-free
backup environment is shown in the figure.
Permits multiple clients to simultaneously transfer data to and from the same
Tivoli Storage Manager server. This feature boosts backup performance to
more than three times the rate of a single-threaded session. This
speed is achieved because the number of IBM Tivoli Storage Manager data
transfer sessions is transparently optimized based on available system
resources.
Backup/Restore
Tivoli Storage Manager can perform backups of both files and raw logical volumes. When
backing up files, the Tivoli Storage Manager Server database keeps a list of all files and their
attributes (time, date, size, access control lists, and extended attributes). At each file backup
operation, this list is compared to the current file system on the client workstation to determine
new, deleted, and changed files.
Figure 1
There are four levels of backup available: byte level (small amounts of data, such as on
laptops), block level (larger amounts of data, between 40 KB and 2 MB), file level (normal
files), and image level (the file system together with its files).
The Tivoli Storage Manager Archive function stores selected files unconditionally on the
server, according to the applicable management class limits. Unconditionally means that there
is no version limit and they will be retained for the defined time period regardless of whether
they are deleted on the client.
Figure 2
Archive: Creates a copy of a file or set of files for vital record retention of data, such as
patent information, financial information or customer records. Customers control archive by
defining the retention period. This feature enables customers to keep unlimited archive
copies of a file.
Retrieve: A function that allows users to copy an archive file from the storage pool to the
workstation. The archive copy in the storage pool is not affected.
The AIX installation of IBM Tivoli Storage Manager is performed using smit or smitty. Choose
Software Installation and Maintenance >> Install and Update Software >> Install and
Update from ALL Available Software. Select the input device, list and select the software,
and accept the new license agreements.
Server code
Message catalog
License Support
Web Server Admin
IBM AIX 5L 5.1 or later (32-bit or 64-bit) or AIX 5.2 (32-bit or 64-bit)
HP-UX 11.0 (32-bit or 64-bit) or 11.11 (11i Version 1.0) (32-bit and 64-bit)
Windows Server 2003: Standard Edition (32-bit), Enterprise Edition (32-bit),
Datacenter Edition (32-bit), Enterprise Edition (64-bit), Datacenter Edition (64-bit)
Windows 2000: Professional, Server, Advanced Server, Datacenter Server
Sun Solaris 8 (64-bit) or 9 (64-bit)
OS/400 PASE V5R1 or V5R2
OS/390 V2R10 or later, z/OS V1R1 or later
Linux on pSeries: SuSE Enterprise Server 8
Linux on xSeries: Red Hat Linux Advanced Server 2.1 or 2.4.9-e.10 enterprise SMP,
SuSE Enterprise Server 7, or SuSE Enterprise Server 8/United Linux 1.0
Linux on zSeries: SuSE Linux Enterprise Server 8
The TSM Administrative Web interface, Web Proxy, and Web client require one of the
following:
It is recommended that you install and use JRE 1.4 to optimize performance of the Java
backup-archive client.
IBM Tivoli Storage Manager supports the following communication protocols: TCP/IP, named
pipes, and HTTP for Linux.
The following Tivoli Storage Manager packages install the IBM Tivoli Storage Manager server:
For either architecture, install the following Tivoli Storage Manager packages for Web
administration support:
tivoli.tsm.msg.en_US.webhelp
tivoli.tsm.server.webadmin
Figure 4
The basic Tivoli Storage Manager Server installation will create the following:
IBM Tivoli Storage Manager Database contains information about policy, schedules,
activity log, etc.
Recovery Log contains information about all changes to the database.
DSMSERV.OPT contains server configuration options.
DSMSERV.DSK (on most platforms) identifies the fully-qualified name of the
database and recovery log.
BACKUPPOOL is disk storage for backed up data.
ARCHIVEPOOL is disk storage for archived data.
SPACEMGPOOL is disk storage for space-managed data that is not used frequently (to save space).
DISKPOOL is only for Windows.
Installations can be verified on Windows systems by viewing the initserv.log file. On AIX
systems, examine the contents of the install.trace file located in the ITSM server installation
directory.
Figure 5
If you are running with 32-bit hardware, the following Tivoli Storage Manager packages install
the IBM Tivoli Storage Manager server:
tivoli.tsm.license.cert
tivoli.tsm.license.rte
If you are running with 64-bit hardware, install the following Tivoli Storage Manager packages:
tivoli.tsm.license.cert
tivoli.tsm.license.aix5.rte64
To find out what you are licensed for you can issue the Query LICense command.
You can use the REGister LICense command to register a new license with the Storage
Manager server. Licenses are stored in files called enrollment certificate files. These
certificates are files that contain licensing information for the server product.
When registered, the licenses are stored in a file named NODELOCK in the current directory
that the server was started from.
If a Storage Manager system exceeds the terms of its license agreement, one of the following
occurs:
The server issues a warning message indicating that it is not in compliance with the
licensing terms.
Operations fail because the server is not licensed for specific features.
Storage Manager requires the mgsyslan.lic license for each managed system that moves
data to and from storage over a local area network (LAN).
The following are examples of enrollment certificate files to register additional clients:
To register 20 managed systems that move data over a local area network, issue the following
command:
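The register command for that example would take roughly this form (a sketch based on the mgsyslan.lic certificate file named above):

```
register license file=mgsyslan.lic number=20
```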
The packages installed during the server installation process that provide SCSI and FCP
support are:
tivoli.tsm.msg.en_US.devices
tivoli.tsm.devices.aix5.rte
For AIX 5.1 and later, tivoli.tsm.devices.aix5.rte is required, regardless of the kernel mode.
Starting TSM Server:
./dsmserv
TSM Administrative Interfaces:
Server Console: The server console prompt appears on the system that runs the TSM
server.
TSM Client software has to be installed to get the administrative client command line
for issuing administrative commands. The administrative client session can be started
in console, mount, batch, or interactive mode.
To start the administrative client interface in console mode and have TSM redirect the
output to a file, enter the following command:
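A sketch of such a command (the output file name console.out is an assumption):

```
dsmadmc -consolemode -outfile=console.out
```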
Tivoli Storage Manager displays messages related to media mount activities when
started in mount mode. You cannot enter any administrative commands in mount
mode.
To start the administrative client interface in mount mode and have TSM redirect the
output to a file, enter the following command:
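A sketch of such a command (the output file name mount.out is an assumption):

```
dsmadmc -mountmode -outfile=mount.out
```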
Use batch mode to enter a single administrative command. Your administrative client session
automatically ends when the command has processed.
To have Tivoli Storage Manager redirect all output to a file, specify the -OUTFILE
option with a destination file name.
For example, to issue the QUERY STATUS command in batch mode with the output
redirected to the ABC.OUT file, enter:
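A sketch of the batch-mode invocation (the administrator ID and password are placeholders):

```
dsmadmc -id=admin -password=admin -outfile=abc.out query status
```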
To connect to the TSM server for administration using a web browser, type the address of the administrative web interface:
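The address typically takes the following form on TSM 5.x servers, where 1580 is the default HTTP port (the host name is a placeholder):

```
http://tsm_server_hostname:1580
```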
A client has to be registered in the TSM server before performing backup/restore operations.
Registration can be of two types:
(i) Open
If the registration method is open, the client can register the node when it
connects to the TSM server.
You can enable open registration by entering the following command from an
administrative client command line:
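A sketch of that command:

```
set registration open
```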
(ii) Closed
The default registration method is closed. If the registration method is closed, the
TSM administrator has to register the node with an initial password using the
following command:
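A sketch of the registration command (the node name, password, and domain are placeholders):

```
register node node1 secretpw domain=standard
```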
tivoli.tsm.client.ba.aix51.64bit.base
Installs the backup-archive client files (command-line and GUI), administrative client
(command-line) into the /usr/tivoli/tsm/client/ba/bin directory.
tivoli.tsm.client.ba.aix51.64bit.common
Installs the Tivoli Storage Manager common files into the /usr/tivoli/tsm/client/ba/bin directory.
tivoli.tsm.client.ba.aix51.64bit.image
Installs the image backup component into the /usr/tivoli/tsm/client/ba/bin directory.
tivoli.tsm.client.ba.aix51.64bit.web
Installs the Web client into the /usr/tivoli/tsm/client/ba/bin directory.
tivoli.tsm.client.ba.aix51.64bit.nas
Installs the NAS backup component into the /usr/tivoli/tsm/client/ba/bin directory.
tivoli.tsm.client.books
Installs the PDF and HTML book files into the /usr/tivoli/tsm/client/books directory.
tivoli.tsm.client.ba.msg.lang
Installs NL messages for the backup-archive client, where lang is the language identifier (for
example, Ja_JP for Japanese). American English messages are already included in the
backup-archive client code. The default installation directory is
/usr/tivoli/tsm/client/ba/bin/lang, where lang is the language identifier.
tivoli.tsm.client.ba.aix51.64bit.api
Installs the 64-bit API into the /usr/tivoli/tsm/client/api/bin64 directory.
tivoli.tsm.client.api.msg.lang
Installs the NL messages for the API, where lang is the language identifier (for example,
Ja_JP for Japanese). American English messages are already included in the API client
code. The default installation directory is /usr/tivoli/tsm/client/api/bin/lang, where lang is the
language identifier.
Log in as the root user, insert the CD-ROM into the CD-ROM drive device, and mount
the CD-ROM drive.
From the AIX command line, type smitty install and press Enter.
Select Install and Update Software and press Enter.
Select Install and Update From ALL Available Software and press Enter.
The Tivoli Storage Manager files are installed in the /usr/tivoli/tsm/client/ba/bin directory.
To connect to the TSM server, the node should be configured with the server name, the
communication method, the TCP port, and the server address. These details can be given
in the client option file dsm.sys, which is located in the client installation directory.
Example
To connect to the TSM server SERVER_A, configured with IP address 10.0.0.10 and
TCP port 1500 for client communication, dsm.sys will include the following lines:
Servername SERVER_A
COMMmethod TCPIP
TCPport 1500
TCPServeraddress 10.0.0.10
(i) Batch mode
If you want to run a single command, enter dsmc followed by the command.
Example
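For instance, a single incremental backup of one file system could be run as follows (the /home file space is an assumption):

```
dsmc incremental /home
```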
(ii) Interactive mode
To start a client command session in interactive mode, enter either of the following
commands:
dsmc
dsmc loop
Either command starts an interactive session at the tsm> prompt.
Introduction
In this unit, you will learn how to attach and configure a tape library on the TSM server.
Objective
Configure a tape library locally attached to the IBM Tivoli Storage Manager server
Prepare tape cartridges for use in an IBM Tivoli Storage Manager managed tape Library
To define an optical disk or tape device to TSM, the administrator must define a
library, each drive, and a device class. The library identifies whether TSM sends tape
mount requests to an operator or to a robotic picker. Drive definitions are required to
map individual drives to TSM and the operating system.
When configuring a tape library in the TSM server, the physical and logical device
configuration must be done in sequence: the physical definition for the library is
configured first, followed by the logical configuration.
Step 1: Library
MANUAL libraries contain devices with drives that require an operator to mount
media.
SCSI libraries contain devices with drives for which media is mounted
automatically.
Step 2: Library Path
Step 3: Drive and Drive Path
Step 4: Device class
A device class is a TSM storage object that represents a device. A device class contains
information about the device type and the way the device manages its media, including
definitions such as recording format, estimated capacity, and labeling prefixes. A device
class for a tape drive must also specify a library.
The administrator must define a device class for each unique device type in the TSM
environment (for example, LTO, DLT, or 3590).
For random access storage, TSM supports only the DISK device class. The DISK device
class is predefined by TSM. You cannot modify the DISK device class.
a. Install the SCSI or FC adapter card in your system, if not already installed.
b. Determine the SCSI IDs available on the adapter card to which you are
attaching the device. Find one unused SCSI ID for each drive, and one for the library or
autochanger controller. In some automated libraries, the drives and the autochanger
share a single SCSI ID, but have different LUNs. For these libraries, only a single SCSI ID
is required. Check the documentation for your device.
c. Follow the manufacturer's instructions to set the SCSI ID for the drives and
library controller to the unused SCSI IDs that you found. Usually this means setting
switches on the back of the device.
Note: Each device connected in a chain to a single SCSI bus must be set to a unique
SCSI ID. If each device does not have a unique SCSI ID, you may have serious system
problems.
d. Follow the manufacturer's instructions to attach the device to your server
system hardware.
Notes: Power off your system before attaching a device to prevent damage to the
hardware. Also, you must attach a terminator to the last device in the chain of devices
connected on one SCSI adapter card. Detailed instructions should be in the
documentation that came with your hardware.
Example:
Configure a 3581 SCSI-based tape library on the TSM server. The drive type is LTO.
Step 1
Defining a library
Figure 8
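The four steps above can be sketched as follows for the 3581 example. This is a sketch, not a definitive procedure: the library, drive, and device class names are illustrative, and the device special files (/dev/smc0, /dev/rmt0 here, typical of AIX) depend on your operating system and how the devices were discovered.

```
define library 3581lib libtype=scsi
define path server_a 3581lib srctype=server desttype=library device=/dev/smc0
define drive 3581lib drive01
define path server_a drive01 srctype=server desttype=drive library=3581lib device=/dev/rmt0
define devclass ltoclass devtype=lto library=3581lib format=drive
```

The physical definitions (library, paths, drive) come first, then the logical device class, matching the sequence described earlier in this unit.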
Tapes must first be labeled and then added to the inventory of tapes available to IBM Tivoli
Storage Manager. Tapes may be checked into IBM Tivoli Storage Manager as either scratch
or private. Tapes that are part of the scratch pool are eligible to be selected for use. Once a
tape is selected, data remains on the tape until it is expired or moved. The tape can then be
reclaimed and returned to the scratch pool.
There are two different methods to check in a tape: online and offline. You can label
and then check in, or label and check in in one step.
You can label volumes with the LABEL LIBVOLUME command. The following example
demonstrates using the LABEL LIBVOLUME command to label tapes for a manual library and
for an automated library. Assume the automated device is attached to SCSI address 4, and
the manual device is attached to SCSI address 5. You want to insert media into the device's
entry/exit ports and you want the device's bar code reader to read bar code labels and
overwrite existing labels with the information on the bar code label.
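The two scenarios described above could be sketched as follows; the library names MANLIB and AUTOLIB and the volume name VOL1 are illustrative:

```
label libvolume manlib vol1 overwrite=yes

label libvolume autolib search=bulk labelsource=barcode overwrite=yes
```

The first command labels a single named volume in the manual library. The second searches the automated library's entry/exit ports (SEARCH=BULK), reads the bar code labels (LABELSOURCE=BARCODE), and overwrites any existing labels.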
Checking in Volumes
After volumes have been labeled, make the volumes available to Tivoli Storage Manager
by checking them into the library volume inventory using the CHECKIN
LIBVOLUME command. Checking media into an automated library involves adding them to
the library volume inventory.
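A minimal sketch of the command, assuming an automated library named AUTOLIB (an illustrative name):

```
checkin libvolume autolib search=yes status=scratch checklabel=yes
```

SEARCH=YES tells the server to search the library for volumes not yet in the inventory; STATUS=SCRATCH checks them in as scratch volumes.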
Figure 9
A private volume is a labeled volume that is in use or owned by an application, and may
contain valid data. You must define each private volume, and it can only be used to satisfy a
request to mount that volume by name. Private volumes do not return to scratch when they
become empty.
A scratch volume is a labeled volume that is empty or contains no valid data, and can be used
to satisfy any request to mount a scratch volume. When data is written to a scratch volume,
its status is changed to private.
The LABEL LIBVOLume command combines the DSMLABEL and CHECKIN LIBVOL
commands which were used in previous versions of TSM. Using one command (LABEL
LIBVOL) significantly reduces the time and interaction required during these two
labor-intensive operations. This command, however, does not replace the previous
method of DSMLABEL followed by CHECKIN LIBVOL, which prevents large-scale tape
labeling from tying up the server's resources. The LABEL LIBVOL command allows all the functionality of the
DSMLABEL command, such as the search, barcode, and overwrite options. LABEL LIBVOL
also checks the volumes into the library as either private or scratch volumes.
Figure 10
You can remove volumes from automated libraries by issuing the CHECKOUT LIBVOLUME
command.
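For example, to check a volume out of an automated library and eject it to the entry/exit port (the library and volume names are illustrative):

```
checkout libvolume autolib vol1 checklabel=yes remove=yes
```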
Figure 11
Auditing a Library
Figure 12
You can issue the AUDIT LIBRARY command to audit the volume inventories of automated
libraries. Auditing the volume inventory ensures that the information maintained by the Tivoli
Storage Manager server is consistent with the physical media in the library. The audit is useful
when the inventory has been manually manipulated. Tivoli Storage Manager deletes missing
volumes and updates the locations of volumes that have moved since the last audit. Tivoli
Storage Manager cannot add new volumes during an audit.
In this unit, you will learn how to create hierarchical storage pools of different media types to
allow for efficient management of your data.
Objectives
Describe purpose of Storage Pools, Storage Pool Hierarchies & Storage Pool
Volumes
Create a Storage Pool
Design and configure storage pools based on given customer requirements
Manage Storage Pool Volumes
Figure 13
Data storage pools are where the server stores files which are backed up and archived. The
database serves as the inventory or index to client files within data storage. Data storage may
be composed of optical media, direct access storage, and sequential tape media. Files may
be initially placed on different storage pools according to the desired storage management
policy.
Files are automatically moved to other devices to satisfy free space, space utilization,
performance, and recovery requirements. An administrator with system, storage, or operator
privilege can manage data storage. This includes planning, preparing, monitoring, and
deleting storage volumes and storage pools depending on level of privilege class. Data
storage is actually defined as a collection of storage pools.
Figure 14
Storage Pool: A storage pool is a named set of volumes that is the destination of backed-
up or archived data.
The purpose of storage pools is to match user requirements for data with the physical
characteristics of storage devices.
For example, if users need immediate access to certain data, you can define a storage pool
which consists of storage volumes residing on high-performance DASD. Then, users can
associate this storage pool as a destination for their files by binding the appropriate
management class.
Automatic data movement between storage pools is used to balance the performance and
cost of different storage devices while ensuring an adequate free space to satisfy new space
allocations. This process is known as migration. For each storage pool, you define low and
high migration thresholds. The low threshold identifies the amount of free space needed to
satisfy the daily processing requirements of your business. The high threshold is used to
trigger migration and ensure that enough free space is available while migration is performed.
The difference between the high and low thresholds indicates the approximate amount of data
that will be migrated.
To reduce tape mounts and to use the space on tape volumes most effectively, ensure that the
amount of data that is migrated from a disk storage pool is a multiple of the capacity of a tape
volume in the next storage pool. Automatic data movement is also used to free up space on
tape volumes by consolidating active data from fragmented tape volumes onto a single
volume, leaving the original volumes available for reuse. This process is known as
reclamation.
Figure 17
When the high migration threshold is reached in a storage pool, TSM migrates files from the
pool to the next storage pool in chain. No migration occurs if there is no next storage pool.
TSM first identifies which client node has backed up or migrated the largest single file space
or has archived files that occupy the most space. When the server identifies the client node
based on these criteria, the server migrates all files from every file space belonging to that
client for those files whose number of days in the storage pool exceeds the value specified by
the MIGDELAY parameter.
After the files for the first client node are migrated to the next storage pool, the server checks
the low migration threshold for the storage pool to determine if the migration process should
be stopped. If the amount of space used in the storage pool is now below the low migration
threshold, migration ends. If not, another client node is chosen by using the same criteria as
described above, and the migration process continues.
If the value for the MIGCONTINUE parameter has been set to YES, then TSM continues the
migration process based on how long the files have been in the storage pool. The oldest files
are migrated first until the low migration threshold is reached. If the value for MIGCONTINUE
has been set to NO, then the migration process ends, and a warning message is issued to the
administrator.
If multiple migration processes are running (controlled by the MIGPROCESS parameter of the
DEFine STGpool command), the files for more than one node may be chosen for migration
at the same time.
Figure 18
You can enable cache by specifying CACHE=YES when you define or update a storage pool.
When cache is enabled, the migration process leaves behind duplicate copies of files on disk
after the server migrates these files to subordinate storage pools in the storage hierarchy. The
copies remain in the disk storage pool, but in a cached state, so that subsequent retrieval
requests can be satisfied quickly. However, if space is needed to store new data in the disk
storage pool, cached files are erased and the space they occupied is used for the new data.
The advantage of using cache for a disk storage pool is that cache can improve how quickly
the server retrieves some files. When you use cache, a copy of the file remains on fast disk
storage after the server migrates the primary file to another storage pool.
You may want to consider using a disk storage pool with cache enabled for storing space-
managed files that are frequently accessed by clients.
Reclamation
Figure 19
For example, files become obsolete because of aging or limits on the number of versions of a
file.
When the percentage of reclaimable space exceeds a specified level (the reclamation
threshold), the volume is eligible for reclamation. The server checks whether reclamation is
needed at least once per hour and begins space reclamation for eligible volumes. You can set
a reclamation threshold for each sequential access storage pool when you define or update
the pool.
When multiple volumes are eligible for reclamation, TSM reclaims the eligible volumes in
random order.
Space within aggregate files is also reclaimed during the reclamation process. An aggregate
is a physical file that contains multiple logical files backed up or archived from a client in a
single transaction.
Unused space from expired or deleted logical files is removed as the aggregate file is copied
to another volume during reclamation
Collocation
Collocation is a process in which the server attempts to keep files belonging to a single client
node or to a single file space of a client node on a minimal number of sequential access
storage volumes. You can set collocation for each sequential access storage pool when you
define or update the pool.
To have TSM collocate data in a storage pool by client node, set collocation to YES. To have
TSM collocate data in a storage pool by client file space, set collocation to FILESPACE. By
using collocation, you reduce the number of volume mount operations required when users
restore, retrieve, or recall many files from the storage pool. Collocation thus improves access
time for these operations.
If collocation is enabled and reclamation occurs, the server tries to reclaim the files for each
client node or client file space onto a minimal number of volumes.
Figure 20
Figure 16
TSM uses the device class to determine which device and storage volume type to use when storing and retrieving data.
One device class can be associated with multiple storage pools. Each storage pool is
associated with only one device class. Each device class is characterized by its device type,
which indicates the type of storage volumes that are used to store data.
Each device is associated with a device class that specifies the device type and how the
device manages its media.
Storage pools are mapped to a device class. It is through this mapping that, when data is
written to or accessed from a storage pool, TSM knows the device characteristics of the
storage pool media and how to access it.
When a user tries to restore, retrieve, recall, or export file data, the requested file
is obtained from a primary storage pool if possible. Primary storage pool volumes
are always located onsite.
A primary storage pool can use random access storage (DISK device class) or
sequential access storage (tape)
The server has three default, random access, primary storage pools:
ARCHIVEPOOL
The default destination for files that are archived from client nodes
BACKUPPOOL
The default destination for files that are backed up from client nodes
SPACEMGPOOL
For space-managed files that are migrated from Tivoli Storage Manager for Space
Management client nodes (HSM clients)
A copy storage pool can use only sequential access storage (for example, a tape
device class )
You can move copy storage pool volumes offsite and still have the server track the
volumes. Moving copy storage pool volumes offsite provides a means of recovering
from an onsite disaster.
DEFine STGpool
Example
Define a primary storage pool, DISKPOOL, to use the DISK device class, with
caching enabled. Limit the maximum file size to 10MB. Store any files larger than
10MB in the subordinate storage pool named TAPEPOOL. Set the high migration
threshold to 60 percent, and the low migration threshold to 30 percent.
The command is
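Based on the parameters described above, the command would take roughly this form (a sketch; verify the parameter names with HELP DEFINE STGPOOL):

```
define stgpool diskpool disk cache=yes maxsize=10m nextstgpool=tapepool highmig=60 lowmig=30
```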
Example
Define a primary storage pool named LTOPOOL using the LTO device class (with a
device type of LTO) with a maximum file size of 1GB. Store any files larger than
1GB in the subordinate pool TAPEPOOL. Enable collocation of files for client
nodes. Allow as many as 5 scratch volumes for this storage pool.
The command is
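A sketch of the corresponding command, following the same pattern as the LTOPOOL2 example below it:

```
define stgpool ltopool lto maxsize=1g nextstgpool=tapepool collocate=yes maxscratch=5
```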
Example
Define a copy storage pool, LTOPOOL2, to the LTO device class. Allow up to 50
scratch volumes for this pool. Delay the reuse of volumes for 45 days.
Command
Define stgpool LTOPOOL2 LTO pooltype=copy maxscratch=50 reusedelay=45
Use the UPDate STGpool command to change any parameters in an existing storage pool.
You can use this command to modify selected parameters for the specified storage pool. If
you do not explicitly update a parameter, it remains unchanged. The parameters to update
are the same as the parameters when you define a storage pool.
Use the Query STGpool command to display information about one or more storage pools.
For the syntax of the commands refer to the Tivoli Storage Manager Administrator's Guide or
issue the HELP UPDate STG, or HELP QUERY command as an administrator.
Use the DEFINE VOLUME command to assign a random or sequential access volume to be
used for storage within an existing storage pool. You can define a volume to either a primary
storage pool or a copy storage pool.
You must define each volume to be used in a storage pool, unless you allow scratch volumes
for the storage pool. For a random access volume, before issuing this command you must
allocate and/or format the volume by using the DSMFMT utility or a version of it.
For sequential access storage pools with other than FILE device type, you must prepare
volumes for use. When the server accesses a sequential access volume, it checks the
volume name in the header to ensure that the correct volume is being accessed.
To prepare a volume:
Label the volume.
For storage pools in automated libraries, use the CHECKIN LIBVOLUME command
to check the volume into the library.
Use the DEFine Volume command unless you allowed scratch volumes in the storage pool.
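A sketch of preparing and defining a random access volume, then defining a labeled tape volume; the pool names, file path, and size are illustrative, and the DSMFMT invocation varies by platform:

```
dsmfmt -data /tsm/stg/vol01.dsm 100

define volume diskpool /tsm/stg/vol01.dsm
define volume tapepool vol001
```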
Delete storage pools and storage volumes with the following commands:
MOVE DATA. Use the MOVE DATA command to move all files to another volume.
Explicitly request to discard all files in the storage volume by specifying the following option:
DISCARDDATA= YES
If you are deleting several volumes, it is recommended that you delete the volumes one at a
time. Concurrent volume deletion can adversely affect server performance.
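For example, to empty a volume before deleting it, or to delete a volume and discard its contents (the volume names are illustrative):

```
move data vol001
delete volume vol001

delete volume vol002 discarddata=yes
```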
QUERY CONTENT. To determine the contents stored on a volume, use the Query CONtent
command.
To delete a storage pool with the DELETE STGPOOL command, you must first delete all
volumes assigned to the specified storage pool. You cannot delete a storage pool that is
defined as a subordinate storage pool.
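Once all of its volumes have been deleted, the pool itself can be removed; DISKPOOL here is illustrative:

```
delete stgpool diskpool
```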
Central scheduling enables automation of backup, archive, and other processes. The
administrative interfaces for defining, updating, and deleting schedules are discussed here.
Objectives
Describe the difference between client polling and server prompted methods
Identify and describe scheduler options available
Associate scheduler options with the schedule method to which they apply
Identify the tasks that can be scheduled for a client
Create a schedule
Copy a schedule
Define Association of client to schedule
Overview of Schedules
Figure 21
IBM Tivoli Storage Manager uses schedules to allow administrators to automate operations.
Each scheduled operation is called an event and is tracked by the server and recorded in the
database. The database records scheduled operations that are in progress, have completed,
or have failed. The administrator can query the log to determine whether the scheduled
events have completed successfully, and event records can be deleted from the database as
needed to recover database space.
The Central Scheduler supports two modes of scheduling: client polling and server prompted.
Figure 22
In client polling, the TSM client periodically queries or polls the server for a scheduled
operation and the date/time that the operation is to start. The client then waits until it is time to
start the scheduled operation and executes the operation.
Client polling is initiated by the client starting the TSM client scheduling program using the
command line interface. To start the program the client enters DSMC SCHEDULE. The
program will continue to query the server and execute schedules until the user explicitly stops
the program or the machine is shut down.
Figure 23
Server prompted is initiated by the client starting the Tivoli Storage Manager client scheduling
program using the command line interface. To start the program the client enters DSMC
SCHEDULE.
To enable server prompted scheduling, change the client options file so that the
SCHEDMODE is PROMPTED.
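For example, the client options file would contain the line:

```
SCHEDMODE PROMPTED
```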
Figure 24
ANY - indicates that the server can support clients using either client-polling or
server-prompted scheduling. ANY is the default and recommended value.
POlling - indicates that only clients using client-polling will be accepted.
PRompted - indicates that only clients using server-prompted mode will be
accepted.
On the client, the dsm.opt file must be updated with the SCHEDMODE option, which
specifies the mode in which the client scheduler will operate.
This option is ignored except during the execution of the DSMC SCHEDULE command,
which invokes the client portion of the central scheduling function.
Users (root users on UNIX systems) set the scheduling mode on client nodes. They specify
either the client polling or the server prompted scheduling mode on the command line or in
the client user options file (client system options file on UNIX systems).
SCHEDLOGNAME
SCHEDLOGRETENTION
MAXCMDRETRIES
RETRYPERIOD
Use the SCHEDLOGNAME option to specify the name and location of a file where you want
Tivoli Storage Manager to store the schedule log. For UNIX clients, this option goes in the
client system options file. When you run the SCHEDULE command, output from scheduled
commands appears on your screen. It is also directed to the file you specify with this option.
Use the SCHEDLOGRETENTION option to specify the number of days to keep entries in the
schedule log and whether to save the pruned entries. Tivoli Storage Manager prunes the log
after every schedule is run if you tell Tivoli Storage Manager to prune. The default is not to
prune the log. For UNIX clients, this option goes in the client system options file.
Use the MAXCMDRETRIES option to specify the maximum number of times you want the
client scheduler on your workstation to attempt to process a scheduled command that fails.
Your TSM administrator can also set this option. If your TSM administrator specifies a value
for this option, that value overrides what you specify in the client options file after your client
node successfully contacts the TSM server. All clients support this option. For UNIX clients,
this option goes in the client system options file.
Use the RETRYPERIOD option to specify the number of minutes you want the client
scheduler to wait between attempts to process a scheduled command that fails or between
unsuccessful attempts to report results to the server. Your TSM administrator can also set this
option. If your TSM administrator specifies a value for this option, that value overrides what
you specify in the client options file after your client node successfully contacts the TSM
server. All clients support this option. For UNIX clients, this option goes in the client system
options file
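The four options described above might be set together in the client system options file like this; the log path and the retry/retention values are illustrative (the "D" on SCHEDLOGRETENTION discards pruned entries):

```
SCHEDLOGNAME      /var/log/dsmsched.log
SCHEDLOGRETENTION 14 D
MAXCMDRETRIES     2
RETRYPERIOD       10
```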
Figure 25
Use the QUERYSCHEDPERIOD option to specify the number of hours you want the client
scheduler to wait between attempts to contact the TSM server for scheduled work.
This option applies only when the SCHEDMODE option is set to POLLING. Tivoli Storage
Manager uses this option only when the SCHEDULE command is running.
For UNIX clients, this option goes in the client system options file dsm.sys.
Figure 26
Use the TCPCLIENTADDRESS option to specify a TCP/IP address if your client node has
more than one address, and you want the server to contact a different address than the one
used to make initial contact with the server.
Use the TCPCLIENTPORT option to specify a TCP/IP port number if you want the TSM
server to contact a different port than the one used to make initial contact with the server. If
the default or specified port is busy, Tivoli Storage Manager attempts to use any other
available port. For UNIX clients, this option goes in the client system options file.
Figure 27
Tivoli Storage Manager cannot run multiple schedules concurrently for the same client node.
Also, not all clients can run all scheduled operations, even though Tivoli Storage Manager
allows you to define the schedule on the server and associate it with the client. For example,
a Macintosh client cannot run a schedule when the action is to restore or retrieve files, or run
an executable script. An executable script is also known as a command file, a batch file, or a
script on different client operating systems.
Figure 28
The command line client and scheduler provide reliable, consistent, and documented return
codes. This facilitates automation of client operations via user-written scripts. Administrators
can now distinguish between scheduled backups that complete successfully with no skipped
files, and scheduled backups that complete successfully with one or more skipped files. Also,
if the PRESCHEDULECMD command ends with a non-zero return code, the scheduled event will not
run. This ensures that scheduled events will not run if pre-schedule commands do not
complete successfully.
Figure 29
Use the SET MAXSCHEDSESSIONS command to regulate the number of sessions that the
server can use for processing scheduled work. This command specifies the maximum
number of scheduled sessions as a percentage of the total number of server sessions
available.
Use the SET MAXCMDRETRIES command to specify the maximum number of times that a
scheduler on a client node can retry a scheduled command that fails.
The MAXCMDRETRIES parameter can be specified by each user at the time their client
scheduler program is started. You can use the SET MAXCMDRETRIES command to set a
global value for the maximum number of retries, which overrides the value specified by the
user. The client's value is overridden only if the client can contact the server.
Use the SET RETRYPERIOD command to specify the number of minutes the scheduler on a
client node waits between retry attempts after a failed attempt to contact the server or after a
scheduled command fails to process.
Each client can set their own retry period at the time their scheduler program is started. You
can use this command to set a global value for the retry period which will override the value
specified by all clients. The client's value is overridden only if the client is able to connect with
the server.
When setting the period between retry attempts, set a time period that permits more than one
retry attempt within a typical startup window.
This command is used in conjunction with the SET MAXCMDRETRIES command to regulate
the period of time and the number of retry attempts to execute a failed command.
Use the SET QUERYSCHEDPERIOD command to regulate the frequency with which client
nodes contact the server to obtain scheduled work when they are running in the client-polling
mode. The value for the QUERYSCHEDPERIOD parameter can be set by each client node at
the time the client scheduler program is started.
You can set a global value for the period between attempts by the client to contact the server
for scheduled work. This value overrides the value specified by the client.
The client's value is only overridden if the client can contact the server.
Use the SET RANDOMIZE command to specify the degree to which start times are
randomized within the startup window of each schedule for clients using the client-polling
mode. Randomize will be covered in detail later in this unit.
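The server-side settings described above could be applied with commands of this form; every value shown here is illustrative (sessions as a percentage, retries as a count, retry period in minutes, polling period in hours, randomization as a percentage of the startup window):

```
set maxschedsessions 50
set maxcmdretries 2
set retryperiod 10
set queryschedperiod 4
set randomize 25
```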
Defining Schedules
Schedules are created and maintained in the TSM database by an administrator with either
System or Policy privilege, are defined with the DEFine SCHedule command, and apply to a
particular policy domain.
Administrators use the Define Association command to associate clients (that are in the
domain) with a schedule. A client may be associated with more than one schedule, and any
number of schedules can be defined in a policy domain.
Schedules are executed serially by a client.
Example:
define schedule xyz_critical_project weekly_backup
startdate=06/07/2003 starttime=23:00 duration=4
durunits=hours perunits=weeks
dayofweek=saturday options=-quiet
For more details, issue the Help Define Schedule command from an administrative
command line or from the web interface.
Schedule Example
Figure 30
A schedule is given a startup window, which defines when a scheduled operation is to start.
The client must start the scheduled operation within the startup window. If the client is unable
to do so (for example the terminal is turned off, or the network is unavailable), the client will
wait until the next occurrence of the schedule's startup window to execute the operation.
The scheduled operation must start within the window; it may complete outside of the window.
A log is maintained on the server, which records information about the scheduled events. The
administrator can query the log for information about started, completed, and failed events.
Figure 31
Use the DEFine ASSOCiation command to associate one or more clients with a schedule.
Client nodes that are associated with a schedule initiate Tivoli Storage Manager functions
according to that schedule
Domainname specifies the name of the policy domain to which the schedule belongs. This
parameter is required.
Schedulename specifies the name of the schedule that you want to associate with one or
more clients. This parameter is required.
Nodename specifies the name of the client node to be associated with the specified schedule.
This parameter is required. You can specify a list of clients that you want to associate with the
specified schedule. The items in the list are separated by commas, with no intervening
spaces.
You can use a pattern matching expression to specify a name. All matching clients are
associated with the specified schedule. In some commands, such as the query commands,
you can use wildcard characters to create a pattern-matching expression that specifies more
than one object. Using wildcard characters makes it easier to tailor a command to your needs.
The wildcard characters you use depend on the operating system from which you issue
commands. For example, you can use wildcard characters such as an asterisk (*) to match
any (0 or more) characters or you can use a question mark (?), or a percent sign (%) to match
exactly one character.
If a client is listed, but is already associated with the specified schedule or is not assigned to
the domain to which the schedule belongs, the command has no effect for that client.
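Tying this back to the earlier schedule example, the association could be sketched as follows; the node names are illustrative, and the second form uses a wildcard to match several nodes:

```
define association xyz_critical_project weekly_backup mercedes,node2
define association xyz_critical_project weekly_backup acct*
```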
In this unit we will discuss the types of clients available with IBM Tivoli Storage Manager, and
how to invoke, manage and configure them.
Objectives
Figure 32
Figure 33
In batch mode, the client starts a session with the server that lasts until the command
completes. The dsmc command is followed by the command keyword you wish to run. For
example, the dsmc incremental command can be abbreviated:
dsmc i
When authentication is on, a password must be entered on the command line or the user will
be prompted to enter a password. The password is encrypted and will not display when
prompted.
To use the web client, specify in your web browser the URL of the client machine running the
web client, with the client port 1581. For example: http://x.x.x.x:1581
Netscape Navigator 6.0 or later with the Java support option installed.
Netscape Navigator 4.7 or later with Java Plug-in 1.3.1
Microsoft Internet Explorer 5.0 or later with Java Plug-in 1.3.1. The minimum JRE
level required for Microsoft Internet Explorer browsers running on Windows platforms is
JRE 1.3.1_01.
Refer to the Backup-Archive Client Requirements section for the specific operating system
levels supported for the Web clients. TCP/IP is the only communication protocol supported for
this client interface.
Using GUI
Before a user can request Tivoli Storage Manager services, the node must be registered with
the server. Each node must be registered with the server and requires an option file with a
pointer to the server.
When a node is registered, Tivoli Storage Manager automatically creates an
administrative user ID with client owner authority over the node. This can be prevented by
the Administrator as shown in the command below.
You can use this administrative user ID to access the Web backup-archive client from
remote locations through a Web browser.
If an administrative user ID already exists with the same name, an administrative user
ID is not automatically defined.
Register a node at the admin command line with:
register node mercedes montana userid=none
Register a node through the admin GUI:
Client Nodes >> Register a new node
Registration can be set open or closed. Open means the client node is automatically
registered when a session is started. The administrator does not have to register this node.
Closed registration means the client node must be registered by the Administrator.
There are two client options files, and whichever is used depends on the operating systems:
DSM.OPT
DSM.SYS
On multiuser systems such as UNIX, the client options are in both files, DSM.OPT and DSM.SYS.
On other systems, such as Windows 2000, the client options are in DSM.OPT. This is a file that a
client can edit, containing a default set of processing options that identify the server,
communication method, backup and archive options, and scheduling options.
Figure 34
NODENAME
Use the NODename option to identify your workstation to the server. The nodename can be a
1 to 64-character name which will be used to identify the node for which you want to request
Storage Manager services. For Windows NT and Windows 95, the default is the name of the
machine if you do not use this option. For UNIX, the default is the same as the name returned
by the hostname command.
On UNIX, the nodename option goes in the client system options file, dsm.sys; on other platforms it goes in the client options file, dsm.opt.
TCPSERVERADDRESS
Use the TCPSERVERADDRESS option to specify the TCP/IP address for a Storage Manager
server. To use the TCP/IP communication protocol, you must include the tcpserveraddress
option in your client options file. The other TCP/IP options have default values which you can
modify only if you want to change the default value.
The value is a 1- to 64-character TCP/IP address for a Storage Manager server, specified either as a TCP/IP Internet domain name or as a dotted-decimal address.
TCPPORT
Use the TCPPORT option to specify the TCP/IP port address used to communicate with a Storage Manager server. The range of values is 1000 to 32767. The default is 1500.
TCPPORT port_address
COMMMETHOD
Use the COMMMethod option to specify the communication method you are using to provide
connectivity for client-server communication.
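Pulling these options together, a minimal client options file for TCP/IP might look like the following sketch; the node name, server address, and port are illustrative:

```
NODENAME          mercedes
COMMMETHOD        TCPIP
TCPSERVERADDRESS  tsmserver.example.com
TCPPORT           1500
```

On UNIX, the communication options (COMMMETHOD, TCPSERVERADDRESS, TCPPORT) go in a server stanza in dsm.sys rather than in dsm.opt.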
Two of the options that may be defined in a client option set are INCLUDE and EXCLUDE. When these parameters are specified in client options files, an additional parameter naming the management class to be used may also be provided.
Figure 35
Include/exclude processing
The Include/Exclude list allows you to establish which files are to be included in or excluded from backup processing. The include statement is used for two purposes. One is to specify exceptions to the exclude list. The other is to associate a Management Class with a file or group of files.
The include is also used during archive to determine Management Class, while the exclude
statement is not checked during the archive processing. Directory type files are always
included in the backup, even when all the files within the directory are excluded, unless you
have the EXCLUDE.DIR statement.
The Include/Exclude list uses metacharacters to select files to be included or excluded. Some
metacharacters differ depending on the client platform. These metacharacters allow you to
specify wild card processing. The metacharacters can also be used in the command line to
specify the file specification on most commands. Metacharacters include: (Examples are in
parenthesis)
Figure 36
Exclude.dir
The EXCLUDE.DIR statement excludes a directory structure from the traverse tree that the TSM backup-archive client builds internally before performing the backup, and prevents directories and directory attributes from being backed up.
EXCLUDE.DIR: Excludes a directory structure from backup and from being traversed
during incremental backup.
EXCLUDE.FILE: Can be abbreviated to EXCLUDE and excludes files from backup.
Excluded directory structures are traversed during incremental backup.
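A sketch of the three statement types together in an include-exclude list; the paths and the management class name are illustrative:

```
exclude      /tmp/.../*
exclude.dir  /var/cache
include      /home/db2/.../*   DBMGMTCLASS
```

The "..." metacharacter matches zero or more directory levels; the optional management class name on the include statement binds the matching files to that class instead of the default.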
The client communicates with the server and invokes the client functions of Storage Manager.
The client is supported on a variety of platforms which might reside on an end-user
workstation or a LAN server. In this unit we will discuss the various methods available to
backup, restore, archive and retrieve data, and what options are available to customize these
processes to fit your needs.
Objectives
Figure 37
The incremental backup function (also known as incremental (complete)) backs up all your
files that have changed since they were last backed up, and all your files that were created
since the last backup. This function performs a journal-based backup of those file systems previously selected for journaling. The incremental backup function does not back up files that are excluded by your include-exclude list.
Figure 38
Journal-based backup is supported on all Windows clients, except the Windows Server 2003
64-bit client. If you install the journal engine service and it is running, then by default the
incremental command will automatically perform a journal-based backup on selected file
systems which are being monitored by the journal engine service. Tivoli Storage Manager
does not use the journaling facility inherent in Windows NTFS file systems or any other
journaled file system. In order to successfully perform a journal-based backup, several
conditions must be met. These include:
The journal service must be set up to monitor the file system that contains the files and
directories being backed up.
A full incremental backup should have been run successfully at least once on the file
system being backed up.
The file space image of the file system at the server cannot have been modified by an
administrative command since the last full incremental.
The storage management policy for the files being backed up cannot have been updated
since the last full incremental.
Figure 39
For a disk or volume to be eligible for incremental-by-date backups, you must have performed
at least one full incremental backup of that entire disk or volume. Running an incremental
backup of only a directory branch or individual file will not make the disk or volume eligible for
incremental-by-date backups.
To perform an incremental-by-date backup using the GUI, select the Incremental (date only)
option from the type of backup pull-down menu or use the incrbydate option with the
incremental command.
The client backs up only those files whose modification date and time is later than the date
and time of the last incremental backup of the file system on which the file resides.
Files added by the client after the last incremental backup, but with a modification date earlier
than the last incremental backup, are not backed up.
Files that were renamed after the last incremental backup, but otherwise remain unchanged,
will not be backed up.
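From the command line, the two incremental variants might be invoked as follows; the file system name is illustrative:

```
dsmc incremental /home
dsmc incremental /home -incrbydate
```

The first form is a full (progressive) incremental; the second restricts the backup to files with a modification date later than the last incremental, with the limitations described above.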
Frequency
New files
Deleted files
Changes in file attributes
No re-binding of files
Figure 40
1. The client starts the session, and asks for the backup information.
2. There is a search in the TSM database for the information about the client files.
3. If there is no information, meaning there has never been a backup before, a full
backup will take place.
4. If it does find information in the database, the information about which files have been
changed is relayed and then it performs an incremental backup.
Figure 41
Always attempts to back up the objects you selected. This type of backup is also known as
"selective backup". Use a selective backup when you want to back up specific files or
directories regardless of whether a current copy of those files exists on the server.
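A selective backup from the command line might be sketched as follows; the path is illustrative:

```
dsmc selective "/home/user/*" -subdir=yes
```

The -subdir=yes option extends the selection to all subdirectories of the named directory.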
Figure 42
Performs an online image backup of a volume in which the volume remains active and
available for read and write operations during the backup. Available only if the Tivoli Storage
Manager Logical Volume Snapshot Agent is installed and available. This item is visible only if
the image plug-in is installed and the client is connecting to a Tivoli Storage Manager V5.1 or
higher server.
Restore operation
Restore is the process of copying a backup version of a user's file from the Storage Manager
server to the workstation or LAN server.
If a file is damaged, the user (Storage Manager client) can request, without the aid of an administrator, that the system restore the current or a specific backup version. A user may only restore files that he or she has backed up, unless he or she has been granted authority to another person's backup files.
When a user restores a backup version of a file, Storage Manager sends a copy of the file to
the client node. The backup version remains on the Storage Manager server. If more than one
backup version exists, a user can restore the active backup version of the file or any inactive
backup versions.
The restore GUI queries the TSM server for a list of files that have been backed up and
presents them in the same format as the backup GUI. Simply select the files you want to
restore.
You can also use the Find function to select files. The Find function gives you the same
options as those for doing a backup but will look for backed up files on the TSM server from
which to select for restoring.
Point-in-time restore
Storage Manager uses a point-in-time (PIT) restore to restore a filespace, directory, or file to
the version equal to or before the point in time. Incremental backups are necessary to capture
the fact that files have been deleted. Support for PIT restore is essential to be able to recover
a filespace or directory to a time when it was known to be in a good or consistent state. For
example, a PIT restore can eliminate the effect of data corruption or recover a configuration to
a prior date or time. When a PIT restore is performed, new files that have been created on the
client after the PIT date are not deleted.
Both the backup-archive GUI client and command line client support PIT restore when used
with a Version 3 server.
Point-in-time restores that include deleted files are possible when incremental backups
are run on the client. This is because the server is only notified about files that are deleted
from a client filespace during an incremental backup. Incremental backups should run
frequently enough to provide the necessary point-in-time resolution. Files that have been
deleted from a client filespace between two incremental backups might be restored during a
point-in-time restore.
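A point-in-time restore from the command line might be sketched as follows; the path, date, and time are illustrative:

```
dsmc restore "/home/user/*" -subdir=yes -pitdate=06/15/2004 -pittime=23:59:00
```

Files are restored to the version that was active at or before the specified date and time.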
Archive Process
Policy Management enables the administrator to determine a set of rules explaining how IBM
Tivoli Storage Manager will treat data. These sets of rules are composed of policy domains,
policy sets, management classes, and copy groups.
Objectives
Upon completion of this unit, you should be able to do the following tasks to manage your IBM
Tivoli Storage Manager environment by policy:
Policy Management
Policies are created by the administrator and stored in the database on the server. Several
elements comprise policies:
Policy Domain. A group of nodes managed by the same set of policy constraints as
defined by the policy sets.
Policy Set. A collection of Management Class (MC) definitions. A policy domain may
contain a number of policy sets, however, only one policy set in a domain can be active at
a time.
Management Class. A collection of management attributes describing backup and
archive characteristics. There are two sets of MC attributes, one for backup and one for
archive. A set of attributes is called a copy group. There is a backup copy group and an
archive copy group.
Policy Specification
Policy is defined both at the client and at the server. On the server, an administrator is
responsible for creating policies that will manage the client's data, and for associating clients
with a set of policies from which they may select. The administrator is also responsible for
defining a default policy, one that will be used unless another policy is explicitly selected.
The client, however, may choose to override the default policy and select any other policy that
is also in his/her policy domain. There are several ways that the client may do this:
A policy domain provides you with a logical way of managing backup and archive policies for
a group of nodes with common needs. It is a collection of one or more nodes and one or more
policies. Each domain is an object stored in the Storage Manager database with a name from
1-30 characters. Policy domain names should be meaningful. There is no limit to the number
of policy domains that can be defined on a Storage Manager server.
A client node can be associated with only one policy domain on a specific Storage Manager
server. However, a client/node may be registered (defined) to more than one server. Each
domain may have one or more clients/nodes associated with it. The clients/nodes may be
running on the same or different platforms. Some installations may find that they only require
a single policy domain.
A policy domain also contains "grace period" backup and archive retention periods that act as a safety net to ensure that backed-up and archived data in a storage pool is not inadvertently deleted if it loses its backup or archive copy group.
Use the DEFINE DOMAIN command as shown to define a new policy domain:
Domainname specifies the name of the policy domain to be defined. This parameter is required. The maximum length of this name is 30 characters.
DESCription=description
Specifies a text string that describes the policy domain. This parameter is optional,
but it is recommended to provide a meaningful description. The maximum length of the
description is 255 characters.
BACKRETention=bkretvalue
Specifies the number of days (from the date of deactivation) to retain backup versions
that are bound to a management class that no longer exists on the client's system. This is
a grace period.
ARCHRETention=arch
Specifies the number of days (from the date of archive) to retain archive copies that are
bound to a management class that no longer exists on the client's system. This is a grace
period.
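Putting these parameters together, a DEFINE DOMAIN command might look like this sketch; the domain name and retention values are illustrative:

```
define domain engineering description="Engineering workstations" backretention=100 archretention=365
```

Here 100 days is the backup retention grace period and 365 days the archive retention grace period for the new domain.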
Tivoli Storage Manager provides a predefined policy domain, policy set, management class,
backup copy group, and archive copy group. Each policy is stored on the server and named
STANDARD. Using the policy objects provided in Tivoli Storage Manager allows you to begin using IBM Tivoli Storage Manager immediately; as you become familiar with Tivoli Storage Manager, you can tailor the standard policies.
Figure 44
The Backup Retention Grace Period specifies the number of days to retain a backup version
when the server is unable to rebind the file to an appropriate management class.
The Archive Retention Grace Period specifies the number of days to retain an archive copy
when the server is unable to rebind the file to an appropriate management class.
These values come with Tivoli Storage Manager in the STANDARD Domain:
The storage pool destination where the backed up or archived data is to be stored.
The minimal interval, in days, between backup and archive operations.
Whether the file is to be backed up regardless of whether it has been modified since the
last backup.
Whether the file can be in use when a user attempts to backup or archive the file.
The maximum number of different backup versions that may be retained for files no
longer on the client's file system.
The retention period, in days, for all but the most recent backup version, and for the last
remaining backup version that is no longer on the client's file system.
The number of days that an archive copy is to be retained.
The set of backup parameters include frequency, mode (modified/absolute), destination, copy
serialization, # versions, # versions when file deleted, retention days for all but last version,
and retention days for the last version when the file is deleted.
The set of archive parameters include frequency (always Cmd), mode (always ABSolute),
destination, copy serialization, and retention days for archive copies.
Policy set
Figure 45
Each policy set contains a default management class, and can contain any number of
additional management classes. Policy sets are used to implement different policies based on
user and business requirements.
There can be only one active policy set per policy domain.
Use the following syntax for the DEFINE POLICYSET command to define a policy set in a
specified policy domain.
DEFine POlicyset domainname setname [DESCription=description]
Example: DEF Policyset Windows NEWDEF
Domainname. Specifies the name of the policy domain to which the policy set belongs. This
parameter is required.
Setname. Specifies the name you want to assign to the policy set. This parameter is required.
The maximum length of this name is 30 characters.
DESCription. Describes the new policy set using a text string. This parameter is optional. The
maximum length of the description is 255 characters. It is advisable to use the description,
since this will define the policy set once it becomes active.
The VALIDATE POLICYSET command examines the management class and copy group definitions in a specified policy set and reports on conditions that need to be considered if the policy set is to be activated. Once a change is made to a policy set, by changing or adding a management class, copy group, and so forth, the policy set must be activated to make it the "ACTIVE" policy set. Before you activate a policy set, it is a good idea to validate it.
Use the VALIDATE POLICYSET command to verify that a policy set is complete and valid
prior to activating it:
Use the ACTIVATE POLICYSET command to specify a policy set as the ACTIVE policy set
for a policy domain.
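For example, to validate and then activate a policy set (the domain and set names are illustrative):

```
validate policyset engineering newdef
activate policyset engineering newdef
```

Validation reports problems such as a missing default management class before the set is copied to ACTIVE.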
The VALIDATE POLICYSET command will fail if any of the following conditions exist:
When a policy set is activated, the contents of the policy set are copied to a policy set that has the reserved name ACTIVE. Once activated, there is no real relationship between the policy set that has been activated (copied to ACTIVE) and the contents of the ACTIVE policy set. The original policy set can still be modified, but the copied definitions in the ACTIVE policy set can only be modified by activating another policy set.
Management class
A management class associates backup and archive groups with files, and specifies if and
how client node files are migrated to storage pools. A management class can contain one
backup or archive copy group, both a backup and archive copy group, or no copy groups.
Users can bind (that is, associate) their files to a management class through the include-exclude list.
Figure 46
If there is not enough space in the initial storage pool, a migration is started. The server
stores the information about the file in the database.
Copy groups contain the parameters that control the generation and expiration of backup and
archive data. There are two types of copy groups: Backup and Archive. A management class
can have 0, 1, or 2 copy groups. All copy groups are named STANDARD. Each management
class can contain up to two copy groups: one for backup files and one for archive files.
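A sketch of defining a management class with one copy group of each type; the domain, policy set, class, and storage pool names and the retention values are illustrative:

```
define mgmtclass engineering newdef dbclass
define copygroup engineering newdef dbclass type=backup destination=backuppool verexists=3 verdeleted=1 retextra=30 retonly=60
define copygroup engineering newdef dbclass type=archive destination=archivepool retver=365
```

The backup copy group controls version counts and retention; the archive copy group controls only how long archive copies are kept.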
Registering administrators
Granting one or more administrative privilege classes to other administrators
Allowing separation of tasks
Allowing delegation of authority
Varying commands by privilege class
Figure 48
The figure above shows how you can divide the administrative tasks through the five privilege
classes.
System privileges
Figure 49
ACCounting
ACTlogretention
AUthentication
EVentretention
MAXCMDRetries
MAXSCHedsessions
PASSExp
QUERYSCHedperiod
RANDomize
REGistration
RETRYperiod
SCHEDMODes
SERVername
Storage privileges
Figure 50
An administrator with unrestricted storage privilege has the authority to manage the database, recovery log, and all storage pools. He or she can issue commands that affect all existing storage pools as well as any storage pools that are defined in the future. An unrestricted storage administrator can also define or delete storage pools. An administrator with unrestricted privileges can:
AUDit Volume
DEFine/DELete Volume
MOVE DATA
UPDate STGpool
Administrators with restricted storage privilege can issue a subset of the storage commands
only for the storage pools for which they have been authorized. They do not have the
authority to manage the database or recovery log. An administrator with restricted privileges
can:
Policy Privileges
Figure 51
Operator privilege
Figure 52
Administrators with operator privilege control the immediate operation of the TSM server and
the availability of the storage media.
Analyst privilege
Figure 53
The IBM Storage Manager database is used by the server to manage information about client
files. The Storage Manager recovery log is used to ensure the consistency and availability of
the database. In this unit, you will learn how to choose the size and location of the database
and recovery log. You will also learn to configure the database and recovery log to optimize
performance by using the BUFPOOLSIZE and LOGPOOLSIZE parameters. Finally, you will
configure the database and recovery log for high availability with mirroring and the
SPACETRIGGER parameter.
Objectives
On completion of this unit, you should be able to do the following tasks to manage your IBM
Tivoli Storage Manager environment by policy:
Figure 54
The recovery log contains information about database updates that have not yet been
committed. Updates can include activities such as defining a management class, backing up
a client file, and registering a client node. Changes to the database are recorded in the
recovery log to maintain a consistent database image.
Transactions
Figure 55
To support multiple transactions from concurrent client sessions, the server holds transaction
log records in the recovery log buffer pool until they can be written to the recovery log. These
records remain in the buffer pool until the active buffer becomes full or IBM Tivoli Storage
Manager forces log records to the recovery log. Changes resulting from transactions are held
in the buffer pool temporarily and are not made to the database immediately. Therefore, the
database and recovery log are not always consistent.
When all records for a transaction are written to the recovery log, IBM Tivoli Storage Manager
updates the database. The transaction is then committed to the database. At some point after
a transaction is committed, the server deletes the transaction record from the recovery log.
1. Reads a database page into the database buffer and updates it. A page is a 4096
byte block that is transferred as a unit between memory and disk storage.
Figure 56
The recovery log is used by the server to keep a record of all changes to the database. When
a change occurs, the recovery log is updated with some transaction information prior to the
database being updated. This enables uncommitted transactions to be rolled back during
recovery so the database remains consistent. The recovery log functions in two modes: normal mode and roll-forward mode.
Normal Mode
When the transaction log record is written to the recovery log, a recovery point is recorded in
it and the data is committed to the database. If the database needs to be recovered, the
server uses the recovery point in the recovery log to bring the database back to its last point
of consistency. A point of consistency is a time when all recoverable information in the
database matches the data managed by the server. If a failure occurs before a transaction is
committed to the database, the server rolls back any changes made to the database pages.
The log is treated as a circular array of blocks with the head (the newest log records) always
chasing the tail (oldest records). The server will never let the head overtake and overwrite the
tail; it must take some other action. As transactions commit, they free up log space and allow
the tail to move forward. The recovery log saves some records for transactions that have
already been committed, but only to the extent necessary to perform redo processing on
recovery. The recovery log can also be used for roll forward recovery of the database during
disaster recovery.
In roll forward mode all changes made to the database since the last backup are saved in the
recovery log.
Space Allocation
Figure 57
Database
o Predominately read-oriented
o Spread database for performance
o It is recommended to limit the size of the database to 40GB. Beyond this
size, consider a second server.
Recovery log
o Predominately write-oriented
o Do not spread recovery log
o Log file maximum size limit is 13 GB
In general, access to the recovery log is predominately write-oriented with the writes and the
few reads clustered together for the most part. The writes are done in a moving cursor format
which does not lend itself to multiple volume optimization. Therefore, fewer recovery log
volumes are appropriate. Mirroring has little effect on the performance of the recovery log.
Figure 58
Volumes used to contain the database and the recovery log must be disk volumes.
IBM Tivoli Storage Manager treats all volumes associated with the database or with the
recovery log as a single logical volume. The logical volume manager maps data between
logical and physical storage, allowing database and recovery log data to span physical disks.
No reorganization of the database or recovery log is required.
The amount of available space for the database or recovery log equals the combined space
of all volumes defined to the database or recovery log. As data is added, Tivoli Storage
Manager tracks the percentage of utilization, which is the amount of space used at a specific
point in time. Be aware that the maximum amount of space used by the recovery log can vary
significantly throughout the day, as it is proportional to the transaction load on the system. The
maximum amount of space used by the database is more consistent with the utilization percentage, because the amount of database space consumed grows in proportion to the number of objects inserted into the database.
Figure 59
Each version of a file that Tivoli Storage Manager stores requires about 400 to 600 bytes
of database space.
Each cached or copy storage pool copy of a file requires about 100 to 200 bytes of
database space. Caching is turned off by default. It is only done for moving from one
storage pool to next.
Overhead could increase the required space up to an additional 25%.
In the example below, the computations are probable maximums. In addition, the numbers
are not based on the use of file aggregation. In general, the more that small files are
aggregated, the less the required database space.
Assume the following numbers for an IBM Tivoli Storage Manager system:
Backed up files
Up to 500,000 client files might be backed up. And storage policies call for retaining up to
three copies of backed up files:
500,000 files x 3 copies = 1,500,000 files
Archived files
Up to 100,000 files might be archived copies of client files.
Space-managed files
Up to 200,000 files migrated from client workstations might be in server storage.
The space required for all backed up, archived, and space-managed files at 600 bytes per file
is:
(1,500,000 + 100,000 + 200,000) x 600 bytes = 1,080,000,000 bytes, or approximately 1.0 GB
In this example, there are three database servers for which the database and recovery log are
being sized.
3 x 1.0 GB = 3 GB
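The arithmetic above can be sketched in Python to make the assumptions explicit; the 600-byte-per-file figure and the file counts are the assumptions stated in the text, not measured values:

```python
# Database-sizing sketch using the assumptions from the example above.
BYTES_PER_FILE = 600

backed_up = 500_000 * 3       # 500,000 files, 3 retained backup versions each
archived = 100_000            # archived copies
space_managed = 200_000       # migrated (space-managed) files

total_files = backed_up + archived + space_managed
db_bytes = total_files * BYTES_PER_FILE

print(total_files)            # 1800000
print(db_bytes)               # 1080000000 bytes, about 1.0 GB per server
```

With three servers of this size, the combined estimate is 3 x 1.0 GB = 3 GB, as in the example.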
Figure 60
If the average file size is about 10 KB, about 100,000 files are in cache at any one time.
100,000 files x 200 bytes = 19 MB
Overhead
Figure 61
Up to this point approximately 1.4 GB is required for file versions and cached and copy
storage pool files. Up to 50% additional space (or 0.7 GB) should be allowed for overhead.
The database, then, should be approximately 2.1 GB.
If it is not practical to estimate the number of files to be covered by your storage management
policies, you can roughly estimate the database size as from 1% to 5% of the required server
storage space. For example, if you need 100 GB of server storage, your database should be
between 1 GB and 5 GB.
During SQL queries of the Storage Manager server, intermediate results are stored in
temporary tables that require space in the free portion of the database. Therefore, the use of
SQL queries requires additional database space. The more complicated the queries, the more
space required.
The size of the recovery log depends on the number of concurrent client sessions and the
number of background processes executing on the server.
Note: The maximum number of concurrent client sessions is set in the server options.
Note: The maximum size of the Recovery log increased now to 13GB. Significantly increasing
the size of your recovery log could also significantly increase the time required to start the
server, to backup the database and to restore the database.
Attention: Be aware that the results are estimates. The actual size of the database may differ from the estimate because of factors such as the number of directories and the length of the path names.
Begin with at least 12 MB for the recovery log. If you will be using the database backup and recovery functions in roll-forward mode, you should begin with at least 25 MB.
In both normal mode and roll-forward mode, the volume of Tivoli Storage Manager
transactions affects how large you should make your recovery log. As more clients are added
and the volume of concurrent transactions increases, you should extend the size of the log. In
roll-forward mode you must also consider how often you perform database backups. In this
mode, the recovery log keeps all transactions since the last database backup and typically
requires significantly more space than is required in normal mode.
In roll-forward mode, you need to determine how much recovery log space is used between
database backups. For example, if you plan daily incremental backups, you should check
your daily usage over a period of time.
1. Start by setting your log mode to normal. In this way you are less likely to exceed your log space if your initial setting is too low for roll-forward mode.
2. After a scheduled database backup, issue the following command to reset the statistic on the amount of recovery log space used since the last reset:
reset logconsumption
3. Just before the next scheduled database backup, issue the following command to display the current recovery log statistics:
query log format=detail
The Cumulative Consumption field contains the log space in megabytes used by the server since the statistic was last reset. Record the value.
4. Repeat steps 2 and 3 over at least one week.
5. Increase the highest cumulative consumption value by 30 to 40 percent. Set your recovery log size to this increased value to account for periods of unusually high activity.
For example, over a period of a week the highest cumulative consumption value was 500 MB.
If you set your recovery log to 650 MB you should have sufficient space between daily
backups.
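As a sketch, the headroom rule above (add 30 to 40 percent to the peak cumulative consumption observed between database backups) can be expressed as a small helper; the function name is illustrative:

```python
# Recovery-log sizing sketch: peak cumulative consumption plus headroom.
def recommended_log_mb(peak_mb, headroom=0.30):
    # Round to a whole number of megabytes for a size setting.
    return round(peak_mb * (1 + headroom))

print(recommended_log_mb(500))        # 650, matching the 650 MB example
print(recommended_log_mb(500, 0.40))  # 700
```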
Figure 62
Use the REDUCE DB command to decrease the amount of space that can be used by
the database. To reduce the capacity of the database, you must reduce the database in 4
MB increments. If you do not specify the reduction in 4 MB increments, Tivoli Storage
Manager rounds the number to the next 4 MB partition.
Use the REDUCE LOG command to decrease the amount of space that can be used by
the recovery log. To reduce the capacity of the recovery log, you must reduce the
recovery log in 4 MB increments. If you do not specify the reduction in 4 MB increments,
Tivoli Storage Manager rounds the number to the next 4 MB partition.
For example, if you specify 11 MB, the server will round up to 12 MB when doing the extend
or reduce operation. This does not apply to data storage; there is no notion of extending or reducing storage pool storage other than in volume increments.
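The 4 MB increment rounding described above can be illustrated with a small sketch; the helper name is illustrative:

```python
# A requested size that is not a multiple of 4 MB is rounded up to the
# next 4 MB boundary, as in the 11 MB -> 12 MB example above.
def round_up_to_4mb(mb):
    return -(-mb // 4) * 4   # ceiling division, then back to megabytes

print(round_up_to_4mb(11))  # 12
print(round_up_to_4mb(12))  # 12
```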
Storage Manager provides the ability to log client messages as server events. These events
can be passed on or reported to external sources. This unit introduces Storage Manager
logging and monitoring functions.
Objectives
IBM Tivoli Storage Manager provides the ability to log certain client messages as events on
the Storage Manager server. This lesson explains which messages can be logged and how
client event monitoring is configured.
Figure 63
Which client messages can be logged as events and how these messages are formatted.
Figure 64
Loggable Messages
The intention of client error logging is to notify the server of problems encountered during a client operation. Therefore, client message candidates are those messages that reflect an error condition. Client statistics are also passed to the server.
Nonloggable Messages
Client memory errors: Because of insufficient memory resources, the client is not able to
log these types of messages.
Server-disabled messages: During the client sign-on procedure, the Storage Manager
server provides information to the client about which messages should be logged to the
server. Disabled messages are not passed to the server.
API Messages: For all application programming interface (API) related messages, it is the
responsibility of the API application to place an appropriate message text into the string buffer.
Event formatting
Event content
Message Formatting
AN [R|S|E] #### [I|W|E|S]
R = Server message
S = Client message
E = Client event
Severity levels:
I = Information
W = Warning
E = Error
S = Severe
Eligible messages are grouped in a common, shared repository. The repository resides on
both the client and the server, and contains new messages for all client events and related
event data. The repository is shared by the command line and GUI clients.
Event Formatting
Client messages in the ANS4000 to ANS4999 range are eligible to be sent to the server as
client events.
Eligible client messages will be sent to the server as events using an ANE prefix instead of
ANS. These client messages will be logged locally in the client schedule or error logs as
appropriate.
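As an illustration of this mapping, using a hypothetical message number in the eligible
range:

ANS4987E   (logged locally in the client error log)
ANE4987E   (the same message, sent to the server as a client event)

The four-digit message number and the severity letter are unchanged; only the prefix differs
to mark the message as a client event on the server.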
Event Content
Client event messages contain enough information to be processed outside of the context
where the message occurred. The client assigns the correct message number and provides
information about related object or filespace names, and the server adds information such as
the timestamp, the node name, and any other relevant information.
Figure 65
When an eligible client message occurs, the message number is looked up in the client
message event repository and assigned the appropriate ANE message number. The event
message is formatted with the related object or filespace name and is sent to the Storage
Manager server in this format.
The server receives the event message and then adds information such as node name from
where the event was received and the session number from which the original client error
message originated. If the event has been enabled for the Storage Manager console, it is
shown on the console as soon as all the necessary information has been formatted. The
message prefixes are summarized in the following figure.
Figure 66
To enable all nodes to log events of ERROR or SEVERE severity to the Storage Manager
console, the following ENABLE EVENT command can be issued by an administrator:
tsm> enable events console error,severe node=*
Client events are displayed as soon as they have occurred on the client and have been
passed to the Storage Manager server.
Figure 67
Storage Manager server events are always stored in the activity log and cannot be disabled.
This is because server information in the activity log is often needed to resolve critical
situations. All client events are also enabled for the server activity log by default.
Client events can be disabled for the activity log. To disable information events for the client
node chocolate, the following administrator command would be used:
Figure 68
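A sketch of such a command, assuming the standard DISABLE EVENTS syntax (receiver,
severity, and node name):

tsm> disable events actlog info nodename=chocolate

This stops INFO-severity events from the node chocolate from being recorded in the server
activity log, while leaving warning, error, and severe events enabled.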
The QUERY ACTLOG command has been updated in order to enable querying of centrally
logged client events from the Storage Manager activity log. The following parameters have
been added for extended event querying:
Parameter Description
The following example command queries for any client events in the last seven days from the
node named chocolate, associated with the DAILY_INC client schedule:
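A sketch of such a query, assuming the extended QUERY ACTLOG parameters (BEGINDATE,
ORIGINATOR, NODENAME, and SCHEDNAME):

tsm> query actlog begindate=today-7 originator=client nodename=chocolate schedname=DAILY_INC

ORIGINATOR=CLIENT restricts the output to centrally logged client events, excluding
ordinary server messages.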
This unit covers backing up and recovering the Tivoli Storage Manager database, recovery
log, and storage pools.
Objectives
Full backup
Incremental backups (default)
Database out-of-band (snapshot) backup
Full Backup
Storage Manager can perform regular and incremental backups of the database to tape while
the server is running and available to clients. With the Storage Manager recovery log in
normal mode, the backup media can then be stored on-site or off-site and can be used to
recover the database up to the point of the backup.
You can run regular or incremental backups as often as needed to ensure that the database
can be restored to an acceptable point in time.
You can provide even more complete protection if you specify that Storage Manager run in
roll-forward mode. With Storage Manager in roll-forward mode and with an intact recovery
log, you can recover the database up to its most current state.
Figure 69
For backups, an administrator can weigh the trade-offs between running regular backups and
running incremental backups.
A regular backup takes longer to run than an incremental because it copies the entire
database. However, recovery time is faster with a regular backup because only one set of
volumes needs to be loaded to restore the entire database. A regular backup is required
under specific conditions, but an administrator can choose to run as many as 32 incremental
backups between each regular backup.
An incremental backup takes less time to run because it copies only those database pages
that have changed since the last time the database was backed up. However, incremental
backups increase the time it takes to recover a database because a regular backup must be
loaded first, followed by some or all of the incremental backups in the same database backup
series.
A snapshot backup is a full backup that does not interrupt the full + incremental backup
series; in other words, it is an out-of-band database backup. This backup can be stored
off-site for disaster recovery purposes.
A database snapshot backup is tracked by the volume history and can be used for a restore
of the Storage Manager database to the point-in-time when the snapshot was performed.
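A snapshot backup is requested with the BACKUP DB command by specifying the DBSNAPSHOT
backup type. As a sketch, assuming a device class named TAPECLASS has been defined:

tsm> backup db devclass=tapeclass type=dbsnapshot

Because it is out-of-band, this backup does not reset the incremental counter of the regular
full + incremental backup series.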
Figure 70
Parameter Description

DEVclass=devclassname
Specifies the name of the sequential access device class to use for the backup. Be sure that
you have used the DEVCONFIG option in the dsmserv.opt file to specify an external file in
which to store a backup copy of device class definitions. If you do not have this file and
your Storage Manager database is damaged or lost and must be restored, the definitions
created by using the DEFINE DEVCLASS command will not be available and must be recreated
manually. This parameter is required.

Type=typevalue
Specifies the type of backup to run. This parameter is optional. The default value is
INCREMENTAL.

VOLumenames=volname
Specifies the volumes to use for the backup. You can specify more than one volume by
separating each volume name with a comma, with no intervening spaces.

Scratch=scratchvalue
Specifies whether scratch volumes can be used for the backup. This parameter is optional.
The default value is YES.

Wait=waitvalue
Specifies whether to wait for the server to complete processing this command in the
foreground. The default value is NO.

The first backup of your database must be a regular backup. You can run up to 32
incremental backups between regular backups. To perform a regular backup of your database
to the TAPECLASS device class, for example, enter:

backup db type=full devclass=tapeclass

In this example, Storage Manager writes the backup data to scratch volumes. You can also
specify volumes by name. After a regular backup, you can perform incremental backups,
which copy only the changes to the database since the previous backup.

To run an incremental backup of the database using a scratch volume, assuming a device
class of FILE for the backup:

backup db devclass=file type=incremental
Figure 71
QUERY DB
To help you determine how much storage space a regular or incremental backup will require,
use the Q DB command. This command displays the number of changed megabytes in the
database.
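As a sketch (the layout and field names of the detailed output vary by server level):

tsm> query db format=detailed

The detailed output includes the number of megabytes changed since the last backup, which
indicates approximately how much space the next incremental backup will require.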
QUERY VOLHISTORY
Use the Q VOLH command to display sequential volume history information that has been
collected by the server. Volume history information includes data such as date and time of use
for the following types of volumes:
Typevalue Description
Use the DELETE VOLHISTORY command to delete sequential volume history information
collected by the server when the information is no longer needed. For example, you may want
to delete information about volumes used for obsolete database backups.
When volume history information about volumes not in storage pools is deleted, the volumes
return to scratch status if they were acquired by Storage Manager as scratch volumes. For
scratch volumes with device type FILE, the files are deleted.
Do not delete sequential volume history information until you no longer need it. Do not delete
the volume history information for database dump, database backup, or export volumes that
reside in automated libraries unless you want to return the volumes to scratch status. When
the DELETE VOLHISTORY command removes volume information for database dump,
database backup, or export volumes, the volumes are automatically returned to scratch status
if they reside in automated libraries. These volumes are then available for reuse by the server
and the information stored on them may be overwritten when the server reuses the volume for
some other purpose, such as storage pool volumes or other database backups.
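As a sketch, assuming the TYPE and TODATE parameters of the DELETE VOLHISTORY command:

tsm> delete volhistory type=dbbackup todate=today-30

This deletes volume history entries for database backups older than 30 days; as described
above, any such volumes that reside in automated libraries are returned to scratch status
and become available for reuse.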