
Tivoli Storage Manager 6.3 Technical Overview
Speaker: Tricia Jiang
10-18-2011

Job Title: Technical Enablement

TSM 6.3 Server Content


- TSM for z/OS Media
- Node Replication
- Node Replication support in the Admin Center
- Administration Center enhancements
- DB backup enhancements
- Reporting enhancements
- Client performance monitoring
- Client deployment
- Processor Value Unit (PVU) estimation
- Tape-optimized recall
- Externalized VTL awareness
- GSKit 8 and AES enhancements
- Install changes
- Software inventory tagging
- Persistent Reserve

TSM 6.3 Client


- Backup-Archive client: simplified configuration of the client in an MSCS cluster
- Tape-optimized recall (externalized)
- Journal Based Backup support on Linux
- Upgrade install for the TSM Linux client
- Maximum path name support extended to 4096 bytes for the Linux B/A client
- Tivoli Integration Agent Management Initiative: Common Agent Package
- 64-bit Linux, 64-bit Solaris, and Mac clients
- Integration initiative: serviceability, inventory tagging
- HSM migrate/recall log
- HSM for Windows: stub moving tool
- UNIX HSM: multiple-server support (externalized)
- (Not covered in this presentation) VMware off-host full VM backup and recovery using the vStorage API; backup of virtual machines with physical disks; vSphere 5 support

TSM Data Protection products (not covered in this presentation)


TDP for Oracle
- Currency: TDPOSYNC support for backup history in the Oracle control file
- Ability to specify TCPSERVER and TCPPORT using sbtcommand
- Mechanism to query backed-up Oracle files and indicate whether a backup was encrypted, compressed, or deduplicated
- Read PASSWORDACCESS from dsm.sys
- Reversion to V6
- UTF-8 message catalog support
- Inventory tagging as required by SWG
- Upgrade install programs to use InstallAnywhere 2010 / InstallShield 2010

TDP for Domino
- Client scalability improvement: eliminate memory constraints caused by in-memory lists for large numbers of objects
- Indicate whether a backup was encrypted, compressed, LAN-free, or deduplicated
- Reversion to V6
- Inventory tagging as required by SWG
- Upgrade install programs to use InstallAnywhere 2010 / InstallShield 2010
- UTF-8 message catalog support (Domino and Oracle)
- Communication failure enhancement

TDP for SAP
- Add support for RMAN incremental backups of Oracle

TSM for ERP
- Implement SWG inventory tagging

TSM for z/OS Media Server


TSM 6.3 together with the TSM for z/OS Media server replaces the TSM 5.5 z/OS server
- Existing tape media remains in place
- Advantage: TSM runs on zLinux on an IFL, avoiding general-purpose CPU charges
- Existing z/OS TSM 5.5 customers are entitled to the z/OS media server & the 6.3 server on AIX or zLinux

Facilitates access to new technology while protecting customer investment in FICON-attached tape
- New FICON tape technology can be part of the TSM storage hierarchy

In addition to accessing the z/OS 5.5 tape inventory from the 6.3 server (AIX or zLinux):
- Leverage the z/OS Tape Management System for SCRATCH tape selection
- Benefit from automated offsite tape management (e.g., Iron Mountain)
- Exploit DFSMS for sequential FILE storage (up to 16 TB volume size)
- Dynamic allocation eliminates JCL for FILE volume allocation (TSM 5.5 used physical sequential data sets)
- Transparent performance benefit of VSAM striping

Appears to the TSM 6.3 server as an abstract library resource
- The ZOSMEDIA library type resembles an EXTERNAL library: no library volume inventory
- A unique characteristic allows Storage Agent interaction with the z/OS Media server

The z/OS Media server is used as an I/O engine to access FICON-attached storage
- The z/OS I/O subsystem provides the highest class of service
- TSM on zLinux only supports FICON-attached standalone tape drives (no library support)

TSM for z/OS Media Server


TSM v6 server on zLinux or AIX
- Database functions: nodes, administrators, policy, tracking of data objects in the hierarchy
- Communication with clients, the Admin Center, other TSM servers, and the media server
- Storage devices may be attached to the TSM server or accessed via the media server
- Supports all v6 functions

[Diagram: TSM clients connect over the LAN (TCP/IP, or HiperSockets on zLinux) to the TSM v6 server on zLinux or AIX, which holds the DB2 database and the storage pool hierarchy; the server reaches FICON-attached storage through TSM for z/OS Media.]
Existing TSM V5 z/OS servers perform a cross-platform upgrade, which migrates the database to TSM 6.3 on AIX or zLinux; customers then install the new PID, TSM for z/OS Media, on z/OS.


TSM for z/OS media server
- Receives/sends data from/to the TSM v6 server
- Performs I/O to tape and/or sequential disk
- Supports the same storage devices as the z/OS server, including non-IBM hardware
- Interacts with z/OS DFSMS and TMS exactly as the TSM z/OS server did

TSM for z/OS Media Server


Five distinct items together make up the TSM for z/OS Media solution:
1. z/OS Media server
2. Media server interface module
3. Media server API
4. TSM 6.3 server device class, library, and Storage Agent enhancements
5. Ability to migrate a TSM z/OS 5.5 server to 6.3 (AIX or zLinux server)

New Library type of ZOSMEDIA


- Shared library (shared only with Storage Agents); the TSM server acts in a limited capacity as library manager and performs volume selection
- The library must be defined first (LIBTYPE=ZOSMEDIA), before any device classes
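A hedged definition sequence; the server, library, path, and device class names are illustrative assumptions:

  define server zosmedia hladdress=zos.example.com lladdress=1555 serverpassword=secret
  define library zoslib libtype=zosmedia
  define path server1 zoslib srctype=server desttype=library zosmediaserver=zosmedia
  define devclass ztape devtype=3592 library=zoslib

Note that the path for a ZOSMEDIA library names the media server rather than a device; this is how the 6.3 server learns where to route its I/O.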

z/OS device class integration


TAPE device classes
- 3592, 3590, ECARTRIDGE (legacy z/OS LAN-free), CARTRIDGE (legacy VTS)
- Same as the z/OS TSM 5.5 device classes

Enhanced FILE device class
- Adjustable MAXCAPACITY
- Format/write via the Media Manager to VSAM linear data sets (pre-formatting not required)
- Allocation parameters

Node Replication
[Diagram: Nodes A, B, and C back up to TSM Server A at Site A; metadata and deduplicated data replicate to TSM Server B at Site B, which also serves nodes X and Y. Each server has its own DB2 database and storage hierarchy.]

Provides the ability to incrementally replicate a node's data to a remote target server for disaster recovery purposes
- True incremental replication: only replicates directories and files that do not exist on the target server
- Deletes data on the target server that has been deleted on the source server
- Client data can be recovered directly from the hot-standby server
- Can be used with or without deduplication
- Multiple servers can replicate to one target server
- Remote vaulting without manual tape transfer
- Efficient use of bandwidth through deduplicated replication
- Allows a hot standby at the remote site

Node Replication
Setup flow between the source and target TSM servers (initial configuration first, then ongoing operation); a hedged command sketch follows the list:
1. Enable source-server-to-target-server communication
2. Set up the storage hierarchy and domain policies on the target
3. Set the target server as the default target for replication
4. Enable nodes to replicate; modify the default and filespace replication rules if needed
5. At the proper time, preview and initiate replication for the desired nodes and data (REPLICATE NODE)
6. The TSM administrator views replication results in the Admin Center & TCR
7. DR: restore node data from the target
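A hedged command-line sketch of these steps; the server names, address, password, and node name are illustrative assumptions:

  define server serverb hladdress=siteb.example.com lladdress=1500 serverpassword=secret
  set replserver serverb
  update node nodea replstate=enabled
  replicate node nodea preview=yes
  replicate node nodea

The source server must also have its own SET SERVERNAME, SET SERVERPASSWORD, SET SERVERHLADDRESS, and SET SERVERLLADDRESS values in place, per the server-to-server options below.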

Node Replication
The TSM server replicates data & metadata for specified nodes to another server
- Can select which nodes replicate
- Can select what type of data (backup, archive, HSM, active)
- Can set priority for nodes and filespaces
- Currently data can only be restored from the target; the target cannot be used as a failover backup location (unless the REMOVE REPLNODE command is used)

Implemented in TSM 6.3
- TSM servers must be at 6.3; TSM nodes can be at 6.3 or earlier
- Native TSM solution with no dependency on a specific storage device
- Supports dissimilar hardware & configuration at the primary & remote sites

Server-to-server communications
- Servers must be able to communicate via IP
- Requires the server-to-server options: HLADDRESS, LLADDRESS, SERVERNAME, SERVERPASSWORD

Node Replication
Deduplication
- If the source has deduplication enabled but the target does not: data is reconstructed before being sent
- If the target has deduplication enabled but the source does not: only the chunks not already on the destination server are sent
- If both source and target have deduplication enabled: only the chunks of data not already stored in the destination pool are transferred

Expiration
- Files are bound to the same management class, if it exists, on the target server; otherwise to the target server's default management class
- The source server manages file expiration and deletion for the replicated files on the target server

Flexible implementation
- Many-to-1 transfer to a target server (a source can have only one target server)
- Server A and server B can protect each other
- If imported/exported data already exists on the target, FORCESYNC can be used
- Admin Console or command-line configuration / monitoring
- Scheduled or manual node replication
- A single process is started for replication; high-priority data is replicated before data with normal priority
- Only one REPLICATE NODE process can run at a time


Node Replication - Replication Rules


- Rules determine which files are eligible for replication
- The default replicates all types of data (backup, archive, HSM) for enabled nodes
- A node's replication state can be ENABLED, DISABLED, or PURGEDATA (all data for that data type is deleted)

Six replication rules:
- Four general-use rules: ALL_DATA, ACTIVE_DATA, ALL_DATA_HIGH_PRIORITY, ACTIVE_DATA_HIGH_PRIORITY
- DEFAULT: follow the replication rule hierarchy until a non-default rule is specified
- NONE: replication not performed, no data replicated

Attributes of the rule tell the process how to handle the file (a hedged example follows):
- Priority (high or normal)
- Replicate active data only?
- State (enabled or disabled)
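A hedged sketch of setting rules at the node and filespace level; the node and filespace names are illustrative assumptions:

  update node nodea bkreplruledefault=all_data_high_priority
  update filespace nodea /home datatype=backup replrule=active_data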


Node Replication - Steps

5. Enable nodes for replication
6. Modify replication rules if needed


Node Replication and SSL


** An error was found in SSL with replication, so the combination is unsupported at 6.3.0

- SSL ensures that data cannot be captured in flight
- SSL security overhead means lower throughput: the standard trade-off
- Source & target servers can be configured to use SSL:
  - The target server's certificate must reside in the source server's key database
  - The source server's certificate must reside in the target server's key database
  - Must use SSLTCPPORT / SSLTCPADMINPORT in the option files
  - Define the servers with the SSL=YES parameter
  - Import the certificate in both directions with gsk8capicmd_64
  - Restart the servers
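A hedged sketch of the certificate exchange with gsk8capicmd_64; the key database, label, password, and file names are illustrative assumptions:

  gsk8capicmd_64 -cert -extract -db cert.kdb -pw password -label "TSM Server SelfSigned Key" -target sourcecert.arm
  gsk8capicmd_64 -cert -add -db cert.kdb -pw password -label "source server cert" -file sourcecert.arm

Extract the certificate on each server, then add it to the key database on the opposite server.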

SSL can currently be used between replication servers
- Library client/library server, storage agent/server, central configuration, command routing, virtual volumes, and export/import do not support SSL


Demo time


Faster TSM Internal Database Backup and Restore


- Parallel streams for backup/restore processing give improved throughput
- Reduced time for database backup/restore
- Increased scalability of the TSM server without expanding the DB backup window
- May result in more partially filled tape volumes

[Diagram: before, a single data stream between the database and one drive; after, multiple parallel streams to multiple drives.]


Faster TSM Internal Database Backup


- Parallel data stream support reduces DB backup and restore times
- Updated syntax:
  BACKUP DB ... NUMSTREAMS=n (n is 1 to 4; default is a single stream)
  SET DBRECOVERY ... NUMSTREAMS=n (n is 1 to 4)
- If the number of drives available >= NUMSTREAMS, the backup uses the requested number of streams
- If the number of drives available < NUMSTREAMS, the backup uses the available drives
- DB backup will preempt some other operations
- The volume history file may no longer be needed to restore the database:
  - More self-describing info is written with the DB backup
  - PREVIEW mode of restore shows how to rebuild the volume history file

Restore
- If the number of drives >= the NUMSTREAMS used for the backup, that number of streams is used
- If the number of drives < the NUMSTREAMS used, the restore is done using the available number of drives
- The restore process never uses more drives than the number of streams used by the backup
- If the full and incremental backups used different NUMSTREAMS values, the restore uses the smaller one
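A hedged example; the device class name is an illustrative assumption:

  set dbrecovery lto4class numstreams=4
  backup db devclass=lto4class type=full numstreams=4

If only three drives happen to be free at run time, the backup proceeds with three streams rather than failing.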


Demo time


TSM Reporting using Cognos


Easier:
- Installation: install, configure, and use reporting and monitoring within 2 hours
- An aggregated view of reporting and monitoring for the entire TSM environment
- Automated post-installation configuration
- Additional 'out of the box' reports
- Integrated Cognos reporting engine providing better custom-report creation capabilities (create a custom report in 30 minutes or less)
- Ability to build custom data-collection agents with the newest IBM Tivoli Monitoring end-user license
- More TSM data-collection agent performance features
- TSM activity log custom-data collection in the TSM data-collection agent
- Ability to email reports


TSM Reporting with Cognos


- The IBM Tivoli Monitoring (ITM) warehouse server makes data available to the new Cognos-based Tivoli Common Reporting (TCR) component
- All existing TSM historical warehouse data & reports remain available (BIRT is still there, but TSM is moving to Cognos)
- IBM Cognos Business Intelligence 8.4 Query Studio and Report Studio
  - Cognos is a chargeable component, but is included with TSM
  - The TSM license allows reporting on multiple Tivoli products (TPC in the works)
- Ad-hoc reports available (using Cognos)
- Simple customization (using Cognos)
- Additional formats: XML, Excel, CSV
- TCR custom schedules to generate, store, and send out reports


TSM Reporting with Cognos


Before 6.3 → After 6.3:
- 1+ hour installation → 1-hour installation
- Difficult configuration → Automatic configuration
- No TSM activity log mining → Activity log now available
- BIRT report customization difficult → BIRT report customization (still difficult)
- No chargeback reports → New reports to assist in chargeback
- No cross-server reports → Aggregated reports available
- Problem determination difficult → Problem Determination Guide
- Information on reporting difficult to find → One-stop reporting information shop
- Report creation difficult → Easy Cognos Report Builder
- No ad-hoc queries → Easy Cognos Query Builder

TSM Reporting Predefined Reports


Client reports (6.1): Client job status, Client backup currency, Storage capacity protected, Backup details, Top 10 backups, Backup missed files, Backup history, Restore details, Top 10 restores, Restore history, Archive details, Top 10 archives, Archive history, Retrieve details, Top 10 retrieves, Retrieve history

Aggregated client reports (6.3): Client Activity Details, Client Activity History, Client Missed Files, Client Storage Summary, Client Top Activity, Client Schedule Status, Client Backup Currency, Node Replication Details (suggested)

Server reports: Server job status, Server throughput, Server resource usage, Database details, Disk usage, Tape usage, Other storage usage, Tape volume usage analysis, Tape capacity analysis, Tape device errors, Device usage history, Server machine utilization, Activity Log Details, Node Replication Details, Node Replication Summary, PVU

Aggregated server reports (6.3): Server Resources Used, Tape Volume Capacity Analysis, Server Throughput, Server Database Details, PVU Details (suggested)


TSM Monitoring
The Tivoli Monitoring agent is the data-collection agent
- A small Java agent, recommended to be installed on the TSM server
- Can monitor hundreds of servers, but run the data collections at different times
- Could be installed just on the reporting server, but then could only monitor a few TSM servers
- New data collected: activity log, occupancy, PVU, replication, library, drives, trace log
- The Tivoli Enterprise Portal makes data viewable in real time (monitoring dashboard)

Activity log mining
- Collects activity log entries based on message type and message code
- The data can be used for monitoring or for a historical database

Situational processing in ITM
- If a condition is met, a script is run that can send the data to a user

Operational reporting
- All functions were implemented in the TSM monitoring functions
- The queries exist for TSM 6.3, but are not packaged with TSM


Monitoring TSM for Performance


- TSM API performance monitoring functions
- API performance analysis built into the TSM Admin Center
- Bottleneck analysis: disk I/O, network, tape
- Simulated backup and restore
- Helps with tuning TSM for products for optimal throughput


Demo time


Deployment of Backup Archive Client Updates


- Deploy client maintenance updates to non-Windows platforms
  - Previously Windows only; now AIX, Solaris, HP-UX, Linux, Macintosh & Windows
- Allow clients to upgrade to 5.5, 6.1, 6.2, or higher versions
  - Previously Backup-Archive clients could only be updated to version 6.2
  - Now clients can be updated to lower (supported) versions, e.g., 5.5, 6.1, 6.2, or higher
[Diagram: the Admin Center distributes client updates; previously to Windows only, now also to AIX, Solaris, HP-UX, Linux, and Mac.]

Deployment of Client Updates


Capabilities supported in 6.3:
- Discover client maintenance levels available on the FTP site
- Retrieve the packages required by client maintenance levels
- Store packages on the TSM server and manage the packages
- Select a maintenance level to be distributed to a list of existing clients
- Distribution & code update are scheduled to execute automatically on the clients
- Review the client distribution status
- Windows BA client maintenance: upgrade from 5.4.*.* & higher to 5.5.3, 6.1.4, 6.2, 6.3 & higher
- Non-Windows BA client maintenance: upgrade from 5.5.*.* & higher to 5.5.3, 6.1.4, 6.2.2, 6.3 & higher
- The 32-bit non-Windows BA client is dropped in TSM 6.3:
  - An existing 32-bit BA client running on 64-bit hardware will be upgraded to 64-bit client code
  - A BA client upgrade to 6.3 and above will be cancelled if the current client runs on 32-bit non-Windows hardware

Not supported in 6.3:
- Distribution of other TSM components (Storage Agent, TDP, HSM)
- Downgrade/rollback (e.g., 6.2.1.1 to 6.2.1.0)
- Allowing clients to automatically discover & upgrade to the latest version without administrator action
- Client distribution of an initial installation
- Pushing updates to clients that don't have the scheduler running
- The auto-update function in Admin Center versions prior to 6.2.0.0
- Cluster node deployment
- Ability to alter currently installed components (e.g., language packs)

Automatic Client Deployment Supported OS

- Solaris 11 support is coming soon
- Client platforms must support TSM client API version 6.2.0.0
- Not all platforms support client upgrade to the latest level (6.3.0.0); for example, Mac OS X 10.5 and Windows XP are not supported in 6.3

Automatic Client Deployment - Prereqs


Demo time


PVU Estimation Reporting

- Information on the number of client and server devices managed by the TSM server
- Information on the utilization of processor value units by server devices
- Useful when assessing licensing requirements for the TSM system

PVU Estimation Reporting


Estimated PVU reporting for the Backup-Archive client & API applications
- TSM clients scan the system and send processor data to the TSM server
- The TSM server stores the processor data and calculates the PVU value:
  - Number of client and server devices
  - Number of physical processors
  - Processor vendor and type

- Ability to report on client-device and server-device counts at a node level
- Allows the TSM administrator to change the classification on a per-node basis

Utilizes Common Inventory Technology
- IBM_ProcessorValueUnitTable.xml is installed automatically & updates are downloadable
- Results can be copied and pasted into a Microsoft Excel spreadsheet

Full-capacity licensing only
- Virtualization-capacity (sub-capacity) customers are still required to use the IBM License Metric Tool (ILMT) to create, verify, adjust, sign, and save reports

Commands used to accumulate the info:
- QUERY PVUESTIMATE populates the summary page
- A SELECT on the PVUESTIMATE_DETAILS table populates the details page
- A SELECT on the LICENSE_PVU table gets the wording for the processor type
- QUERY NODE F=D and a SELECT on LICENSE_PVU
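For example (a hedged sketch; the wildcard is used because the table's exact column set is not shown in this presentation):

  query pvuestimate
  select * from pvuestimate_details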


Optimized Tape Recall - HSM for Unix


Recall of files on tape is impacted by:
- Files stored on different tapes
- Files recalled in a different order than stored on the tapes
- Frequent tape mounts and unmounts

Optimized tape recall processing optimizes tape access and minimizes mounting and unmounting of tapes.
Number of files recalled | No tape optimization | Optimized tape processing
On one tape: 10,000 files | 140 minutes (2.3 hrs) | 11 minutes
On one tape: 500,000 files | 170,000 minutes (118 days) * | 816 minutes (13.6 hrs)
On two tapes: 10,000 files | 340 minutes (5.6 hrs) | 8 minutes


Optimized Tape Recall HSM for Unix - Usage Overview


Manually check and adjust the tape access order:
1. Generate a sorted list: dsmrecall -preview -filelist=<myFileList> <myFS>
   => generates one collection file and one tape-ordered file list per tape
2. Modify the collection list as convenient
3. Recall the files: dsmrecall -filelist=<CollectionFile> <myFS>

Or don't modify anything and simply recall:
  dsmrecall -filelist=<myFileList> <myFS>
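A hedged usage sketch; the file system, search pattern, and list name are illustrative assumptions:

  find /gpfs1/proj -name "*.dat" > /tmp/mylist
  dsmrecall -preview -filelist=/tmp/mylist /gpfs1
  (edit the generated collection file if desired, then)
  dsmrecall -filelist=<CollectionFile> /gpfs1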

Note: IBM Information Archive uses this feature
- Based on search results from the GUI, files are recalled back to the archive file system


Virtual Tape Libraries


Historically, VTLs were defined to TSM as SCSI libraries
- SCSI tape mount operations degraded VTL performance
- VTLs with 256+ drives under heavy activity reported mount times of 5-7 minutes
- This limited the practical maximum number of drives to 80-120

The new VTL library type drastically improves performance
- Existing SCSI libraries can be updated to the VTL library type:
  UPDATE LIBRARY <libname> LIBTYPE=VTL
- Better assumptions are made & unnecessary SCSI validations are skipped
- 300-500 drives maximum (depending on the OS, 256-1024 drives may run into OS limitations)
- Cannot have mixed media: drives with different device types or device generations within the same library (e.g., LTO2 and LTO3)
- Requires an online path defined for servers & storage agents to all drives in the library
  - If paths are missing or offline, mount performance degrades to SCSI-library-type levels

TSM 6.3 PERFORM LIBACTION
- Defines and deletes TSM libraries containing a large number of drives
- Valid for library types VTL and SCSI
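A hedged sketch of defining a VTL and all of its drives in one step; the library name, device special file, and drive prefix are illustrative assumptions:

  define library vtllib libtype=vtl
  perform libaction vtllib action=define device=/dev/tsmscsi/lb0 prefix=dr

PERFORM LIBACTION then creates the drive and path definitions (dr0, dr1, ...) that would otherwise each require their own DEFINE DRIVE and DEFINE PATH commands.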

Install Updates New Prereq Checkers


New TSM prerequisite checker tools
- Customers can use the tools to prepare for installing:
  - the TSM server
  - the TSM Admin Center
  - Tivoli Monitoring for TSM
- Shares the same code as the installation wizard
- Customers can choose the locale in which to display information


Install Updates License Acceptance


TSM Server installer
- New panel where the customer chooses: TSM, TSM EE, SSAM, or TSM for SAN
- If the user is using LAN-free or library sharing, an additional licensing agreement must be accepted


Install Updates Component Updates


TSM server
- DB2 v9.7 Fix Pack 4 (will uninstall DB2 v9.5 if upgrading from 6.2)
- GSKit 8.0.14.11 (will not uninstall GSKit 7; that must be done manually)
- TSM client API 6.2.2 (will upgrade the API)

TSM Storage Agent
- GSKit 8.0.14.11

TSM Admin Center installer components
- TIP 2.1 + fix pack 2.1.0.5 (eWAS, ISC, TIPCore)
- Supports IE8, FF3.5, and FF3.6
- Skin update: visual change of what is displayed
- Newer version of Dojo that produces a faster UI
- Java 1.6

TCR 2.1: iFix 3 & iFix 5
- BIRT reports: TSM Admin Center, TSM client performance


Install Update First Failure Data Capture


First Failure Data Capture
- New feature improves the capturing of all native installation logs
- Captures installation output for all components and places it in:
  - UNIX: /var/tivoli/tsm
  - Windows: <installation location>\_uninst\plan\logs


GSKIT use with TSM


IBM Global Security Kit (GSKit) provides Secure Sockets Layer (SSL) data encryption. TSM uses two GSKit installable components:
- Encryption libraries
  - DES and AES encryption for passwords
  - Hashing functions and routines for deduplication
- Communication functions
  - SSL library
  - Certificate utilities

New function: FIPS compliance (Federal Information Processing Standards)
- Algorithms for password & SSL session encryption use GSKit FIPS-certified algorithms

Support for Transport Layer Security (TLS) 1.2
- Uses AES-256 for SSL session encryption (up from AES-128)
- Requires a 6.3 client and a 6.3 server
- For self-signed certificates, a new certificate must be imported

IBM Crypto for C (ICC) is integrated into the GSKit packaging
- No ICC changes in TSM; only the ICC location and the reported version have changed

Install
- GSKit 8 is installed; GSKit 7 remains on the system and must be removed manually
- The DB2 user profile is updated to point to the new system GSKit

GSKIT
- DB2 9.7 (in TSM 6.2/6.3) has a private/standard version of GSKit, 8.0.13.3
- TSM 6.3 ships with GSKit 8.0.14.11
- These two versions are not compatible for certain functions
- When a DB2 user logs on, the db2profile script sets up the DB2 environment; it calls the userprofile script to set the environment variable LD_LIBRARY_PATH, which favors the system GSKit over the DB2 private version
- A DB2 fix pack will soon provide an 8.0.14.x version of GSKit, which will be compatible

Determine the GSKit version from the command line:
- gsk8capicmd_64 -version
- echo $LD_LIBRARY_PATH (UNIX)
- echo %PATH% (Windows)

Determine the GSKit version within TSM:
- SHOW GSKIT (if SSL is enabled)
- SHOW AESCRYPTO

GSKit certificate key database
- A new key database created in 6.3 is populated with standard certification authority certificates
- Third-party certificates must also reside in this database
- Key databases created prior to 6.3 were not populated with these certificates; run once manually:
  gsk8capicmd_64 -keydb -convert -db cert.kdb -pw password -populate


Software Inventory Tagging


- SWG service initiative to help IBM service identify levels of installed software in the field
- Each product or component must identify itself using a software tag:
  - A text file installed on the target machine during the product install
  - SWGFMIDX value: a unique number assigned by the software tag team, listed in the tables of the TSM wiki
  - Identifies as specifically as possible the installed product or component that has its own service stream
- Two types of inventory tags:
  1. Product-level tags: identify an IBM software product
  2. Component-level tags: identify a significant component of a product that has a separately installable part (TSM Server, TSM Client, TSM Storage Agent)
- Many TSM "products" won't have a product tag, as they don't have a unique installation part for the product
- TSM Server component tag:
  - Tag file name: Tivoli_Storage_Manager_Server.cmptag
  - Tag type: component tag
  - Tag file location:
    > AIX, Solaris, HP, Linux: /opt/tivoli/tsm/server/properties/version
    > Windows: C:\Program Files\Tivoli\TSM\server\properties\version
  - Tag file contents:
    <Component>
    <ComponentName>Tivoli Storage Manager Server</ComponentName>
    <ComponentVersion>6.3.0</ComponentVersion>
    <SWGFMIDX>to_be_requested</SWGFMIDX>
    </Component>


Persistent Reserve
Newer SCSI/Fibre/SAS tape drives allow a host or set of hosts to protect against multiple accesses to a drive
- Finer-grained recovery than the older drives' reserve/release support

1. Persistent reserve support added to the TSM device driver
- For tested drives: today, HP LTO-4 and LTO-5 and STK T10K A, B, and C drives
- The IBM device driver has had this support for quite some time

2. TSM server support for persistent reserve
- Previously, when a server using a drive failed, only AIX and Windows library managers could recover the drive
- With TSM 6.3 persistent reserve, a library manager on any supported host platform can release the reservation held by the downed host
- In the future this will be an enabling feature for clustering on Linux, Solaris, and HP-UX
- Currently it improves the ability of TSM to regain access to drives with less disruption


Microsoft Windows Cluster Configuration Wizard


- Previously: TSM had to be manually configured to protect data on cluster disks in MSCS
- TSM 6.3:
  - Makes the Windows configuration faster, easier, and more accurate
  - Has a customized resource type for TSM applications/resources
  - Eliminates as much of the TSM configuration as possible and automates the tasks
  - Eliminates as much of the required duplication on other nodes as possible
  - Supports the Windows 2008 (R2, 64-bit) & Windows 2003 (32- or 64-bit) BA client
  - No support for Hyper-V, SQL, Exchange, or Microsoft DFS
  - The Windows 2008 & Windows 2003 wizards are slightly different
- The TSM Configuration Wizard launches the wizard to configure the TSM B/A client to back up data in a cluster environment


Updated Admin Center Client Options


- Redesigned the client options in the wizard and form
- Removed the platform selection step in the wizard
- New options added to the option drop-down list:

CASESENSITIVEAWARE: specifies whether the client filters file and directory objects that have names conflicting in case only
DEDUPLICATION: enables client-side data deduplication for backup and archive processing
DEDUPCACHEPATH: specifies the location where the client-side data deduplication cache database is created
DEDUPCACHESIZE: determines the maximum size, in megabytes, of the data deduplication cache file
DISABLENQR: specifies whether the client can use the "no query restore" method
DISKCACHELOCATION: specifies the location where the disk cache database is created
POSTSNAPSHOTCMD: allows the user to run operating system commands after the client starts a snapshot
PRESERVELASTACCESSDATE: specifies whether to reset the last access date of any specified files to their original value
PRESNAPSHOTCMD: allows the user to run operating system commands before the client starts a snapshot
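A hedged client options file fragment showing several of these options together (dsm.opt on Windows, the dsm.sys server stanza on UNIX; the paths, cache size, and scripts are illustrative assumptions):

  DEDUPLICATION YES
  DEDUPCACHEPATH /opt/tivoli/tsm/client/ba/bin
  DEDUPCACHESIZE 256
  PRESNAPSHOTCMD "/usr/local/bin/quiesce-app.sh"
  POSTSNAPSHOTCMD "/usr/local/bin/resume-app.sh"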



Journal Based Backup on Linux


- JBB on Linux uses FilePath technology (the same as on AIX since TSM 5.3.2)
- Supports Linux local file systems: EXT2/3/4, XFS, ReiserFS, JFS, VxFS, NSS
- GPFS is not supported for journal-based backup

Linux vs. AIX
- Similarities: journal daemon code, configuration options
- Differences: kernel extension code, daemon startup script

Install & configure
- Two RPM packages: TIVsm-filepath-<vendor>.<arch>.rpm and TIVsm-JBB.<arch>.rpm
- Configuration: same as AIX (tsmjbbd.ini)

Runtime
- The kernel module (filepath) is loaded automatically
- A daemon startup script is provided


Journal Based Backup All platforms


Extends the number of file systems monitored for journaling
- New JournaledFileSystems.Extended journal daemon configuration setting, implemented as a stanza section. For example:

[JournaledFileSystems.Extended]
/fs1
/fs2

- The existing JournaledFileSystems setting is still supported
- The older setting is ignored if the newer JournaledFileSystems.Extended setting is specified
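A fuller tsmjbbd.ini sketch for context; the [JournalSettings] stanza and error log path are assumptions carried over from the AIX journal daemon configuration, not values shown in this presentation:

  [JournalSettings]
  Errorlog=/var/log/tsmjbbd.log

  [JournaledFileSystems.Extended]
  /fs1
  /fs2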


64 Bit Client Support


64-bit client support:
- 64-bit native Linux Intel (x64) Backup-Archive client
- 64-bit native Solaris (SPARC and x64) Backup-Archive client
- 64-bit native Macintosh (Intel) Backup-Archive client


TSM HSM for Windows 2003 or 2008 - 6.3


A set of move tools to move individual stubs from one file server to another, or from one volume to another, without recall

Use this set of tools when:
- Data is to be moved from one HSM client to another HSM client
- Only some, but not all, migrated files should be moved
  > If all files (including the resident files) of a volume should be moved, consider the hardware volume mapping instead

Hardware volume mapping accommodates file server configuration changes

Use this mapping when:
- Your cluster name has to be changed
- Your host name has to be changed
- You need to assign a different drive letter to your volume
- You want to attach your disk to a new file server
- One node in your MSCS cluster has to be renamed
- You are switching off an old file server and attaching the old drive to a new file server, using the same or a different drive letter


TSM HSM for Windows 2003 or 2008


Move tools in TSM 6.3:
- Same volume, same TSM server: MOVE updates the path in the stub AND in the TSM object
- Another volume, same TSM server: MOVE the stub without RECALL and update the path in the stub AND in the TSM object
- Another file server, same TSM server: PULL the stub without RECALL from the remote file server and update the path in the stub AND in the TSM object
- Copy data from a remote to the local TSM server:
  - The path in the stub is updated to the local TSM server
  - The path in the data object in TSM is updated to the stub's path
  - The data on the remote TSM server can then be deleted

Tools:
- HSMGUI: used to configure the connection to the remote TSM server
- DSMMOVE.EXE: moves the stub without recall and updates the path in the stub and the TSM data object
- HSM Tasks Service: copies the TSM data objects from one server to the other and updates the path in the stub and the TSM data object

TSM HSM for Windows 2003 or 2008 - 6.3


Hardware volume mapping: important points while changing the HW volume mapping:
- Avoid all HSM activities until the HW volume mapping is applied
- Don't run any migrations, reconciles, or stub-moving tools
- You may allow recalls

1st: Stop HSM activities
- recall, monitor, tasks, migrations, dsmmove, scheduled migrations

2nd: Change your configuration
- Add the drive from the other host, change the hostname, etc.

3rd: Configure HW volume mappings
- HSM for Windows GUI (Tools > HW Volume Mapping > Create)

4th: Consider the implications for backup
- Use TSM's RENAME FILESPACE command

5th: Resume HSM activities (undo step 1)


Space Management on UNIX: multiple-server support on GPFS

What does multiple-server support do?
- Enables HSM to migrate files to multiple TSM servers, increasing scalability and performance
- Server distribution works automatically via GPFS policies
- Transparent recall from multiple servers is possible
- Backup is integrated into the server distribution
- Covers migration, recall, and reconciliation

Previously:
- Limited to one TSM server for each single file system
- Limited to 1 billion files per TSM server instance

How does it help?
- In GPFS v3.3 the number of objects in a single file system is limited to 4 billion
- A single GPFS v3.4 or later file system can migrate to 2 or more TSM servers
- As the file system exceeds the capacity of its TSM servers, more servers can be added
- An essential part of highly scalable archiving solutions such as SoNAS and IIA in conjunction with GPFS
- The function was coded in TSM 6.2 for SoNAS and made external in TSM 6.3

[Diagram: a GPFS file system distributes migration, recall, and reconciliation across TSM server 1, TSM server 2, and TSM server 3.]


Space Management on Unix Multiple Server support on GPFS


Requirements
GPFS version 3.4 or later DMAPI enabled on GPFS file system Automatic migration driven by GPFS policy engine Before upgrading to multiple-server support, must configure GPFS appropriately

Limitations when using multiple TSM server


mmbackup doesnt backup to multiple TSM servers GPFS will add this support in the near future HSM scout based automigration not supported TSM archive/retrieve function not supported Not possible to encrypt multiple TSM server names on a single node" TSM server node replicationcan not supported" TSM server LAN-free conguration is not supported"


Space Management on Unix Multiple Server support on GPFS


Preparation of an existing file system:
1. Set the option hsmdisableautomigdaemons=YES for all HSM clients in the GPFS cluster
2. Set the option hsmmultiserver=YES for all HSM clients in the GPFS cluster
3. Add the TSM server that currently manages the file system to the list of servers: dsmmigfs addmultiserver -server=server_name file_system_name
4. Run the dsmMultiServerUpgrade.pl script; it uses dsmreconcile to couple all file system files with the TSM server that manages the migration copies & backup versions

Adding a TSM server
- For each additional server: dsmmigfs addmultiserver -server=server_name file_system_name
- This adds the server name and server-specific parameters to an internal server list
- dsmmigfs querymultiserver file_system_name queries the content of the server list

Automatically generated GPFS policy
- The dsmmigfs addmultiserver command generates a GPFS policy that can be used for GPFS-driven threshold migration
- For each added server, a new rule is generated in the policy file under /<FSname>/.SpaceMan/multiserver/ruleset/BasicRuleSet
- To activate GPFS-driven threshold migration with multiple servers:
  mmchpolicy Device /<FSname>/.SpaceMan/multiserver/ruleset/BasicRuleSet
- If a GPFS policy is already in place, the current policy file must be merged with the generated HSM policy file
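Putting the commands together, a hedged sketch; the server names and file system are illustrative assumptions:

  dsmmigfs addmultiserver -server=tsmsrv1 /gpfs1
  dsmmigfs addmultiserver -server=tsmsrv2 /gpfs1
  dsmmigfs querymultiserver /gpfs1
  mmchpolicy gpfs1 /gpfs1/.SpaceMan/multiserver/ruleset/BasicRuleSet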


Space Management on Unix Multiple Server support on GPFS


GPFS-driven threshold migration: the GPFS policy engine decides which TSM server is used
- IF the file was previously migrated or backed up, the same TSM server will be used
- ELSE the GPFS policy engine invokes, round robin, the dsmNextServername.pl script
- The script can be modified to meet user expectations

Selective migration: the dsmmigrate -server=<servername> <filename> command
- If the file was migrated before to another TSM server, the file is skipped for migration
- After the migration, the file is coupled with that server

Recall from multiple servers
- The recall daemon gets the server from an extended attribute (EA) stored with the file
- The dsmrecall -server=<servername> command can be used for selective recalls of files

Reconciliation from multiple servers
- Reconciliation uses the two-way orphan check in the dsmreconcileGPFS.pl script
- For full reconciliation of the file system, the script must be started against all TSM servers in the environment

Backup: dsmc incr/sel server=<servername> (if no server name is specified, the default server is used)
- The first backup of a file couples the file to the specified TSM server

Restore: dsmc rest server=<servername>
- To find the right server for a restore, use the command dsmc query backup <fileName>
- To restore whole directory structures, use a script to restore from all servers

TSM for Space Management UNIX/Linux


Adds the per-file-system logging that TSM/HSM was missing
- Use it to analyze the current state of the system via logging of recalls & migrations
- Use it to optimize HSM usage
- Configured by setting dsm.sys options:
  - HSM daemons need to be restarted to pick up option changes in dsm.sys
  - No logging takes place unless the filter enables one or more entry types

Sample dsm.sys server clause:

  HSMLOGNAME /tmp/hsm.log
  HSMLOGMAX 10
  HSMLOGEVENTFLAGS SYSTEM FILE FS
  HSMLOGSAMPLEINTERVAL 3600   (statistics reported once an hour)

Sample results:

  07/06/2011 20:59:18 File system statistics node: PIRATES pid: 9444 file system: /gpfs1 state: active migrated bytes: 29360128 premigrated bytes: 24721408 migrated files: 28 premigrated files: 285 unused inodes: 1901634 free bytes: 1996578095104
  07/06/2011 21:00:55 File selective migrate begin node: PIRATES pid: 11040 file: /gpfs1/dsn1 handle: 099B34284D593EB1-000000000004B403-0000000000000000-0001000200000000
  07/06/2011 21:00:55 File migration end node: PIRATES pid: 11040 file: /gpfs1/dsn1 handle: 099B34284D593EB1-000000000004B403-0000000000000000-0001000200000000 extobjid: 0101020C00000000-9B0928340266606F-01939077F352BF20-866064A4 result: 0 bytes: 1048576 state: migrated
  07/06/11 21:04:06 File transparent recall begin node: PIRATES pid: 14115 handle: 099B34284D593EB1-000000000004B408-0000000000000000-0001000200000000 extobjid: 0101020C00000000-9B0928340E66606F-037F863124302D98-43C22CA4
  07/06/11 21:04:06 File transparent recall end node: PIRATES pid: 14115 handle: 099B34284D593EB1-000000000004B408-0000000000000000-0001000200000000 bytes: 1048576

Thank You

Disclaimers
Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customer experiences described herein are based upon information and opinions provided by the customer. The same results may not be obtained by every user. Reference in this document to IBM products, programs, or services does not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program, that does not infringe IBM's intellectual property rights, may be used instead. It is the user's responsibility to evaluate and verify the operation on any non-IBM product, program or service. THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g. IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. The providing of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:

IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 USA


Trademarks
The following terms are trademarks or registered trademarks of the IBM Corporation in either the United States, other countries or both. AIX AIX 5L BladeCenter Chipkill DB2 DB2 Universal Database DFSMSdss DFSMShsm DFSMSrmm Domino e-business logo Enterprise Storage Server ESCON eServer FICON FlashCopy GDPS Geographically Dispersed Parallel Sysplex HiperSockets i5/OS IBM IBM eServer IBM logo iSeries Lotus ON (button device) On demand business OnForever OpenPower OS/390 OS/400 Parallel Sysplex POWER POWER5 Predictive Failure Analysis pSeries S/390 Seascape ServerProven System z9 System p5 System Storage Tivoli TotalStorage TotalStorage Proven TPF Virtualization Engine X-Architecture xSeries z/OS z/VM zSeries

Linear Tape-Open, LTO, LTO Logo, Ultrium logo, Ultrium 2 Logo and Ultrium 3 logo are trademarks in the United States and other countries of Certance, HewlettPackard, and IBM. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation in the United States and/or other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.

