
Data Domain Operating System User Guide

Software Version 4.5.1

Disclaimer

The information contained in this publication is subject to change without notice. Data Domain, Incorporated makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Data Domain, Incorporated shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Notices

NOTE: Data Domain hardware has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. This Class A digital apparatus complies with Canadian ICES-003. Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense. Changes or modifications not expressly approved by Data Domain can void the user's authority to operate the equipment.

Data Domain Patents

Data Domain products are covered by one or more of the following patents issued to Data Domain: U.S. Patents 6928526, 7007141, 7065619, 7143251, 7305532. Data Domain has other patents pending.

Copyright

Copyright 2005-2008 Data Domain, Incorporated. All rights reserved. Data Domain, the Data Domain logo, Data Domain Operating System, Data Domain OS, Global Compression, Data Invulnerability Architecture, and all other Data Domain product names and slogans are trademarks or registered trademarks of Data Domain, Incorporated in the USA and/or other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Portions of this product are software covered by the GNU General Public License, Copyright 1989, 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Library General Public License, Copyright 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Lesser General Public License, Copyright 1991, 1999 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Free Documentation License, Copyright 2000, 2001, 2002 by Free Software Foundation, Inc. Portions of this product are software Copyright 1999-2003 by The OpenLDAP Foundation. Portions of this product are software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/), Copyright 1998-2005 The OpenSSL Project, all rights reserved. Portions Copyright 1999-2003 Apple Computer, Inc. All rights reserved. Portions of this product are Copyright 1995-1998 Eric Young (eay@cryptsoft.com). All rights reserved. Portions of this product are Copyright Ian F. Darwin 1986-1995. All rights reserved. Portions of this product are Copyright Mark Lord 1994-2004. All rights reserved. Portions of this product are Copyright 1989-1997 Larry Wall. All rights reserved. Portions of this product are Copyright Mike Glover 1995, 1996, 1997, 1998, 1999. All rights reserved. Portions of this product are Copyright 1992 by Panagiotis Tsirigotis. All rights reserved. Portions of this product are Copyright 2000-2002 Japan Network Information Center. All rights reserved. Portions of this product are Copyright 1988-2003 by Bram Moolenaar. All rights reserved. Portions of this product are Copyright 1994-2006 Lua.org, PUC-Rio. Portions of this product are Copyright 1990-2005 Info-ZIP. All rights reserved. Portions of this product are under the Boost Software License, Version 1.0, August 17th, 2003. All rights reserved. Portions of this product are Copyright 1994 Purdue Research Foundation. All rights reserved. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). Portions of this product are Berkeley Software Distribution software, Copyright 1988-2004 by the Regents of the University of California, University of California, Berkeley. Portions of this product are software Copyright 1990-1999 by Sleepycat Software. Portions of this product are software Copyright 1985-2004 by the Massachusetts Institute of Technology. All rights reserved. Portions of this product are Copyright 1999, 2000, 2001, 2002 The Board of Trustees of the University of Illinois. All rights reserved. Portions of this product are LILO program code, Copyright 1992-1998 Werner Almesberger. All rights reserved. Portions of this product are software Copyright 1999-2004 The Apache Software Foundation, licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0). Portions of this product are derived from software Copyright 1994-2002 by Cold Spring Harbor Laboratory. Funded under Grant P41-RR02188 by the National Institutes of Health. Portions of this product are derived from software Copyright 1996-2002 by Boutell.Com, Inc. Portions of this product relating to GD2 format are derived from software Copyright 1999, 2000, 2001, 2002 Philip Warner. Portions of this product relating to PNG are derived from software Copyright 1999, 2000, 2001, 2002 Greg Roelofs. Portions of this product relating to gdttf.c are derived from software Copyright 1999, 2000, 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to gdft.c are derived from software Copyright 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to JPEG and to color quantization are derived from software Copyright 2000, 2001, 2002 Doug Becker and Copyright 1994-2002 Thomas G. Lane. This software is based in part on the work of the Independent JPEG Group. Portions of this product relating to WBMP are derived from software Copyright 2000, 2001, 2002 Maurice Szmurlo and Johan Van den Brande. Portions of this product are Apache Tomcat version 5.5.23 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Apache log4j version 1.2.14 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Google Web Toolkit version 1.3.3 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Java Runtime Environment version 6u1, Copyright 2008 Sun Microsystems, Inc.

Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.

Data Domain, Incorporated
2421 Mission College Blvd.
Santa Clara, CA 95054-1214 USA
Phone: 408-980-4800 (direct), 877-207-3282 (toll-free)
Fax: 408-980-8620
www.datadomain.com

Data Domain Software Release 4.5.1
April 28, 2008
Part number: 760-0405-0100 Rev. A

Contents

About This Guide . . . xxiii
    High-Level Table of Contents . . . xxiii
    Descriptions of Chapters . . . xxiv
    Conventions . . . xxvi
    Audience . . . xxvi
    Contacting Data Domain . . . xxvii

SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf . . . 1

Chapter 1: Introduction . . . 3
    Documents . . . 4
    Applications that Send Data to a Data Domain System . . . 4
    Data Domain System Models . . . 4
    Data Streams Sent to a Data Domain system . . . 5
    Data Integrity . . . 5
    Data Compression . . . 6
    Restore Operations . . . 7
    Data Domain Replicator . . . 7
    Data Domain System Hardware Interfaces . . . 7
    Licensing . . . 8
    User Interfaces . . . 8
    Multipath . . . 8
    Related Documentation . . . 8
    Initial system Settings . . . 9

    Command Line Interface . . . 10

Chapter 2: Installation . . . 13
    Backup Software Requirements . . . 14
    CIFS Backup Server Timeout . . . 14
    Login and Configuration . . . 14
    Additional Configuration . . . 27
    Administering a Data Domain System . . . 27
        Command Line Interface . . . 28
        Data Domain Enterprise Manager . . . 28

Chapter 3: ES20 Expansion Shelf . . . 31
    RAID groups . . . 32
    Disk Failures . . . 32
    Add a Shelf . . . 32
    Disk Commands . . . 35
        Look for New Disks, LUNs, and Expansion Shelves . . . 36
        Add an Expansion Shelf . . . 36
        Display Disk Status . . . 37
    Shelf (enclosure) Commands . . . 37
        List Enclosures . . . 37
        Identify an Enclosure . . . 38
        Display Fan Status . . . 39
        Display Component Temperatures . . . 39
        Display Port Connections . . . 40
        Display All Hardware Status . . . 41
        Display Power Supply Status . . . 41
        Display HBA Information . . . 41
        Display Statistics . . . 42
        Display Target Storage Information . . . 42
        Display the Layout of SAS Enclosures . . . 42

        Component Relationship and Commands to show it . . . 45
    Volume Expansion . . . 45
        Procedure: Create RAID group on new shelf that has lost disks . . . 45
    RAID Groups, Failed Disks, and Enclosures . . . 46

Chapter 4: Gateway systems . . . 49
    Gateway Types . . . 51
        DD4xxg and DD5xxg series Gateways . . . 51
        DD6xxg Gateways . . . 51
    Commands not valid for Gateway . . . 51
    Commands for Gateway only . . . 51
    Disk Commands at LUN level . . . 52
    Installation . . . 54
        Installation Procedure on DD4xxg and DD5xxg Gateways . . . 54
        Installation Procedure on DD6xxg Gateways . . . 56
    Procedure: Adding a LUN . . . 57

SECTION 2: Configuration - System Hardware, Users, Network, and Services . . . 61

Chapter 5: System Maintenance . . . 63
    The system Command . . . 63
        Shut down the Data Domain System Hardware . . . 63
        Reboot the Data Domain System . . . 64
        Upgrade the Data Domain System Software . . . 64
            To upgrade using HTTP . . . 65
            To upgrade using FTP . . . 65
        Set the Date and Time . . . 66
        Restore system configuration after a head unit replacement (with DD690/DD690G) . . . 66
            Procedure to Swap Filesystems: . . . 67
            Upgrading DD690 and DD690g . . . 69
        Create a Login Banner . . . 70

        Reset the Login Banner . . . 70
        Display the Login Banner Location . . . 70
        Display the Ports . . . 70
        Display the Data Domain System Serial Number . . . 71
        Display system Uptime . . . 71
        Display system Statistics . . . 72
        Display Detailed system Statistics . . . 73
        Display system Statistics Graphically . . . 75
        Display system Status . . . 76
        Display Data Transfer Performance . . . 78
        Display the Date and Time . . . 78
        Display NVRAM Status . . . 79
        Display the Data Domain System Model Number . . . 79
        Display Hardware . . . 80
        Display Memory . . . 80
        Display the Data Domain OS Version . . . 81
        Display All system Information . . . 81
    The alias Command . . . 81
        Add an Alias . . . 82
        Remove an Alias . . . 82
        Reset Aliases . . . 82
        Display Aliases . . . 82
    Time Servers and the NTP Command . . . 83
        Enable NTP Service . . . 83
        Disable NTP Service . . . 83
        Add a Time Server . . . 84
        Delete a Time Server . . . 84
        Reset the List . . . 84
        Reset All NTP Settings . . . 84
        Display NTP Status . . . 84
        Display NTP Settings . . . 85

Chapter 6: Network Management . . . 87
    Ethernet Failover and Net Aggregation - Considerations . . . 87
        Supported Pairs . . . 89
    Ethernet Failover - Set Up Failover Between Ethernet Interfaces . . . 90
        Set up Failover . . . 90
        Remove a Physical Interface from a Failover Virtual Interface . . . 90
        Display Failover Virtual Interfaces . . . 90
        Delete a Virtual Failover Interface . . . 91
        Sample Failover Workflow . . . 91
    Net Aggregation/Ethernet Trunking . . . 92
        Set up link aggregation between Ethernet interfaces . . . 92
        Remove selected physical interfaces from an aggregate virtual interface . . . 93
        Display basic information on the aggregate setup . . . 93
        Remove all physical interfaces from an aggregate virtual interface . . . 94
        Sample Aggregation Workflow . . . 94
    The net Command . . . 95
        Enable an Interface . . . 95
        Disable an Interface . . . 95
        Enable DHCP . . . 96
        Disable DHCP . . . 96
        Change an Interface Netmask . . . 96
        Change an Interface Transfer Unit Size . . . 97
        Add or Change DNS servers . . . 97
        Ping a Host . . . 97
        Change the Data Domain System Hostname . . . 98
        Change an Interface IP Address . . . 98
        Change the Domain Name . . . 98
        Add a Hostname/IP Address to the /etc/hosts File . . . 99
        Reset Network Parameters . . . 99
        Set Interface Duplex Line Use . . . 99
        Set Interface Line Speed . . . 99
        Set Autonegotiate for an Interface . . . 100
        Delete a Hostname/IP address from the /etc/hosts File . . . 100
        Delete All Hostname/IP addresses from the /etc/hosts File . . . 100
        Display Hostname/IP addresses from the /etc/hosts File . . . 100
        Display an Ethernet Interface Configuration . . . 101
        Display Interface Settings . . . 101
        Display Ethernet Hardware Information . . . 102
        Display the Data Domain System Hostname . . . 103
        Display the Domain Name Used for Email . . . 103
        Display DNS Servers . . . 103
        Display Network Statistics . . . 104
        Display All Networking Information . . . 104
    The route Command . . . 105
        Add a Routing Rule . . . 105
        Remove a Routing Rule . . . 105
        Change the Routing Default Gateway . . . 106
        Reset the Default Routing Gateway . . . 106
        Display a Route . . . 106
        Display the Configured Static Routes . . . 106
        Display the Kernel IP Routing Table . . . 107
        Display the Default Routing Gateway . . . 107
    Multiple Network Interface Usability Improvement . . . 108

Chapter 7: Access Control for Administration . . . 109
    Add a Host . . . 109
    Remove a Host . . . 110
    Allow Access from Windows . . . 110
    Restrict Administrative Access from Windows . . . 110
    Reset Windows Administrative Access to the Default . . . 110
    Enable a Protocol . . . 111
    Disable a Protocol . . . 111
    Reset system Access . . . 111
    Add an Authorized SSH Public Key . . . 112
    Remove an SSH Key File Entry . . . 112
    Remove the SSH Key File . . . 112
    Create a New HTTPS Certificate . . . 113
    Display the SSH Key File . . . 113
    Display Hosts and Status . . . 113
    Display Windows Access Setting . . . 114
    Procedure: Return Command Output to a Remote machine . . . 114

Chapter 8: User Administration . . . 115
    Add a User . . . 115
    Remove a User . . . 115
    Change a Password . . . 116
    Reset to the Default User . . . 116
    Change a Privilege Level . . . 116
    Display Current Users . . . 117
    Display All Users . . . 118

Chapter 9: Configuration Management . . . 119
    The config Command . . . 119
        Change Configuration Settings . . . 119
        Save and Return a Configuration . . . 120
        Reset the Location Description . . . 121
        Reset the Mail Server to a Null Entry . . . 121
        Reset the Time Zone to the Default . . . 121
        Set an Administrative Email Address . . . 121
        Set an Administrative Host Name . . . 122
        Change the system Location Description . . . 122

        Change the Mail Server Hostname . . . 122
        Set a Time Zone for the system Clock . . . 123
        Display the Administrative Email Address . . . 123
        Display the Administrative Host Name . . . 123
        Display the system Location Description . . . 124
        Display the Mail Server Hostname . . . 124
        Display the Time Zone for the system Clock . . . 124
    The license Command . . . 124
        Add a License . . . 124
        Display Licenses . . . 125
        Remove All Feature Licenses . . . 126
        Remove a License . . . 126

SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files . . . 127

Chapter 10: Alerts and System Reports . . . 129
    Alerts . . . 130
        Add to the Email List . . . 130
        Test the Email List . . . 130
        Remove from the Email List . . . 130
        Reset the Email List . . . 131
        Display Current Alerts . . . 131
        Display the Alerts History . . . 132
        Display the Email List . . . 132
        Display Current Alerts and Recent History . . . 133
        Display the Email List and Administrator Email . . . 133
    Autosupport Reports . . . 134
        Add to the Email List . . . 134
        Test the Autosupport Report Email List . . . 134
        Send an Autosupport Report . . . 135
        Remove from the Email List . . . 135
        Reset the Email List . . . 135
        Run the Autosupport Report . . . 135
        Email Command Output . . . 136
        Set the Schedule . . . 136
        Reset the Schedule . . . 137
        Reset the Schedule and the List . . . 137
        Display all Autosupport Parameters . . . 137
        Display the Autosupport Email List . . . 137
        Display the Autosupport Report Schedule . . . 138
        Display the Autosupport History . . . 138
    Hourly system Status . . . 138
    Collect and Send Log Files . . . 139

Chapter 11: SNMP Management and Monitoring . . . 141
    Enable SNMP . . . 142
    Disable SNMP . . . 142
    Set the system Location . . . 142
    Reset the system Location . . . 142
    Set a system Contact . . . 142
    Reset a system Contact . . . 143
    Add a Trap Host . . . 143
    Delete a Trap Host . . . 143
    Delete All Trap Hosts . . . 143
    Add a Community String . . . 144
    Delete a Community String . . . 144
    Delete All Community Strings . . . 144
    Reset All SNMP Values . . . 144
    Display SNMP Agent status . . . 145
    Display Trap Hosts . . . 145
    Display All Parameters . . . 145
    Display the system Contact . . . 146
    Display the system Location . . . 146
    Display Community Strings . . . 146
    Display the MIB and Traps . . . 147
    More about the MIB . . . 147
        What is a MIB? . . . 147
        MIB Browser . . . 147
        Entire MIB Tree . . . 147
        Top-Level Organization of the MIB: . . . 150
        Mid-Level Organization of the MIB: . . . 151
        The MIB (Current Alerts Section) in Text Form . . . 151
        Entries in the MIB . . . 153
        Important Areas of the MIB . . . 153
            Alerts (.1.3.6.1.4.1.19746.1.4) . . . 154
            Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) . . . 154
            Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) . . . 161
            Replication (.1.3.6.1.4.1.19746.1.8) . . . 162

Chapter 12: Log File Management . . . 165
    Scroll New Log Entries . . . 165
    Send Log Messages to Another system . . . 165
        Add a Host . . . 166
        Remove a Host . . . 166
        Enable Sending Log Messages . . . 166
        Disable Sending Log Messages . . . 166
        Reset to Default . . . 166
        Display the List and State . . . 167
    Display a Log File . . . 167
    List Log Files . . . 168
    Procedure: Understand a Log Message . . . 169
    Procedure: Archive Log Files . . . 170

SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath . . . 171

Chapter 13: Disk Management . . . 173
    Expand from 9 disks to 15 disks . . . 174
    Add a LUN . . . 174
    Fail a Disk . . . 174
    Unfail a Disk . . . 175
    Look for New Disks, LUNs, and Expansion Shelves . . . 175
    Identify a Physical Disk . . . 175
    Add an Expansion Shelf . . . 175
    Reset Disk Performance Statistics . . . 176
    Display Disk Status . . . 176
        Output Format . . . 176
        Output Examples . . . 177
    Display Disk Type and Capacity Information . . . 178
    Display RAID Status for Disks . . . 180
    Display the History of Disk Failures . . . 181
    Display Detailed RAID Information . . . 181
    Display Disk Performance Details . . . 183
    Display Disk Reliability Details . . . 185

Chapter 14: Disk Space and System Monitoring . . . 187
    Space Management . . . 187
    Estimate Use of Disk Space . . . 188
    Manage File system Use of Disk Space . . . 189
    Display the Space Graph . . . 190
    Reclaim Data Storage Disk Space . . . 190
    Maximum Number of Files and Other Limitations . . . 191
        Number of Files . . . 191
        Inode Reporting . . . 191

        Path Name Length . . . 191
        Directory Size for Directory Replication . . . 192
    When a Data Domain System is Full . . . 192

Chapter 15: Multipath . . . 193
    Multipath Commands for Gateway only . . . 193
        Suspend or Resume a Port Connection (Gateway only) . . . 193
        Enable Auto-Failback (Gateway only) . . . 194
        Disable Auto-Failback (Gateway only) . . . 194
        Reset Auto-Failback to its Default of enabled (Gateway only) . . . 194
        Go back to using the optimal path (Gateway only) . . . 194
        Allow I/O on a specified initiator port (Gateway only) . . . 194
        Disallow I/O on a specified initiator port (Gateway only) . . . 195
    Multipath Commands for all systems . . . 195
        Display Port Connections . . . 195
        Enable Monitoring of Multipath Configuration . . . 196
        Disable Monitoring of Multipath Configuration . . . 196
        Show Monitoring of Multipath Configuration . . . 196
        Show Multipath Status . . . 197
        Show Multipath History . . . 198
        Show Multipath Statistics . . . 199
        Clear Multipath Statistics . . . 200

SECTION 5: File System and Data Protection . . . 201

Chapter 16: Data Layout Recommendations . . . 203
    Introduction . . . 203
    Issue . . . 203
    Background . . . 203
        Reporting on compression . . . 204
        Considerations . . . 205

            NFS issues . . . 207
                Filesystem organizations . . . 207
                Mount options . . . 207
            CIFS issues . . . 208
            VTL issues . . . 208
            OST issues . . . 209
    Archive implications . . . 209
    Very large environments . . . 210
    IMPORTANT NOTE! . . . 210
    Summary . . . 210
    Additional Notes on the Filesys Show Compression command . . . 210

Chapter 17: File System Management . . . 213
    The filesys command . . . 213
    Statistics and Basic Operations . . . 213
        Start the Data Domain System File system Process . . . 213
        Stop the Data Domain System File system Process . . . 214
        Stop and Start the Data Domain System File system . . . 214
        Delete All Data in the File system . . . 214
        Fastcopy . . . 215
        Display File system Space Utilization . . . 215
        Display File system Status . . . 217
        Display File system Uptime . . . 217
        Display Compression - For Files . . . 217
        Display Compression - Summary . . . 218
        Display Compression - Daily . . . 219
    Clean Operations . . . 220
        Start Cleaning . . . 221
        Stop Cleaning . . . 222
        Change the Schedule . . . 222
        Set the Schedule or Throttle to the Default . . . 223
        Set Network Bandwidth Used . . . 223
        Update Statistics . . . 223
        Display All Clean Parameters . . . 223
        Display the Schedule . . . 224
        Display the Throttle Setting . . . 224
        Display the Clean Operation Status . . . 224
        Monitor the Clean Operation . . . 225
    Compression Options . . . 225
        Local Compression . . . 225
            Set Local Compression . . . 225
            Reset Local Compression . . . 226
            Display the Algorithm . . . 226
        Global Compression . . . 226
            Set Global Compression . . . 226
            Reset Global Compression . . . 227
            Display the Type . . . 227
    Replicator Destination Read/Write Option . . . 227
        Report as Read/Write . . . 227
        Report as Read-Only . . . 228
        Return to the Default Read-Only Setting . . . 228
        Display the Setting . . . 228
    Tape Marker Handling . . . 228
        Set a Marker Type . . . 228
        Reset to the Default . . . 229
        Display the Marker Setting . . . 229

Chapter 18: Snapshots . . . 231
    Create a Snapshot . . . 231
    List Snapshots . . . 232
    Set a Snapshot Retention Time . . . 232
    Expire a Snapshot . . . 233

Rename a Snapshot . . . . . 233
Snapshot Scheduling . . . . . 233
Add a Snapshot Schedule . . . . . 234
Syntax . . . . . 234
Further Examples: . . . . . 235
Modify a Snapshot Schedule . . . . . 237
Remove All Snapshot Schedules . . . . . 238
Display a Snapshot Schedule . . . . . 238
Display all Snapshot Schedules . . . . . 238
Delete a Snapshot Schedule . . . . . 238
Delete all Snapshot Schedules . . . . . 238
Chapter 19: Retention Lock . . . . . 241
The Retention Lock Feature . . . . . 241
Enable the Retention Lock Feature . . . . . 242
Disable the Retention Lock Feature . . . . . 242
Set the Minimum and Maximum Retention Periods . . . . . 242
Reset the Minimum and Maximum Retention Periods . . . . . 243
Show the Minimum and Maximum Retention Periods . . . . . 243
Reset Retention Lock for Files on a Specified Path . . . . . 243
Show Retention Lock Status . . . . . 243
Client-Side Retention Lock File Control . . . . . 244
Create Retention-Locked File and Set Retention Date . . . . . 244
Extend Retention Date: . . . . . 244
Identify Retention-Locked Files and List Retention Date: . . . . . 245
Delete an Expired Retention-Locked File: . . . . . 245
Retention Lock Sample Procedure: . . . . . 245
Notes on Retention Lock: . . . . . 247
Retention Lock and Replication . . . . . 247
Retention Lock and Fastcopy . . . . . 247
Retention Lock and Filesys Destroy . . . . . 247

Chapter 20: Replication - CLI . . . . . 249
Collection Replication . . . . . 249
Directory Replication . . . . . 249
Using Context . . . . . 250
Configure Replicator . . . . . 251
Replicating VTL Tape Cartridges and Pools . . . . . 252
Start Replication . . . . . 253
Suspend Replication . . . . . 254
Resume Replication . . . . . 254
Remove Replication . . . . . 255
Reset Authentication between the Data Domain Systems . . . . . 255
Move Data to a New Source . . . . . 256
Recover from an aborted recovery . . . . . 256
Resynchronize Source and Destination . . . . . 256
Convert from Collection to Directory Replication . . . . . 257
Abort a Resync . . . . . 257
Change a Source or Destination Hostname . . . . . 257
Connect with a Network Name . . . . . 258
Change a Destination Port . . . . . 258
Change the Port on a Destination . . . . . 259
Throttling . . . . . 259
Add a Scheduled Throttle Event . . . . . 259
Set a Temporary Throttle Rate . . . . . 260
Delete a Scheduled Throttle Event . . . . . 260
Set an Override Throttle Rate . . . . . 261
Reset Throttle Settings . . . . . 262
Throttle Reset Options . . . . . 262
TOE versus Throttling: . . . . . 262
Scripted Cascaded Directory Replication . . . . . 263
Procedure: Set Replication Bandwidth and Network Delay . . . . . 263

Display Bandwidth and Delay Settings . . . . . 264
Display Replicator Configuration . . . . . 264
Display Replication History . . . . . 266
Display Performance . . . . . 267
Display Throttle settings . . . . . 267
Display Replication Complete for Current Data . . . . . 268
Display Initialization, Resync, or Recovery Progress . . . . . 268
Display Status . . . . . 268
Display Statistics . . . . . 270
Actual example of show stats all: . . . . . 271
Hostname Shorthand . . . . . 272
Procedure: Set Up and Start Directory Replication . . . . . 273
Procedure: Set Up and Start Collection Replication . . . . . 274
Procedure: Set Up and Start Bidirectional Replication . . . . . 274
Procedure: Set Up and Start Many-to-One Replication . . . . . 275
Procedure: Replace a Directory Source - New Name . . . . . 275
Procedure: Replace a Collection Source - Same Name . . . . . 276
Procedure: Recover from a Full Replication Destination . . . . . 277
Procedure: Convert from Collection to Directory . . . . . 277
Procedure: Seeding . . . . . 278
One-to-One . . . . . 279
Bidirectional . . . . . 282
Many-to-One . . . . . 287
Migration . . . . . 291
Set Up the Migration Destination . . . . . 292
Start Migration from the Source . . . . . 292
Create an End Point for Data Migration . . . . . 294
Display Migration Progress . . . . . 294
Stop the Migration Process . . . . . 294
Display Migration Statistics . . . . . 295

Display Migration Status . . . . . 295
Procedure: Migrate between Source and Destination . . . . . 296
Procedure: Migrate with Replication . . . . . 296
SECTION 6: Data Access Protocols . . . . . 299
Chapter 21: NFS Management . . . . . 301
Quicker Start Guide for NFS . . . . . 301
Shorthand steps: . . . . . 301
Add NFS Clients . . . . . 303
Remove Clients . . . . . 304
Enable Clients . . . . . 304
Disable Clients . . . . . 304
Reset Clients to the Default . . . . . 305
Clear the NFS Statistics . . . . . 305
Display Active Clients . . . . . 305
Display Allowed Clients . . . . . 305
Display Statistics . . . . . 306
Display Detailed Statistics . . . . . 307
Display Status . . . . . 307
Display Timing for NFS Operations . . . . . 308
Chapter 22: CIFS Management . . . . . 309
CIFS Access . . . . . 309
Add a User . . . . . 310
Add a Client . . . . . 311
Secured LDAP with Transport Layer Security (TLS) . . . . . 311
CIFS Commands . . . . . 312
Enable Client Connections . . . . . 312
Disable Client Connections . . . . . 312
Remove a Backup Client . . . . . 312


Remove an Administrative Client . . . . . 313
Remove All CIFS Clients . . . . . 313
Set a NetBIOS Hostname . . . . . 313
Remove the NetBIOS Hostname . . . . . 313
Create a Share on the Data Domain System . . . . . 313
Delete a share . . . . . 315
Enable a Share . . . . . 315
Disable a Share . . . . . 315
Modify a Share . . . . . 315
Set the Authentication Mode . . . . . 316
Remove an Authentication Mode . . . . . 317
Add an IP Address/NetBIOS hostname Mapping . . . . . 317
Remove All IP Address/NetBIOS hostname Mappings . . . . . 317
Remove an IP Address/NetBIOS hostname Mapping . . . . . 317
Resolve a NetBIOS Name . . . . . 318
Identify a WINS server . . . . . 318
Remove the WINS server . . . . . 318
Set Authentication to the Active Directory Mode . . . . . 318
Set CIFS Options . . . . . 319
Set Organizational Unit . . . . . 319
Allow Trusted Domain Users . . . . . 319
Allow Administrative Access for a Windows Domain Group . . . . . 320
Set CIFS Logging Levels . . . . . 320
Increase Memory to Allow More User Accounts . . . . . 320
Set the Maximum Transmission Size . . . . . 321
Control Anonymous User Connections . . . . . 321
Increase Memory for SMBD Operations . . . . . 321
Allow Certificate Authority Security . . . . . 321
Reset CIFS Options . . . . . 321
Display CIFS Options . . . . . 322

Display . . . . . 322
Display CIFS Statistics . . . . . 322
Display Active Clients . . . . . 322
Display All Clients . . . . . 323
Display the CIFS Configuration . . . . . 323
Display Detailed CIFS Statistics . . . . . 324
Display All IP Address/NetBIOS hostname Mappings . . . . . 324
Display CIFS Users . . . . . 324
Display CIFS Status . . . . . 325
Display Shares . . . . . 325
Display CIFS Groups . . . . . 325
Display CIFS User Details . . . . . 325
Display CIFS Group Details . . . . . 326
Procedure: Time Servers and Active Directory Mode . . . . . 326
Synchronizing from a Windows Domain Controller . . . . . 326
Synchronizing from an NTP Server . . . . . 327
Procedure: Add a Share on the CIFS Client . . . . . 327
Adding a Share on a UNIX CIFS Client . . . . . 327
Adding a Share on a Windows CIFS Client (MMC) . . . . . 328
File Security With ACLs (Access Control Lists) . . . . . 336
How to set ACL Permissions/Security . . . . . 337
Granular and complex permissions (DACL) . . . . . 337
Audit ACL (SACL) . . . . . 338
Owner SID . . . . . 339
ntfs-acls and idmap-type . . . . . 340
Procedure to Turn on ACLs: . . . . . 341
If this is a new installation: . . . . . 341
If this is an existing installation, with pre-existing CIFS data residing on the system: . . . . . 341
Chapter 23: Open STorage (OST) . . . . . 343

Overview: steps to enable OST on the DDR . . . . . 344
Add the OST license . . . . . 344
Add the ost user - set the ost user to user-name . . . . . 345
Reset the ost user back to the default (no user set) . . . . . 345
Display the current ost user . . . . . 345
Enable the OST feature . . . . . 345
Disable the OST feature . . . . . 346
Show the current status (enabled or disabled) for ost . . . . . 346
Create an LSU (logical storage unit) with the given LSU-name . . . . . 346
Delete an LSU . . . . . 346
Delete all images and LSUs on the Data Domain system . . . . . 347
Display LSU / or all the LSUs on the Data Domain system . . . . . 347
Show ost statistics for the Data Domain system . . . . . 349
Show ost statistics for the Data Domain system over an interval . . . . . 350
Display an ost histogram for the Data Domain system . . . . . 351
Clear all ost statistics . . . . . 352
Display ost connections . . . . . 352
Display statistics on active optimized duplication operations . . . . . 352
Sample workflow sequence: . . . . . 353
Chapter 24: Virtual Tape Library (VTL) - CLI . . . . . 357
Compatibility Matrix . . . . . 359
Enable VTLs . . . . . 359
Create a VTL . . . . . 359
Delete a VTL . . . . . 360
Disable VTLs . . . . . 360
Broadcast new VTLs and VTL Changes . . . . . 360
Create New Drives . . . . . 361
Remove Drives . . . . . 361
Use a Changer . . . . . 361
Display a Summary of All Tapes . . . . . 362

Create New Tapes . . . . . 363
Import Tapes . . . . . 364
Examples of importing: . . . . . 365
Export Tapes . . . . . 366
Remove Tapes . . . . . 366
Move Tape . . . . . 367
Search Tapes . . . . . 367
Set a Private-Loop Hard Address . . . . . 367
Reset a Private-Loop Hard Address . . . . . 368
Enable Auto-Eject . . . . . 368
Enable Auto-Offline . . . . . 368
Disable Auto-Eject . . . . . 368
Disable Auto-Offline . . . . . 368
Display the Auto-Offline Setting . . . . . 369
Display the Private-Loop Hard Address Setting . . . . . 369
Display VTL Status . . . . . 369
Display VTL Configurations . . . . . 369
Display All Tapes . . . . . 370
Display Tapes by VTL . . . . . 371
Display All Tapes in the Vault . . . . . 372
Display Tapes by Pools . . . . . 372
Display VTL Statistics . . . . . 373
Display Tapes using sorting and wildcard . . . . . 374
Procedure: Manually Export a Tape . . . . . 375
Procedure: Retrieve a Replicated Tape from a Destination . . . . . 376
Access Groups (for VTL Only) . . . . . 377
The vtl group Command (Access Group) . . . . . 378
Create an Access Group . . . . . 379
Remove an Access Group . . . . . 379
Rename an Access Group . . . . . 379

Add to an Access Group . . . . . 380
Delete from an Access Group . . . . . 381
Modify an Access Group . . . . . 381
Display Access Group information . . . . . 382
Switch Virtual Devices between Primary & Secondary Port List . . . . . 383
Procedure: Create an Access Group . . . . . 384
The vtl initiator Command . . . . . 385
Add an Initiator (= add WWPN = set alias) . . . . . 386
Delete an Initiator (reset alias) . . . . . 386
Display Initiators . . . . . 387
Pools . . . . . 387
Add a Pool . . . . . 388
Delete a Pool . . . . . 388
Display Pools . . . . . 388
The vtl port Command . . . . . 388
Enable HBA ports . . . . . 389
Disable HBA ports . . . . . 389
Show VTL information in per-port format . . . . . 389
Chapter 25: Backup/Restore Using NDMP . . . . . 393
Add a Filer . . . . . 393
Remove a Filer . . . . . 393
Backup from a Filer . . . . . 394
Restore to a Filer . . . . . 394
Remove Filer Passwords . . . . . 395
Stop an NDMP Process . . . . . 395
Stop All NDMP Processes . . . . . 395
Check for a Filer . . . . . 396
Display Known Filers . . . . . 396
Display NDMP Process Status . . . . . 396


SECTION 7: GUI - Graphical User Interface . . . . . 397
Chapter 26: Enterprise Manager . . . . . 399
Graphical User Interface . . . . . 399
Display the Space Graph . . . . . 402
Monitor Multiple Data Domain Systems . . . . . 405
Chapter 27: Virtual Tape Library (VTL) - GUI . . . . . 409
Virtual Tape Libraries . . . . . 410
Enable VTLs . . . . . 410
Disable VTLs . . . . . 411
Create a VTL . . . . . 411
Delete a VTL . . . . . 411
VTL Drives . . . . . 412
Create New Drives . . . . . 413
Remove Drives . . . . . 413
Use a Changer . . . . . 414
Display a Summary of All Tapes . . . . . 414
Create New Tapes . . . . . 415
Import Tapes . . . . . 416
Export Tapes . . . . . 418
Remove Tapes . . . . . 419
Move Tape . . . . . 420
Search Tapes . . . . . 420
Set Option/Reset Option (loop-id and auto-eject) . . . . . 421
Display VTL Status . . . . . 422
Display All Tapes . . . . . 423
Display Summary Information About Tapes in a VTL . . . . . 423
Display Summary Information About the Tapes in a Vault . . . . . 424
Display All Tapes in a Vault . . . . . 424
Access Groups (for VTL Only) . . . . . 425


Create an Access Group . . . . . 426
Remove an Access Group . . . . . 426
Rename an Access Group . . . . . 426
Add to an Access Group . . . . . 426
Delete from an Access Group . . . . . 427
Modify an Access Group . . . . . 428
Display Access Group information . . . . . 428
UPGRADE NOTE: . . . . . 429
Switch Virtual Devices between Primary & Secondary Port List . . . . . 429
Procedure: Use a VTL Library / Use an Access Group . . . . . 430
Physical Resources . . . . . 432
Initiators . . . . . 432
Add an Initiator (= Add WWPN = Set Initiator Alias) . . . . . 432
Change an Existing Initiator Alias . . . . . 433
Delete an Initiator (Reset Initiator Alias) . . . . . 433
Display Initiators . . . . . 433
Add an Initiator to an Access Group . . . . . 434
Remove an Initiator from an Access Group . . . . . 434
HBA Ports . . . . . 434
Enable HBA ports . . . . . 434
Disable HBA ports . . . . . 434
Show VTL information on all ports . . . . . 435
Show more detailed information on all ports . . . . . 435
Show very detailed information on a single port . . . . . 436
Pools . . . . . 437
Add a Pool . . . . . 438
Delete a Pool . . . . . 438
Display Pools . . . . . 438
Display Summary Information about a Single Pool . . . . . 439
Display All Tapes in a Pool . . . . . 439

Chapter 28: Replication - GUI . . . . . 441
Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes . . . . . 444
Pre-Compression and Post-Compression Data . . . . . 445
Configuration . . . . . 446
Throttle Settings . . . . . 446
Bandwidth . . . . . 446
Network Delay . . . . . 447
Listen Port . . . . . 447
Status . . . . . 447
Current State . . . . . 447
Synchronized as of . . . . . 448
Backup Replication Tracker . . . . . 448
General Configuration . . . . . 449
Appendix A: Time Zones . . . . . 451
Index . . . . . 459


About This Guide

This guide explains the use of Data Domain systems. A high-level Table of Contents is shown, followed by descriptions of the individual chapters, conventions, audience, and contact information.

High-Level Table of Contents


About This Guide
SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf.
1. Introduction
2. Installation
3. ES20 Expansion Shelf
4. Gateway systems
SECTION 2: Configuration - System Hardware, Users, Network, and Services.
5. System Maintenance
6. Network Management
7. Access Control for Administration
8. User Administration
9. Configuration Management
SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files.
10. Alerts and System Reports
11. SNMP Management and Monitoring
12. Log File Management
SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath.
13. Disk Management
14. Disk Space and System Monitoring
15. Multipath
SECTION 5: File System and Data Protection
16. Data Layout Recommendations
17. File System Management
18. Snapshots
19. Retention Lock
20. Replication - CLI
SECTION 6: Data Access Protocols
21. NFS Management
22. CIFS Management
23. Open STorage (OST)
24. Virtual Tape Library (VTL) - CLI
25. Backup/Restore Using NDMP
SECTION 7: GUI - Graphical User Interface
26. Enterprise Manager
27. Virtual Tape Library (VTL) - GUI
28. Replication - GUI
Appendix A: Time Zones
Index

Descriptions of Chapters
SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf.

The Introduction chapter explains what the Data Domain Systems are and how they work, details features, and gives overviews of configuration tasks, the default configuration, and user interface commands. The Installation chapter gives all configuration steps and information for setting up backup software to use a Data Domain System. The ES20 Expansion Shelf chapter explains how to add and use the Data Domain ES20 disk expansion shelf for increased data storage. The Gateway systems chapter gives installation steps and other information specific to Data Domain Systems that use 3rd-party physical storage disk arrays instead of internal disks or external shelves.

SECTION 2: Configuration - System Hardware, Users, Network, and Services.

The System Maintenance chapter describes how to manage the background maintenance task that continually checks the integrity of backup images, how to connect to time servers, and how to set up alias commands. The Network Management chapter describes how to manage network tasks such as routing rules, the use of DHCP and DNS, and the setting IP addresses. The Access Control for Administration chapter describes how to give HTTP, FTP, TELNET, and SSH access from remote hosts. The User Administration chapter explains how to deal with users and passwords. The Configuration Management chapter describes how to examine and modify configuration parameters.


SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files.


The Alerts and System Reports chapter details messages that the Data Domain Operating system (DDOS) sends when monitoring components and details the daily system report. The SNMP Management and Monitoring chapter details the use of SNMP operations between a Data Domain System and remote machines. The Log File Management chapter explains how to view, archive, and clear the log file.

SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath.

The Disk Management chapter explains how to monitor and manage disks on a Data Domain System. The Disk Space and System Monitoring chapter gives guidelines for managing disk space on Data Domain Systems and for setting up backup servers to get the best performance. The Multipath chapter explains how to use external storage I/O paths for failover and load balancing across paths.

SECTION 5: File System and Data Protection


The Data Layout Recommendations chapter gives recommendations for data layout on Data Domain Systems. The File System Management chapter gives details about file system statistics and capacity. The Snapshots chapter describes how to create and manage read-only copies of the Data Domain file system. The Retention Lock chapter describes how to lock files so that they cannot be changed or deleted. The Replication - CLI chapter details use of the Data Domain Replicator product for replication of data from one Data Domain System to another.

SECTION 6: Data Access Protocols


The NFS Management chapter describes how to deal with NFS clients and status. The CIFS Management chapter details the use of Windows backup servers with a Data Domain System. The Open STorage (OST) chapter explains the use of the OST feature. The Virtual Tape Library (VTL) - CLI chapter explains the use of the Virtual Tape Library feature. The Backup/Restore Using NDMP chapter explains how to do direct backup and restore operations between a Data Domain System and an NDMP-type filer.

SECTION 7: GUI - Graphical User Interface




This set of chapters details the use of all graphical user interface commands and operations. Each chapter has headings that are a task-oriented list of the operations detailed in that chapter. For any task that you want to perform, look in the table of contents for the heading that describes the task.

The Enterprise Manager chapter explains how to use the main GUI. The Virtual Tape Library (VTL) - GUI chapter explains how to use the VTL GUI. The Replication - GUI chapter explains how to use the Replication GUI.

The Appendix lists all time zones around the world.

Conventions
The following table describes the typographic conventions used in this guide.
Typeface         Usage                                             Examples

Monospace        Commands, computer output, directories, files,    Find the log file under /var/log.
                 software elements such as command options,        See the net help page for more information.
                 and parameters

Italic           New terms, book titles, variables, and labels     The name is a path for the device...
                 of boxes and windows as seen on a monitor

Monospace bold   User input; the # symbol indicates a              # config setup
                 command prompt

Symbol   Usage                                                     Examples

#        Administrative user prompt

[]       In a command synopsis, brackets indicate an               log view [filename]
         optional argument

|        In a command synopsis, a vertical bar separates           net dhcp [true | false]
         mutually exclusive arguments

{}       In a command synopsis, curly brackets indicate that       adminhost add {ftp | telnet | ssh}
         one of the exclusive arguments is required
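For example, reading the synopses above: in net dhcp [true | false], the brackets mark the argument as optional and the vertical bar makes the two values mutually exclusive, while adminhost add {ftp | telnet | ssh} requires exactly one of its three arguments. A hypothetical session that satisfies these synopses:

# net dhcp
# net dhcp true
# adminhost add ssh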

Audience
This guide is for system administrators who are familiar with standard backup software packages and with general backup administration.



Contacting Data Domain


For comments or problems with Data Domain products, contact Data Domain Technical Support:

24 hours a day, 7 days a week: 877-207-DATA (3282) (toll free) or 408-980-4900 (direct)
Email: support@datadomain.com

For sales and license information:


877-622-2587
Email: sales@datadomain.com
Fax: 408-980-8620

Data Domain, Incorporated
2421 Mission College Blvd., Santa Clara, CA 95054 USA
Phone: 408-980-4800 (direct), 877-207-3282 (toll-free)
Fax: 408-980-8620



SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf.


Introduction

Data Domain Systems are disk-based recovery appliances. A Data Domain System makes backup data available with the performance and reliability of disks at a cost competitive with tape-based storage. Data integrity is assured with multiple levels of data checking and repair. A Data Domain System works seamlessly with your existing backup software. To a backup server, the Data Domain System appears as a file server supporting NFS or CIFS over Gigabit Ethernet, or as a virtual tape library (VTL) over a Fibre Channel connection. Add a Data Domain System to your site as a disk storage device, as defined by your backup software, or as a tape library. Multiple backup servers can share one Data Domain System, and one Data Domain System can handle multiple simultaneous backup and restore operations. For additional throughput and capacity, you can attach multiple Data Domain Systems to one or more backup servers. Figure 1 shows a Data Domain System in a basic backup configuration.

[Figure 1: A Data Domain System. A backup server receives data from primary storage over Ethernet and writes either to a tape system over SCSI/Fibre Channel or to a Data Domain system over Gigabit Ethernet or Fibre Channel. The Data Domain system provides NFS/CIFS/VTL/OST access, data verification, the Data Domain file system, Global Compression, and RAID management.]


Referring to Figure 1 on page 3, data flows to a Data Domain System through an Ethernet or Fibre Channel connection. Immediately, data verification processes begin that follow the data for as long as it is on the Data Domain System. In the file system, Data Domain OS Global Compression algorithms prepare the data for storage. Data is then sent to the disk RAID subsystem. The algorithms constantly adjust the use of storage as the Data Domain System receives new data from backup servers. Restore operations flow back from storage, through decompression algorithms and verification consistency checks, and then through the Ethernet connection to the backup servers.

Documents
The main Data Domain system guides are the following:

• Data Domain Operating System User Guide
• Data Domain System Hardware User Guide
• Data Domain ES20 Expansion Shelf Hardware Guide
• Data Domain Open Storage (OST) User Guide

Applications that Send Data to a Data Domain System


The Data Domain operating system (Data Domain OS) is designed specifically to accommodate relatively large streams of sequential data from backup software, and is optimized for high throughput, continuous data verification, and high compression. It is also designed to accommodate the large numbers of smaller files typical of nearline storage. Data Domain System performance when storing data from applications that are not specifically backup software is best when:

• Data is sent to the Data Domain System as sequential writes (no overwrites).
• No compression or encryption is used before sending the data to the Data Domain System.
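As an illustration of both points, a backup client can write a tar archive to an NFS export of a Data Domain System; the hostname, export, and paths below are hypothetical:

# mount ddr01:/backup /mnt/ddr
# tar cf /mnt/ddr/sales-full.tar /data/sales

The tar command produces one long sequential write, and because no compression option (such as -z) is used, the data arrives uncompressed and is compressed by the Data Domain OS instead.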

Data Domain System Models


From a high-level viewpoint, the differences between Data Domain systems are the throughput and the amount of data storage capacity. The gateway systems store all data on 3rd-party physical storage disk arrays through a Fibre Channel connection. An expansion shelf increases storage space for, and is managed by, a Data Domain System. See the table Data Domain System Capacities in the Data Domain System Hardware User Guide for the capacities of each Data Domain system model.



Note that some storage capacity is used by Data Domain system internal indexes and other product components. The amount of storage used over time for such components depends on the type of data stored and the sizes of files stored. With two otherwise identical systems, one system may, over time, have room for more or less actual backup data than the other if different data sets are sent to each.

Data Streams Sent to a Data Domain system


Each backup file written to or read from a Data Domain system is seen as a stream. For optimal performance, Data Domain recommends the following limits on streams between Data Domain systems and your backup servers (see the table Data Streams Sent to a Data Domain system):

Data Streams Sent to a Data Domain system

Platforms                     RAM    Total  Maximum     Maximum    Mixed
                                           Write Only  Read Only
DD690, DD690g                 24GB   50     40          50         <= 40 writes and <= 40 reads
DD690                         16GB   40     40          30         <= 40 writes and <= 30 reads
DD580, DD580g                 16GB   30     20          30         <= 16 writes and <= 30 reads
DD565, DD560, DD560g          12GB   20     20          20         <= 16 writes and <= 16 reads
DD565, DD560                  8GB    20     20          16         <= 12 writes and <= 18 reads
DD4xx, DD460g, DD510, DD530   4GB    16     16                     <= 12 writes and <= 4 reads

For example, a DD565 with 12GB of RAM supports up to 20 concurrent streams in total; a mixed load of 12 writes and 8 reads stays within both the total limit and the mixed limits.

Data Integrity
The Data Domain OS Data Invulnerability Architecture protects against data loss from hardware and software failures.



When writing to disk, the Data Domain OS creates and stores self-describing metadata for all data received. After writing the data to disk, the Data Domain OS then creates metadata from the data on the disk and compares it to the original metadata. An append-only write policy guards against overwriting valid data.

After a backup completes, a validation process looks at what was written to disk to see that all file segments are logically correct within the file system and that the data is the same on the disk as it was before being written to disk. In the background, the Online Verify operation continuously checks that data on the disks is still correct and that nothing has changed since the earlier validation process.

Storage in a Data Domain System is set up in a double parity RAID 6 configuration (two parity drives) with a hot spare in 15-disk systems. Eight-disk systems have no hot spare. Each parity stripe has block checksums to ensure that data is correct. The checksums are constantly used during the online verify operation and when data is read from the Data Domain System. With double parity, the system can fix simultaneous errors on up to two disks.

To keep data synchronized during a hardware or power failure, the Data Domain System uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully-charged batteries (the typical state) can retain data for a minimum of 48 hours. When reading data back for a restore operation, the Data Domain OS uses multiple layers of consistency checks to verify that restored data is correct.

Data Compression
The Data Domain OS compression algorithms:

store only unique data. Through Global Compression, a Data Domain System pools redundant data from each backup image. Any duplicated data or repeated patterns from multiple backups are stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system.

are independent of data format. Data can be structured, such as databases, or unstructured, such as text files. Data can be from file systems or raw volumes. All forms are compressed.

Typical compression ratios are 20:1 on average over many weeks assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression. Depending on backup volume, size, retention period, and rate of change, the amount of compression can vary. The best compression happens with backup volume sizes of at least 10 MiB (the base 2 equivalent of MB). See Display File system Space Utilization on page 215 for details on displaying the amount of user data stored and the amount of space available.


Global Compression functions within a single Data Domain System. To take full advantage of multiple Data Domain Systems, a site that has more than one Data Domain System should consistently back up the same client system or set of data to the same Data Domain System. For example, if a full backup of all sales data goes to Data Domain System A, the incremental backups and future full backups for sales data should also go to Data Domain System A.

Restore Operations
With disk backup through the Data Domain System, incremental backups are always reliable and access time for files is measured in milliseconds. Furthermore, with a Data Domain System, you can perform full backups more frequently without the penalty of storing redundant data. With tape backups, a restore operation may rely on multiple tapes holding incremental backups. Unfortunately, the more incremental backups a site has on multiple tapes, the more time-consuming and risky the restore process; one bad tape can kill the restore. From a Data Domain System, file restores go quickly and create little contention with backup or other restore operations. Unlike a tape drive, a Data Domain System can be accessed by multiple processes simultaneously. A Data Domain System allows your site to offer safe, user-driven, single-file restore operations.

Data Domain Replicator


The Data Domain Replicator product sets up and manages the replication of backup data between two Data Domain Systems. After replication is started, the source Data Domain System automatically sends any new backup data to the destination Data Domain System. A Replicator pair deals with either a complete data set or a directory from a source Data Domain System that is sent to a destination Data Domain System. An individual Data Domain System can be a part of multiple directory pairs and can serve as a source for one or more pairs and a destination for one or more pairs.

Data Domain System Hardware Interfaces


You can configure and administer a Data Domain System using a directly-connected serial console, an Ethernet connection from another system, or a monitor and keyboard. All hardware interfaces are on the back panel of the Data Domain System. See the Data Domain system Hardware Guide for interface locations.


Licensing
The licensed features on a Data Domain System are:

Data Domain Expanded Storage, which allows a user to add an expansion shelf to the system.

Data Domain Open Storage (OST), which allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature.

Data Domain Replicator, which sets up and manages the replication of data between two Data Domain Systems.

Data Domain Retention-Lock, which protects locked files from deletion and modification for up to 70 years.

Data Domain Virtual Tape Library (VTL), which allows backup software to see a Data Domain System as a tape library.

The license command allows you to add new licenses, delete current licenses, or display current licenses. See The license Command on page 124 for command details. Contact your Data Domain representative to purchase licensed features.
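For example, to display current licenses and then add a new license code (a hypothetical session; ABCD-ABCD-ABCD-ABCD is a placeholder code, and the exact option syntax is given in The license Command on page 124):
# license show
# license add ABCD-ABCD-ABCD-ABCD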

User Interfaces
A Data Domain System has a complete command set available to users in a command line interface. Commands allow initial system configuration, changes to individual system settings, and displays of system states and the state of system operations. The command line interface is available through a serial console or a keyboard and monitor attached directly to the Data Domain System, or through Ethernet connections. A web-based graphical user interface, the Data Domain Enterprise Manager, is available through Ethernet connections. Using the Data Domain Enterprise Manager, you can do the initial system configuration, make some configuration updates after initial configuration, and display system states and the state of system operations.

Multipath
Multipath support provides multiple I/O paths to external storage for failover and load balancing across paths. For more on multipath commands, see the chapter Multipath. See also the Data Domain System Hardware Guide.

Related Documentation

See the Data Domain Quick Start folder for a simplified list of installation tasks.

See the Data Domain Command Reference for Data Domain System command summaries. See the Release Notes for a specific Data Domain software release for late changes and fixes.

Initial system Settings


A Data Domain System as delivered and installed needs very little configuration. When you first log in through the command line interface, the Data Domain System automatically starts the config setup command. From the Data Domain Enterprise Manager, you can open the Configuration Wizard for initial system configuration. After configuration, the following parameters are set in the Data Domain System:

If using DNS, one to three DNS servers are identified for IP address resolution.

DHCP is enabled or not enabled for each Ethernet interface, as you choose during installation. Each active interface has an IP address.

The Data Domain System hostname is set (for use by the network).

The IP addresses are set for the backup servers, SMTP server, and administrative hosts. An SMTP (mail) server is identified.

For NFS clients, the Data Domain System is set up to export the /backup and /ddvar directories using NFSv3 over TCP. For CIFS clients, the Data Domain System has shares set up for /backup and /ddvar.

The directories under /ddvar are:
core      The default destination for core files created by the system.
log       The destination for all system log files. See Log File Management on page 165 for details.
releases  The default destination for operating system upgrades that are downloaded from the Data Domain Support web site.
snmp      The location of the SNMP MIB (management information base).
traces    The destination for execution traces used in debugging performance issues.

One or more backup servers are identified as Data Domain System NFS or CIFS clients. A host is identified for Data Domain System administration. Administrative users have access to the partition /ddvar. The partition is small and data in the partition is not compressed.

The time zone you select is set.

The initial user for the system is sysadmin with the password that you give during setup. The user command allows you to add administrative and non-administrative users later.

The SSH service is enabled and the HTTP, FTP, TELNET, and SNMP services are disabled. Use the adminaccess command to enable and disable services.

The user lists for TELNET and FTP are empty, SNMP is not configured, and the protocols are disabled, meaning that no users can connect through TELNET, FTP, or SNMP.

A system report runs automatically every day at 3 a.m. The report goes to a Data Domain email address and an address that you give during setup. You can add addresses to the email list using the autosupport command.

An email list for automatically generated system alerts has a Data Domain email address and a local address that you enter during setup. You can add addresses to the email list using the alerts command.

The clean operation is scheduled for Tuesday at 6:00 a.m. To review or change the schedule, use the filesys clean commands.

The background verification operation that continuously checks backup images is enabled.

Command Line Interface


A Data Domain System is administered through a command line interface. Use the SSH or TELNET (if enabled) utilities to access the command prompt. The majority of this manual gives details for using the commands to accomplish specific administration tasks. Each command also has a help page that gives the complete command syntax. Help pages are available through the Data Domain System help command and in an appendix at the back of this manual.

To list Data Domain System commands, enter a question mark (?) at the prompt. To list the options for a particular command, enter the command with no options at the prompt.

To find a keyword used in a command option when you do not remember which command to use, enter a question mark (?) or the help command followed by the keyword. For example, the question mark followed by the keyword password displays all Data Domain System command options that include password. If the keyword matches a command, such as net, then the command explanation appears.

To display a detailed explanation of a particular command, enter the help command followed by a command name. Use the up and down arrow keys to move through a displayed command. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

The Tab key completes a command entry when that entry is unique. Tab completion works for the first three levels of command components. For example, entering syst(tab) sh(tab) st(tab) displays the command system show stats.

Any Data Domain System command that accepts a list, such as a list of IP addresses, accepts entries as comma-separated, space-separated, or both.

Commands that display the use of disk space or the amount of data on disks compute amounts using the following definitions:

1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes

Note The one exception to displays in powers of 2 is the system show performance command, in which the Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1,000 bytes).

The commands are:

adminaccess  Manages the HTTP, FTP, TELNET, and SSH services. See Access Control for Administration on page 109.

alerts  Creates alerts for system problems. Alerts are emailed to Data Domain and to a user-configurable list. See Alerts on page 130.

alias  Creates aliases for Data Domain System commands. See The alias Command on page 81.

autosupport  Generates a system status and health report. Reports are emailed to Data Domain and to a user-configurable list. See Autosupport Reports on page 134.

cifs  Manages Common Internet File System (CIFS) backups and restores on a Data Domain System and displays CIFS status and statistics for a Data Domain System. See CIFS Management on page 309.

config  Shows, resets, copies, and saves Data Domain System configuration settings. See Configuration Management on page 119.

disk  Displays disk statistics, status, usage, reliability indicators, and RAID layout and usage. See Disk Management on page 173.

enclosure  Identifies and displays information about the Data Domain system and about expansion shelves.

filesys  Displays file system status and statistics. See Statistics and Basic Operations on page 213 for details. Also manages the clean feature that reclaims physical disk space held by deleted data. See Clean Operations on page 220 for details.

help  Displays a list of all Data Domain System commands and detailed explanations for each command.

license  Displays current licensed features and allows adding or deleting licenses.

log  Displays and administers the Data Domain System log file. See Log File Management on page 165.

ndmp  Manages direct backup and restore operations between a Network Appliance filer and a Data Domain System using the Network Data Management Protocol Version 2. See Backup/Restore Using NDMP on page 393.

net  Displays network status and setup information. See Network Management on page 87.

nfs  Displays NFS status and statistics. See NFS Management on page 301 for details.

ntp  Manages Data Domain System access to one or more time servers. The default setting is multicast. See Time Servers and the NTP Command on page 83.

ost  Allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage.

replication  Manages the Replicator for replication of backup data from one Data Domain System to another. See Replication - CLI on page 249.

route  Manages Data Domain System network routing rules. See The route Command on page 105.

snapshot  Manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the top directory: /backup.

snmp  Enables or disables SNMP access to a Data Domain System, adds community strings, and gives contact and location information. See SNMP Management and Monitoring on page 141.

support  Sends log files to Data Domain Technical Support. See Collect and Send Log Files on page 139.

system  Displays Data Domain System status, faults, and statistics; enables, disables, halts, and reboots a Data Domain System. See The system Command on page 63. Also sets and displays the system clock and calendar and allows the Data Domain System to synchronize the clock with an external time server. See Set the Date and Time on page 66.

user  Administers user accounts for the Data Domain System. See User Administration on page 115.


Installation

Installation and site configuration for a Data Domain System consist of the tasks listed below. After configuration, the Data Domain System is fully functional and ready for backups. For site hardware and backup software requirements, see Data Domain System Hardware Interfaces on page 7. Note Installation and configuration for a Gateway Data Domain System (using 3rd-party physical disk storage systems) is explained in the chapter Gateway systems.

Check the site and backup software requirements.

Set up the Data Domain System hardware and a serial console or a monitor and keyboard if you are not using an Ethernet interface for configuration. See the Data Domain system Hardware Guide for details.

Log in to the Data Domain System as sysadmin using a serial console, monitor and keyboard, SSH and an Ethernet interface, or the Data Domain Enterprise Manager through a web browser. To configure the system from a browser, the browser must be able to locate the Data Domain system on the network, which means that the Data Domain system must have an IP address (from DHCP, for example).

Answer the questions asked by the configuration process. The process starts automatically when sysadmin first logs in through the command line interface. To start configuration in the Data Domain Enterprise Manager, click on Configuration Wizard. The process requests all of the basic information needed to use the Data Domain System.

Optionally, after completing the initial configuration, follow the steps in Additional Configuration on page 27 to add to the configuration.

Configure the backup software and servers. See the Data Domain Support web site (https://support.datadomain.com), Technical Notes section for details about configuring a Data Domain System with specific backup servers and software.

To upgrade Data Domain OS software to a new release, see Upgrade the Data Domain System Software on page 64. Note The Data Domain OS is pre-installed on the Data Domain System. You do not need to install software. In emergency situations, such as when a Data Domain System fails to boot up by itself, call Data Domain Technical Support for step-by-step instructions.


Backup Software Requirements


A Data Domain System accepts data from many combinations of backup software and servers. See the Data Domain Support web site (https://support.datadomain.com), Compatibility Matrix section for the latest updates on supported backup software and server combinations. Note See the Data Domain Support web site, Technical Notes section for configuration details for using specific backup software and server types with a Data Domain System.

CIFS Backup Server Timeout


Internal activities on a Data Domain System can take longer than a default CIFS timeout, leading to an error message from the media server. The message is similar to: Network name no longer existed. On all CIFS backup servers using a Data Domain System, change the SESSTIMEOUT value from the default of 45 (seconds) to decimal 3600 (one hour).

If you want detailed background information, see the following web page: http://support.microsoft.com/default.aspx?scid=http://support.microsoft.com:80/support/kb/articles/Q102/0/67.asp&NoWebContent=1

Open REGEDT32 and navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CURRENTCONTROLSET\SERVICES\LANMANWORKSTATION\PARAMETERS

If the SESSTIMEOUT key does not exist, click in the right panel and select New and DWORD Value, then create a new key named SESSTIMEOUT. Note that the registry is case sensitive; use all caps for the new key name. Double-click the new (or existing) key and set it to the decimal value 3600.
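Alternatively, the same registry change can be scripted from a command prompt with the reg utility (a sketch, assuming reg.exe is available on the backup server, as it is on Windows XP/Server 2003 and later):

C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v SESSTIMEOUT /t REG_DWORD /d 3600

The value is read when the Workstation service starts, so a restart of that service (or a reboot) is typically required for the new timeout to take effect.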

Login and Configuration


After the hardware is installed and running, configure the system with the config setup command through the command line interface or use the Data Domain Enterprise Manager. The config setup command starts automatically the first time sysadmin logs in through the command line interface. The command reappears at each login until configuration is complete. To bring up the Data Domain Enterprise Manager and start the configuration:


1. Open a web browser.

2. Enter a path to the Data Domain System. For example: http://rstr01/ for a Data Domain System named rstr01 on a local network.

3. Enter a login name and password. The default password for the sysadmin login is the serial number that appears on the rear panel of the Data Domain System. All characters in a serial number are numeric except the third and fourth characters; other than those two characters, every 0 character is a zero. See Figure 4 on page 17 for the location. The Data Domain System Summary screen appears.

4. Click on the Configuration Wizard link as shown in Figure 2 on page 15.

Figure 2: Configuration Wizard link

Note Most of the installation procedure in this chapter uses the command line interface as an example. However, the Configuration Wizard of the Data Domain Enterprise Manager has the same configuration groups and sets the same configuration parameters. With the Data Domain Enterprise Manager, click on links and fill in boxes that correspond to the command line examples that follow. To return to the list of configuration sections from within one of the sections, click on the Wizard List link in the top left corner of the Configuration Wizard screen.

If you earlier set up DHCP for one or more Data Domain System Ethernet interfaces, a number of the config setup prompts display the values given to the Data Domain System from a DHCP server. DHCP servers normally supply values for a number of networking parameters. Press Return during the installation to accept DHCP values. If you do not use DHCP for an interface, determine what you will use for the following values before starting the configuration:
Interface IP addresses.
Interface netmasks.
Routing gateway.
DNS server list (if using DNS).
A site domain name, such as yourcompany.com.
A fully-qualified hostname for the Data Domain System, such as rstr01.yourcompany.com.

You can configure different network interfaces on a Data Domain System to different subnets. When configuring Data Domain System software:

At any prompt, enter a question mark (?) for detailed information about the prompt.

Press Return to accept a displayed value.

Enter either hostnames or IP addresses wherever a prompt mentions a host. Hostnames must be fully qualified, such as srvr22.yourcompany.com.

For any entry that accepts a list, the entries in the list can be comma-separated, space-separated, or both.

When configuration is complete, the system is ready to accept backup data. For NFS clients, the Data Domain System is set up to export the /backup and /ddvar directories using NFSv3 over TCP. For CIFS clients, the Data Domain System has shares set up for /backup and /ddvar.

The configuration utility has five sections: Licenses, Network, CIFS, NFS, and System. You can configure or skip any section. The command line interface automatically moves from one section to the next. With the Data Domain Enterprise Manager, click on the sections as shown in Figure 3.


Figure 3: Configuration sections

1. The first login to the Data Domain System can be from a serial console, keyboard and monitor, through an Ethernet connection, or through a web browser. Log in as user sysadmin. The default password is the serial number from the rear panel of the Data Domain System. See Figure 4 for the location.

Figure 4: Serial number location

From a serial console or keyboard and monitor, log in to the Data Domain System at the login prompt.

From a remote machine over an Ethernet connection, give the following command (with the hostname you chose for the Data Domain System) and then give the default password.
# ssh -l sysadmin host-name
sysadmin@host-name's password:

From a web browser, enter a path to the Data Domain System. For example: http://rstr01/ for a Data Domain System named rstr01 on a local network.

2. When using the command line interface, the first prompt after login gives the opportunity to change the sysadmin password. The prompt appears only once, at the first login to a new system. You can change the sysadmin password immediately at the prompt or later with the user change password command.

To improve security, Data Domain recommends that you change the 'sysadmin' password before continuing with the system configuration.
Change the 'sysadmin' password at this time? (yes|no) [yes]:

3. When using the command line interface, the Data Domain System command config setup starts next.

4. The first configuration section is for licensing. Licenses that you ordered with the Data Domain System are already installed. At the first prompt, enter yes to configure or view licenses. Enter the license characters, including dashes, for each license category. Make no entry and press Enter for categories that you have not licensed.


Licenses Configuration
Configure Licenses at this time (yes|no) [no]: yes

Expanded Storage License Code
Enter your Expanded Storage license code []:
Open Storage (OST) License Code
Enter your Open Storage (OST) license code []:
Replication License Code
Enter your Replication license code []:
Retention-Lock License Code
Enter your Retention-Lock license code []:
VTL License Code
Enter your VTL license code []:

Note If you want to use the optimized duplication feature of OST, you also need the Replication license.

A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending License Settings.
Expanded Storage License:    ABCD-ABCD-ABCD-ABCD
Open Storage (OST) License:  ABCD-ABCD-ABCD-ABCD
Replication License:         ABCD-ABCD-ABCD-ABCD
Retention-Lock License:      ABCD-ABCD-ABCD-ABCD
VTL License:                 ABCD-ABCD-ABCD-ABCD

Do you want to save these settings (Save|Cancel|Retry):

5. The second section is for network configuration. At the first prompt, enter yes to configure network parameters.

NETWORK Configuration
Configure NETWORK parameters at this time (yes|no) [no]:

Note After configuring the Data Domain System to use DNS, the Data Domain System must be rebooted. Also, if DHCP is disabled for all interfaces and then later enabled for one or more interfaces, the Data Domain System must be rebooted.


a. The first prompt is for a Data Domain System machine name. Enter a fully-qualified name that includes the domain name. For example: rstr01.yourcompany.com.

Note With CIFS using domain mode authentication, the first component of the name is also used as the NetBIOS name, which cannot be over 15 characters. If you use domain mode and the hostname is over 15 characters, use the cifs set nb-hostname command for a shorter NetBIOS name.

Hostname
Enter the hostname for this system (fully-qualified domain name) []:

b. Supply a domain name, such as yourcompany.com, for use by Data Domain System utilities, or accept the display of the domain name used in the hostname.

Domainname
Enter your DNS domainname []:

c. Configure each Ethernet interface that has an active Ethernet connection. If you earlier set up DHCP for an interface, the IP address and netmask prompts do not appear. You can accept or not accept DHCP for each interface. If you enter yes for DHCP and DHCP is not yet available to the interface, the Data Domain System attempts to set up the interface with DHCP until DHCP is available. Use the net show settings command to display which interfaces are configured for DHCP.

If you are on an Ethernet interface and you choose to not use DHCP for that interface, the connection is lost when you complete the configuration. At the last prompt, entering Cancel deletes all new values and goes to the next section. Each interface is a Gigabit Ethernet connection. The same set of prompts appears for each interface.

Ethernet port eth0:
Enable Ethernet port (yes|no) [ ]:
Use DHCP on Ethernet port eth0 (yes|no) [ ]:
Enter the IP address for eth0 [ ]:
Enter the netmask for eth0 [ ]:

When not using DHCP on any Ethernet port, you must specify an IP address for a default routing gateway.

Default Gateway
Enter the default gateway IP address []:


When not using DHCP on any Ethernet port, you can enter up to three DNS servers for a Data Domain System to use for resolving hostnames into IP addresses. Use a comma-separated or space-separated list. Enter a space for no DNS servers. With no DNS servers, you can use the net hosts commands to inform the Data Domain System of IP addresses for relevant hostnames.

DNS Servers
Enter the DNS Server list (zero, one, two or three IP addresses) []:

d. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending Network Settings
-----------------------------------------
Hostname:         srvr26.yourcompany.com
Domainname:       yourcompany.com
Default Gateway:
DNS Server List:

Cable  Port  Enabled  DHCP  IP Address       Netmask
-----  ----  -------  ----  ---------------  ---------------
       eth0  yes      yes   (dhcp-supplied)  (dhcp-supplied)
       eth1  yes      yes   (dhcp-supplied)  (dhcp-supplied)
***    eth2  no       n/a   n/a              n/a
       eth3  yes      yes   (dhcp-supplied)  (dhcp-supplied)
-----  ----  -------  ----  ---------------  ---------------

*** No connection on indicated Ethernet port

Do you want to save these settings (Save|Cancel|Retry):

Note An information box also appears in the recap if any interface is set up to use DHCP, but does not have a live Ethernet connection. After troubleshooting and completing the Ethernet connection, wait for up to two minutes for the Data Domain System to update the interface. The Cable column of the net show hardware command displays whether or not the Ethernet connection is live for each interface.

6. The third section is for CIFS (Common Internet File System) configuration. At the first prompt, enter yes to configure CIFS parameters. The default authentication mode is Active Directory.

Note When configuring a destination Data Domain System as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data that is on the destination).

CIFS Configuration
Configure CIFS at this time (yes|no) [no]: yes

a. Select a user-authentication method for the CIFS user accounts that connect to the /backup and /ddvar shares on the Data Domain System.

CIFS Authentication
Which authentication method will this system use (Workgroup|Domain|Active-Directory) [Active Directory]:

The Workgroup method has the following prompts. Enter a workgroup name, the name of a CIFS workgroup account that will send backups to the Data Domain System, a password for the workgroup account, a WINS server name, and backup server names.

Workgroup Name
Enter the workgroup name for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup User
Enter backup user name:
Backup User Password
Enter backup user password:

Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:


The Domain method brings the following prompts. Enter a domain name, the name of a CIFS domain account that will send backups to the Data Domain System and, optionally, one or more domain controller IP addresses, a WINS server name, and backup server names. Press Enter with no entry to break out of the prompts for domain controllers.

Domain Name
Enter the name of the Windows domain for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup user
Enter backup user name:
Domain Controller
Enter the IP address of domain controller 1 for this system [ ]:

Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:

The Active-Directory method brings the following prompts. Enter a fully-qualified realm name, the name of a CIFS backup account, a WINS server name, and backup server names. Data Domain recommends not specifying a domain controller. When not specifying a domain controller, be sure to specify a WINS server. The Data Domain System must meet all Active Directory requirements, such as a clock time that is no more than five minutes different from that of the domain controller. Press Enter with no entry to break out of the prompts for domain controllers.

Active-Directory Realm
Enter the name of the Active-Directory Realm for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup user
Enter backup user name:
Domain Controllers
Enter list of domain controllers for this system [ ]:


Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients. An asterisk (*) is allowed as a wild card only when used alone to mean all.
Backup Server List
Enter the Backup Server list (CIFS clients of /backup) []:

b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value. The following example is with an authentication mode of Active-Directory.

Pending CIFS Settings
-------------------------------------
Auth Method         Active-Directory
Domain              domain1
Realm               domain1.local
Backup User         dsmith
Domain Controllers
WINS Server         192.168.1.10
Backup Server List  *
-------------------------------------
Do you want to save these settings (Save|Cancel|Retry):

7. The fourth section is for NFS configuration. At the first prompt, enter yes to configure NFS parameters.

NFS Configuration
Configure NFS at this time (yes|no) [no]: yes

a. Add backup servers that will access the Data Domain System through NFS. You can enter a list that is comma-separated, space-separated, or both. An asterisk (*) opens the list to all clients. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use adminaccess add and nfs add /backup to add backup servers.

Backup Servers
Enter the Backup Server list (NFS clients of /backup) []:


b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending NFS Settings.
Backup Server List:
Do you want to save these settings (Save|Cancel|Retry):

8. The fifth section is for system parameters. At the first prompt, enter yes to configure system parameters.

SYSTEM Configuration
Configure SYSTEM Parameters at this time (yes|no) [no]:

a. Add a client host from which you will administer the Data Domain System. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use the commands adminaccess add and nfs add /ddvar to add other administrative hosts.

Admin host
Enter the administrative host []:

b. You can add an email address so that someone at your site receives email for system alerts and autosupport reports. For example, jsmith@yourcompany.com. By default, the Data Domain System email lists include an address for the Data Domain support group. You can later use the Data Domain System commands alerts and autosupport to add more addresses.

Admin email
Enter an email address for alerts and support emails []:

c. You can enter a location description for ease of identifying the physical machine. For example, Bldg4-rack10. The alerts and autosupport reports display the location.

system Location
Enter a physical location, to better identify this system []:

d. Enter the name of a local SMTP (mail) server for Data Domain System emails. If the server is an Exchange server, be sure that SMTP is enabled.

SMTP Server
Enter the hostname of a mail server to relay email alerts []:


e. The default time zone for each Data Domain System is the factory time zone. For a complete list of time zones, see Time Zones on page 451.

Timezone Name
Enter your timezone name [US/Pacific]:

f. To allow the Data Domain System to use one or more Network Time Protocol (NTP) servers, you can enter IP addresses or server names. The default is to enable NTP and to use multicast.

Configure NTP
Enable Network Time Service? (yes|no|?) [yes]:
Use multicast for NTP? (yes|no|?) [no]:
Enter the NTP Server list [ ]:

g. A listing of your choices appears. Accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending system Settings
---------------------------------
Admin host
Admin email      pls@yourcompany.com
system Location  Server Room 52327
SMTP Server      mail.yourcompany.com
Timezone name    US/Pacific
NTP Servers      123.456.789.33
---------------------------------
Do you want to save these settings (Save|Cancel|Retry):

Note For Tivoli Storage Manager on an AIX backup server to access a Data Domain System, you must re-add the backup server to the Data Domain System after completing the original configuration setup. On the Data Domain System, run the following command with the server-name of the AIX backup server:
# nfs add /backup server-name (insecure)

h. Configure the backup servers. For the most up-to-date information about setting up backup servers for use with a Data Domain System, go to the Data Domain Support web site (https://support.datadomain.com/). See the Technical Notes section.


Additional Configuration
The following are common changes that users make to the Data Domain System configuration after installation. Changes to the initial configuration settings are all made through the command line interface. Each entry below describes the general task and shows the command used to accomplish it.

Add email addresses to the alerts list and the autosupport list. See Add to the Email List on page 134 for details.
alerts add addr1[,addr2,...]
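For example, to add one address to the alerts email list (jsmith@yourcompany.com is a placeholder):
# alerts add jsmith@yourcompany.com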

Give access to additional backup servers. See NFS Management on page 301 for details.
nfs add /backup srvr1[,srvr2,...]
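For example, to give NFS access to two additional backup servers (the hostnames are placeholders):
# nfs add /backup srvr22.yourcompany.com,srvr23.yourcompany.com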

From a remote machine, add an authorized SSH public key to the Data Domain System. See Add an Authorized SSH Public Key on page 112 for details.
ssh-keygen -d
ssh -l sysadmin rstr01 adminaccess add ssh-keys \
  < ~/.ssh/id_dsa.pub

Add remote hosts that can use FTP or TELNET on the Data Domain System. See Add a Host on page 109 for details.
adminaccess add {ftp | telnet | ssh | http} {all | host1[,host2,...]}

Enable HTTP, HTTPS, FTP, or TELNET. The SSH, HTTP, and HTTPS services are enabled by default. See Enable a Protocol on page 111 for details.
adminaccess enable {http | https | ftp | telnet | ssh | all}

Add a standard user. See User Administration on page 115 for details.
user add username

Change a user password. See User Administration on page 115 for details.
user change password username

Administering a Data Domain System


To administer a Data Domain System, use either the command line interface or the Data Domain Enterprise Manager graphical user interface.


Command Line Interface


The command line interface gives complete access to a Data Domain System for the initial system configuration, for making changes to individual system settings, and for displaying system states and the state of system operations. The remaining chapters in this book detail the use of all Data Domain System commands and operations. The headings in each chapter are a task-oriented list of operations performed by the featured commands. To find the command for any task that you want to perform, do either of the following:

Look in the table of contents at the beginning of this guide for the heading that describes the task.

List the Data Domain System commands and operations. To see a list of commands, log in to the Data Domain System using SSH (or TELNET if that is enabled) and enter a question mark (?) at the prompt. To see a list of operations available for a particular command, enter the command name. To display a detailed help page for a command, use the help command with the name of the target command, as shown in the example below. Use the up and down arrow keys to move through a displayed command. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.
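For example, to page through the detailed help for the net command:
# help net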

Data Domain Enterprise Manager


The web-based graphical user interface, the Data Domain Enterprise Manager, is available through Ethernet connections to a Data Domain System. With the Data Domain Enterprise Manager, you can do the initial system configuration, make some configuration updates after initial configuration, and display system states and the state of system operations. From the left panel of the Data Domain Enterprise Manager, select the Configuration Wizard to change configuration values or select an area such as File system to display system information. See Figure 5.



Figure 5: Data Domain Enterprise Manager selections

For a complete explanation of the default Data Domain Enterprise Manager screen, see Graphical User Interface on page 399.


ES20 Expansion Shelf

A Data Domain ES20 expansion shelf is a 3U chassis that holds 16 disks for increasing the storage capacity of a Data Domain system. The Data Domain OS Data Invulnerability Architecture and all other Data Domain System data integrity features that protect against data loss from hardware and software failures also apply to the ES20 expansion shelf. All Data Domain System data compression technology also applies, as does the Data Domain Replicator feature that sets up and manages replication of backup data between two Data Domain Systems. The Replicator sees data on an expansion shelf as part of the volume that resides on the managing Data Domain System.

In related Data Domain System commands, the system and each expansion shelf are each called an enclosure. A system sees all data storage (system and attached shelves) as part of a single volume. A new system installed along with expansion shelves finds the shelves when booted up. Follow the instructions in this chapter to add shelves to the volume and create RAID groups. After adding a shelf to a system with an existing, active file system, a percentage of new data is sent to the new shelf. An algorithm takes into account the amount of space available in the Data Domain file system, in the file system on a previously installed shelf (if one exists), and the probable impact of location on read/write times. Over time, data is spread evenly over all enclosures.

Warning After adding a shelf to a volume, the volume must always include the shelf to maintain file system integrity. Do not add a shelf and then later remove it, unless you are prepared to lose all data in the volume. If a shelf is disconnected, the volume's file system is immediately disabled. Re-connect the shelf or transfer the shelf disks to another shelf chassis and connect the new chassis to re-enable the file system. If the data on a shelf is not available to the volume, the volume cannot be recovered. Without the same disks in the original shelf or in a new shelf chassis, the Data Domain operating system must be re-installed. Contact Data Domain Technical Support for the re-installation procedure.

Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.


All administrative access to an ES20 shelf is done through the controlling Data Domain System command line interface and graphical user interface. Initial configuration tasks, changes to the configuration, and displaying disk usage in a shelf are all done with standard Data Domain System commands as explained in this chapter.

RAID groups
The single volume that includes all disks and shelves in a system is made up of multiple RAID 6 groups, also called disk groups. Each shelf is one RAID group, and the system itself is one RAID group.

The system has a RAID group of 12 data disks, two parity disks, and one spare. Each shelf has a RAID group with 12 data disks and two parity disks. Each shelf also has two spare disks; the spares are global and are used as needed, in a set order. A RAID group is created on a new shelf with the disk add enclosure command.

Disk Failures
A system and two expansion shelves (three enclosures) have a total of five spare disks. If the number of spare disks needed by an enclosure exceeds the number of spares in that enclosure, the RAID group for that enclosure takes an available spare disk from another enclosure. Warning If no spare disks are available from any enclosure, a shelf can have up to two more failed disks and still maintain the RAID group of 12 data disks. However, if one more disk in a shelf fails (leaving only 11 data disks), the data volume (made up of all the enclosures) fails and cannot be recovered. Always replace any failed disk in any enclosure as soon as possible.

Add a Shelf
Physically install shelves by following the installation instructions received with each shelf. After installing shelves and starting the Data Domain System, the following commands display the state of disks and the Data Domain System/shelf connections before the shelves are integrated as a RAID group.

You can check the status of the SAS HBA cards before the shelves are physically connected to the Data Domain System. Enter the disk port show summary command. Each HBA generates one line in the command output. In the example below, the Data Domain System has two HBAs and no shelf cable attached to either card, giving a Status of offline for both HBAs.


# disk port show summary
Port  Connection  Link   Connected      Status
      Type        Speed  Enclosure IDs
----  ----------  -----  -------------  -------
3a    SAS                               offline
4a    SAS                               offline
----  ----------  -----  -------------  -------

After the shelves are physically connected to the Data Domain System, the disk port show summary output includes enclosure IDs and a status of online.

# disk port show summary
Port  Connection  Link   Connected      Status
      Type        Speed  Enclosure IDs
----  ----------  -----  -------------  ------
3a    SAS                2              online
4a    SAS                3              online
----  ----------  -----  -------------  ------

On the system, use the enclosure show summary command to verify that the shelves are recognized.

# enclosure show summary
Enclosure  Model No.          Serial No.        Capacity
---------  -----------------  ----------------  --------
1          Data Domain DD580  1234567890        15 Slots
2          Data Domain ES20   50050CC100100A3A  16 Slots
3          Data Domain ES20   50050CC100100AE6  16 Slots
---------  -----------------  ----------------  --------

You can physically identify which shelf corresponds to an enclosure number by matching the Serial No. (actually the world-wide name of the enclosure) from the enclosure show summary output with the enclosure WWN located on the control panel on the back of the shelf. See Figure 6 for the location.



Figure 6: Shelf serial number


Enter the disk show raid-info command to show the current RAID status of the disks. All disks should have a State of unknown or foreign.
# disk show raid-info

Enter the filesys show space command to display the file system that is seen by the system.
# filesys show space

Use the following commands to make the shelf disks available:

1. The new disks are not yet part of a RAID group or part of the Data Domain System volume. Use the disk add enclosure command to add the disks to the volume. The command asks for confirmation and then for the sysadmin password. When adding two shelves, use the command once for enclosure 2 and once for enclosure 3.

# disk add enclosure 2
The 'disk add' command adds all disks in the enclosure to the filesystem. Once the disks are added, they cannot be removed from the filesystem without re-installing the system.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
Please enter sysadmin password to confirm 'disk add enclosure':

Note On DD6xx systems, the message returned by the disk add enclosure command will be different from the above, and the command can take much longer for the first shelf. Typically it takes 3 or 4 minutes for the first shelf and half a minute for each subsequent shelf.

2. Use the disk show raid-info command to display the RAID groups. Each shelf should show most disks with a State of in use and two disks with a State of spare.
# disk show raid-info

If disks from each shelf are labeled as unused rather than spare, use the disk unfail command for each unused disk. For example, if the two disks 2.15 and 2.16 are labeled unused, enter the following two commands:
# disk unfail 2.15
# disk unfail 2.16

Use the following commands to display the new state of the file system and disks:
# filesys status

Check the file system as seen by the system:
# filesys show space


Resource             Size GiB  Used GiB  Avail GiB  Use%
-------------------  --------  --------  ---------  ----
/ddvar                   78.7      13.8       61.0   18%
Pre-compression                 7040.9
Data                  14864.6    7880.4     6984.2   53%
If 100% cleaned*      14864.6    7880.4     6984.2   53%
Meta-data                19.4       0.3       18.1    2%
Index                    49.2      39.2        9.9   80%
-------------------  --------  --------  ---------  ----

Estimated compression factor*: 0.8x = 7040.9/(7880.4+0.3+39.2)
* Estimate based on 2007/02/08 cleaning

The disk show raid-info command should show a State of in use or spare for all disks in the shelves.
Disk Commands
With DD OS 4.1.0.0 and later releases, all disk commands that take a disk-id variable must use the format enclosure-id.disk-id to identify a single disk. Both parts of the ID are decimal numbers. A Data Domain System with no shelves must also use the same format for disks on the Data Domain System. A Data Domain System always has the enclosure-id of 1 (one). For example, to check that disk 12 in a system (with or without shelves) is recognized by the DD OS and hardware, use the following command:
# disk beacon 1.12

In DD OS releases prior to 4.1.0.0, output from disk commands listed individual disks with the word disk and a number. For example:
# disk show hardware
Disk    Manufacturer/Model  Firmware  Serial No.      Capacity
------  ------------------  --------  --------------  ----------
disk1   HDS725050KLA360     K2A0A51A  KRFS06RAG9VYGC  465.76 GiB
disk2   HDS725050KLA360     K2AOA51A  KRFS06RAG9TYYC  465.76 GiB

Output now shows the enclosure (Enc) number, a dot, and the disk (Slot) number:


# disk show hardware
Disk        Manufacturer/Model  Firmware  Serial No.      Capacity
(Enc.Slot)
----------  ------------------  --------  --------------  ----------
1.1         HDS725050KLA360     K2AOA51A  KRFS06RAG9VYGC  465.76 GiB
1.2         HDS725050KLA360     K2AOA51A  KRFS06RAG9TYYC  465.76 GiB

Command output for a system that has one or more expansion shelves includes entries for all enclosures, disk slots, and RAID groups.

Note All system commands that display the use of disk space or the amount of data on disks compute and display amounts using base 2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.
1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
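As a worked example of these base-2 units, a command that reports 2.5 GiB of space used is reporting 2.5 x 1,073,741,824 = 2,684,354,560 bytes, whereas the same quantity expressed in base-10 GB would be 2,500,000,000 bytes.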

Look for New Disks, LUNs, and Expansion Shelves


To check for new disks or LUNs with gateway systems or when adding an expansion shelf, use the disk rescan operation. Administrative users only.

disk rescan
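For example, after cabling a new shelf or adding a LUN, the operation takes no arguments:
# disk rescan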

Add an Expansion Shelf


To add an expansion shelf, use the disk add enclosure command. The enclosure-id is always 2 for the first added shelf and 3 for the second. The system always has the enclosure-id of 1 (one).

disk add enclosure enclosure-id

For example, to add a first enclosure:
# disk add enclosure 2


Display Disk Status


The disk status operation displays the number of disks in use and failed, the number of spare disks available, and whether a RAID disk group reconstruction is underway. Note that the RAID portion of the display could show one or more disks as failed while the Operational portion of the display could show all drives as operating normally. A disk can be physically functional and available, but not currently in use by RAID, possibly because of operator intervention.

disk status

Note that the disks in a new expansion shelf recognized with the disk rescan command show a status of unknown. Use the disk add enclosure command to change the status to in use. The display for a Data Domain System with two expansion shelves is similar to the following.

# disk status
Normal - system operational
1 disk group total
9 drives are operational

Shelf (enclosure) Commands


Use the enclosure command to identify and display information about expansion shelves.

List Enclosures
To list known enclosures, model numbers, serial numbers, and capacity (number of disks in the enclosure), use the enclosure show summary command. The serial number for an expansion shelf is the same as the chassis serial number, the enclosure WWN (world-wide name), and the OPS Panel WWN. See Figure 7 for the physical location of the WWN label on the back panel of the shelf.

enclosure show summary

For example:


# enclosure show summary
Enclosure  Model No.          Serial No.        Capacity
---------  -----------------  ----------------  --------
1          Data Domain DD560  7FP5705030        15 Slots
2          Data Domain ES20   50050CC100123456  16 Slots
3          Data Domain ES20   50050CC100123457  16 Slots
---------  -----------------  ----------------  --------

3 enclosures present.


Figure 7: World-wide name location

Identify an Enclosure
To check that the Data Domain OS and hardware recognize an enclosure, use the enclosure beacon operation. The operation causes the green (activity) LED on each disk in an enclosure to flash green. Use the Ctrl-C key sequence to turn off the operation. Administrative users only.

enclosure beacon enclosure-id
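For example, to flash the disk LEDs on the first expansion shelf (enclosure 2), and then stop the beacon with Ctrl-C:
# enclosure beacon 2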


Display Fan Status


To display the current status of fans in all enclosures or in a specific enclosure, use the enclosure show fans command:

enclosure show fans [enclosure-id]

To show the status of all fans for a system with one expansion shelf:

# enclosure show fans
Enclosure  Description          Level   Status
---------  -------------------  ------  ------
1          Crossbar fan #1      High    OK
           Crossbar fan #2      High    OK
           Crossbar fan #3      Medium  OK
           Crossbar fan #4      Medium  OK
           Rear fan #1          Medium  OK
           Rear fan #2          Medium  OK
2          Power module #1 fan  Low     OK
           Power module #2 fan  Low     OK
---------  -------------------  ------  ------

Enclosure starts with the system as enclosure 1 (one). Description for a shelf lists one fan for each power/cooling unit. Level is the fan speed, which depends on the internal temperature and the amount of cooling needed. Status is either OK or Failed.

Display Component Temperatures


To display the internal and CPU chassis temperatures for a system and the internal temperature for expansion shelves, use the enclosure show temperature-sensors command. CPU temperatures may be shown in relative or ambient readings. The CPU numbers depend on the Data Domain System model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 Celsius, the Data Domain System shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 Celsius, the Data Domain System shuts down.

enclosure show temperature-sensors [enclosure-id]

In the following example, the temperature for CPU 0 is 97 degrees Fahrenheit below the maximum:


# enclosure show temperature-sensors
Enclosure  Description       C/F       Status
---------  ----------------  --------  ------
1          CPU 0 Relative    -54/-97   OK
           CPU 1 Relative    -57/-103  OK
           Chassis Ambient   32/90     OK
2          Internal ambient  33/91     OK
3          Internal ambient  31/88     OK
---------  ----------------  --------  ------

Display Port Connections


To display port connection information and status, use the disk port show summary operation.

disk port show summary

For example:

# disk port show summary
Port  Connection  Link   Connected      Status
      Type        Speed  Enclosure IDs
----  ----------  -----  -------------  -------
3a    SAS                               offline
----  ----------  -----  -------------  -------

Port  See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.

Connection Type  SAS for enclosures and FC (Fibre Channel) for a gateway system, depending on the Data Domain system model.

Link Speed  The HBA port link speed.

Connected Enclosure IDs  The number assigned to each shelf. The order in which the shelves are numbered is not important.

Status  online or offline. Offline means that the shelf is not seen by the system. Check the cabling and check that the shelf is powered on.

Display All Hardware Status


To display temperatures and the status of all fans and power supplies, use the enclosure show all command: enclosure show all [enclosure-id]
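For example, to limit the report to a single shelf, using a hypothetical enclosure ID of 2:
# enclosure show all 2
Without an enclosure-id, the command reports on every enclosure, combining the fan, temperature, and power supply displays described in this chapter.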

Display Power Supply Status


To display the status of power supplies in all enclosures or in a specific enclosure, use the enclosure show powersupply command:
enclosure show powersupply [enclosure-id]
For example:
# enclosure show powersupply
Enclosure   Description       Status
---------   ---------------   ------
1           Power Module #1   OK
            Power Module #2   OK
            Power Module #3   OK
2           Power Module #1   OK
            Power Module #2   OK
---------   ---------------   ------

Display HBA Information


To display information about the Host Bus Adapter (HBA), use the disk port show summary operation.
disk port show summary [port-id]
For example:
# disk port show summary
Port   Connection   Link      Connected       Status
       Type         Speed     Enclosure IDs
----   ----------   -------   -------------   ------
3a     SAS          12 Gbps   2               online
4a     SAS          12 Gbps   3               online
----   ----------   -------   -------------   ------


Port See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type depends on the Data Domain system model: SAS for expansion shelves and FC (Fibre Channel) for a gateway system.
Link Speed is the HBA port link speed.
Connected Enclosure IDs are the IDs of the shelves that are connected.
Status is online or offline.

Display Statistics
To display statistics useful when troubleshooting HBA-related problems, use the enclosure port show stats operation. The command output is used by Data Domain Technical Support. enclosure port show stats [port-id]
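For example, to gather statistics for port 3a (the port ID shown in the earlier disk port show summary examples) before contacting Technical Support:
# enclosure port show stats 3a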

Display Target Storage Information


Target information is displayed only for a gateway system.

Display the Layout of SAS Enclosures


To show the layout of the SAS enclosures attached to a system, use the enclosure show topology command.
enclosure show topology
The output of the command looks like the following sample output.
# enclosure show topology
Port   enc.ctrl.port     enc.ctrl.port     enc.ctrl.port
----   -------------     -------------     -------------
3a  >  2.A.H:2.A.E    >  3.A.H:3.A.E    >  4.A.H:4.A.E
3b  >  7.B.H:7.B.E    >  6.B.H:6.B.E    >  5.B.H:5.B.E
4a  >  5.A.H:5.A.E    >  6.A.H:6.A.E    >  7.A.H:7.A.E
4b  >  4.B.H:4.B.E    >  3.B.H:3.B.E    >  2.B.H:2.B.E
----   -------------     -------------     -------------


Encl   WWN                Serial #
----   ----------------   ----------------
2      50050CC1001019AA   50050CC1001019AA
3      50050CC10010194D   50050CC10010194D
4      50050CC100100FD1   50050CC100100FD1
5      50050CC100101A80   50050CC100101A80
6      50050CC1001019E6   50050CC1001019E6
7      50050CC100101933   50050CC100101933
----   ----------------   ----------------

Error Message:
-----------------
No error detected
-----------------

Enclosure rear view:

A physical diagram corresponding to this sample output is shown in Figure 8: Data Domain system with 2 dual-port HBAs and six shelves.


Figure 8: Data Domain system with 2 dual-port HBAs and six shelves

Note Enclosure numbers are not static; they may change when the system is rebooted. (The numbers are generated according to when the shelves are detected during system boot.) Thus, in order to determine enclosure cabling, refer to the WWN (World Wide Name) of each enclosure, which is also shown in the output of the enclosure show topology command.


Volume Expansion

Component Relationship and Commands to show it


The relationships between various Data Domain system components can be shown with certain commands. These are listed in the table Component Relationship and Commands.

Relationship             Commands to show it
----------------------   --------------------------------------------------
head to shelves          enclosure show topology and disk multipath status
shelves to disks         disk multipath status
disks to disk groups     disk show detailed-raid-info

Component Relationship and Commands

Volume Expansion
Note Don't add a shelf when there's a disk failure of any kind. Repair any disk failures before adding a shelf.

Procedure: Create RAID group on new shelf that has lost disks
The following procedure shows how to create a RAID group on a new shelf that has lost three or more disks to existing RAID groups.
1. Use the disk show raid-info command to identify which RAID group is using disks in the new shelf. Also note which disk(s) each RAID group is using.
2. In the enclosure for the RAID group that is using one or more disks in the new shelf, replace the bad disks that created the need for a spare outside of the enclosure.
3. In the new shelf, fail a disk used by the enclosure that now has a replacement spare disk. The RAID group should immediately start to rebuild using the new spare in its own enclosure. After the rebuild, fail other disks in the new shelf as needed to move data to other replacement spares in other enclosures.
4. Unfail the disk or disks in the new shelf that were used by the other RAID group(s).
5. Run disk add enclosure for the new shelf. A hedged command sketch follows.
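The following is a minimal sketch of steps 3 through 5, assuming the new shelf is enclosure 3, that the borrowed disk is the hypothetical disk 3.12, and that the disk commands accept the enclosure.disk naming; confirm the actual identifiers with disk show raid-info before failing anything:
# disk fail 3.12
(Wait for the RAID group to rebuild onto the replacement spare in its own enclosure; monitor with disk show raid-info.)
# disk unfail 3.12
(The disk should return to the spare state.)
# disk add enclosure 3
(Brings the new shelf's disks into the file system.)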


RAID Groups, Failed Disks, and Enclosures


The disks in each enclosure (the system and each shelf) are seen as a RAID group (disk group) when the enclosure is first configured. The system has one RAID group (disk group 0) and each shelf has a RAID group (disk group 1 and disk group 2). Use the disk show raid-info command to see which disks from each disk group are in each enclosure.

When a disk fails, the process that reconstructs the data onto a spare disk always first chooses a spare that is in the same enclosure as the disk group. A failed disk in disk group 2 is always reconstructed on a spare disk in enclosure 2 when the enclosure has a spare. When the enclosure does not have a spare, the reconstruction process takes a spare disk from another enclosure.

When a disk from disk group 0 (the system group) is reconstructed on a spare that is outside of the system enclosure, the following message is generated:
Some disks of the primary disk group dg0 are not on the head unit. This may prevent the system from booting up when the external enclosure is disconnected. If the head unit has failed disks, please replace them as soon as possible.

When a disk from disk group 1 or disk group 2 (one of the expansion shelves) is reconstructed on a spare that is on the system, the following message is generated:
Secondary disk group dgname has a disk on the head unit. Please check the availability of spares on enclosure number.

Do not leave disk group 0 (the system enclosure) with no available spare disk on the system or with a disk that is in another enclosure. If disk group 0 has one or more disks on an expansion shelf and the shelf is disconnected, the system cannot be rebooted. The shelf must be reconnected (data remains available), or the system operating system must be re-installed, which means that all data in the file system is lost.

Always replace failed disks as soon as possible. See Replace Disks in the Hardware Guide. If disk group 1 or disk group 2 uses the spare disk on the system for reconstruction:
1. Immediately replace all failed disks in all systems so that spares are available.
2. Fail the group 1 or group 2 disk that is on the system.
3. Wait for reconstruction to complete on one of the expansion shelf spares.
4. Unfail the disk on the system, which should return to the state of spare.

If disk group 0 reconstructs a disk using a spare from an expansion shelf:
1. Immediately replace all failed disks in all systems.
2. Fail the disk group 0 disk that is on a shelf.
3. Wait for reconstruction to complete on a system spare.
4. Unfail the failed shelf disk. The disk should return to the state of spare.
A hedged sketch of this sequence follows.
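As a sketch of the disk group 0 case, assuming dg0 was rebuilt onto the hypothetical shelf disk 2.5 and that the disk commands accept the enclosure.disk naming:
# disk fail 2.5
(Wait for reconstruction to complete on a system spare; monitor with disk show raid-info.)
# disk unfail 2.5
(The disk should return to the spare state.)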

Gateway systems
Gateway Data Domain Systems store data in, and restore data from, 3rd-party physical disk arrays attached through Fibre Channel connections. Currently, gateway Data Domain Systems support the following types of connectivity:

Fibre Channel direct-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Fibre Channel SAN-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.

Note Generally, all serial networking interfaces are quoted in numbers of bits per second (lowercase b) rather than bytes (uppercase B). See the Gateway Compatibility Matrix on the Data Domain Support web site for the latest updates of certified storage arrays, storage firmware, and SAN topology. Points to be aware of with a gateway system are:

The system supports a single volume with a single data collection. A data collection is all the files stored in a single Data Domain System.
When using a SAN-attached gateway Data Domain System, the SAN must be zoned before the Data Domain System is booted.
The storage array can have single or multiple controllers, and each controller can have multiple ports. The storage array port used for gateway connectivity cannot be shared with other SAN-connected hosts that access the array.
Multiple gateway systems can access storage on a single storage array.
The 3rd-party physical disks that provide storage to the gateway should be dedicated to the gateway and not shared with other hosts.
The 3rd-party physical disk storage is configured into one or more LUNs that are exported to the gateway.


All LUNs presented to the gateway are used automatically when the gateway is booted. Use the Data Domain System commands disk rescan and disk add to see newly added LUNs.
A volume may use any of the disk types supported on the disk array. However, only one disk type can be used for all LUNs in the volume to assure equal performance for all LUNs. All disks in the LUNs must be like drives in identical RAID configurations.
Multiple storage array RAID configurations can be used; however, you should select RAID configurations that provide the fastest possible sequential data access for the type of disks used.
A gateway system supports one volume composed of 1 to 16 LUNs. LUN numbers must start at 0 (zero) and be contiguous. The total amount of storage can be no more than a certain maximum; see the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide.
LUNs should be provisioned across the maximum number of spindles available. Vendor-specific provisioning best practices should be followed and, if available, vendor-specific tools should be used to create a virtual- or meta-LUN that spans multiple LUNs. If virtual- or meta-LUNs are used, they must follow the configuration parameters defined in this chapter.
For replication between a gateway Data Domain System and other model Data Domain Systems, the total amount of storage on the originator must not exceed the total amount of storage on the destination.
Replication between gateway systems must use storage arrays with similar performance characteristics. The size of destination storage must be equal to or greater than the size of source storage. Configurations do not need to be identical.
The maximum data size for a LUN that a gateway Data Domain system can access is no longer limited to 2 TiB; however, LUNs larger than 10 TiB are not tested. (The "data size" means the size of the LUN presented to the Data Domain system by the 3rd-party physical disk storage.)
The minimum data size for a LUN that a gateway system can access is 400 GiB for the first LUN, and 100 GiB for subsequent LUNs. That is, for the initial install the LUN size should be 400 GiB or higher, and if you have only one LUN it must be at least 400 GiB. To use the maximum amount of space on a system, create multiple LUNs and adjust the LUN sizes so that the smallest is at least 100 GiB. A worked sizing example follows this list.
The maximum total size of all LUNs accessed by a Data Domain System depends on the system, and is shown in the table Data Domain System Capacities in the Hardware Guide.
A smaller volume can be expanded by adding LUNs.
A Fibre Channel host bus adapter card in the Data Domain System communicates with the 3rd-party physical disk storage array.
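As a hypothetical sizing example, a volume of roughly 2 TiB could be provisioned as LUN 0 at 400 GiB plus LUNs 1 through 4 at about 410 GiB each: LUN numbering starts at 0 and is contiguous, the first LUN meets the 400 GiB minimum, every subsequent LUN is well above the 100 GiB minimum, and all five LUNs would use the same disk type and RAID configuration.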


Gateway Types

Gateway Types
A gateway system has the same chassis and CPUs as the equivalent model number non-gateway system. See the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide for details.

DD4xxg and DD5xxg series Gateways


The DD4xx and DD5xx gateway systems have no disks in the head. The system can't boot up without LUNs.

DD6xxg Gateways
The DD6xx gateway systems have four disks used for file system configuration and location information. The DD6xx disks are not used for file system data storage; all data storage is on the external disk arrays. The system can boot up without LUNs.
Note For the DD690g, the maximum number of LUNs is 16. The maximum total limit for all LUNs is the same as the maximum limit with six shelves: 35.47 TB. The maximum data size for a LUN that a gateway Data Domain system can access is no longer limited to 2 TiB; however, LUNs larger than 10 TiB are not tested. See the table Data Domain System Capacities in the Hardware Guide.

Commands not valid for Gateway


The following disk commands are not valid for a gateway Data Domain System using 3rd-party physical disk storage. All other commands in the Data Domain command set are available.
disk beacon
disk fail
disk unfail
disk show failure-history
disk show reliability-data

Commands for Gateway only


The following additional commands are available only with the gateway Data Domain System. See Procedure: Adding a LUN on page 57 for details about using the commands.


disk add dev<dev-id>
Expand the 3rd-party physical disk storage seen by the Data Domain System to include a new LUN. Example:
# disk add dev3

disk rescan
Search 3rd-party physical disk storage for new or removed LUNs.

Disk Commands at LUN level


The following disk commands report activity and information only at the LUN level, not for individual disks in a LUN. Each disk entry represents a LUN in output from the following commands.

disk show raid-info
The following example shows two LUNs available to the Data Domain System. After the drives are "in use" line, the remainder of the drives lines are not valid.
system12# disk show raid-info
Disk   State          Additional Status
----   ------------   -----------------
1      in use (dg0)
2      in use (dg0)
----   ------------   -----------------

2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
0 drives are "not in use"
0 drives are "missing/absent"

disk show performance
Displays information similar to the following for each LUN.
system12# disk show performance
Disk   Read      Write     Cumul.    Busy
       sects/s   sects/s   MiB/sec
----   -------   -------   -------   ----
1      46        109       0.075     14 %
2      0         0         0.000     0 %
----   -------   -------   -------   ----
Cumulative                 0.075 MiB/s, 7 % busy

disk show detailed-raid-info
Displays information similar to the following for each LUN:
system12# disk show detailed-raid-info
Disk Group (dg0) - Status: normal
Raid Group (ext3):(raid-0)(61.01 GiB) - Status: normal
Raid Group (ext3_1):(raid-100)(68.64 GiB) - Status: normal
Slot   Disk   State          Additional Status
----   ----   ------------   -----------------
1      1      in use (dg0)
----   ----   ------------   -----------------
Raid Group (ppart):(raid-0)(3.04 TiB) - Status: normal
Raid Group (ppart_1):(raid-100)(3.04 TiB) - Status: normal
Slot   Disk   State          Additional Status
----   ----   ------------   -----------------
1      1      in use (dg0)
2      2      in use (dg0)
----   ----   ------------   -----------------
Spare Disks
None
Unused Disks
None

disk show hardware
Displays information similar to the following for each LUN. LUN is the LUN number used by the 3rd-party physical disk storage system. Port WWN is the world-wide name of the port on the 3rd-party physical disk storage system through which data is sent to the Data Domain System. Manufacturer/Model includes a label that identifies the manufacturer. The display may include a model ID or RAID type or other information depending on the vendor string sent by the 3rd-party physical disk storage system. Firmware is the firmware level used by the 3rd-party physical disk storage controller.



Serial No. is the serial number of the 3rd-party physical disk storage system. Capacity is the amount of data in a volume sent to the Data Domain System.

system12# disk show hardware
Disk   LUN   Port WWN                  Manufacturer/Model   Firmware   Serial No.       Capacity
----   ---   -----------------------   ------------------   --------   --------------   --------
1      0     50:06:01:60:30:20:e2:12   DGC RAID 3           0216       APM00045001866   1.56 TiB
2      4     50:06:01:60:30:20:e2:12   DGC RAID 3           0216       APM00045001866   1.56 TiB
----   ---   -----------------------   ------------------   --------   --------------   --------
2 drives present.

disk status
Displays information similar to the following. After drives are in use, the remainder of the drives lines are not valid.
system12# disk status
Normal - system operational
1 disk group total
9 drives are operational

Installation
A Data Domain System using 3rd-party physical disk storage must first connect with the 3rd-party physical disk storage and then configure the use of the storage.

Installation Procedure on DD4xxg and DD5xxg Gateways


1. For hardware setup (setting up the Data Domain System chassis), see the Data Domain System Hardware Guide.
2. On the 3rd-party physical storage disk array system, create the LUNs for use by the Data Domain System.


3. On the 3rd-party physical storage disk array system, configure LUN masking so that the Data Domain System can see only those LUNs that should be available to it. The Data Domain System writes to every LUN that is available.
4. Connect the Fibre Channel cable to one of the Fibre Channel HBA card ports on the back of the Data Domain System. The cable and the 3rd-party physical disk storage must also be connected to the FC-AL. Up to 4 cables can be used for basic connectivity and also for multipath.
5. Connect a serial terminal to the Data Domain System. A VGA console does not display the menu mentioned in the next step of this procedure.
6. Press the Power button on the front of the Data Domain System. During the initial system start, the Data Domain System does not know of the available LUNs. The following menu appears with the Do a New Install entry selected:
New Install
1. Do a New Install
2. Show Configuration
3. Reboot
7. Check that the LUNs available from the connected array system are correct. Use the down-arrow key, select Show Configuration, and press Enter. The configuration menu appears with Show Storage Information selected:
system Configuration (Before Installation)
1. Show Storage Information
2. Show Head Information
3. Go to Previous Menu
4. Go to Rescue Menu
5. Reboot

8. Press Enter to display storage information. Each LUN that is available from the array system appears as a one-line entry in the List of SCSI Disks/LUNs. The Valid RAID DiskGroup UUID List section shows no disk groups until after installation. Use the arrow keys to move up and down in the display.
Storage Details
Software Version: 4.5.0.0-62320
Valid RAID DiskGroup UUID List:
ID   DiskGroup UUID   Last Attached   Serialno
------------------------------------------------
- No diskgroup uuids were found -


List of SCSI Disks/LUNs: (Press ctrl+m for disk size information)
ID   UUID      tgt   lun   loop   wwpn               comments
--   -------   ---   ---   ----   ----------------   --------
1    No UUID   0     0     0      500601603020e212
2    No UUID   0     4     0      500601603020e212

Number of Flash disks: 1
----------------------------------------
Errors Encountered:
----------------------------------------
- No errors to report -
9. Press Enter to return to the New Install menu.
10. Use the up-arrow key to select Do a New Install.
11. Press Enter to start the installation. The system automatically configures the use of all LUNs available from the array.
12. Press Enter to accept the Yes selection in the New Install? Are you sure? display. No other user input is required. A number of displays appear during the reboot. Each one automatically times out with the displayed information, and the reboot continues.
13. When the reboot completes, the login prompt appears. Log in and configure the Data Domain System as explained in the Installation chapter of this manual, beginning with step 2 on page 18.

Installation Procedure on DD6xxg Gateways


(See also Restore system configuration after a head unit replacement (with DD690/DD690G) on page 66.)
1. For hardware setup (setting up the Data Domain System chassis), see the Data Domain System Hardware Guide.
2. On the 3rd-party physical storage disk array system, create the LUNs for use by the Data Domain System.


3. On the 3rd-party physical storage disk array system, configure LUN masking so that the Data Domain System can see only those LUNs that should be available to it. The Data Domain System writes to every LUN that is available.
4. Connect the Fibre Channel cable from the Fibre Channel Arbitrated Loop (FC-AL) to one of the Fibre Channel HBA card ports on the back of the Data Domain System. The cable and the 3rd-party physical disk storage must also be connected to the FC-AL.
5. Connect a serial terminal to the Data Domain System. A VGA console does not display the menu mentioned in the next step of this procedure.
6. Press the Power button on the front of the Data Domain System.
7. Boot up.
8. Log in as sysadmin.
9. Enter the command: disk rescan
10. To find the device name, enter the command: disk show raid-info
11. Where dev<x> is the device returned by the above command (for example, dev3), enter the command: disk add dev<x>
12. Wait 3 or 4 minutes.
13. Enter the command filesys status to verify that the system is up and running.
A hedged example of steps 9 through 13 follows.
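The following is a minimal sketch of steps 9 through 13, assuming the rescan exposes a hypothetical new device named dev3:
# disk rescan
# disk show raid-info
# disk add dev3
(wait 3 or 4 minutes)
# filesys status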

Procedure: Adding a LUN


After installing a gateway Data Domain System to use LUNs on 3rd-party physical disk storage, you can expand the volume by adding LUNs (all LUNs are seen as a single volume by the Data Domain System).
Caution Once a LUN is added to the volume used by the Data Domain System, you cannot remove the LUN. The only way to reduce the volume size is to re-install the Data Domain System operating system and reconfigure the Data Domain System. If a LUN used by a Data Domain System is removed from the 3rd-party physical disk storage, the Data Domain System file system is immediately compromised and returns an error condition.


1. On the 3rd-party physical disk storage, create the new LUN. Make sure that masking for the new LUN allows the Data Domain System to see the LUN.
2. On the Data Domain System, enter the disk rescan command to find the new LUN.
# disk rescan
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 03
Vendor: NEXSAN Model: ATAbea(C0A80B0C) Rev: 8035
Type: Direct-Access ANSI SCSI revision: 04
1 new device(s) found.
The disk show raid-info command then shows all of the previously configured LUNs (as disk 1, disk 2, and so on) and the new LUN as unknown. Also, the new LUN is referenced in the line 1 drive is "not in use". A LUN that was previously used by a different Data Domain system and that shows as foreign cannot be added.
# disk show raid-info
Disk   State          Additional Status
----   ------------   -----------------
1      in use (dg0)
2      in use (dg0)
3      unknown
----   ------------   -----------------

2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
1 drive is "not in use"
0 drives are "missing/absent"

Note At this point, the new LUN can be removed from the 3rd-party physical disk storage with no damage to the Data Domain System file system. The disk rescan command then shows the LUN as removed. After using the disk add command (the next step), you cannot safely remove the LUN.
3. Use the disk add dev<dev-id> command to add the new LUN to the Data Domain System volume. The dev-id is given in the output from the disk show raid-info command.
# disk add dev3


The 'disk add' command adds a disk to the filesystem. Once the disk is added, it cannot be removed from the filesystem without re-installing the Data Domain System.
Are you sure? (yes|no|?) [no]: yes
Output from the disk show raid-info command should now show the new disk (LUN) as in use. Output from the filesys show space command should include the new space in the Data section. A verification sketch follows.
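As a minimal verification sketch after the disk add completes:
# disk show raid-info
(The new LUN should show as "in use".)
# filesys show space
(The Data section should include the new space.)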


SECTION 2: Configuration - System Hardware, Users, Network, and Services.


System Maintenance

The Data Domain System commands system, ntp, and alias allow you to take system-level actions. Examples for the system command are shutting down or restarting the Data Domain System, displaying system problems and status, and setting the system date and time. The alias command allows users to set up aliases for Data Domain System commands. The ntp command manages access to one or more time servers. The support command sends multiple log files to the Data Domain Support organization. Support staff may ask you to use the command when dealing with unusual situations. See Collect and Send Log Files on page 139 for details.

The system Command


The system command manages system-level actions on the Data Domain System.

Shut down the Data Domain System Hardware


To shut down power to the Data Domain System, use the system poweroff operation. The operation automatically does an orderly shut down of file system processes; however, always close the Enterprise Manager graphical user interface before a poweroff operation to avoid a series of harmless warning messages when rebooting. The operation is available to administrative users only. system poweroff The display includes a warning similar to the following: # system poweroff The system poweroff command shuts down the system and turns off the power. Continue? (yes|no|?) [no]:


Reboot the Data Domain System


To shut down and reboot a Data Domain system, use the system reboot operation. The operation automatically does an orderly shutdown of the file system process; however, always close the Enterprise Manager graphical user interface before a reboot operation to avoid a series of harmless warning messages when the system reboots. Administrative users only. system reboot The display includes a warning similar to the following: # system reboot The system reboot command reboots the system. File access is interrupted during the reboot. Are you sure? (yes|no|?) [no]:

Upgrade the Data Domain System Software


You can upgrade Data Domain System software either from the Data Domain Support web site or with FTP. Upgrade points of interest:

The upgrade operation shuts down the Data Domain System file system and reboots the Data Domain System. (If an upgrade fails, call customer support.) The upgrade operation may take over an hour, depending on the amount of data on the system. After the upgrade completes and the system reboots, the /backup file system is disabled for up to an hour for upgrade processing.
Stop any active CIFS client connections before starting an upgrade. Use the cifs show active command on the Data Domain System to check for CIFS activity. Disconnect any client that is active. On the client, enter the command net use \\dd\backup /delete.
For systems that are already part of a replication pair: With directory replication, upgrade the destination and then upgrade the source. With collection replication, upgrade the source and then upgrade the destination. With one exception, replication is backwards compatible within release families (all 4.2.x releases, for example) and with the latest release of the previous family (4.3 is compatible with release 4.2, for example). The exception is bi-directional directory replication, which requires the source and destination to run the same release. Do NOT disable replication on either system in the pair.

Note Before starting an upgrade, always read the Release Notes for the new release. DD OS changes in a release may require unusual, one-time operations to perform an upgrade.


To upgrade using HTTP


1. Log in to a Data Domain System administrative host that mounts /ddvar from the Data Domain System.
2. On the administrative host, open a browser and go to the Data Domain Support web site. For example: https://support.datadomain.com
3. Log in with the Data Domain login name and password that you use for access to the support web page.
Note Some web browsers do not automatically ask for a login if a machine does not accept all logins. In that case, add your user name and password. For example: http://your-name:your-pw@support.datadomain.com
4. Click on Downloads. (If the web site has updated instructions, follow those instructions.)
5. Click on the Download button for the latest release.
6. Download the new release file to the Data Domain System directory /ddvar/releases.
Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.
7. To start the upgrade, log in to the Data Domain System as sysadmin and enter a command similar to the following. Use the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:
# system upgrade 4.0.2.0-30094.rpm

To upgrade using FTP


1. Log in to a Data Domain System administrative host that mounts /ddvar from the Data Domain System.
2. On the administrative host, use FTP to connect to the Data Domain support site. For example:
# ftp://support.datadomain.com/


3. Log in with the Data Domain login name and password that you use for access to the support web page.
4. Download the release recommended by your Data Domain field representative. The file should go to /ddvar/releases on the Data Domain System.
Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.
5. To start the upgrade, log in to the Data Domain System as sysadmin and enter a command similar to the following. Use the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:
# system upgrade 4.0.2.0-30094.rpm

Set the Date and Time


To set the system date and time, use the system set date operation. The entry is two places for month (01 through 12), two places for day of the month (01 through 31), two places for hour (00 through 23), two places for minutes (00 through 59), and optionally, two places for century and two places for year. The hour (hh) and minute (mm) entries are 24-hour military time with no colon between hours and minutes. 2400 is not a valid entry. An entry of 0000 is midnight at the beginning of a day. The operation is available to administrative users only. system set date MMDDhhmm[[cc]yy] For example, use either of the following commands to set the date and time to October 22 at 9:24 a.m. in the year 2004: # system set date 1022092404 # system set date 102209242004

Restore system configuration after a head unit replacement (with DD690/DD690G)


To restore system configuration after a head unit replacement, use the system headswap command:
system headswap
Definitions of terms:

"head unit" = The DD690 or DD690g.


"data storage" = a set of disks that make up a metagroup which houses a file system. This set of disks could be physical disks, or LUNs residing in an external storage array in a gateway system.
"DD4xxg/DD5xxg" = DD4xx or DD5xx series gateway = DD460g, DD560g, or DD580g.

There are three possible cases:
1. DD690 -> DD690 (you own a DD690, just purchased another DD690, and want to use the same storage/data).
2. DD690g -> DD690g (you own a DD690g, just purchased another DD690g, and want to use the same storage/data).
3. DD4xxg/DD5xxg -> DD690g (you own a DD4xx or DD5xx series gateway, just purchased a DD690g, and want to use the same storage/data). For this case, have an SE do step 15 for you.
(As of release 4.5.1, the system headswap command is only available when swapping to DD690/DD690g models.)

Procedure to Swap Filesystems:


1. Get sysadmin privilege and a sysadmin password (required for this command).
2. Log in as sysadmin.
3. Verify the hardware configuration:
There is a complete set of "data storage" containing file system data.
There is a "head unit" connected to the "data storage".
The "head unit" must either have no prior system configuration setting (a brand-new system), or not currently contain the system configuration setting for the "data storage" set.

4. To determine if the above conditions are met, run the 'disk status' command. If the output of 'disk status' is one of the following:
"Error - data storage unconfigured, a complete set of foreign storage attached"
"Error - system non-operational, a complete set of foreign storage attached"
or any other message indicating that the system is in need of a headswap, then continue to step 6 (the 'system headswap' command will result in a headswap operation). Otherwise, go back to step 3 and fix the hardware configuration. (Other error messages are shown below.)

5. Consider which of the three cases applies: DD690 -> DD690, DD690g -> DD690g, or DD4xxg/DD5xxg -> DD690g (for this last case, have an SE do step 15 for you).

6. Upgrade the system to the left of the arrow (DD690, DD690g, or DD4xxg/DD5xxg) to the release you want to run. Note: the system to the left of the arrow should be at least at Release 4.5.0.0.
7. Install on (or upgrade to the release you want to run) the system to the right of the arrow (DD690 or DD690g).
8. Using the system poweroff command (not the power switch), power off both systems.
Note Please do not power-cycle the system with the power switch, or hit the Reset switch, without calling Data Domain Support first. Instead, use the system poweroff command, for which you don't need to contact Data Domain Support.
9. Move the Fibre Channel cables from the DD4xxg/DD5xxg to the DD690g (or DD690 to DD690, or DD690g to DD690g) and make any necessary SAN/storage management changes.
10. Power on the new gateway and do a "disk rescan" to discover the LUNs.
11. Make sure the LUNs show up as "foreign" when issuing a "disk show raid-info" command. Then issue the "system show hardware" command to verify that you are seeing the LUNs you are expecting to see.
12. After verifying that the LUNs are visible to the new gateway as foreign devices, issue the "system headswap" command.
13. The command does the necessary checks, and once it is done with the swap, the system reboots.
14. After the system comes up, issue "disk show raid-info" again to verify that the new LUNs are part of a disk group and show up as "in use". Wait until this is so.


15. Set the system to ignore NVRAM, using the command: reg set system.IGNORE_NVRAM=1
NOTE: This is a workaround for the DD690g only, and it should not be used with any other system! For the DD4xxg/DD5xxg -> DD690g case, have an SE do this step for you!
16. Issue a "filesys enable" to bring the file system up.
17. Once the file system is up, issue "filesys status" and "filesys show space" to verify the health of the file system.
18. If directory replication contexts are present, break all replication contexts and then re-add them, then issue the "replication resync" command to resume the original replication contexts.
19. (IMPORTANT) Set the system back to not ignoring NVRAM, using the command: reg set system.IGNORE_NVRAM=0
Note If doing a headswap from a DD4xx/DD5xx-series gateway, the disk group that is created is not dg1, but rather "(dg0(2))". This is a new convention that might be confusing to someone doing this for the first time.

ERROR MESSAGES:

"No file system present, unable to headswap." There is no "data storage" present.
"Incomplete file system, unable to headswap." There is no complete set of "data storage".
"More than one file system present, unable to headswap." More than one set of "data storage" is present.
"Existing file system incomplete, headswap unnecessary." The existing incomplete "data storage" belongs to the "head unit".
"File system operational, headswap unnecessary." The system is operating normally; no headswap operation is needed.

For more information on system headswap, see the documentation for your particular platform, including the appropriate Field Replacement Unit documents and sections of the Hardware Guide.

Upgrading DD690 and DD690g


With the DD690 and DD690g, never use reinstall as a way of upgrading the system. This is why: A change with the DD690 (and DD690g) is that after you do a fresh installation on the head unit, the system may prompt you to run the "system headswap" command, and after running it and booting up you may find that the head unit has returned to the DDOS version that is on the storage.


For example, you may have a DD690 and expansion shelves running 4.5.0. You install 4.5.1 on the head unit. It asks for the system headswap command. After reboot, you find that the head unit is back at 4.5.0. This is as it should be: the head unit resyncs itself with the storage on the expansion shelves, because they are more important, as the stored data is there.

Create a Login Banner


To create a message that appears whenever someone logs in, mount the Data Domain system directory /ddvar from another system. Create a text file with your login message as the text. To have the banner appear, use the system option set login-banner command with the path and file name of the file that you created: system option set login-banner file For example, to use the text from a file named banner: # system option set login-banner /ddvar/banner
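As a sketch, from an NFS client that mounts /ddvar at the hypothetical mount point /mnt/ddvar:
# echo "Authorized users only. Activity is logged." > /mnt/ddvar/banner
# system option set login-banner /ddvar/banner
The second command runs on the Data Domain System itself and takes the path as seen by the Data Domain System, not the client mount point.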

Reset the Login Banner


To reset the login banner to the default of no banner, use the system option reset login-banner command: system option reset login-banner

Display the Login Banner Location


To display the location of the file that contains the login banner text, use the system option show command:
system option show
The command output shows the path and file name:
# system option show
Option              Value
-----------------   -------------
Login Banner File   /ddvar/banner
-----------------   -------------

Display the Ports


To display the ports, use the system show ports command. system show ports

The display is similar to the following:
# system show ports
Port   Connection   Link     Firmware      Hardware Address
       Type         Speed
----   ----------   ------   -----------   ----------------------------
0a     Enet         1 Gbps                 00:30:48:74:a3:ed (eth1)
0b     Enet         0 Gbps                 00:30:48:74:a3:ec (eth0)
3a     VTL          2 Gbps   3.03.19 IPX   20:00:00:e0:8b:1c:fd:c4 WWNN
                                           21:00:00:e0:8b:1c:fd:c4 WWPN
----   ----------   ------   -----------   ----------------------------

Port See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Link Speed is given in Gbps (Gigabits per second).
Firmware refers to the Data Domain system HBA firmware version.
Hardware Address is a MAC address, WWN, or WWPN/WWNN, as follows:
WWN is the world-wide name of the Data Domain system SAS HBA(s) on a system with expansion shelves.
WWPN/WWNN is the world-wide port name or node name from the Data Domain system FC HBA on gateway systems.

Display the Data Domain System Serial Number


To display the system serial number, use the system show serialno operation. system show serialno The display is similar to the following: # system show serialno Serial number: 22BM030026

Display system Uptime


To display the time that has passed since the last reboot and the file system uptime, use the system show uptime operation. system show uptime


The system display includes the current time, time since the last reboot (in days and hours), the current number of users, and the average load for file system operations, disk operations, and the idle time. The Filesystem line displays the time that has passed since the file system was last started.

For example: # system show uptime 12:57pm up 9 days, 18:55, 3 users, load average: 0.51, 0.42, 0.47 Filesystem has been up 9 days, 16:26

Display system Statistics


To display system statistics for CPUs, disks, Ethernet ports, and NFS, use the system show stats operation. The time period covered is from the last reboot, except with interval and count. An interval, in seconds, runs the command every number of seconds (nsecs) for the number of times in count. The first report covers the time period since the last reboot. Each subsequent report is for activity in the last interval. The default interval is five seconds. The interval and count labels are optional when giving both an interval and a count. To give only an interval, you can enter a number for nsecs without the interval label. To give only a count, you must enter the count label and a number for count. The start and stop options return averages per second of statistics over the time between the commands.
system show stats [start | stop | ([interval nsecs] [count count])]
The display is similar to the following:
# system show stats
09/30 16:23:10
CPU    FS      FS     Disk kiB/s      Disk   Net kiB/s   NVRAM   Repl
busy   ops/s   proc   read    write   busy   in    out   kiB/s   kiB/s
----   -----   ----   -----   -----   ----   ---   ---   -----   -----
9%     624     0 %    40834   37245   10%    0     0     0       0
Note kiB = kibibytes = binary equivalent of kilobytes.
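For example, either of the following hypothetical invocations reports every two seconds for ten reports (the first report still covers the period since the last reboot):
# system show stats 2 count 10
# system show stats interval 2 count 10
To average statistics over an arbitrary window instead, bracket the workload with the start and stop options:
# system show stats start
(run the workload)
# system show stats stop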

Display Detailed system Statistics


The detailed system statistics cover the time period since the last reboot. The columns in the display are:
CPUx busy The percentage of time that each CPU is busy.
State 'CDVMS' A single character shows whether any of the five following events is occurring. Each event can affect performance.
C cleaning
D disk reconstruction (repair of a failed disk), or RAID is resyncing (after an improper system shutdown and a restart), or RAID is degraded (a disk is missing and no reconstruction is in progress)
V verify data (a background process that checks for data consistency)
M merging of the internal fingerprint index
S summary vector internal checkpoint process
NFS ops/s The number of NFS operations per second.
NFS proc The fraction of time that the file server is busy servicing requests.
NFS rcv The proportion of NFS-busy time spent waiting for data on the NFS socket.
NFS snd The proportion of NFS-busy time spent sending data out on the socket.
NFS idle The percentage of NFS idle time.
CIFS ops/s The number of CIFS (Common Internet File System) operations per second.
ethx kiB/s The amount of data in kibibytes per second passing through each Ethernet connection. One column appears for each Ethernet connection.
Disk kiB/s The amount of data in kibibytes per second going to and from all disks in the Data Domain System.
Disk busy The percentage of time that all disks in the Data Domain System are busy.
NVRAM kiB/s The amount of data in kibibytes per second read from and written to the NVRAM card.
Repl kiB/s The amount of data in kibibytes per second being replicated between one Data Domain System and another. For directory replication, the value is the sum total of all in and out traffic for all replication contexts.


Note kiB = kibibytes = binary equivalent of kilobytes.

Display To display detailed system statistics, use the system show detailed-stats operation or click system Stats in the left panel of the Data Domain Enterprise Manager. The time period covered is from the last reboot, except when using interval and count. An interval, in seconds, runs the command every number of seconds (nsecs) for the number of times in count. The first report covers the time period since the last reboot. Each subsequent report is for activity in the last interval. The default interval is five seconds. The interval and count labels are optional when giving both an interval and a count. To give only an interval, you can enter a number for nsecs without the interval label. To give only a count, you must enter the count label and a number for count. The start and stop options return averages per second of statistics over the time between the commands.
system show detailed-stats [start | stop | ([interval int][count count])]
The display is similar to the following:
# system show detailed-stats
CPU0   CPU1   State   NFS     NFS    NFS    NFS    NFS    CIFS
busy   busy   CDVMS   ops/s   proc   recv   send   idle   ops/s
----   ----   -----   -----   ----   ----   ----   ----   -----
0 %    0 %            624     0%     0%     0      0      0

eth0 kiB/s   eth1 kiB/s   eth2 kiB/s   eth3 kiB/s   Disk kiB/s     Disk   NVRAM kiB/s    Repl kiB/s
in    out    in    out    in    out    in    out    read   write   busy   read   write   in    out
---   ---    ---   ---    ---   ---    ---   ---    ----   -----   ----   ----   -----   ---   ---
0     0      0     0      0     0      0     0      0      0       0      0      0       0     0


Note kiB = kibibytes = binary equivalent of kilobytes.

Display system Statistics Graphically


The graphical display of system statistics is taken from the partial output of multiple commands in the command line interface. Six continuously updated graphs form the display. Each graph is labeled in the lower left corner.
CPU The percentage of time that each CPU is busy.
Network The amount of data in kibibytes (binary equivalent of kilobytes) per second passing through each Ethernet connection. One line appears for each Ethernet connection.
NFS
recv % The proportion of NFS-busy time spent waiting for data on the NFS socket.
proc % The fraction of time that the file server is busy servicing requests.
send % The proportion of NFS-busy time spent sending data out on the socket.
Disk The amount of data in kibibytes (binary equivalent of kilobytes) per second going to and from all disks in the Data Domain System.
Replication (Displays only if the Replicator feature is licensed)
KB/s in The total number of kilobytes per second received by this side from the other side of the Replicator pair. For the destination, the value includes backup data, replication overhead, and network overhead. For the source, the value includes replication overhead and network overhead.
KB/s out The total number of kilobytes per second sent by this side to the other side of the Replicator pair. For the source, the value includes backup data, replication overhead, and network overhead. For the destination, the value includes replication and network overhead.
FS ops (File system operations per second)
NFS ops/s The number of NFS operations per second.
CIFS ops/s The number of CIFS operations per second.


Display To display general system statistics, click system Stats in the left panel of the Data Domain Enterprise Manager.

Figure 9: Graphic display of system statistics

Display system Status


The system hardware status display includes information about fans, internal temperatures, and the status of power supplies. Information is grouped by enclosure (Data Domain System or expansion shelf).

Fans displays status for all the fans cooling each enclosure:
Description tells where the fan is located in the chassis.
Level gives the current operating speed range (low, medium, high) for each fan. The operating speed changes depending on the temperature inside the chassis. See Replace Fans in the Hardware Guide to identify fans in the Data Domain System chassis by name and number. All of the fans in an expansion shelf are located inside the power supply units.
Status is the system view of fan operations.


Temperature displays the number of degrees that each CPU is below the maximum allowable temperature and the actual temperature for the interior of the chassis. The C/F column displays temperature in degrees Celsius and Fahrenheit. The Status column shows whether or not the temperature is acceptable. If the overall temperature for a Data Domain System reaches 50 degrees Celsius, a warning message is generated. If the temperature reaches 60 degrees Celsius, the Data Domain System shuts down. The CPU numbers depend on the Data Domain System model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 Celsius, the Data Domain System shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 Celsius, the Data Domain System shuts down.

Power Supply informs you that all power supplies are either operating normally or that one or more are not operating normally. The message does not identify which power supply or supplies are not functioning (except by enclosure). Look at the back panel of the enclosure and check the LED for each power supply to identify those that need replacement.
Display To display the current hardware status, use the system status operation.
system status
The display is similar to the following:
# system status
Enclosure 1
Fans
Description       Level    Status
---------------   ------   ------
Crossbar fan #1   medium   OK
Crossbar fan #2   medium   OK
Crossbar fan #3   medium   OK
Crossbar fan #4   medium   OK
Rear fan #1       medium   OK
Rear fan #2       medium   OK
---------------   ------   ------
Temperature
Description       C/F       Status
---------------   -------   ------
CPU 0 Actual      -40/-72   OK
CPU 1 Actual      -46/-83   OK
Chassis Ambient   31/88     OK
---------------   -------   ------
Power Supply
Status
------
OK
------

Display Data Transfer Performance


To display system performance figures for data transfer for an amount of time, use the system show performance operation. You can set the duration and the interval of the display. Duration is the hours, minutes, or seconds for the display to go back in time. Interval is the time between each line in the display. The default is to show the last 24 hours in 10-minute intervals. You can set duration only, but not interval only. The raw option displays unformatted statistics. The Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1000) instead of powers of 2 (1 KiB = 1024).
system show performance [raw] [duration {hr | min | sec} [interval {hr | min | sec}]]
The following example sets a duration of 30 minutes with an interval of 10 minutes:
# system show performance 30 min 10 min
Date         Time       Read        Write       Replicate   proc   recv   send   idle
----------   --------   ---------   ---------   ---------   ----   ----   ----   ----
2004/05/18   10:37:28   0.0 MiB/s   0.0 MiB/s   0.0 MiB/s   0%     0%     0%     99%
2004/05/18   10:47:28   0.0 MiB/s   0.0 MiB/s   0.0 MiB/s   0%     0%     0%     99%

Note MiB = Mebibytes = binary equivalent of Megabytes.

Display the Date and Time


To display the system date and time, use the system show date operation. system show date The display is similar to the following:


# system show date Fri Nov 12 12:06:30 PDT 2004

Display NVRAM Status


The NVRAM status display shows the size of the NVRAM card and the state of the batteries on the card.

The memory size, window size, and number of batteries identify the type of NVRAM card. The errors entry shows the operational state of the card. If the card has one or more PCI or memory errors, an alerts email is sent and the daily AM-email includes an NVRAM entry. Each battery entry should show 100% charged, enabled. The exceptions are for a new system or for a replacement NVRAM card. In both cases, the charge may initially be below 100%. If the charge does not reach 100% in three days (or if a battery is not enabled), the card should be replaced.

Display To display the NVRAM information, use the system show nvram operation.
system show nvram
The display is similar to the following:
# system show nvram
NVRAM Card:
component             value
-------------------   ---------------------
memory size           512 MiB
window size           16 MiB
number of batteries   2
errors                0 PCI, 0 memory
battery 1             100% charged, enabled
battery 2             100% charged, enabled
-------------------   ---------------------

Note MiB = Mebibytes = binary equivalent of Megabytes.

Display the Data Domain System Model Number


To display the model number of the Data Domain System, use the system show modelno command. system show modelno


For example:
# system show modelno
Model number   DD560

Display Hardware
To display the PCI cards and other hardware in a Data Domain System, use the system show hardware operation. The display is useful for Data Domain Support when troubleshooting.
system show hardware
A few sample lines from the display follow:
# system show hardware
Slot   Vendor         Device           Ports
----   ------------   --------------   ------
0      Intel          82546GB GigE     0a, 0b
1      (empty)        (empty)
2      3-Ware         8000 SATA
3      QLogic         QLE2362 2Gb FC   3a
4      (empty)        (empty)
5      Micro Memory   MM-5425CN
6      (empty)        (empty)
----   ------------   --------------   ------

Display Memory
To display a summary of the memory in a Data Domain System, use the system show meminfo operation. The display is useful for Data Domain Support when troubleshooting.
system show meminfo
For example:
# system show meminfo
Memory Usage Summary
Total memory:  7987 MiB
Free memory:   1102 MiB
Total swap:   12287 MiB
Free swap:    12287 MiB


Display the Data Domain OS Version


To display the Data Domain OS version on your system, use the system show version operation. The display gives the release number and a build identification number. system show version The display is similar to the following: # system show version Data Domain Release 3.0.0.0-12864 To display the versions of Data Domain System components on your system, use the show detailed-version operation. The display is useful for Data Domain support staff. system show detailed-version The display is similar to the following: # system show detailed-version Data Domain Release 3.0.0.0-12864 //prod/main/tools/ddr_dist/ddr_dist_files/...@12826 //prod/main/httpd/...@9826 //prod/main/app/...@12858 //tools/main/devtools/ddr/...@11444 //tools/main/devtools/README-DataDomain@10093 //tools/main/devtools/toolset.bom@3909 //prod/main/net-snmp/...@9320 //prod/main/os/lib/...@3799 . . .

Display All system Information


To display memory usage and the output from the commands: system show detailed-version, system show fans, system show modelno, system show serialno, system show uptime, and system show date, use the system show all operation. system show all

The alias Command


The alias command allows you to add, delete, or display command aliases and their definitions. See Display Aliases on page 82 for the list of default aliases.


Add an Alias
To add an alias, use the alias add name command operation. Use double quotes around the command if it includes one or more spaces. A new alias is available only to the user who creates the alias. A user cannot create a working alias for a command that is outside of that user's permission level.
alias add name command
For example, to add an alias named rely for the Data Domain System command that displays reliability statistics:
# alias add rely "disk show reliability-data"

Remove an Alias
To remove an alias, use the alias del name operation.

alias del name

For example, to remove an alias named rely:

# alias del rely

Reset Aliases
To return to the default alias list, use the alias reset operation. Administrative users only.

alias reset

Display Aliases
To display all aliases and their definitions, use the alias show operation.

alias show

The following example displays the default aliases:

# alias show
date       -> system show date
df         -> filesys show space
hostname   -> net show hostname
ifconfig   -> net config
iostat     -> system show detailed-stats 2
netstat    -> net show stats
nfsstat    -> nfs show statistics
passwd     -> user change password
ping       -> net ping
poweroff   -> system poweroff
reboot     -> system reboot
sysstat    -> system show stats
traceroute -> route trace
uname      -> system show version
uptime     -> system show uptime
who        -> user show active
You have 16 aliases

The sysstat alias can take an interval value for the number of seconds between each display of statistics. The following example refreshes the display every 10 seconds:

# sysstat 10

Time Servers and the NTP Command


The ntp command allows you to synchronize a Data Domain System with an NTP time server, manage the NTP service, or turn off the local (on the Data Domain System) NTP server. The default system settings for NTP service are enabled and multicast. A Data Domain system can use a time server supplied through the default multicast operation, received from DHCP, or set manually with the Data Domain system ntp add command.

Time servers set with the ntp add command override time servers from DHCP and from multicast operations. Time servers from DHCP override time servers from multicast operations. The Data Domain system ntp del and ntp reset commands act only on manually added time servers, not on DHCP supplied time servers. You cannot delete DHCP time servers or reset to multicast when DHCP time servers are supplied.
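For example (the time server hostname here is hypothetical), a manually added server immediately takes precedence over DHCP and multicast servers, and resetting the list falls back to DHCP-supplied servers, if any, or to multicast:

# ntp add timeserver ntp1.yourcompany.com
# ntp reset timeservers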

Enable NTP Service


To enable NTP service on a Data Domain System, use the ntp enable operation. Available to administrative users only. ntp enable

Disable NTP Service


To disable NTP service on a Data Domain System, use the ntp disable operation. Available to administrative users only. ntp disable
System Maintenance 83

Time Servers and the NTP Command

Add a Time Server


To add a remote time server to the NTP list, use the ntp add timeserver operation. Available to administrative users only.

ntp add timeserver server_name

For example, to add an NTP time server named srvr26.yourcompany.com to the list:

# ntp add timeserver srvr26.yourcompany.com

Delete a Time Server


To delete a manually added time server from the list, use the ntp del timeserver operation. Available to administrative users only.

ntp del timeserver server_name

For example, to delete an NTP time server named srvr26.yourcompany.com from the list:

# ntp del timeserver srvr26.yourcompany.com

Reset the List


To reset the time server list from manually entered time servers to either DHCP time servers (if supplied) or to the multicast mode (if no DHCP time servers supplied), use the ntp reset timeservers operation. Available to administrative users only. ntp reset timeservers

Reset All NTP Settings


To reset the local NTP server list to either DHCP time servers (if supplied) or to the multicast mode (if no DHCP time servers supplied) and reset the service to enabled, use the ntp reset operation. Available to administrative users only. ntp reset
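For example:

# ntp reset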

Display NTP Status


To display the local NTP service status, time, and synchronization information, use the ntp status operation.

ntp status

The following example shows the information that is returned:
84 Data Domain Operating System User Guide

Time Servers and the NTP Command

# ntp status
NTP Service is currently enabled.
Current Clock Time: Fri, Nov 12 2004 16:05:58.777
Clock last synchronized: Fri, Nov 12 2004 16:05:19.983
Clock last synchronized with time server: srvr26.company.com

Display NTP Settings


To display the NTP enabled/disabled setting and the time server list, use the ntp show config operation.

ntp show config

The following example shows the information that is returned:

# ntp show config
NTP Service: enabled
The Remote Time Server List is: srvr26.company.com, srvr28.company.com


Network Management
The net command manages the use of DHCP, DNS, and IP addresses, and displays network information and status. The route command manages routing rules.

Note Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost, and any data movement currently using routing is immediately cut off. Data Domain recommends making interface changes only during scheduled maintenance down times. After making interface changes, you must reconfigure any routing rules and gateways.
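For example, after an interface change you might re-apply a previously configured default gateway (the address here is hypothetical):

# route set gateway 192.168.1.2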

Ethernet Failover and Net Aggregation - Considerations


Ethernet failover and net aggregation are supported within the following guidelines. Note that when using 10Gb Ethernet cards, aggregation and failover are limited to two interfaces that must be on the same card.

Note Be sure to also see the next section, Supported Pairs.

- A Data Domain system can have up to six physical Ethernet interface ports (eth0, eth1, eth2, eth3, eth4, and eth5). Two or more interfaces (depending on the restrictions below) can be set up as a virtual interface for failover or aggregation.
- The recommended number of physical interfaces for failover is two. However, you can set up one primary interface and up to five failover interfaces (except with 10 Gb Ethernet cards, which are restricted to one primary and one failover).
- The recommended number of physical interfaces for aggregation is two. Because ports eth0 and eth1 are reserved for the motherboard, aggregation can use at most 4 (two with 10 Gb Ethernet cards) physical interfaces (eth2, eth3, eth4, eth5) configured in a virtual interface.
- Aggregation between motherboard interfaces (eth0 and eth1) and optional NIC interfaces is not supported.
- Each physical interface (eth0, eth1, eth2, eth3, eth4, eth5) can be a part of at most 1 virtual interface.
- A system can have multiple and mixed failover and aggregation virtual interfaces, subject to the restrictions above.

Virtual interfaces must be created from identical physical interfaces (all copper or all fiber or all 1 Gb or all 10 Gb).

Guidelines:

Interface         Aggregation    Failover (Both Non-10GE and 10GE)
----------------  -------------  ----------------------------------
1 Gb -> 10 Gb     Does not work  Does not work
1 Gb -> 1 Gb      SUPPORTED      SUPPORTED
10 Gb -> 10 Gb    SUPPORTED      SUPPORTED
copper -> fiber   Not supported  Not supported
copper -> copper  SUPPORTED      SUPPORTED
fiber -> fiber    SUPPORTED      SUPPORTED
MOBO -> NIC       Not supported  SUPPORTED
NIC -> NIC        SUPPORTED      SUPPORTED

When setting up a virtual interface:


- The virtual-name must be in the form veth<x>, where <x> is a number from 0 (zero) to 3.
- The physical-name must be in the form eth<x>, where <x> is a number from 0 (zero) to 5.
- Each interface used in a virtual interface must first be disabled with the net disable command. An interface that is part of a virtual interface is seen as disabled by other net commands.
- All interfaces in a virtual interface must be on the same subnet and on the same LAN or VLAN (or on the same card for 10 Gb). Network switches used by a virtual interface must be on the same subnet.
- A virtual interface needs an IP address that is set manually. Use the net config command.
- The first interface given for a virtual interface is the primary interface used, and the other interfaces are backup interfaces. If the primary interface goes down and multiple interfaces are still available, the next interface used is a random choice.
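As a sketch of these rules in sequence (the interface names, virtual interface name, and address below are hypothetical), the physical interfaces are disabled first, the virtual interface is created, and its IP address is then set manually:

# net disable eth2
# net disable eth3
# net failover add veth0 interfaces eth2,eth3
# net config veth0 192.168.10.23
# net config veth0 netmask 255.255.255.0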


Supported Pairs

Aggregation
Pairs supported: eth2-eth3, eth2-eth4, eth2-eth5, eth3-eth4, eth3-eth5, eth4-eth5.
Pairs NOT supported: eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5 (anything with eth0 or eth1).

Non-10GE Failover
Pairs supported: eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5, eth2-eth3, eth2-eth4, eth2-eth5, eth3-eth4, eth3-eth5, eth4-eth5.

10GE Failover
Pairs supported: eth2-eth3, eth4-eth5.
Pairs NOT supported: eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5, eth2-eth4, eth2-eth5, eth3-eth4, eth3-eth5.


Ethernet Failover - Set Up Failover Between Ethernet Interfaces


Ethernet failover provides improved network stability and performance. Four net failover commands control the feature. Note that a failover from one physical interface to another may take up to 60 seconds. The delay is to guard against multiple failovers when a network is unstable. See Ethernet Failover and Net Aggregation - Considerations on page 87 before setting up failover.

Set up Failover
To set up failover, use the net failover add command with a virtual interface name in the form veth<x>, where <x> is a number from 0 (zero) to 3.

net failover add virtual-ifname interfaces physical-ifnames

For example, to create a failover virtual interface named veth1 using the physical interfaces eth2 and eth3:

# net failover add veth1 interfaces eth2,eth3
Interfaces for veth1: eth2, eth3

Remove a Physical Interface from a Failover Virtual Interface


Use the net failover del command to remove a physical Ethernet interface from a failover virtual interface. The physical interface remains disabled after being removed from the virtual interface.

net failover del virtual-ifname interfaces physical-ifnames

For example, to remove eth2 from the virtual interface veth1:

# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3

Display Failover Virtual Interfaces


Use the net failover show command to display configured failover virtual interfaces. net failover show The value in the Hardware Address column belongs to the physical interface currently in use by the failover virtual interface.


# net failover show
Ifname  Hardware Address   Configured Interfaces
------  -----------------  ---------------------
veth1   00:04:23:d4:f1:27  eth3
------  -----------------  ---------------------

Delete a Virtual Failover Interface


To reset a virtual interface, removing all physical interfaces that were associated with it, use the net failover reset command:

net failover reset virtual-ifname

For example, the following command removes the virtual interface veth1 and releases all of its associated physical interfaces. (The physical interfaces are still disabled and must be enabled for any use other than as part of another virtual interface.)

# net failover reset veth1
Interfaces for veth1:

After resetting the virtual interface, the physical interfaces remain disabled. Use the net enable command to re-enable the interfaces:

# net enable eth2
# net enable eth3

Sample Failover Workflow


1. Disable the interfaces eth2, eth3, and eth4 to use as failover interfaces:

# net disable eth2
# net disable eth3
# net disable eth4

2. Create a failover virtual interface named veth1 using the physical interfaces eth2 and eth3:

# net failover add veth1 interfaces eth2,eth3
Interfaces for veth1: eth2, eth3

3. Show the configured failover virtual interfaces:

# net failover show
Ifname  Hardware Address   Configured Interfaces
------  -----------------  ---------------------
veth1   00:04:23:d4:f1:27  eth2,eth3
------  -----------------  ---------------------

4. Add the physical interface eth4 to the failover virtual interface veth1:

# net failover add veth1 interfaces eth4
Interfaces for veth1: eth2,eth3,eth4

5. Remove eth2 from the virtual interface veth1:

# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3,eth4

6. Show the configured failover virtual interfaces:

# net failover show
Ifname  Hardware Address   Configured Interfaces
------  -----------------  ---------------------
veth1   00:04:23:d4:f1:27  eth3,eth4
------  -----------------  ---------------------

7. Remove the virtual interface veth1 and release all of its associated physical interfaces:

# net failover reset veth1
Interfaces for veth1:

8. Re-enable the physical interfaces:

# net enable eth2
# net enable eth3
# net enable eth4

9. Show the failover setup:

# net failover show
No interfaces in failover mode.

Net Aggregation/Ethernet Trunking


Ethernet aggregation provides improved network stability and performance. Four net aggregate commands control the feature. (Net aggregation and Ethernet trunking mean the same thing.) See Ethernet Failover and Net Aggregation - Considerations on page 87 before setting up aggregation.

Set up link aggregation between Ethernet interfaces


Create a virtual interface with supplied physical interfaces in a specified mode. Aggregate mode must be specified, as the default is no aggregation.
92 Data Domain Operating System User Guide

To create a virtual interface with supplied physical interfaces in a specified mode, use the net aggregate add command:

net aggregate add <virtual-ifname> mode {roundrobin | xor-L2 | xor-L3L4} interfaces <physical-ifname-list>

The command creates the virtual interface virtual-ifname in the given aggregate mode with the supplied physical interfaces physical-ifname-list. The aggregated links transmit packets out of the Data Domain system. The supported aggregate modes are:

round-robin: Transmit packets in sequential order from the first available link through the last in the aggregated group.

xor-L2: Transmit based on a hash policy. An XOR of the source and destination MAC addresses generates the hash.

xor-L3L4: Transmit based on a hash policy. An XOR of the source and destination upper-layer (Layer 3 and Layer 4) protocol information generates the hash. This allows traffic to a particular network peer to span multiple slaves, although a single connection does not span multiple slaves. L3 = Layer 3 = the source and destination IP addresses. L4 = Layer 4 = the source and destination TCP or UDP ports.

For example, to enable link aggregation on the virtual interface veth1 using the physical interfaces eth2 and eth3 in mode xor-L2, use the following command:

# net aggregate add veth1 mode xor-L2 interfaces eth2 eth3

Remove selected physical interfaces from an aggregate virtual interface


To delete interfaces from the physical list of the aggregate virtual interface, use the net aggregate del command.

net aggregate del <virtual-ifname> interfaces <physical-ifname-list>

For example, to delete the physical interfaces eth2 and eth3 from the aggregate virtual interface veth1, use the following command:

# net aggregate del veth1 interfaces eth2,eth3

Display basic information on the aggregate setup


To display basic information on the aggregate setup, use the net aggregate show command. net aggregate show
For example:

# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0f:63:fc  xor-L2            eth4,eth5
------  -----------------  ----------------  ---------------------

Remove all physical interfaces from an aggregate virtual interface


To remove all physical interfaces from an aggregate virtual interface, use the net aggregate reset command.

net aggregate reset virtual-ifname
For example:

# net aggregate reset veth1
Interfaces for veth1:

Sample Aggregation Workflow


1. Disable the interfaces eth2, eth3, and eth4 to use as aggregation interfaces:

# net disable eth2
# net disable eth3
# net disable eth4

2. Enable link aggregation on the virtual interface veth1 using the physical interfaces eth2 and eth3 in mode xor-L2:

# net aggregate add veth1 mode xor-L2 interfaces eth2 eth3

3. Show the aggregate setup:

# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0b:d0:61  xor-L2            eth2,eth3

4. Delete the physical interface eth3 from the aggregate virtual interface veth1:

# net aggregate del veth1 interfaces eth3

5. Show the aggregate setup:

# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0b:d0:61  xor-L2            eth2

6. Add the physical interface eth4 to the aggregate virtual interface veth1:

# net aggregate add veth1 mode xor-L2 interfaces eth4

7. Show the aggregate setup:

# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0b:d0:61  xor-L2            eth2,eth4

8. Remove all physical interfaces from the aggregate virtual interface veth1:

# net aggregate reset veth1
Interfaces for veth1:
#

9. Re-enable the physical interfaces:

# net enable eth2
# net enable eth3
# net enable eth4

10. Show the aggregate setup:

# net aggregate show
No interfaces in aggregate mode.

The net Command


Use the net command for the following operations.

Enable an Interface
To enable a disabled Ethernet interface on the Data Domain System, use the net enable ifname operation, where ifname is the name of an interface. Administrative users only.

net enable ifname

For example, to enable the interface eth0:

# net enable eth0

Disable an Interface
To disable an Ethernet interface on the Data Domain System, use the net disable ifname operation. Administrative users only.

net disable ifname

For example, to disable the interface eth0:

# net disable eth0

Enable DHCP
To set up an Ethernet interface to expect DHCP information, use the net config ifname dhcp yes operation. Changes take effect only after a system reboot. Administrative users only.

Note To activate DHCP for an interface when no other interface is using DHCP, the Data Domain System must be rebooted. To activate DHCP for an optional gigabit Ethernet card, either have a network cable attached to the card during the reboot or, after attaching a cable, run the net enable command for the interface.

net config ifname dhcp yes

For example, to set DHCP for the interface eth0:

# net config eth0 dhcp yes

To check the operation, use the net show config command. To check that the Ethernet connection is live, use the net show hardware command.

Disable DHCP
To set an Ethernet interface to not use DHCP, use the net config ifname dhcp no operation. After the operation, you must set an IP address for the interface. All other DHCP settings for the interface are retained. Administrative users only.

net config ifname dhcp no

For example, to disable DHCP for the interface eth0:

# net config eth0 dhcp no

To check the operation, use the net show config command.
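A typical follow-up, since the interface now needs a manually set address (the address and netmask here are hypothetical):

# net config eth0 192.168.1.10
# net config eth0 netmask 255.255.255.0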

Change an Interface Netmask


To change the netmask used by an Ethernet interface, use the net config ifname netmask mask operation. Administrative users only. net config ifname netmask mask For example, to set the netmask 255.255.255.0 for the interface eth0:


# net config eth0 netmask 255.255.255.0

Change an Interface Transfer Unit Size


To change the maximum transfer unit size for an Ethernet interface, use the net config ifname mtu operation. Supported values are from 256 to 9014. For 100 Base-T and gigabit networks, 1500 is the standard default. The default option returns the setting to the default value. Make sure that all of your network components support the size set with this option. Administrative users only.

net config ifname mtu {size | default}

For example, to set a maximum transfer unit size of 9014 for the interface eth2:

# net config eth2 mtu 9014

Add or Change DNS servers


To add or change the DNS servers the Data Domain System uses to resolve addresses, use the net set dns ipaddr operation to give DNS server IP addresses. The operation writes over the current list of DNS servers; only the servers given in the latest command are available to a Data Domain System. The list can be comma-separated, space-separated, or both. Changes take effect only after a system reboot. Administrative users only.

net set dns ipaddr1[,ipaddr2[,ipaddr3]]

Note To activate a DNS change, the Data Domain System must be rebooted.

For example, to allow a Data Domain System to use a DNS server with an IP address of 123.234.78.92:

# net set dns 123.234.78.92

To check the operation, use the net ping host-name command.

Ping a Host
To check that a Data Domain System can communicate with a remote host, use the net ping operation with a hostname or IP address.

net ping hostname [broadcast] [count n] [interface ifname]

broadcast  Allows pinging a broadcast address.
count      Gives the number of pings to issue.
interface  Gives the interface to use: eth0 through eth3.

For example, to check that communication is possible with the host srvr24:

# net ping srvr24
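A sketch combining the optional arguments (the broadcast address here is hypothetical):

# net ping 192.168.1.255 broadcast count 3 interface eth0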

Change the Data Domain System Hostname


To change the name other systems use to access the Data Domain System, use the net set hostname host operation. Administrative users only.

net set hostname host

For example, to set the Data Domain System name to dd10:

# net set hostname dd10

To check the operation, use the net show hostname command.

Note If the Data Domain System is using CIFS with Active Directory authentication, changing the hostname causes the Data Domain System to drop out of the domain. Use the cifs set authentication command to rejoin the Active Directory domain.

Change an Interface IP Address


To change the IP address used by a Data Domain System Ethernet interface, use the net config ifname ipaddr operation. If the interface is configured for DHCP, the command returns an error. Use the net config ifname dhcp no command to turn off DHCP for an interface. See Disable DHCP on page 96 for details. Administrative users only.

net config ifname ipaddr

For example, to set the interface eth0 to the IP address of 192.168.1.1:

# net config eth0 192.168.1.1

Use the net show config command to check the operation.

Change the Domain Name


To change the domain name used by the Data Domain System, use the net set domainname dm.name operation. Administrative users only. net set domainname dm.name For example, to set the domain name to yourcompany-ny.com:

# net set domainname yourcompany-ny.com

Add a Hostname/IP Address to the /etc/hosts File


To associate an IP address with a hostname, use the net hosts add operation. The hostname is a fully-qualified domain name or a simple hostname. In a list, separate each entry with a space and enclose the list in double quotes. The entry is added to the /etc/hosts file. Administrative users only.

net hosts add ipaddr {host | alias host} ...

For example, to associate both the fully-qualified domain name bkup20.yourcompany.com and the hostname bkup20 with an IP address of 192.168.3.3:

# net hosts add 192.168.3.3 bkup20 bkup20.yourcompany.com

Reset Network Parameters


To reset the hostname, domain name, and DNS parameters to their default values (empty), use the net reset operation. The command requires at least one parameter and accepts multiple parameters. Changes take effect only after a system reboot. Administrative users only. net reset {hostname | domainname | dns} For example, to reset the system host name: # net reset hostname
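The command accepts multiple parameters, so a single hypothetical invocation could reset both the domain name and the DNS list:

# net reset domainname dns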

Set Interface Duplex Line Use


To manually set the line use for an interface to half-duplex or full-duplex, use the net config ifname duplex operation and set the speed at the same time. Half-duplex is not available for any port set for a speed of 1000 (Gigabit). Note: Not applicable with 10Gb Ethernet cards. Administrative users only.

net config ifname duplex {full | half} speed {10 | 100 | 1000}

For example, to set the line use to half-duplex for interface eth1:

# net config eth1 duplex half speed 100

Set Interface Line Speed


To manually set the line speed for an interface to 10 Base-T, 100 Base-T, or 1000 Base-T (Gigabit), use the net config ifname speed operation. A line speed of 1000 allows only a duplex setting of full. Setting a port to a speed of 1000 and duplex of half leads to unpredictable results. Note: Not applicable with 10Gb Ethernet cards. Administrative users only.

net config ifname speed {10 | 100 | 1000}

For example, to set the line speed to 100 Base-T for interface eth1:

# net config eth1 speed 100

Set Autonegotiate for an Interface


To allow the network interface card to autonegotiate the line speed and duplex setting for an interface, use the net config ifname autoneg operation. Note: Not applicable with 10Gb Ethernet cards. Administrative users only.

net config ifname autoneg

For example, to set autonegotiation for interface eth1:

# net config eth1 autoneg

Delete a Hostname/IP address from the /etc/hosts File


To delete a hostname/IP address entry from the /etc/hosts file, use the net hosts del operation. Administrative users only.

net hosts del ipaddr

For example, to remove the hosts with an IP address of 192.168.3.3:

# net hosts del 192.168.3.3

Delete All Hostname/IP addresses from the /etc/hosts File


To delete all hostname/IP address entries from the /etc/hosts file, use the net hosts reset operation. Administrative users only. net hosts reset

Display Hostname/IP addresses from the /etc/hosts File


To display hostname/IP addresses from the /etc/hosts file, use the net hosts show operation. Administrative users only. net hosts show The display looks similar to the following:


# net hosts show
Hostname Mappings:
192.168.3.3 -> bkup20 bkup20.yourcompany.com

Display an Ethernet Interface Configuration


To display the current network driver settings for an Ethernet interface, use the net show config operation. With no ifname, the command returns configuration information for all Ethernet interfaces. net show config [ifname]

A display for interface eth0 looks similar to the following:

# net show config eth0
eth0  Link encap:Ethernet  HWaddr 00:02:B3:B0:8A:D2
      inet addr:192.168.240.187  Bcast:123.456.78.255  Mask:255.255.255.0
      UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:3081076 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1533783 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:100
      RX bytes:3764464 (3.5 Mb)  TX bytes:136647745 (130.3 Mb)
      Interrupt:20 Base address:0xc000

Display Interface Settings


The display of Ethernet interface settings shows what you have configured, not the actual status of each interface. For example, if an interface on the Data Domain System does not have a live Ethernet connection, the interface is not actually enabled. To check the actual status of interfaces, use the net show hardware command or see Network Hardware State in the Data Domain Enterprise Manager. Both show a Cable column entry of yes for live Ethernet connections.

Port lists each Ethernet interface by name.
Enabled shows whether or not the port is configured as enabled.
DHCP shows whether or not port characteristics are supplied by DHCP. If a port uses DHCP for configuration values, the display does not have values for the remaining columns.
IP address is the address used by the network to identify the port.
Netmask is the standard IP network mask.

Display

Use the net show settings operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Settings.

net show settings

The display is similar to the following:

# net show settings
Ethernet settings:
port  enabled  DHCP  IP address       netmask
----  -------  ----  ---------------  ---------------
eth0  yes      yes   (dhcp-supplied)  (dhcp-supplied)
eth1  no       n/a   n/a              n/a
eth2  yes      no    192.168.10.187   255.255.255.0
eth3  yes      yes   (dhcp-supplied)  (dhcp-supplied)
----  -------  ----  ---------------  ---------------

Display Ethernet Hardware Information


The display of the actual status of Ethernet connections has the following columns:

Port is for the four Ethernet interfaces, eth0 through eth3. All Ethernet interfaces use the Gigabit data transmission speed of 1000 Base-T.
Speed is the actual speed at which the port currently deals with data.
Duplex shows whether the port is using the full or half duplex protocol.
Supp. Speeds lists all the speeds that the port is capable of using.
Hardware Address is the MAC address.
Physical shows whether the port is Copper or Fiber.
Cable shows whether or not the port currently has a live Ethernet connection.

Display

Use the net show hardware operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Hardware State.

net show hardware

The display looks similar to the following:

# net show hardware
Port  Speed     Duplex   Supp Speeds  Hardware Address   Physical  Cable
----  --------  -------  -----------  -----------------  --------  -----
eth0  100Mb/s   full     10/100/1000  00:02:b3:b0:8a:d2  Copper    yes
eth1  unknown   unknown  10/100/1000  00:02:b3:b0:80:3f  Copper    no
eth2  1000Mb/s  full     10/100/1000  00:07:e9:0d:5a:1a  Copper    yes
eth3  unknown   unknown  10/100/1000  00:07:e9:0d:5a:1b  Copper    no

Display the Data Domain System Hostname


To display the current hostname used by the Data Domain System, use the net show hostname operation.

net show hostname

The display is similar to the following:

# net show hostname
The Hostname is: dd10.yourcompany.com

Display the Domain Name Used for Email


To display the domain name used for email sent by a Data Domain System, use the net show domainname operation.

net show domainname

The display looks similar to the following:

# net show domainname
The Domainname is: yourcompany.com

Display DNS Servers


To display the DNS servers used by a Data Domain System, use the net show dns operation.

net show dns

The display looks similar to the following. The last line reports whether the servers were configured manually or by DHCP.

# net show dns
#  Server
-  -----------
1  192.168.1.3
2  192.168.1.4
-  -----------
Showing DNS servers configured manually.

Display Network Statistics


To display network statistics, use the net show stats operation. The information returned from all the options is used by Data Domain support staff for troubleshooting.

net show stats [all | interfaces | listening | route | statistics]

all         Display summaries of the other options.
interfaces  Display the kernel interface table and a table of all network interfaces and their activity.
listening   Display statistics about active internet connections from servers.
route       Display the IP routing tables showing the destination, gateway, netmask, and other information for each route.
statistics  Display network statistics for protocols.

The display with no options is similar to the following, with statistics about live client connections:

# net show stats
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address     Foreign Address    State
tcp        0     20 123.234.78.90:21  123.234.78.11:512  ESTABLISHED
tcp        0      0 123.234.78.90:34  123.234.78.27:673  TIME_WAIT

Display All Networking Information


To display the output from the commands net show config, net show settings, net show domainname, net show hostname, net show hardware, net show dns, and net show stats, use the net show all operation.

net show all


The route Command


Use the route command to manage routing between a Data Domain System and backup hosts. An added routing rule appears in the Kernel IP routing table and in the Data Domain System Route Config list, a list of static routes that are re-applied at each system boot. Use the route show config command to display the Route Config list. Use the route show table command to display the Kernel IP routing table.

Note Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost, and any data movement currently using routing is immediately cut off. Data Domain recommends making interface changes only during scheduled maintenance down times. After making interface changes, you must reconfigure any routing rules and gateways.

Add a Routing Rule


To add a routing rule, use the route add -host or route add -net operation. If the target being added is a network, use the -net option. If the target is a host, use the -host option. The gateway can be either an IP address or a hostname that is available to the Data Domain System and that can be resolved to an IP address. Administrative users only.

route add -host host-name gw gw-addr
route add -net ip-addr netmask mask gw gw-addr

To add a route for the host user24 with a gateway of srvr12:

# route add -host user24 gw srvr12

To add a route with a route specification of 192.168.1.x, a netmask, and a gateway of srvr12:

# route add -net 192.168.1.0 netmask 255.255.255.0 gw srvr12

The following example gives a default gateway of srvr14 for use when no other route matches:

# route set gateway srvr14

Remove a Routing Rule


To remove a routing rule, use the route del -host or del -net operation. Use the same form (-host or -net) to delete a rule as was used to create the rule. The route show config command shows whether the entry is a host name or a net address. If neither -host nor -net is used, any matching lines in the Route Config list are deleted. Administrative users only.

route del -host host-name
route del -net ip-addr netmask mask

To remove a route for host user24:

# route del -host user24

To remove a route with a route specification of 192.168.1.x and a gateway of srvr12:

# route del -net 192.168.1.0 netmask 255.255.255.0 gw srvr12

Change the Routing Default Gateway


To change the routing default gateway, use the route set gateway ipaddr operation. Administrative users only.

route set gateway ipaddr

For example, to set the default routing gateway to the IP address of 192.168.1.2:

# route set gateway 192.168.1.2

Reset the Default Routing Gateway


To reset the default routing gateway to the default value (empty), use the route reset operation. Administrative users only. route reset gateway
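For example:

# route reset gateway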

Display a Route
To display a route used by a Data Domain System to connect with a particular destination, use the route trace operation.

route trace host

For example, to trace the route to srvr24:

# route trace srvr24
traceroute to srvr24.yourcompany.com (192.168.1.6), 30 hops max, 38 byte packets
1 srvr24 (192.168.1.6) 0.163 ms 0.178 ms 0.147 ms

Display the Configured Static Routes


To display the configured static routes that are in the Route Config list, use the route show config operation.

route show config

The display looks similar to the following:


# route show config
The Route Config list is:
-host user24 gw srvr12
-net 192.168.1.0 netmask 255.255.255.0 gw srvr12

Display the Kernel IP Routing Table


To display all entries in the Kernel IP routing table, use the route show table operation.

route show table

The display looks similar to the following:

# route show table
Kernel IP routing table
Destination  Gateway      Genmask        Flags  Metric  Ref  Use  Iface
192.168.1.0  0.0.0.0      255.255.255.0  U      0       0    0    eth0
127.0.0.0    0.0.0.0      255.0.0.0      U      0       0    0    lo
0.0.0.0      192.168.1.2  0.0.0.0        UG     0       0    0    eth0

Display the Default Routing Gateway


To display the configured or DHCP-supplied routing gateways used by a Data Domain System, use the route show gateway operation.

route show gateway

The display looks similar to the following:

# route show gateway
Default Gateways
192.168.1.2
192.168.3.4


Multiple Network Interface Usability Improvement


DDOS 4.5 includes a kernel enhancement for systems with multiple configured IP addresses. When a request for service arrives on one IP address of a Data Domain System, the response goes out from that same IP address as the first choice. For high-availability connectivity, if no route can be found from the interface on which the IP address is configured, a route from another interface, if one is found, is used to send out the response. This change is particularly useful in scenarios that require the Restorer and the media server to be dual-homed on different IP subnets, and it is also valuable when using the DDOS failover feature. With release 4.5, a system can make better use of more than one link on the same subnet: high availability and load balancing are now possible, removing a performance bottleneck. In short, a direct benefit of this routing enhancement is that Data Domain Systems allow multiple interfaces to be configured on the same subnet while keeping each of them working independently.
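For example, a hypothetical configuration that puts two interfaces on the same subnet; with this enhancement, each interface responds to requests on its own address:

# net config eth2 192.168.10.21
# net config eth2 netmask 255.255.255.0
# net config eth3 192.168.10.22
# net config eth3 netmask 255.255.255.0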


Access Control for Administration


The Data Domain System adminaccess command allows remote hosts to use the FTP, TELNET, and SSH administrative protocols on the Data Domain System. The command is available only to Data Domain System administrative users.

The FTP and TELNET protocols have host-machine access lists that limit access. The SSH protocol is open to the default user sysadmin and to all Data Domain System users added with the user add command. By default, only the SSH protocol is enabled.

Add a Host
To add a host (IP address or hostname) to the FTP or TELNET protocol access lists, use the adminaccess add operation. You can enter a list that is comma-separated, space-separated, or both. To give access to all hosts, the host-list can be an asterisk (*). Administrative users only.

adminaccess add {ftp | telnet | ssh | http} host-list

The host-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For SSH, TCP wrappers are used, and the /etc/hosts.allow and /etc/hosts.deny files are updated. For HTTP/HTTPS, Apache's mod_access is used for host-based access control, and the /usr/local/apache2/conf/httpd-ddr.conf file is updated.

For example, to add srvr24 and srvr25 to the list of hosts that can use TELNET on the Data Domain System:

# adminaccess add telnet srvr24,srvr25

Netmasks, as in the following examples, are supported:

# adminaccess add ftp 192.168.1.02/24
# adminaccess add ftp 192.168.1.02/255.255.255.0
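The wildcard-domain form described above might be used as follows (the domain is hypothetical):

# adminaccess add telnet *.yourcompany.com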

Remove a Host
To remove hosts (IP addresses, hostnames, or an asterisk (*)) from the FTP or TELNET access lists, use the adminaccess del operation. You can enter a list that is comma-separated, space-separated, or both. Administrative users only.

adminaccess del {ftp | telnet} host-list

For example, to remove srvr24 from the list of hosts that can use TELNET on the system:

# adminaccess del telnet srvr24

Allow Access from Windows


To allow access using SSH, TELNET, and FTP for Windows domain users who have no local account on the Data Domain System, use the adminaccess authentication add cifs operation. For administrative access, the user in the Windows domain must be in either the standard Windows Domain Admins group or in a group that you create named Data Domain. Users from both group names are always accepted as administrative users on the Data Domain system. (User-level access is not allowed.)

adminaccess authentication add cifs

The SSH, TELNET, or FTP command that accesses the Data Domain System must include, in double quotes, the domain name, a backslash, and the user name. For example:

C:> ssh domain2\djones@ddr22

The login to the Data Domain System requires you to enter the password twice.

Restrict Administrative Access from Windows


To reverse the ability for users to access the Data Domain System if they have no local account, use the adminaccess authentication del cifs operation. adminaccess authentication del cifs

Reset Windows Administrative Access to the Default


To reset Windows administrative authentication to the default of requiring a local account, use the adminaccess authentication reset cifs operation. adminaccess authentication reset cifs


Enable a Protocol
By default, the SSH, HTTP, and HTTPS services are enabled; FTP and TELNET are disabled. HTTP and HTTPS allow users to log in through the web-based graphical user interface. The adminaccess enable operation enables a protocol on the Data Domain System. Note that to use FTP and TELNET, you must also add host machines to the access lists. Administrative users only.

adminaccess enable {http | https | ftp | telnet | ssh | all}

For example, to enable the FTP service:

# adminaccess enable ftp

Disable a Protocol
To disable a service on the Data Domain System, use the adminaccess disable operation. Disabling FTP or TELNET does not affect entries in the access lists. If all services are disabled, the Data Domain System is accessible only through a serial console or keyboard and monitor. Administrative users only.

adminaccess disable {http | https | ftp | telnet | ssh | all}

For example, to disable the FTP service:

# adminaccess disable ftp

Reset system Access


By default, FTP and TELNET are disabled and have no entries in their access lists, and SSH is enabled. No one is able to use FTP or TELNET unless the appropriate access list has one or more host entries. The adminaccess reset operation returns the FTP and TELNET protocols to the default state of disabled with no entries and sets SSH to enabled. Administrative users only.

adminaccess reset {ftp | telnet | ssh | all}

For example, to reset the FTP list to an empty list and reset FTP to disabled:

# adminaccess reset ftp


Add an Authorized SSH Public Key


Adding an authorized SSH public key to the SSH key file on a Data Domain System is done from a machine that accesses the Data Domain System. Adding a key allows a user to log in from the remote machine to the Data Domain System without entering a password. After creating a key on the remote machine, use the adminaccess add ssh-keys operation. Administrative users only.

adminaccess add ssh-keys

For example, the following steps create a key and then write the key to a Data Domain System:

1. On the remote machine, create the public and private SSH keys:

jsmith> ssh-keygen -d
Generating public/private dsa key pair.
Enter file in which to save the key (/home/jsmith/.ssh/id_dsa):

2. Press Enter to accept the file location and other defaults. The public key created under /home/jsmith/.ssh (in this example) is id_dsa.pub.

3. On the remote machine, write the public key to the Data Domain System, dd10 in this example. The Data Domain System asks for the sysadmin password before accepting the key:

jsmith> ssh -l sysadmin dd10 adminaccess add ssh-keys \
< ~/.ssh/id_dsa.pub

Remove an SSH Key File Entry


To remove one entry from the SSH key file, use the adminaccess del ssh-keys lineno operation. The lineno variable is the line number as displayed by the adminaccess show ssh-keys command. Available only to administrative users.

adminaccess del ssh-keys lineno

For example, to remove the third entry in the SSH key file:

# adminaccess del ssh-keys 3

Remove the SSH Key File


To remove the entire SSH key file, use the adminaccess reset ssh-keys operation. Available only to administrative users. adminaccess reset ssh-keys

Create a New HTTPS Certificate


To generate a new HTTPS certificate for the Data Domain System, use the adminaccess https generate certificate command. Available only to administrative users. adminaccess https generate certificate
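For example:

# adminaccess https generate certificate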

Display the SSH Key File


To display all entries in the SSH key file, use the adminaccess show ssh-keys operation. The output gives a line number to each entry. Available only to administrative users. adminaccess show ssh-keys

Display Hosts and Status


The display shows every access service available on a Data Domain System, whether or not the service is enabled, and a list of hostnames that are allowed access through each service that uses a list. An N/A in the Allowed Hosts column means that the service does not use a list. A - (dash) means that the service can have a list, but currently has no hosts in the list. Administrative users only.

Display

To display protocol access lists and status, use the adminaccess show operation or click Admin Access in the left panel of the Data Domain Enterprise Manager.

adminaccess show

For example, to show the status and lists for all services:

# adminaccess show
Service  Enabled  Allowed Hosts
-------  -------  ------------------------
ssh      yes      N/A
telnet   no       admin10.yourcompany.com
ftp      yes      admin22.yourcompany.com
http     yes      N/A
https    no       N/A
-------  -------  ------------------------


Display Windows Access Setting


To display the current value of the setting that allows Windows administrative users to access a Data Domain System when no local account exists, use the adminaccess authentication show command. adminaccess authentication show

Procedure: Return Command Output to a Remote Machine


Using SSH, you can have output from Data Domain System commands return to a remote machine at login and then automatically log out. Available only to the user sysadmin. For example, the following command connects with the machine dd10 as user sysadmin, asks for the password, and returns output from the command filesys status:

# ssh -l sysadmin dd10 filesys status
sysadmin@dd10's password:
The filesystem is enabled

You can create a file with a number of Data Domain System commands, one command per line, and then use the file as input to the login. Output from all the commands is returned. For example, a file named cmds11 could contain the following commands:

filesys status
system show uptime
nfs status

The login and the returned data look similar to the following:

# ssh -l sysadmin dd10 < cmds11
sysadmin@dd10's password:
The filesystem is enabled
3:00 pm up 14 days 10 hours 15 minutes 1 user, load average: 0.00, 0.00, 0.00
Filesystem has been up 14 days 10:13
The NFS system is currently active and running
Total number of NFS requests handled = 314576

To use scripts that return output from a Data Domain System, see Add an Authorized SSH Public Key on page 112 to eliminate the need for a password.


User Administration

The Data Domain System command user adds, removes, and displays users and changes user passwords. A Data Domain System has two classes of user accounts. The user class is for standard users, who have access to a limited number of commands; most of the user-class commands display information. The admin class is for administrative users, who have access to all Data Domain System commands. The default administrative account is sysadmin. You can change the sysadmin password but cannot delete the account. Throughout this manual, command explanations include text similar to the following for commands or operations that standard users cannot access: Available to administrative users only.

Add a User
To add a Data Domain System user, use the user add user-name operation. The operation asks for a password and confirmation, or you can include the password as part of the command. Each user has a privilege level of either admin or user. Admin is the default. To change a user's privilege level later, use the user change operation (see Change a Privilege Level). Available to administrative users only. A user name must start with an alpha character.

user add user-name [password password] [priv {admin | user}]

Note The user names root and test are default existing names on every Data Domain System and are not available for general use. Use the existing sysadmin user account for administrative tasks.

For example, to add a user with a login name of jsmith, a password of usr256, and administrative privilege:

# user add jsmith password usr256 priv admin

Remove a User
To remove a user from a Data Domain System, use the user del user-name operation. Available to administrative users only.


user del user-name

For example, to remove a user with a login name of jsmith:

# user del jsmith
user jsmith removed

Change a Password
To change a user password, including the password for the sysadmin user, use the user change password user-name operation. The operation asks for the new password and then asks you to re-enter the password as a confirmation. Without the user-name argument, the command changes the password for the current user. Available to sysadmin to change any user password, and available to all users to change only their own password.

user change password [user-name]

For example, to change the password for a user with a login name of jsmith:

# user change password jsmith
Enter new password:
Re-enter new password:
Passwords matched

Reset to the Default User


To reset the user list to the one factory default user, sysadmin, use the user reset operation. Available to administrative users only.

user reset

The response looks similar to the following, which lists all removed users:

# user reset
Removing user jsmith
Removing user bjones
Can not remove user sysadmin

Change a Privilege Level


To change a user's privilege level, use the user change user-name operation with a keyword of admin or user. Available to users who currently have the admin privilege.

user change user-name {admin | user}

For example, to change the privilege level from admin to user for the login name of jsmith:

# user change jsmith user

Display Current Users


The display of users currently logged in to a Data Domain System shows:

Name is the user's login name.
Idle is the amount of time logged in with no actions from the user.
Login Time is the date and time when the user logged in.
Login From shows the address from which the user logged in.
tty is the hardware or network port through which the user is logged in, or GUI for users logged in through the Data Domain Enterprise Manager web-based interface.
Session is the user session number.

Display

Use the user show active operation or click Users in the left panel of the Data Domain Enterprise Manager and look at Logged in Users.

user show active

The display looks similar to the following:

# user show active
Name      Idle  Login Time        Login From          tty    Session
--------  ----  ----------------  ------------------  -----  -------
jsmith    18h   Thu Nov 11 15:46  jsmith.company.com  GUI    3262
sysadmin  0s    Fri Nov 12 09:44                      pts/0  26772
--------  ----  ----------------  ------------------  -----  -------


Display All Users


The display of all users known to the Data Domain System is available to administrative users only. The information given is:

Name is the user's login name.
Class is the user's access level: an administrator, or a user who can see most information displays.
Last login from shows the address from which the user last logged in.
Last login time is the date and time when the user last logged in.

Display

Use the user show list operation or click Users in the left panel of the Data Domain Enterprise Manager and look at All Users.

user show list

The display is similar to the following:

# user show list
Name      Class  Last login from     Last login time
--------  -----  ------------------  ------------------------
sysadmin  admin  user24.company.com  Fri Nov 12 14:55:47 2004
rjones    user   user25.company.com  Fri Nov 12 12:36:30 2004
jsmith    user   user26.company.com  (never)
--------  -----  ------------------  ------------------------
3 users found.


Configuration Management

The Data Domain System config command allows you to examine and modify all of the configuration parameters that are set in the initial system configuration. The license command allows you to add, delete, and display feature licenses. Note The migration command copies all data from one Data Domain system to another. The command is usually used when upgrading from a smaller Data Domain system to a larger Data Domain system. For information on migration, see the chapter Replication - CLI.

The config Command


The config setup command brings up the same prompts as the initial system configuration. You can change any of the configuration parameters as detailed in the section Login and Configuration on page 14. All of the config operations are available only to administrative users. You can also use other Data Domain System commands to change individual configuration settings. Most of the remaining chapters of this manual detail using individual commands. An example of an individual command that sets only one of the config possibilities is nfs add to add NFS clients.

Change Configuration Settings


To change multiple configuration settings with one command, use the config setup operation. The operation displays the current value for each setting. Press the Return key to retain the current value for a setting. Administrative users only. config setup See Login and Configuration on page 14 for details about using config setup. Enter the command from a command prompt to change values after the initial setup. Many other Data Domain System commands change configuration settings. For example, the user command adds another user account each time a user is added.


Note You can also use the Data Domain Enterprise Manager graphical user interface to change all of the same parameters that are available through the config setup command. In the Data Domain Enterprise Manager, select Configuration Wizard in the top section of the left panel.

Save and Return a Configuration


Using SSH, you can direct output from the Data Domain System config dump command, which returns all Data Domain System configuration settings, into a file on a remote host from which you do Data Domain System administration. You can later use SSH to return the file to the Data Domain System, which immediately recognizes the settings as a configuration and accepts the settings as the current configuration.

For example, the following command connects with the Data Domain System dd10 as user sysadmin, asks for the password, runs config dump on the Data Domain System, and stores the output in the local file (remote from the Data Domain System) /tmp/config12:

# ssh -l sysadmin dd10 config dump > /tmp/config12
sysadmin@dd10's password:
reg set config.aliases.default_set.root = '1'
reg set config.aliases.default_set.sysadmin = '1'
reg set config.aliases.sysadmin.df = 'filesys show space'
reg set config.aliases.sysadmin.halt = 'system poweroff'
.
.
.

The following command returns the configuration settings from the file /tmp/config12 to the Data Domain System. The settings immediately become the current configuration for the Data Domain System.

# ssh -l sysadmin dd10 < /tmp/config12
sysadmin@dd10's password:
Reloading configuration: (CHECKED)
Security access lists (from adminaccess) updated
Bringing up DHCP client daemon for eth0...
Bringing up DHCP client daemon for eth2...


Reset the Location Description


To reset the location description to the system default of a null entry, use the config reset location command. Administrative users only. config reset location

Reset the Mail Server to a Null Entry


To reset the mail server used by the Data Domain System to the system default of a null entry, use the config reset mailserver command. Administrative users only. config reset mailserver

Reset the Time Zone to the Default


To reset the time zone used by the Data Domain System to the system default of US/Pacific, use the config reset timezone command. Administrative users only. config reset timezone

Set an Administrative Email Address


To give an administrative address to which the Data Domain System sends all alerts and autosupport messages, use the config set admin-email command. The address is also used as the required From address for alerts and autosupport emails to other recipients. The system needs only one administrative email address. Use the autosupport and alerts commands to add other email addresses. Administrative users only.

config set admin-email email-address

For example:

# config set admin-email jsmith@company.com
The Admin email is: jsmith@company.com

To check the operation, use the config show admin-email command.


Set an Administrative Host Name


To change the machine from which you can log into the Data Domain System to see system logs and use system commands, use the config set admin-host host operation. The host name can be a simple host name, a host name with a fully-qualified domain name, or an IP address. Administrative users only. config set admin-host host For example, to set the administrative host to admin12.yourcompany.com: # config set admin-host admin12.yourcompany.com To check the operation, use the config show admin-host command.

Change the System Location Description


To change the description of a Data Domain System location, use the config set location location operation. A description of a physical location helps identify the machine when viewing alerts and autosupport emails. If the description contains one or more spaces, the description must be in double quotes. Administrative users only. config set location location For example, to set the location description to row2-num4-room221: # config set location row2-num4-room221 To check the operation, use the config show location command.

Change the Mail Server Hostname


To change the SMTP mail server used by the Data Domain System, use the config set mailserver host operation. Administrative users only. config set mailserver host For example, to set the mail server to mail.yourcompany.com: # config set mailserver mail.yourcompany.com To check the operation, use the config show mailserver command.


Set a Time Zone for the System Clock


To set the system clock to a specific time zone, use the config set timezone operation. The default setting is US/Pacific. See the appendix: Time Zones on page 451 for a complete list of time zones. For the change to take effect for all currently running processes, you must reboot the Data Domain System. The operation is available to administrative users only.

config set timezone zone

For example, to set the system clock to the time zone that includes Los Angeles, California, USA:

# config set timezone Los_Angeles

To display time zones, enter a category or a partial zone name. The categories are: Africa, America, Asia, Atlantic, Australia, Brazil, Canada, Chile, Europe, Indian, Mexico, Mideast, Pacific, and US. The following examples show the use of a category and the use of a partial zone name:

# config set timezone us
US/Alaska     US/Aleutian       US/Arizona    US/Central
US/Eastern    US/East-Indiana   US/Hawaii     US/Indiana-Starke
US/Michigan   US/Mountain       US/Pacific    US/Samoa

# config set timezone new
Ambiguous timezone name, matching ...
America/New_York
Canada/Newfoundland

Display the Administrative Email Address


To display the administrative email address that the Data Domain System uses for email from the alerts and autosupport utilities, use the config show admin-email operation. config show admin-email The display is similar to the following: # config show admin-email The Admin Email is: rjones@yourcompany.com

Display the Administrative Host Name


To display the administrative host from which you can log into the Data Domain System to see system logs and use system commands, use the config show admin-host operation. config show admin-host The display is similar to the following: # config show admin-host The Admin Host is: admin12.yourcompany.com

Display the System Location Description


To display the Data Domain System location description, if you gave one, use the config show location operation. Administrative users only. config show location The display is similar to the following: # config show location The system Location is: bldg12 rm 120 rack8

Display the Mail Server Hostname


To display the name of the mail server that the Data Domain System uses to send email, use the config show mailserver operation. config show mailserver The display is similar to the following: # config show mailserver The Mail (SMTP) server is: mail.yourcompany.com

Display the Time Zone for the System Clock


To display the time zone used by the system clock, use the config show timezone operation. config show timezone The display is similar to the following: # config show timezone The Timezone name is: US/Pacific

The license Command


The license command manages licensed features on a Data Domain System.

Add a License
To add a feature license, use the license add operation. The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only.


The licensed features are:


Expanded Storage: Add disks to a DD510 or DD530 system.
Open Storage (OST): Use a system with the Symantec OpenStorage product.
Replication: Use the Data Domain Replicator for replication of data from one Data Domain System to another.
Retention-Lock: Prevent certain files from being deleted or modified, for up to 70 years.
VTL: Use a Data Domain System as a virtual tape library.

license add license-code

For example:
# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.

Display Licenses
The license display shows only those features licensed on the Data Domain System. Administrative users only. In the display, ## is the license number of the feature, License Key is the characters of a valid license key, and Feature is the name of the licensed feature. Current licensed features include the Replicator, for replication from one Data Domain System to another, and the virtual tape library (VTL) feature.

To display current licenses and default features, use the license show operation. Each line shows the license code.

license show

For example:

# license show
##  License Key          Feature
--  -------------------  -----------
1   DEFA-EFCD-FCDE-CDEF  Replication
2   EFCD-FCDE-CDEF-DEFA  VTL
--  -------------------  -----------

Remove All Feature Licenses


To remove all licenses, use the license reset operation. The system then behaves as though it has the single default license of CAPACITY-FULLSIZE. Administrative users only. license reset

Remove a License
To remove a current license, use the license del operation. Enter the license feature name or code (as shown with the license show command). Administrative users only. license del {license-feature | license-code} For example: # license del replication The Replication license is removed.


SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files.


Alerts and System Reports


A Data Domain System uses multiple methods to inform administrators about the status of the Data Domain OS and hardware. The Data Domain System alerts, autosupport, and AM email features send messages and reports to user-configurable lists of email addresses. The lists include an email address for Data Domain support staff who monitor the status of all Data Domain Systems and contact your company when problems are reported. The messages also go to the system log.

The alerts feature sends an email whenever a critical component in the system fails or is known, through monitoring, to be out of an acceptable range. Consider adding pager email addresses to the alerts email list so that someone is informed immediately about system problems. For example, a single fan failure is not critical and does not generate an alert as the system can continue normal operations; however, multiple fan failures can cause a system to begin overheating, which generates an alerts email. Each disk, fan, and CPU in the Data Domain System is monitored. Temperature extremes are also monitored.

The autosupport feature sends a daily report that shows system identification information and consolidates the output from a number of Data Domain System commands. See Run the Autosupport Report on page 135 for details. Data Domain support staff use the report for troubleshooting. Every morning at 8:00 a.m. (local time for your system), the Data Domain System sends an AM email to the autosupport email list. The purpose is to highlight hardware or other failures that are not critical, but that should be dealt with soon. An example would be a fan failure. A failed fan should be replaced as soon as is reasonably possible, but the system can continue operations. The AM email is a copy of output from alerts show current (see Display Current Alerts on page 131) and alerts show history (see Display the Alerts History on page 132) messages about non-critical hardware situations, and some disk space usage numbers.

Non-critical hardware problems generate email messages to the autosupport list. An example is a failed power supply when the other two power supplies are still fine. If the situation is not fixed, the message also appears in the AM email.


Every hour, the Data Domain System logs a short system status message. See Hourly System Status on page 138 for details. The support command sends multiple log files to the Data Domain Support organization.

Alerts
Use the alerts command to administer the alerts feature.

Add to the Email List


To add an email address to the alerts list, use the alerts add operation. By default, the list includes an address for Data Domain support staff. The email-list is a list of addresses that are comma-separated, space-separated, or both. After adding to the list, always use the alerts test operation to test for mailer problems. Administrative users only. alerts add email-list For example, to add an email address to the alerts list: # alerts add jsmith@yourcompany.com

Test the Email List


To test the alerts list, use the alerts test operation, which sends an alerts email to each address on the list or to a specific address. After adding addresses to the list, always use this operation to test for mailer problems. alerts test [email-addr] For example, to test for the address jsmith@yourcompany.com: # alerts test jsmith@yourcompany.com

Remove from the Email List


To remove an email address from the alerts list, use the alerts del operation. The email-list is a list of addresses that are comma-separated, space-separated, or both. Administrative users only. alerts del email-list For example, to remove an email address from the alerts list: # alerts del jsmith@yourcompany.com


Reset the Email List


By default, the alerts list includes an address for Data Domain support personnel. The alerts reset operation returns the list to the default address. Available only to administrative users. The default address is autosupport-alert@autosupport.datadomain.com. alerts reset

Display Current Alerts


The list of current alerts includes all alerts that are not corrected. An alert is removed from the display when the underlying situation is corrected. For example, an alert about a fan failure is removed when the fan is replaced with a working unit. Each type of alert maintains only one message in the current alerts list. For example, the display reports the most recent date of a system reboot, not every reboot. Look in the system log files for current and previous messages.

To display current alerts, use the alerts show current operation or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Current Alerts.

alerts show current

The command returns entries similar to the following:

# alerts show current
Alert Time        Description
-----------------------------------------------------
Fri Nov 12 18:54  Rear fan #1 failure: Current RPM is 0, nominal is 8000
Fri Nov 12 16:22  Reboot reported. system rebooted
-----------------------------------------------------
There are 2 active alerts.


Display the Alerts History


The alerts history lists alerts messages from all of the existing messages log files, which hold messages for up to ten weeks.

To display the history of alerts messages, use the alerts show history operation or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Alert History. Use the up and down arrow keys to move through the display. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

alerts show history

The command returns entries similar to the following:

# alerts show history
Alert Time       Description
-------------------------------------------------------------
Nov 11 18:54:51  Rear fan #1 failure: Current RPM is 0, nominal is 8000
Nov 11 18:54:53  system rebooted
Nov 12 18:54:58  Rear fan #2 failure: Current RPM is 0, nominal is 8000
-------------------------------------------------------------

Display the Email List


The alerts email list includes an address for Data Domain support. Addresses that you add to the list appear as local or fully-qualified addresses exactly as you enter them. To display all email addresses in the alerts list, use the alerts show alerts-list operation or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Mailing Lists, Alert Email List.

alerts show alerts-list

The display is similar to the following:

# alerts show alerts-list
Alert email list
autosupport@datadomain.com
admin12
jsmith@company.com

Display Current Alerts and Recent History


To display the current alerts and the alerts history over the last 24 hours, use the alerts show daily operation.

alerts show daily

The display is similar to the following:

# alerts show daily
Current Alert
-------------
Alert Time    Description
-------------------------------------------------------------
Nov 12 18:54  Rear fan #1 failure: Current RPM is 0, nominal is 8000
-------------------------------------------------------------
There is 1 active alert.

Recent Alerts and Log Messages
------------------------------
Nov 5 20:56:43 localhost sysmon: EMS: Rear fan #2 failure: Current RPM is 960, nominal is 8000

Display the Email List and Administrator Email


To display all email addresses in the alerts list and the system administrator email address, use the alerts show all operation. alerts show all The display is similar to the following. The administrator address appears twice:


# alerts show all
The Admin email is: admin@yourcompany.com
Alerts email
autosupport@datadomain.com
admin@yourcompany.com
admin12
jsmith@company.com

Autosupport Reports
The autosupport feature automatically generates reports detailing the state of the system. The first section of an autosupport report gives system identification and uptime information. The next sections display output from numerous Data Domain System commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information are included to aid Data Domain in debugging system problems.

Add to the Email List


To add an email address to the autosupport report list, use the autosupport add operation. By default, the list includes an address for Data Domain support staff. The email-list is a list of addresses that are comma-separated, space-separated, or both. After adding to the list, always use the autosupport test operation to test the address. Administrative users only. autosupport add email-list For example, to add an email address to the list: # autosupport add jsmith@yourcompany.com

Test the Autosupport Report Email List


To test the autosupport email list, use the autosupport test operation, which sends a test email to each address on the list or to a specific address. After adding addresses to the list, always use this operation to test the address. autosupport test [email-addr] For example, after adding the email address djones@yourcompany.com to the list, the test for that address would be: # autosupport test djones@yourcompany.com


Send an Autosupport Report


To send an autosupport report to all addresses in the email list or to a specific address, use the autosupport send operation. autosupport send [email-addr] For example, to send an autosupport to djones@yourcompany.com: # autosupport send djones@yourcompany.com

Remove from the Email List


To remove an email address from the autosupport report list, use the autosupport del operation. The email-list is a list of addresses that are comma-separated, space-separated, or both. Administrative users only. autosupport del email-list For example, to remove an email address from the list: # autosupport del jsmith@yourcompany.com

Reset the Email List


By default, the list includes an address for Data Domain support personnel. The autosupport reset operation returns the list to the default address. The operation is available only to administrative users. autosupport reset support-list

Run the Autosupport Report


To manually run and immediately display the autosupport report, use the autosupport show report operation. Use the up and down arrow keys to move through the display. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

autosupport show report

The display is similar to the following. The first section gives system identification and uptime information:

# autosupport show report
========== GENERAL INFO ==========
GENERATED_ON=Wed Sept 7 13:17:48 UTC 2005
VERSION=Data Domain OS 4.5.0.0-62320
SYSTEM_ID=Serial number: 22BM030026
MODEL_NO=DD560
HOSTNAME=dd10.yourcompany.com
LOCATION=Bldg12 room221 rack6
ADMIN_EMAIL=admin@yourcompany.com
UPTIME= 1:17pm up 124 days, 14:31, 2 users, load average: 0.00, 0.00, 0.00

The next sections display output from numerous Data Domain System commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information appear to aid Data Domain in debugging system problems.

Email Command Output


To send the display output from any Data Domain System command to an email address, use the autosupport send operation. Enclose the command that is to generate output in double quotes. With a command and no address, the output is sent to the autosupport list. autosupport send [email-addr] [cmd "command"] For example, to email the log file messages.1 to Data Domain Support: # autosupport send support@datadomain.com cmd "log view messages.1"

Set the Schedule


To change the date and time when a Data Domain System automatically runs a verbose autosupport report, use the autosupport set schedule operation. The default time is daily at 3 a.m. (daily 0300). The operation is available only to administrative users.

A time is required. 2400 is not a valid time. An entry of 0000 is midnight at the beginning of a day. The never option turns off the report. Set a schedule using any of the other options to turn the report back on.

autosupport set schedule {daily | day1[,day2,...]} time | never

For example, the following command runs the report automatically every Tuesday at 4 a.m.:

# autosupport set schedule tue 0400

The most recent invocation of the scheduling operation cancels the previous setting.


Reset the Schedule


To reset the autosupport report to run at the default time, use the autosupport reset schedule operation. The default time is Sunday at 3 a.m. The operation is available only to administrative users. autosupport reset schedule

Reset the Schedule and the List


To reset the autosupport schedule and email list to defaults, use the autosupport reset all operation. The operation is available only to administrative users. autosupport reset all

Display all Autosupport Parameters


To display all autosupport parameters, use the autosupport show all operation.

autosupport show all

The display is similar to the following. The default display includes only the Data Domain support address and the system administrator address (as given in the initial system configuration). Any additional addresses that you add to the list also appear.

# autosupport show all
The Admin email is: admin@yourcompany.com
The Autosupport email list is:
autosupport@datadomain.com
admin@yourcompany.com
Autosupport is scheduled to run Sun at 0300

Display the Autosupport Email List


The autosupport email list includes an address for Data Domain support. Addresses that you add to the list appear as local or fully-qualified addresses exactly as you enter them. To display all email addresses in the autosupport list, use the autosupport show support-list operation or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Mailing Lists, Autosupport Email List.

autosupport show support-list

The default display is similar to the following:

# autosupport show support-list
Autosupport Email List
autosupport@datadomain.com
admin@yourcompany.com

Display the Autosupport Report Schedule


Display the date and time when the autosupport report runs with the autosupport show schedule operation.

autosupport show schedule

The display is similar to the following:

# autosupport show schedule
Autosupport is scheduled to run Sun at 0300

Display the Autosupport History


To display all autosupport messages, use the autosupport show history operation. Use the J key to scroll down through the file, the K key to scroll up, and the Q key to exit. The operation displays entries from all of the messages system logs, which hold messages for up to ten weeks.

autosupport show history

The command returns entries similar to the following:

# autosupport show history
Nov 10 03:00:19 scheduled autosupport
Nov 11 03:00:19 scheduled autosupport
Nov 12 03:00:19 scheduled autosupport

Hourly System Status


The Data Domain System automatically generates a brief system status message every hour. The message is sent to the system log and to a serial console if one is attached. To see the hourly message, use the log view command. The message reports system uptime, the amount of data stored, the number of NFS operations, and the amount of disk space used for data storage (as a percentage). For example:

# log view
Nov 12 13:00:00 localhost logger: at 1:00pm up 3 days, 3:42, 52324 NFS ops, 84763 GiB data col. (1%)
Nov 12 14:00:00 localhost logger: at 2:00pm up 3 days, 4:42, 59411 NFS ops, 84840 GiB data col. (1%)

Collect and Send Log Files


When troubleshooting problems, Data Domain Technical Support may ask for a support bundle, which is a tar-gzipped selection of log files with a README file that includes identifying autosupport headers.

To create a support bundle in the Data Domain Enterprise Manager, click the Support link in the left panel, and then click the here link under the title Generate a support bundle. The browser opens a dialog window. Select the Save option and save the file on the local system. You can then send the file to Data Domain Technical Support. The new file immediately appears in the Data Domain Enterprise Manager Support bundles list. Left-click the file name to bring up the dialog window if you want to open the zip file or save the file to another location.

Command Line Interface
The support upload operations create bundles of log files (with a README file) and automatically send the results to Data Domain Technical Support.

support upload {bundle | traces}

The bundle operation sends various Data Domain System log files that are often needed by the Support staff. The traces operation sends multiple perf.log (performance log) files.



SNMP Management and Monitoring


Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to monitor and manage network-attached devices, such as Data Domain Systems. For information specific to the MIB, see the last half of this chapter, beginning at the heading More about the MIB on page 147. Data Domain Systems support SNMP versions V1 and V2C.

SNMP management requires two primary elements: an SNMP manager and an SNMP agent. An SNMP manager is software running on a workstation from which an administrator monitors and controls the different hardware and software systems on a network. These devices include, but are not limited to, storage systems, routers, and switches. The agent is software running on equipment that implements the SNMP protocol. SNMP defines exactly how an SNMP manager communicates with an SNMP agent. For example, SNMP defines the format of requests that an SNMP manager sends to an agent and the format of replies the agent returns.

The SNMP feature allows a Data Domain System to respond to a set of SNMP get operations from a remote machine. From an SNMP perspective, a Data Domain System is a read-only device with one exception: a remote machine can set the SNMP location, contact, and system name on a Data Domain System. To configure community strings, hosts, and other SNMP variables on the Data Domain System, use the snmp command. With one or more trap hosts defined, a Data Domain System takes the additional action of sending alerts messages as SNMP traps, even when the SNMP agent is disabled.

Note The SNMP sysLocation and sysContact variables are not the same as those set with the config set location and config set admin-email commands. However, if the SNMP variables are not set with the SNMP commands, they default to the system values given with the config set commands.


Enable SNMP
To enable the SNMP agent on a Data Domain System, use the snmp enable operation. The default port that is opened when SNMP is enabled is port 161. Traps are sent to port 162. Administrative users only. snmp enable

Disable SNMP
To disable the SNMP agent on a Data Domain System, use the snmp disable operation. Ports 161 and 162 are closed. Administrative users only. snmp disable

Set the System Location


To set the system location as used in the SNMP MIB II system variable sysLocation, use the snmp set sysLocation operation. Administrative users only. snmp set sysLocation location For example, to give a location of bldg3-rm222: # snmp set sysLocation bldg3-rm222

Reset the System Location


To reset the system location to the system value displayed by the command system show location or an empty string if the system value is empty, use the snmp reset sysLocation operation. Administrative users only. snmp reset sysLocation

Set a System Contact


To set the system contact as used in the SNMP MIB II system variable sysContact, use the snmp set sysContact operation. Administrative users only. snmp set sysContact contact For example, to give a contact of bob-smith: # snmp set sysContact bob-smith

Reset a System Contact


To reset the system contact to the system value displayed by the command system show admin-email or an empty string if the system value is empty, use the snmp reset sysContact operation. Administrative users only. snmp reset sysContact

Add a Trap Host


To add a trap host to the list of machines that receive SNMP traps generated by the Data Domain System, use the snmp add trap-host operation. With one or more trap hosts defined, alerts messages are also sent as traps, even when the SNMP agent is disabled. Administrative users only. snmp add trap-host hostname For example, to add a trap host admin12: # snmp add trap-host admin12
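To receive the traps, the trap host must run an SNMP trap receiver listening on UDP port 162. As a hedged illustration (the receiver is not part of the Data Domain CLI, and the assumption here is a Linux trap host with the net-snmp tools installed), the snmptrapd daemon could be run in the foreground to watch traps arrive:

# On the trap host (admin12 in the example above), run the net-snmp
# trap receiver in the foreground and log each trap to stdout.
# Recent net-snmp versions may also require an authCommunity entry
# in snmptrapd.conf before traps are processed.
snmptrapd -f -Lo

Any equivalent SNMP management station that listens on port 162 works as well.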

Delete a Trap Host


To delete one or more trap hosts from the list of machines that receive SNMP traps generated by the Data Domain System, use the snmp del trap-host operation. Administrative users only. snmp del trap-host hostname For example, to delete a trap host admin12: # snmp del trap-host admin12

Delete All Trap Hosts


To return the trap hosts list to the default of empty, use the snmp reset trap-hosts operation. Administrative users only. snmp reset trap-hosts


Add a Community String


To add one or more community strings that enable access to a Data Domain System, use one of the snmp add community operations. One operation gives read/write permissions and one gives read-only permission. A common string for read/write access is private. A common string for read-only access is public. Administrative users only. snmp add rw-community community-string snmp add ro-community community-string For example, to add a community string of private with read/write permissions: # snmp add rw-community private
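With the agent enabled and a read-only community string set, you can verify access from a remote manager. The following is a minimal sketch using the net-snmp command-line tools (not part of the Data Domain CLI); the host name dd10 and the community string public are assumptions for the example:

# From a remote manager, walk the Data Domain enterprise subtree:
snmpwalk -v 2c -c public dd10 .1.3.6.1.4.1.19746

A response listing objects under 1.3.6.1.4.1.19746 confirms that the agent and community string are working.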

Delete a Community String


To delete one or more community strings that enable access to a Data Domain System, use one of the snmp del community operations. One operation deletes community strings that have read/write permissions and one deletes those that have read-only permission. Administrative users only. snmp del rw-community community-string snmp del ro-community community-string For example, to delete the community string private that gives read/write permissions: # snmp del rw-community private

Delete All Community Strings


To return the community strings lists to the defaults of empty, use one of the snmp reset community operations. One operation resets the read/write permissions list and one resets the read-only permissions list. Administrative users only. snmp reset rw-community snmp reset ro-community

Reset All SNMP Values


To return all SNMP values to the defaults, use the snmp reset operation. Administrative users only. snmp reset

Display SNMP Agent Status


The status of the SNMP agent on the Data Domain System is either enabled or disabled. To display the status, use the snmp status operation or click SNMP in the left panel of the Data Domain Enterprise Manager. snmp status

Display Trap Hosts


To display the trap host list on a Data Domain System, use the snmp show trap-hosts operation.

snmp show trap-hosts

The output is similar to the following:

# snmp show trap-hosts
Trap Hosts:
admin10
admin11

Display All Parameters


The SNMP configuration entries set by an administrator are:

sysLocation: The system location as used in the SNMP MIB II system variable sysLocation.
sysContact: The system contact as used in the SNMP MIB II system variable sysContact.
Trap Hosts: The list of machines that receive SNMP traps generated by the Data Domain System.
Read-only Communities: One or more read-only community strings that enable access to the Data Domain System.
Read-write Communities: One or more read-write community strings that enable access to the Data Domain System.

To display all of the SNMP parameters, use the snmp show config operation. Administrative users only.

snmp show config

The output is similar to the following:

# snmp show config
----------------------  -------------------
SNMP sysLocation        bldg3-rm222
SNMP sysContact         smith@company.com
Trap Hosts              admin10 admin11
Read-only Communities   public snmpadmin23
Read-write Communities  private snmpadmin1
----------------------  -------------------

Display the System Contact


To display the system contact on a Data Domain System, use the snmp show sysContact operation. snmp show sysContact

Display the System Location


To display the system location on a Data Domain System, use the snmp show sysLocation operation. snmp show sysLocation

Display Community Strings


To display the community strings on a Data Domain System, use one of the snmp show communities operations. Administrative users only.

snmp show rw-communities
snmp show ro-communities

The output is similar to the following:

# snmp show rw-communities
RW Community Strings:
private
snmpadmin1


Display the MIB and Traps


The MIB display formats the complete management information base and SNMP traps. The traps are listed at the end of the file under the tag Common Notifications. You can download the MIB by mounting the Data Domain System /ddvar directory from another system. Use any SNMP MIB browser to view the downloaded MIB. The MIB location and name are:

/ddvar/snmp/mibs/DATA_DOMAIN.mib

Data Domain Enterprise Manager
To view the MIB in the Data Domain Enterprise Manager graphical user interface, select SNMP from the left panel and find the SNMP MIB files section. Click the DATA_DOMAIN.mib link.
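For example, a minimal sketch of downloading the MIB from a Linux administrative host over NFS (the host name dd10, the mount point, and the destination directory are assumptions, and the /ddvar export must allow the client):

# Mount the Data Domain System /ddvar export and copy the MIB locally:
mount -t nfs dd10:/ddvar /mnt/ddvar
cp /mnt/ddvar/snmp/mibs/DATA_DOMAIN.mib /usr/local/share/mibs/
umount /mnt/ddvar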

More about the MIB


Note The MIB documentation given here is not necessarily current, and is only meant as a starting point for the user. For up-to-date information, the user should see the MIB itself, which can be reached as described above under the heading Display the MIB and Traps on page 147.

What is a MIB?
Simply put, a MIB (Management Information Base) is a hierarchy of objects. The Data Domain MIB is a hierarchy of objects that define the status and operation of a Data Domain system. The hierarchy is in the form of a table.

MIB Browser
The user may find it worthwhile to download a freeware MIB Browser. Many can be found by searching on Google. As an example, the iReasoning MIB Browser can be found for downloading at http://www.ireasoning.com/mibbrowser.shtml, at the link "Download Free Personal Edition".

Entire MIB Tree


A view of the entire MIB in tree form is shown in Figure 10 on page 148 and Figure 11 on page 149.


Figure 10: Entire MIB Tree - 1st half


Figure 11: Entire MIB Tree - 2nd half


Top-Level Organization of the MIB:


Table 1: Top-Level Organization of the MIB

Tree/subtree: The Data Domain MIB
Relative OID and Name: 19746 DATA-DOMAIN-MIB
Info: This document describes the Management Information Base for Data Domain Products. The Data Domain enterprise number is 19746. The ASN.1 prefix up to and including the Data Domain, Inc. enterprise is 1.3.6.1.4.1.19746. The top line is truncated in the image; it is really DATA-DOMAIN-MIB.iso.org.dod.internet.private.enterprises.dataDomainMib.

The MIB is divided into four top-level entities: MIB Conformance, MIB Objects, MIB Notifications, and Products.


Mid-Level Organization of the MIB:


Figure 12: Mid-Level Organization of the MIB

At a middle level, the main subheadings of the MIB are shown in Figure 12 on page 151. On the "Entire MIB Tree" diagrams in Figure 10 on page 148 and Figure 11 on page 149 , these are the nodes that divide the MIB into sets of leaf nodes. That is, these are the nodes that have only one set of leaf nodes under them.

The MIB (Current Alerts Section) in Text Form


The MIB can be viewed in text form, but it is somewhat difficult to read. The text form of the section on Alerts is shown below by way of example.
-- **********************************************************************
-- CurrentAlerts
-- =============
-- dataDomainMib (1.3.6.1.4.1.19746)
--   dataDomainMibObjects (1.3.6.1.4.1.19746.1)
--     alerts (1.3.6.1.4.1.19746.1.4)
--       currentAlerts (1.3.6.1.4.1.19746.1.4.1)
-- **********************************************************************

currentAlerts OBJECT IDENTIFIER ::= { alerts 1 }

currentAlertTable OBJECT-TYPE
    SYNTAX      SEQUENCE OF CurrentAlertEntry
    ACCESS      not-accessible
    STATUS      mandatory
    DESCRIPTION "A table containing entries of CurrentAlertEntry."
    ::= { currentAlerts 1 }

currentAlertEntry OBJECT-TYPE
    SYNTAX      CurrentAlertEntry
    ACCESS      not-accessible
    STATUS      mandatory
    DESCRIPTION "currentAlertTable Row Description"
    INDEX       { currentAlertIndex }
    ::= { currentAlertTable 1 }

CurrentAlertEntry ::= SEQUENCE {
    currentAlertIndex        AlertIndex,
    currentAlertTimestamp    AlertTimestamp,
    currentAlertDescription  AlertDescription
}

currentAlertIndex OBJECT-TYPE
    SYNTAX      AlertIndex
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Current Alert Row index"
    ::= { currentAlertEntry 1 }

currentAlertTimestamp OBJECT-TYPE
    SYNTAX      AlertTimestamp
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Timestamp of current alert"
    ::= { currentAlertEntry 2 }

currentAlertDescription OBJECT-TYPE
    SYNTAX      AlertDescription
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Alert Description"
    ::= { currentAlertEntry 3 }

-- **********************************************************************

Entries in the MIB


The MIB is a hierarchy stored in a table. Each entry in the table has the following fields under it:

Name: Full name of the field. For example, currentAlertDescription is .iso.org.dod.internet.private.enterprises.dataDomainMib.dataDomainMibObjects.alerts.currentAlerts.currentAlertTable.currentAlertEntry.currentAlertDescription. This is equivalent to the OID number: iso=1, org=3, dod=6, internet=1, private=4, enterprises=1, dataDomainMib=19746, and so on.
OID: Full index number of the field. For example: .1.3.6.1.4.1.19746.1.4.1.1.1.3
MIB: For this MIB, this is always DATA-DOMAIN-MIB.
Syntax: Brief description.
Access: Example: read-only.
Status: Examples: mandatory, current.
DefVal: Default value.
Indexes: For tables, lists indexes into the table. (For objects, lists the object.)
Descr: Description of the field.
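To translate between the Name and OID forms, a MIB-aware tool can be used. As an illustrative sketch with the net-snmp snmptranslate utility (not part of the Data Domain CLI; the MIB directory path is an assumption and must contain the downloaded DATA_DOMAIN.mib):

# Look up a field by name anywhere in the tree (-IR) and print the
# numeric OID (-On), after adding the local MIB directory (-M +dir)
# and loading all MIB modules found there (-m ALL):
snmptranslate -M +/usr/local/share/mibs -m ALL -IR -On currentAlertDescription
# Expected result: .1.3.6.1.4.1.19746.1.4.1.1.1.3 (the OID documented above)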

Important Areas of the MIB


Four areas deserve special attention and are documented thoroughly here, in the following order for the sake of clarity (the numbers in parentheses are the relative numbers inside the MIB):

Alerts (.1.3.6.1.4.1.19746.1.4)
Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)
Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)
Replication (.1.3.6.1.4.1.19746.1.8)

A section of information on each area is given (see Alerts (.1.3.6.1.4.1.19746.1.4) on page 154, Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 154, Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) on page 161, and Replication (.1.3.6.1.4.1.19746.1.8) on page 162).


Alerts (.1.3.6.1.4.1.19746.1.4)
The Alerts table is a set of containers (variables or fields) that hold the current problems happening in the system. [By contrast, the Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. See also Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 154.] Alerts are the system for communicating problems, Data Domain's version of Notifications. The table currentAlertTable holds many current alert entries at once, with an Index, Timestamp, and Description for each. The Data Domain Alerts are shown in Figure 13 on page 154 and Table 2 on page 154.
Figure 13: Alerts

The Alerts table is indexed by the index: currentAlertIndex.


Table 2: Alerts

OID                             Name                     Description
.1.3.6.1.4.1.19746.1.4          alerts
.1.3.6.1.4.1.19746.1.4.1        currentAlerts
.1.3.6.1.4.1.19746.1.4.1.1      currentAlertTable        A table containing entries of CurrentAlertEntry
.1.3.6.1.4.1.19746.1.4.1.1.1    currentAlertEntry        currentAlertTable Row Description
.1.3.6.1.4.1.19746.1.4.1.1.1.1  currentAlertIndex        Current Alert Row index
.1.3.6.1.4.1.19746.1.4.1.1.1.2  currentAlertTimestamp    Timestamp of current alert
.1.3.6.1.4.1.19746.1.4.1.1.1.3  currentAlertDescription  Alert Description

Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)


The Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. (Notifications are also known as Traps.) [By contrast, the Alerts table is a set of containers (variables or fields) that hold the current problems happening in the system. See also Alerts (.1.3.6.1.4.1.19746.1.4) on page 154.]

As a user, the only thing you can do with notifications and alerts is choose to receive them or not. Choosing to receive notifications is called "adding a trap host", that is, adding the name of a host machine to the list of machines that receive notifications when traps are sprung. Choosing not to receive notifications on a given machine is called "deleting a trap host". See the entries Add a Trap Host on page 143, Delete a Trap Host on page 143, and Delete All Trap Hosts on page 143 in this chapter. Notifications vary in severity level, and thus in result. This is shown in Table 3 on page 155.
Table 3: Notification Severity Levels and Results

Severity Level of Notification  Result
Warning                         An Autosupport email is sent.
Alert                           An Alert email is sent.
Shutdown                        The system shuts down.

In addition to the above results, in each case a Notification is sent if supported. The following is an example of how the user might use the MIB Notifications table.

Example: A user adds the hostname "panther5" to the list of machines that receive notifications, using the command:

snmp add trap-host panther5

Later a fan module fails on the enclosure. The alarm "fanModuleFailedAlarm" is sent to panther5. The user gets this alarm, and looks it up in the MIB, in the Notifications table. The entry looks somewhat like this:
Table 4: Part of the fanModuleFailedAlarm Field of the Notifications Table in the MIB

OID: .1.3.6.1.4.1.19746.2.5
Name: fanModuleFailedAlarm
Indexes: fanIndex
Description: Meaning: a Fan Module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed. What to do: replace the fan!

The user looks up the index in the MIB environmentals table 'fanProperties', and finds that fan #1 has failed. Back in the Notifications table, the user sees that What to do is: replace the fan. The user replaces the fan, removing the error condition. More on Notifications is given in Figure 14 on page 156 and Table 5 on page 156.
Figure 14: Notifications

In the Notifications table, Notifications are indexed into other tables by various indexes, given in the Indexes column. The table names can be found under Description.
Table 5: Notifications

OID: .1.3.6.1.4.1.19746.2
Name: dataDomainMibNotifications

OID: .1.3.6.1.4.1.19746.2.1
Name: powerSupplyFailedAlarm
Meaning: Power Supply failed.
What to do: replace the power supply!

OID: .1.3.6.1.4.1.19746.2.2
Name: systemOverheatWarningAlarm
Indexes: tempSensorIndex
Meaning: the temperature reading of one of the thermometers in the Chassis has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading the high value.
What to do: check the Fan status, temperatures of the environment in which the DDR is, and other factors which may increase the temperature.

OID: .1.3.6.1.4.1.19746.2.3
Name: systemOverheatAlertAlarm
Indexes: tempSensorIndex
Meaning: the temperature reading of one of the thermometers in the Chassis is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading the high value.
What to do: check the Fan status, temperatures of the environment in which the DDR is, and other factors which may increase the system temperature.

OID: .1.3.6.1.4.1.19746.2.4
Name: systemOverheatShutdowntAlarm
Indexes: tempSensorIndex
Meaning: the temperature reading of one of the thermometers in the Chassis has reached or exceeded the 'shutdown' temperature level. The DDR will be shut down to prevent damage to the system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading the high value.
What to do: once the system has been brought back up, after checking for high environment temperatures or other factors which may increase the system temperature, check other environmental values, such as Fan Status, Disk Temperatures, etc.

OID: .1.3.6.1.4.1.19746.2.5
Name: fanModuleFailedAlarm
Indexes: fanIndex
Meaning: a Fan Module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed.
What to do: replace the fan!

OID: .1.3.6.1.4.1.19746.2.6
Name: nvramFailingAlarm
Meaning: the system has detected that the NVRAM is potentially failing. There has been an excessive amount of PCI or Memory errors. The nvram tables 'nvramProperties' and 'nvramStats' may provide more information on why the NVRAM is failing.
What to do: check the status of the NVRAM after reboot, and replace if the errors continue.

OID: .1.3.6.1.4.1.19746.2.7
Name: filesystemFailedAlarm
Meaning: the File system process on the DDR has had a serious problem and has had to restart.
What to do: check the system logs for conditions that may be triggering the failure. Other alarms may also indicate why the File system is having problems.

OID: .1.3.6.1.4.1.19746.2.8
Name: fileSpaceMaintenanceAlarm
Indexes: filesystemResourceIndex
Meaning: DDVAR File system Resource Space is running low for system maintenance activities. The system may not have enough space for the routine system activities to run without error.
What to do: delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be done before the space is recovered.

OID: .1.3.6.1.4.1.19746.2.9
Name: fileSpaceWarningAlarm
Indexes: filesystemResourceIndex
Meaning: a File system Resource space is 90% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual FS that is getting full.
What to do: delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be done before the space is recovered.

OID: .1.3.6.1.4.1.19746.2.10
Name: fileSpaceSevereAlarm
Indexes: filesystemResourceIndex
Meaning: a File system Resource space is 95% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual FS that is getting full.
What to do: delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be done before the space is recovered.

OID: .1.3.6.1.4.1.19746.2.11
Name: fileSpaceCriticalAlarm
Indexes: filesystemResourceIndex
Meaning: a File system Resource space is 100% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual FS that is full.
What to do: delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be done before the space is recovered.

OID: .1.3.6.1.4.1.19746.2.12
Name: diskFailingAlarm
Indexes: diskPropIndex
Meaning: some problem has been detected about the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is failing.
What to do: monitor the status of the disk, and consider replacing it if the problem continues.

OID: .1.3.6.1.4.1.19746.2.13
Name: diskFailedAlarm
Indexes: diskPropIndex
Meaning: some problem has been detected about the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that has failed.
What to do: replace the disk.

OID: .1.3.6.1.4.1.19746.2.14
Name: diskOverheatWarningAlarm
Indexes: diskErrIndex
Meaning: the temperature reading of the indicated disk has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk reading the high value.
What to do: check the disk status, temperatures of the environment in which the DDR is, and other factors which may increase the temperature.

OID: .1.3.6.1.4.1.19746.2.15
Name: diskOverheatAlertAlarm
Indexes: diskErrIndex
Meaning: the temperature reading of the indicated disk is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it will trigger a shutdown of the DDR. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk reading the high value.
What to do: check the disk status, temperatures of the environment in which the DDR is, and other factors which may increase the temperature. If the temperature stays at this level or rises, and no other disks are reading this trouble, consider 'failing' the disk, and get a replacement.

OID: .1.3.6.1.4.1.19746.2.16
Name: diskOverheatShutdowntAlarm
Indexes: diskErrIndex
Meaning: the temperature reading of the indicated disk has surpassed the 'shutdown' temperature level. The DDR will be shut down. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk reading the high value.
What to do: boot the DDR and monitor the status and temperatures. If the same disk has continued problems, consider 'failing' it and get a replacement disk.

OID: .1.3.6.1.4.1.19746.2.17
Name: raidReconSevereAlarm
Meaning: Raid group reconstruction is currently active and has not completed after 71 hours. Reconstruction occurs when the raid group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: while it is still possible that the reconstruction could succeed, the disk should be replaced to ensure data safety.

OID: .1.3.6.1.4.1.19746.2.18
Name: raidReconCriticalAlarm
Meaning: Raid group reconstruction is currently active and has not completed after 72 hours. Reconstruction occurs when the raid group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: the disk should be replaced to ensure data safety.

OID: .1.3.6.1.4.1.19746.2.19
Name: raidReconCriticalShutdownAlarm
Meaning: Raid group reconstruction is currently active and has not completed after more than 72 hours. Reconstruction occurs when the raid group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: the disk must be replaced.


Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)


The Filesystem Space MIB entries describe the allocation of file system space in Data Domain systems. See Figure 15 on page 161 and Table 6 on page 161. (More on Filesystem Space can be found in the File system Management chapter of the User Guide, for example under the heading Statistics and Basic Operations on page 213.)
Figure 15: Filesystem Space

The Filesystem Space table is indexed by the index: filesystemResourceIndex.


Table 6: Filesystem Space

OID                             Name                     Description
.1.3.6.1.4.1.19746.1.3.2        filesystemSpace
.1.3.6.1.4.1.19746.1.3.2.1      filesystemSpaceTable     A table containing entries of FilesystemSpaceEntry.
.1.3.6.1.4.1.19746.1.3.2.1.1    filesystemSpaceEntry     filesystemSpaceTable Row Description
.1.3.6.1.4.1.19746.1.3.2.1.1.1  filesystemResourceIndex  File system resource index
.1.3.6.1.4.1.19746.1.3.2.1.1.2  filesystemResourceName   File system resource name
.1.3.6.1.4.1.19746.1.3.2.1.1.3  filesystemSpaceSize      Size of the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.4  filesystemSpaceUsed      Amount of used space within the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.5  filesystemSpaceAvail     Amount of available space within the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.6  filesystemPercentUsed    Percentage of used space within the file system resource
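A manager can poll these values directly for capacity monitoring. A minimal sketch with the net-snmp tools (not part of the Data Domain CLI; the host name dd10, community string public, and row index .1 are assumptions):

# Get the percent-used value for the first file system resource row:
snmpget -v 2c -c public dd10 .1.3.6.1.4.1.19746.1.3.2.1.1.6.1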


Replication (.1.3.6.1.4.1.19746.1.8)
Various values related to Replication are contained in the Replication table in the MIB. See Figure 16 on page 162 and Table 7 on page 162. (More on Replication can be found in the Replication chapter of the User Guide, for example under the heading Replication - CLI on page 249.)
Figure 16: Replication

The Replication table is indexed by the index: replContext.


Table 7: Replication

OID                              Name                       Description
.1.3.6.1.4.1.19746.1.8           replication
.1.3.6.1.4.1.19746.1.8.1         replicationInfo
.1.3.6.1.4.1.19746.1.8.1.1       replicationInfoTable       A table containing entries of ReplicationInfoEntry.
.1.3.6.1.4.1.19746.1.8.1.1.1     replicationInfoEntry       replicationInfoTable Row Description
.1.3.6.1.4.1.19746.1.8.1.1.1.2   replState                  state of replication source/dest pair
.1.3.6.1.4.1.19746.1.8.1.1.1.3   replStatus                 status of replication source/dest pair
.1.3.6.1.4.1.19746.1.8.1.1.1.4   replFileSysStatus          connection status of filesystem
.1.3.6.1.4.1.19746.1.8.1.1.1.5   replConnTime               time of connection established between source and dest, or time since disconnect if status is 'disconnected'
.1.3.6.1.4.1.19746.1.8.1.1.1.6   replSource                 network path to replication source directory
.1.3.6.1.4.1.19746.1.8.1.1.1.7   replDestination            network path to replication destination directory
.1.3.6.1.4.1.19746.1.8.1.1.1.8   replLag                    time lag between source and destination
.1.3.6.1.4.1.19746.1.8.1.1.1.9   replPreCompBytesSent       pre-compression bytes sent
.1.3.6.1.4.1.19746.1.8.1.1.1.10  replPostCompBytesSent      post-compression bytes sent
.1.3.6.1.4.1.19746.1.8.1.1.1.11  replPreCompBytesRemaining  pre-compression bytes remaining
.1.3.6.1.4.1.19746.1.8.1.1.1.12  replPostCompBytesReceived  post-compression bytes received
.1.3.6.1.4.1.19746.1.8.1.1.1.13  replThrottle               replication throttle in bps
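Replication progress can therefore be polled remotely, for example to graph lag over time. A minimal sketch with the net-snmp tools (the host name dd10 and community string public are assumptions):

# Walk the replication subtree to see all replication contexts at once:
snmpwalk -v 2c -c public dd10 .1.3.6.1.4.1.19746.1.8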


Log File Management


The log command allows you to view Data Domain System log file entries and to save and clear the log file contents. Messages from the alerts feature, the autosupport reports, and general system messages go to the log directory and into the file messages. A log entry appears for each Data Domain System command given on the system. The log directory is /ddvar/log.

Every Sunday at 3 a.m., the Data Domain System automatically opens new log files and renames the previous files with an appended number of 1 (one) through 9, such as messages.1. Each numbered file is rolled to the next number each week. For example, at the second week, the file messages.1 is rolled to messages.2. If a file messages.2 already existed, it would roll to messages.3. An existing messages.9 is deleted when messages.8 is rolled to messages.9. See Procedure: Archive Log Files on page 170 for instructions on saving log files.

Scroll New Log Entries


To display a view of the messages file that adds new entries as they occur, use the watch operation. Use the key combination <Control> c to break out of the watch operation. With no filename, the command displays the current messages file.

log watch [filename]
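For example, to watch the current messages file for new entries as they are logged:

# log watch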

Send Log Messages to Another System


Some log messages can be sent outside of a Data Domain System to other systems. A Data Domain System exports the following facility.priority selectors for log files. For managing the selectors and receiving messages on a third-party system, see your vendor-supplied documentation for the receiving system.

*.notice   Sends all messages at the notice priority and higher.
*.alert    Sends all messages at the alert priority and higher (alerts are included in *.notice).
kern.*     Sends all kernel messages (kern.info log files).
local7.*   Sends all messages from system startups (boot.log files).

The log host commands manage the process of sending log messages to another system:

Add a Host
To add a system to the list that receives Data Domain System log messages, use the log host add command.

log host add host-name

For example, the following command adds the system log-server to the hosts that receive log messages:

# log host add log-server

Remove a Host
To remove a system from the list that receives Data Domain System log messages, use the log host del command.

log host del host-name

For example, the following command removes the system log-server from the hosts that receive log messages:

# log host del log-server

Enable Sending Log Messages


To enable sending log messages to other systems, use the log host enable command. log host enable

Disable Sending Log Messages


To disable sending log messages to other systems, use the log host disable command. log host disable

Reset to Default
To reset the log sending feature to the defaults of an empty list and disabled, use the log host reset command. log host reset


Display the List and State


To display the list of systems that receive log messages and the state of enabled or disabled, use the log host show command.

log host show

The output is similar to the following:

# log host show
Remote logging is enabled.
Remote logging hosts
  log-server

Display a Log File


To view the log files, use the log view operation. With no filename, the command displays the current messages file. When viewing the log, use the up and down arrows to scroll through the file; use the q key to quit; enter a slash character (/) and a pattern to search through the file.

log view [filename]

The display of the messages file is similar to the following. The last message in the example is an hourly system status message that the Data Domain System generates automatically. The message reports system uptime, the amount of data stored, NFS operations, and the amount of disk space used for data storage (%). The hourly messages go to the system log and to the serial console if one is attached.

# log view
Jun 27 12:11:33 localhost rpc.mountd: authenticated unmount request from perfsun-g.datadomain.com:668 for /ddr/col1/segfs (/ddr/col1/segfs)
Jun 27 12:28:54 localhost sshd(pam_unix)[998]: session opened for user jsmith10 by (uid=0)
Jun 27 13:00:00 localhost logger: at 1:00pm up 3 days, 3:42, 52324 NFS ops, 84763 GiB data col. (1%)

Note GiB = Gibibytes = the binary equivalent of Gigabytes.


List Log Files


The basic log files are:

access            Tracks users of the Data Domain Enterprise Manager graphical user interface.
boot.log          Kernel diagnostic messages generated during the boot process.
ddfs.info         Debugging information created by the file system processes.
ddfs.memstat      Memory debugging information for file system processes.
destroy.id_number.log   All of the actions taken by an instance of the filesys destroy command. Each instance produces a log with a unique ID number.
disk-error-log    Disk error messages.
error             Lists errors generated by the Data Domain Enterprise Manager operations.
kern.error        Kernel error messages.
kern.info         Kernel information messages.
messages          The system log, generated from Data Domain System actions and general system operations.
network           Messages from network connection requests and operations.
perf.log          Performance statistics used by Data Domain support staff for system tuning.
secure            Messages from unsuccessful logins and changes to user accounts. (Not shown in the graphical user interface.)
space.log         Messages about disk space use by Data Domain System components and data storage, and messages from the clean process. A space use message is generated every hour. Each time the clean process runs, it creates about 100 messages. All the messages are in comma-separated-value format with tags that you can use to separate out the disk space or clean messages. You can use third-party software to analyze either set of messages. The tags are:
                  CLEAN for data lines from clean operations.
                  CLEAN_HEADER for lines that contain headers for the clean operations data lines.
                  SPACE for disk space data lines.
                  SPACE_HEADER for lines that contain headers for the disk space data lines.
ssi_request       Messages from the Data Domain Enterprise Manager when users connect with HTTPS.
windows           Messages about CIFS-related activity from CIFS clients attempting to connect to the Data Domain System.


Display

To list all of the files in the log directory, use the log list operation or click Log Files in the left panel of the Data Domain Enterprise Manager.

log list

The list is similar to the following:

# log list
Last modified              Size      File
------------------------   -------   -------------
Tue May 24 12:15:01 2005   3 KiB     boot.log
Wed May 25 00:28:27 2005   933 KiB   ddfs.info
Wed May 25 08:43:03 2005   42 KiB    messages
Sun May 22 03:00:01 2005   70 KiB    messages.1
Sun May 15 03:00:00 2005   111 KiB   messages.2

Note KiB = Kibibytes = the binary equivalent of Kilobytes.

Procedure: Understand a Log Message


1. View the log file. On the Data Domain system, use the command log view messages (or simply log view), or in the graphical user interface click Log Files in the menu bar at left, then scroll down and click the link messages.

2. In the log file, find an entry similar to the following:

Jan 31 10:28:11 syrah19 bootbin: NOTICE: MSG-SMTOOL-00006: No replication throttle schedules found: setting throttle to unlimited.

3. Look up the log message. A detailed description of log messages can be obtained from the Data Domain Support website, https://support.datadomain.com/, by clicking Software Downloads, then the book icon under Docs for the given release, then Error Message Catalog.

4. In the web page of log messages, search for the message "MSG-SMTOOL-00006". Find the following:

ID: MSG-SMTOOL-00006 - Severity: NOTICE - Audience: customer
Message: No replication throttle schedules found: setting throttle to unlimited.\n
Description: The restorer cannot find a replication throttle schedule. Replication is running with throttle set to unlimited.
Action: To set a replication throttle schedule, run the replication throttle add command.


5. Based on the message, the user could run the "replication throttle add" command to set the throttle.

Procedure: Archive Log Files


To archive log files, use FTP to copy the files to another machine.

1. On the Data Domain System, use the adminaccess show ftp command to see that the FTP service is enabled. If the service is not enabled, use the command adminaccess enable ftp.

2. On the Data Domain System, use the adminaccess show ftp command to see that the FTP access list has the IP address of your remote machine or a class-C address that includes your remote machine. If the address is not in the list, use the command adminaccess add ftp <ipaddr>.

3. On the remote machine, open a web browser.

4. In the Address box at the top of the web browser, use FTP to access the Data Domain System. For example:

ftp://Data Domain System_name.yourcompany.com/

Note Some web browsers do not automatically ask for a login if a machine does not accept anonymous logins. In that case, add a user name and password to the FTP line. For example:

ftp://sysadmin:your-pw@Data Domain System_name.yourcompany.com/

5. At the login popup, log into the Data Domain System as user sysadmin.

6. On the Data Domain System, you are in the directory just above the log directory. Open the log directory to list the messages files.

7. Copy the file that you want to save. Right-click the file icon and select Copy To Folder from the menu. Choose a location for the file copy.

8. If you want the FTP service disabled on the Data Domain System, use SSH to log into the Data Domain System as sysadmin and give the command adminaccess disable ftp.


SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath.


Disk Management

13

The Data Domain System disk command manages disks and displays disk locations, logical (RAID) layout, usage, and reliability statistics. Command output examples in this chapter show systems with 15 disk drives. Each Data Domain System model reports on the number of disks actually in the system. With a DD560 that has one or more Data Domain external disk shelves, commands also include entries for all enclosures, disks, and RAID groups. See the Data Domain publication ES20 Expansion Shelf User Guide for details about disks in external shelves.

A Data Domain System has either 8 or 15 disks, depending on the model. Each disk in a Data Domain system has two LEDs at the bottom of the disk carrier. The right LED on each disk flashes (green or blue, depending on the Data Domain system model) whenever the system accesses the disk. The left LED glows red when the disk has failed. In a DD460 or DD560, both LEDs are dark on the disk that is available as a spare. DD460 and DD560 systems maintain data integrity with a maximum of two failed disks. The DD410 and DD430 models have no spare and maintain data integrity with a maximum of one failed disk. DD530 and DD510 models have one spare and maintain data integrity with a maximum of two failed disks.

Each disk in an external shelf has two LEDs at the right edge of the disk carrier. The top LED is green and flashes when the disk is accessed or when the disk is the target of a beacon operation. The bottom LED is amber and glows steadily when the disk has failed.

The disk-identifying variable used in disk commands (except gateway-specific commands) is in the format enclosure-id.disk-id. An enclosure is a Data Domain system or an external disk shelf. A Data Domain system is always enclosure 1 (one). For example, disk 12 in a Data Domain system is 1.12. Disk 12 in the first external shelf is 2.12.

On gateway Data Domain Systems (that use 3rd-party physical storage disk arrays other than Data Domain external disk shelves), the following command options are not valid:

disk beacon
disk expand
disk fail
disk unfail
disk show failure-history
disk show reliability-data

With gateway storage, output from all other disk commands returns information about the LUNs and volumes accessed by the Data Domain System.


Expand from 9 disks to 15 disks


To expand disk usage from 8 disks plus one spare to 14 disks plus one spare, use the disk expand command.

disk expand

This command works only on the DD510 and DD530 and is for the sysadmin user. Expansion can occur only when the first 9 disks are not in a degraded state and there is at least one spare disk. (To verify this, enter the command disk status. In the response, the "in use" line must show at least 8 disks as in use, and the "spare" line must show at least one disk as spare.)

In the following example, the user checks disk status, which proves satisfactory, and then runs disk expand:

# disk status
Normal - system operational
1 disk group total
1 disk group present
8 drives are operational
8 drives are "in use"

# disk expand

Add a LUN
For gateway systems only. Add a new LUN to the current volume. To get the dev-ID, use the disk rescan command and then the disk show raid-info command. The dev-ID format is the word dev and the number as seen in output from the disk show raid-info command. See Procedure: Adding a LUN on page 57 for details.

disk add dev<dev-id>

For example, to add a LUN with a dev-id of 2 as shown by the disk show raid-info command:

# disk add dev2

Fail a Disk
To set a disk to the failed state, use the disk fail enclosure-id.disk-id operation. The command asks for a confirmation before carrying out the operation. Available to administrative users only.

disk fail enclosure-id.disk-id
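For example, to manually fail the disk in slot 7 of the system enclosure (the disk ID 1.7 is illustrative):

# disk fail 1.7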


A failed disk is automatically removed from a RAID disk group and is replaced by a spare disk (when a spare is available). The disk use changes from spare to in use and the status becomes reconstructing. See Display RAID Status for Disks on page 180 to list the available spares. Note A Data Domain system can run with a maximum of two failed disks. Always replace a failed disk as soon as possible. Spare disks are supplied in a carrier for a Data Domain system or a carrier for an expansion shelf. DO NOT move a disk from one carrier to another.

Unfail a Disk
To change a disk status from failed to available, use the disk unfail enclosure-id.disk-id command. Use the command when replacing a failed disk. The new disk in the failed slot is seen as failed until the disk is unfailed.

disk unfail enclosure-id.disk-id
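For example, after replacing the failed disk in slot 7, return it to the available state (the disk ID 1.7 is illustrative):

# disk unfail 1.7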

Look for New Disks, LUNs, and Expansion Shelves


To check for new disks or LUNs with gateway systems or when adding an expansion shelf, use the disk rescan operation. Administrative users only. disk rescan

Identify a Physical Disk


The disk beacon enclosure-id.disk-id operation causes the LED on the right (that signals normal operation) on the target disk to flash. Use the <Control> c key sequence to turn off the operation. (To check all disks in an enclosure, use the enclosure beacon command.) Administrative users only.

disk beacon enclosure-id.disk-id

For example, to flash the LED for disk 3 in a Data Domain system:

# disk beacon 1.3

Add an Expansion Shelf


To add a Data Domain expansion shelf disk storage unit, use the disk add enclosure command. The enclosure-id is always 2 for the first added shelf and 3 for the second. The Data Domain system always has the enclosure-id of 1 (one).

disk add enclosure enclosure-id
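For example, to add the first expansion shelf, which is always enclosure 2:

# disk add enclosure 2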

Reset Disk Performance Statistics


To reset disk performance statistics to zero, use the disk reset performance operation. See Display Disk Performance Details on page 183 for displaying disk statistics. disk reset performance

Display Disk Status


The disk status command reports the overall status of disks in the system. It displays the number of disks in use and failed, the number of spare disks available, and whether a RAID disk group reconstruction is underway. Note that the RAID portion of the display could show one or more disks as failed while the Operational portion of the display could show all drives as operating nominally. A disk can be physically functional and available, but not currently in use by RAID, possibly because of operator intervention.

disk status

On a gateway Data Domain System, the display shows only the number and state of the LUNs accessed by the Data Domain System. The remainder of the display is not valid for a gateway system. Reconstruction is done on one disk at a time. If more than one disk is to be reconstructed, the disks waiting for reconstruction show as spare or hot spare until reconstruction starts on the disk. Note that the disks in a new expansion shelf recognized with the disk rescan command show a status of unknown. Use the disk add enclosure command to change the status to in use.

Output Format
The general format of the disk status command output is as follows:

1. <summary> - <description>
   This line shows a summary of disks in the system. The summary can be "Error", "Normal", or "Warning". If it says "Normal", you need look no further, because all the disks in the system are in good condition. If it says "Warning", the system is operational, but there are problems that need to be corrected, so see the further information given. If it says "Error", the system is not operational, so look at the further information given to fix the problems. The description provides more detail for the summary. See the output examples below.

2. <additional information>
   This section shows lists of disks in different states relevant to the above summary line.


There are three possible cases of summary:

Error: A brand-new "head unit" will be in this state when foreign storage is present. For a system that has been configured with some storage, "Error" indicates that some or all of its own storage is missing.

Normal: A brand-new "head unit" is normal if there is no configured storage attached, it has never used 'disk add' or 'disk add enclosure' before, and all disks outside of the "head unit" are not in any of the following states: "in use", "foreign", or "known". For a system that has been configured with "data storage", "Normal" indicates that the entire "data storage" set is present.

Warning: The special case of a system that would have been "Normal" if the system had had none of the following conditions that require user action:
- RAID system degraded
- Foreign storage present
- Some of the disks are failed or absent

Output Examples
A) Brand-new "head unit".
   Error - data storage unconfigured and foreign storage attached
   Error - data storage unconfigured, a complete set of foreign storage attached
   Error - data storage unconfigured, multiple set of foreign storage attached

B) Configured "head unit" without its own "data storage".
   Error - system non-operational, storage missing
   Error - system non-operational, incomplete set of foreign storage attached
   Error - system non-operational, a complete set of foreign storage attached
   Error - system non-operational, multiple set of foreign storage attached

C) Configured "head unit" with part of its "data storage".
   Error - system non-operational, partial storage attached

If there is any foreign storage in the system that belongs to any of the above cases (A), (B), and (C), a list of foreign storage such as the following is shown:

System serialno   Number of disks   Storage Set
---------------   ---------------   -----------
7DD6843004        42                complete
7DD6841003        14                incomplete

In case (C), the number of total (expected) and present RAID groups is also shown.

D) Normal - system operational

E) Warning - unprotected - no redundant protection; system operational
   Warning - degraded - single redundant protection; system operational
   Warning - foreign disk attached; system operational
   Warning - disk fails; system operational
   Warning - disk absent; system operational
   Warning - disk has invalid status; system operational

Note that in the above case (E) the descriptions are shown in the order of severity, from least severe to most severe. For example, a system may contain a failed disk and have no redundant protection at the same time. In this case, the "no redundant protection" message is shown because it has the higher severity (is more severe).

Display Disk Type and Capacity Information


The display of disk information for a Data Domain System has the following columns:

Disk (Enc.Slot) is the enclosure and disk numbers.
Manufacturer/Model shows the manufacturer's model designation.
Firmware is the firmware revision on each disk.
Serial No. is the manufacturer's serial number for the disk.
Capacity is the data storage capacity of the disk when used in a Data Domain System. The Data Domain convention for computing disk space defines one gigabyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.

The display for a gateway Data Domain System has the following columns:

Disk displays each LUN accessed by the Data Domain System as a disk.
LUN is the LUN number given to a LUN on the 3rd-party physical disk storage system.
Port WWN is the world-wide name of the port on the storage array through which data is sent to the Data Domain System.
Manufacturer/Model includes a label that identifies the manufacturer. The display may include a model ID or RAID type or other information depending on the vendor string sent by the storage array.
Firmware is the firmware level used by the 3rd-party physical disk storage controller.
Serial No. is the serial number from the 3rd-party physical disk storage system for a volume that is sent to the Data Domain System.
Capacity is the amount of data in a volume sent to the Data Domain System.

Display

Use the disk show hardware operation or click Disks in the left panel of the Data Domain Enterprise Manager to display disk information.

disk show hardware

The display for disks in a Data Domain System is similar to the following:

# disk show hardware
Disk         Manufacturer/Model   Firmware   Serial No.       Capacity
(Enc.Slot)
----------   ------------------   --------   --------------   ----------
1.1          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VYGC   372.61 GiB
1.2          HDS724040KLSA80      KFAOA32A   KRFS06RAG9TYYC   372.61 GiB
1.3          HDS724040KLSA80      KFAOA32A   KRFS06RAG99EVC   372.61 GiB
1.4          HDS724040KLSA80      KFAOA32A   KRFS06RAGA002C   372.61 GiB
1.5          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SGMC   372.61 GiB
1.6          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VX7C   372.61 GiB
1.7          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SEKC   372.61 GiB
1.8          HDS724040KLSA80      KFAOA32A   KRFS06RAG9U27C   372.61 GiB
1.9          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHXC   372.61 GiB
1.10         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJWC   372.61 GiB
1.11         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHRC   372.61 GiB
1.12         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SK2C   372.61 GiB
1.13         HDS724040KLSA80      KFAOA32A   KRFS06RAG9WYVC   372.61 GiB
1.14         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJDC   372.61 GiB
1.15         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SKBC   372.61 GiB
----------   ------------------   --------   --------------   ----------
15 drives present.

Note GiB = Gibibytes, the base 2 equivalent of Gigabytes.

Display RAID Status for Disks


To display the RAID status and use of disks, which disks have failed from a RAID point of view, spare disks available for RAID, and the progress of a disk group reconstruction operation, use the disk show raid-info operation.

disk show raid-info

When a spare disk is available, the Data Domain System file system automatically replaces a failed disk with a spare and begins the reconstruction process to integrate the spare into the RAID disk group. The disk use changes from spare to in use and the status becomes reconstructing. In the sample display below, disk 8 is a spare disk.

The display for a gateway Data Domain System shows only as many Disk and "drives are in use" entries as LUNs accessed by the Data Domain System. All other lines in the drives section of the display are always zero for gateway displays.

Reconstruction is done on one disk at a time. If more than one disk is to be reconstructed, the disks waiting for reconstruction show as spare or hot spare until reconstruction starts on the disk. During reconstruction, the output line "x drives are undergoing reconstruction" includes a percentage of reconstruction that is completed. The percentage is the average amount completed for all disks that are currently undergoing reconstruction.

The display for disks in a Data Domain System is similar to the following:


# disk show raid-info
Disk         State    Additional Status
(Enc.Slot)
----------   ------   -----------------
1.1          in use   (dg0)
1.2          in use   (dg0)
1.3          in use   (dg0)
1.4          in use   (dg0)
1.5          in use   (dg0)
1.6          in use   (dg0)
1.7          in use   (dg0)
1.8          spare
1.9          in use   (dg0)
1.10         in use   (dg0)
1.11         in use   (dg0)
1.12         in use   (dg0)
1.13         in use   (dg0)
1.14         in use   (dg0)
1.15         in use   (dg0)
----------   ------   -----------------
14 drives are in use
0 drives have "failed"
1 drive is spare(s)
0 drives are undergoing reconstruction
0 drives are not in use
0 drives are missing/absent

Display the History of Disk Failures


The disk show failure-history operation displays a list of serial numbers for all disks that have ever been failed in the Data Domain System. Use the disk show hardware command to display the serial numbers of current disks. Administrative users only. disk show failure-history

Display Detailed RAID Information


To display RAID disk groups and the status of disks within each group, use the disk show detailed-raid-info operation.

disk show detailed-raid-info

The Slot column in the Disk Group section shows the logical slot for each disk in a RAID subgroup. In the example below, the RAID group name is ext3 with subgroups of ext3_1 through ext3_4 (only subgroups ext3_1 and ext3_2 are shown). The number of gigabytes allocated for the RAID group and for each subgroup is shown just after the group or subgroup name. The Raid Group section shows the logical slot and actual disks for the whole group. On a gateway system, the display does not include information about individual disks.

# disk show detailed-raid-info
Disk Group (dg0) - Status: normal
Raid Group (ext3):(raid-0)(61.6 GiB) - Status: normal
Raid Group (ext3_1):(raid-6)(15.26 GiB) - Status: normal
Slot   Disk   State    Additional Status
----------------------------------------
0      1.10   in use   (dg0)
1      1.11   in use   (dg0)
2      1.12   in use   (dg0)
----------------------------------------
Raid Group (ext3_2):(raid-6)(15.26 GiB) - Status: normal
Slot   Disk   State    Additional Status
----------------------------------------
0      1.13   in use   (dg0)
1      1.14   in use   (dg0)
2      1.15   in use   (dg0)
----------------------------------------
Raid Group (ppart):(raid-6)(2.47 TiB) - Status: normal
Slot   Disk   State    Additional Status
----------------------------------------
0      1.16   in use   (dg0)
1      1.11   in use   (dg0)
2      1.12   in use   (dg0)
3      1.13   in use   (dg0)
4      1.14   in use   (dg0)
5      1.15   in use   (dg0)
6      1.6    in use   (dg0)
7      1.9    in use   (dg0)
8      1.10   in use   (dg0)
9      1.1    in use   (dg0)
10     1.2    in use   (dg0)
11     1.3    in use   (dg0)
12     1.4    in use   (dg0)
13     1.5    in use   (dg0)
14     1.7    in use   (dg0)
----------------------------------------

Spare Disks
Disk
(Enc.Slot)   State
----------   -----
1.8          spare
----------   -----

Unused Disks
None

Note MiB = Mebibytes, the base 2 equivalent of Megabytes. TiB = Tebibytes, the base 2 equivalent of Terabytes.

Display Disk Performance Details


The display of disk performance shows statistics for each disk. Each column displays statistics averaged over time since the last disk reset performance command or since the last system power cycle. See Reset Disk Performance Statistics on page 176 for reset details. Command output from a gateway Data Domain System lists each LUN accessed by the Data Domain System as a disk.

Disk (Enc.Slot) is the enclosure and disk numbers.
Read sects/s is the average number of sectors per second read from each disk.
Write sects/s is the average number of sectors per second written to each disk.
Cumul. MiBytes/s is the average number of mebibytes per second written to each disk.
Busy is the average percent of time that each disk has at least one command queued.


Display

Use the disk show performance operation or click Disks in the left panel of the Data Domain Enterprise Manager to see disk performance statistics.

disk show performance

The display is similar to the following:

# disk show performance
Disk         Read      Write     Cumul.      Busy
(Enc.Slot)   sects/s   sects/s   MiBytes/s
----------   -------   -------   ---------   ----
1.1          378       426       0.392       11 %
1.2          0         0         0.000       0 %
1.3          346       432       0.379       10 %
1.4          0         0         0.000       0 %
1.5          410       439       0.414       11 %
1.6          397       427       0.402       11 %
1.7          360       439       0.389       11 %
1.8          (spare)   (spare)   (spare)
1.9          358       430       0.384       10 %
1.10         390       429       0.399       11 %
1.11         412       430       0.411       11 %
1.12         379       429       0.394       11 %
1.13         392       426       0.399       11 %
1.14         373       427       0.390       12 %
1.15         424       432       0.417       12 %
----------   -------   -------   ---------   ----
Cumulative                       5.583 MiB/s 11 % busy

Note MiBytes = MiB = Mebibytes, the base 2 equivalent of Megabytes.

Display Disk Reliability Details


Disk reliability information details the hardware state of each disk. The information is generally for the use of Data Domain support staff when troubleshooting.

Disk is the enclosure.disk-id disk identifier.
The ATA Bus CRC Err column shows uncorrected raw UDMA CRC errors.
Reallocated Sectors indicates the end of the useful disk lifetime when the number of reallocated sectors approaches the vendor-specific limit. The limit is 2000 for Western Digital disks and 2000 for Hitachi disks. Use the disk show hardware command to display the disk vendor.
Temperature shows the current temperature of each disk in Celsius and Fahrenheit. The allowable temperature range for disks is from 5 to 45 degrees Celsius.
Question marks (?) in the four right-most columns mean that disk data is not accessible. Use the disk rescan command to restore access.

Display

Use the disk show reliability-data operation or click Disks in the left panel of the Data Domain Enterprise Manager to see the reliability statistics.

disk show reliability-data

The display is similar to the following:
# disk show reliability-data
Disk          ATA Bus   Reallocated   Temperature
(Encl.Slot)   CRC Err   Sectors
-----------   -------   -----------   ------------
1.1           0         0             33 C   91 F
1.2           0         0             33 C   91 F
1.3           0         0             32 C   90 F
1.4           0         0             33 C   91 F
1.5           0         0             34 C   93 F
1.6           0         0             34 C   93 F
1.7           0         0             33 C   91 F
1.8           0         0             33 C   91 F
1.9           0         0             34 C   93 F
1.10          0         0             34 C   93 F
1.11          0         0             35 C   95 F
1.12          0         0             33 C   91 F
1.13          0         0             34 C   93 F
1.14          0         0             34 C   93 F
1.15          0         0             56 C   133 F
-----------   -------   -----------   ------------
14 drives operating normally.
1 drive reporting excessive temperatures.


Disk Space and System Monitoring

14

This chapter:

- Gives general guidelines for predicting how much disk space your site may use over time.
- Explains how to deal with Data Domain System components that run out of disk space.
- Gives background information on how to reclaim Data Domain System disk space.

Note Data Domain offers guidance on setting up backup software and backup servers for use with a Data Domain System. Because such information tends to change often, it is available on the Data Domain Support web site (http://support.datadomain.com/). See the Technical Notes section on the web site.

Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.

Space Management
A Data Domain System is designed as a very reliable online cache for backups. As new backups are added to the system, old backups are removed. Such removals are normally done under the control of backup software (on the backup server) based on the configured retention period. The process with a Data Domain System is very similar to tape policies where older backups are retired and the tapes are reused for new backups. When backup software removes an old backup from a Data Domain System, the space on the Data Domain System becomes available only after the Data Domain System internal clean function reclaims disk space. A good way to manage space on a Data Domain System is to retain as many online backups as possible with some empty space (about 20% of total space available) to allow for data growth over time. Data growth on a Data Domain System is primarily affected by:

- The size and compressibility of the primary storage that you are backing up.
- The retention period that you specify with the backup software.


If you back up volumes whose total size is near the space available for data storage on a Data Domain System (for example, 4 TiB--the base 2 equivalent of TB--on a model DD460, which has 3.9 TiB available; see the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide), or if the retention time for volumes that do not compress well is greater than four months, backups may fill space on a Data Domain System more quickly than expected.

Estimate Use of Disk Space


The Data Domain System's use of compression when storing data means that you can look at the use of disk space in two ways: physical and virtual. (See Data Compression on page 6 for details about compression.) Physical space is the actual disk space used on the Data Domain System. Virtual space is the amount of space needed if all data and multiple backup images were uncompressed.

Through the Data Domain System, the filesys show space command (or the alias df) shows both physical and virtual space. See Manage File system Use of Disk Space on page 189. Directly from clients that mount a Data Domain System, use your usual tools for displaying a file system's physical use of space.
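For example, from an NFS client that mounts the Data Domain System, the standard df command reports the physical view (the mount point /mnt/ddr is illustrative):

# df -h /mnt/ddr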

The Data Domain System generates log messages as the file system approaches its maximum size. The following information about data compression gives guidelines for disk use over time. The amount of disk space used over time by a Data Domain System depends on:

- The size of the initial full backup.
- The number of additional backups (incremental and full) over time.
- The rate of growth for data in the backups.

For data sets with average rates of change and growth, data compression generally matches the following guidelines:

- For the first full backup to a Data Domain System, the compression factor is about 3:1. Disk space used on the Data Domain System is about one-third the size of the data before the backup.
- Each incremental backup to the initial full backup has a compression factor of about 6:1.
- The next full backup has a compression factor of about 60:1. All data that was new or changed in the incremental backups is already in storage.
- Over time, with a schedule of weekly full and daily incremental backups, the aggregate compression factor for all the data is about 20:1. The compression factor is lower for incremental-only data or for backups without much duplicate data. Compression is higher with only full backups.

Manage File system Use of Disk Space


The Data Domain System command filesys show space (or the alias command df) displays the amount of disk space available for and used by Data Domain System file system components.

# filesys show space
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -        0.4           -      -                -
/backup: post-comp      155.1        3.2       151.9     2%              0.0
/ddvar                   19.7        3.0        15.7    16%                -
------------------   --------   --------   ---------   ----   --------------
* Estimated based on last cleaning of 2008/02/12 06:14:02.

The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain System. Virtual data is the amount of data sent to the Data Domain System from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed by the filesys show compression command in its Original Bytes line, which includes system overhead.

The /backup: post-comp line shows the total physical disk space available for data, the actual physical space used for compressed data, and the physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain System accepts no more data from backup servers.

The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount.

If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain System.
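For example, a minimal sketch of checking the schedule and then cleaning more often (the day and time shown are illustrative; the default is tue 0600, and Clean Operations on page 220 covers the full syntax):

# filesys clean show-schedule
# filesys clean schedule sun 0300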

The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.

Display the Space Graph


For information on displaying the space graph, see the chapter Enterprise Manager, section Display the Space Graph.

Reclaim Data Storage Disk Space


When your backup application (such as NetBackup or NetWorker) expires data, the data is marked by the Data Domain System for deletion. However, the data is not deleted immediately. The Data Domain System clean operation deletes expired data from the Data Domain System disks.

During the clean operation, the Data Domain System file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain System. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space.

A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends that you run the clean operation at least once a week. If you want to increase file system availability and if the Data Domain System is not short on disk space, consider changing the schedule to clean less often. See Clean Operations on page 220 for details on changing the schedule. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned. A Data Domain system that has become full may need multiple clean operations to clean 100% of the file system, especially if there is an external shelf. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ... ), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100.


Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.

Maximum Number of Files and Other Limitations


Number of Files
Data Domain recommends storing no more than 100 million files on a system. A larger number of files affects performance, but is not otherwise a problem. Some processes, such as file system cleaning, may run much longer with a very large number of files. For example, the enumeration phase of cleaning takes about 5 minutes for one million files and over 8 hours for 100 million files.

A system does not have a set number of files as a capacity limit. Available disk space is used as needed to store data and the metadata that describes files and directories. In round numbers, each file or directory has about 1000 bytes of metadata. A Data Domain System with 5 TB of space available could hold about 5 billion empty files. The amount of space used by data in files directly reduces the amount of space available for metadata, and the number of file and directory metadata entries directly reduces the amount of space available for data.

Inode Reporting
An NFS or CIFS client request causes a Data Domain System to report a capacity of about 2 billion inodes (files and directories). A Data Domain System can safely exceed that number, but the reporting on the client may be incorrect.

Path Name Length


The maximum length of a full path name (including the characters in /backup) in 4.3 or later releases is 1023 bytes. The maximum length of a symbolic link is also 1023 bytes.


Directory Size for Directory Replication


A Data Domain system does have limits for the number of files under a single directory when using directory replication. A single directory with more files than the limits shown below cannot be used with directory replication initialization, resynchronization, or recovery operations.
Platforms        Maximum number of files
--------------   -----------------------
DD4xx            1 million
DD510            2 million
DD530            4 million
DD560, DD565     16 million
DD580, DD690     20 million

When a Data Domain System is Full


A Data Domain System has three levels of being full. Each level has different limitations. At each level, a filesys clean command makes disk space available for continued operations. Deleting files and expiring snapshots do not reclaim disk space. Only a filesys clean operation reclaims disk space.

Level 1: When no more new data can be written to the file system, an informative out of space message is returned. Run the filesys clean command.

Level 2: Deleting files and expiring snapshots increases the amount of space used for each file that is involved as the new state is recorded. After deleting a large number of files or expiring a large number of snapshots or both, the space available does not allow any more file deletions. At that time, a misleading permission denied error message appears. A full system that generates permission denied messages is most likely at this level. Run the filesys clean command.

Level 3: After the permission denied message, you can still expire snapshots until no more disk space is available. Attempts to expire snapshots, delete files, or write new data all fail at this level. Run the filesys clean command.


Multipath

15

Multipath allows external storage I/O paths to be used for failover and load balancing across paths. Multipath is available in all releases from 4.5 onward, on all Data Domain systems that support dual-port HBAs. (Multipath may also be supported if the system has two single-ported HBAs, depending on the upgrade path and other factors.)

Note 4.4.x releases have multipath functionality on Gateway systems only.

Failover means that for any system that has more than one path, if the path being used fails, the system begins using the other path with no interruption of service. On any Data Domain system that has more than one path configured and enabled, failover happens automatically.

Multipath Commands for Gateway only


The following seven disk commands:

disk multipath suspend/resume port
disk multipath option set auto-failback enabled
disk multipath option set auto-failback disabled
disk multipath option reset auto-failback
disk multipath failback
disk multipath resume port
disk multipath suspend port

...are only available on Gateway systems. They display useful gateway-oriented information and control multipathing for gateway systems. They are described below.

Suspend or Resume a Port Connection (Gateway only)


To suspend or resume a port connection, use the command: disk multipath suspend/resume port


Enable Auto-Failback (Gateway only)


Explanation of auto-failback: suppose a two-path system is using its optimal path, that path goes down, and the system fails over to the second path. Later the optimal path comes back up. What happens next depends on auto-failback:

Case 1: auto-failback is enabled. The system fails back to the optimal path automatically.

Case 2: auto-failback is disabled. The system continues using the second path until the user manually commands it to fail back to the optimal path, using the command disk multipath failback.

To enable auto-failback (that is, to configure the system to go back to using the optimal path when it comes back up), use the command: disk multipath option set auto-failback enabled

Disable Auto-Failback (Gateway only)


To disable auto-failback (that is, to configure the system not to go back to using the optimal path when it comes back up until manually commanded to do so), use the command: disk multipath option set auto-failback disabled

Reset Auto-Failback to its Default of enabled (Gateway only)


To reset auto-failback to its default value (enabled), use the command: disk multipath option reset auto-failback

Go back to using the optimal path (Gateway only)


To manually command the system to go back to using the optimal path, use the command: disk multipath failback

Allow I/O on a specified initiator port (Gateway only)


To allow I/O on a specified initiator port, use the disk multipath resume port command. disk multipath resume port


Disallow I/O on a specified initiator port (Gateway only)


To disallow I/O on a specified initiator port, use the disk multipath suspend port command. This command may be used to stop traffic on particular ports during scheduled maintenance of a SAN or storage array. This command does not drop the FC link.

disk multipath suspend port
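A minimal usage sketch (the port identifier 3b is illustrative, following the slot/port form shown by the disk port show summary command):

# disk multipath suspend port 3b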

Multipath Commands for all systems


Display Port Connections
To display port connection information and status, use the disk port show summary operation.

disk port show summary

Output for Gateway:

# disk port show summary
Port   Connection    Link     Port ID   Connected        Status
       Type          Speed              Number of LUNs
----   -----------   ------   -------   --------------   ------
3a     FC (direct)   4 Gbps   000002    4                online
3b     FC (direct)   4 Gbps   0000e8    4                online
4a     FC (direct)   4 Gbps   0000e8    4                online
4b     FC (direct)   4 Gbps   0000e8    4                online

Output for ES20 Expansion Shelves (example is a DD690 with 6 shelves):

# disk port show summary
Port   Connection   Link      Connected       Status
       Type         Speed     Enclosure IDs
----   ----------   -------   -------------   ------
3a     SAS          12 Gbps   2, 3, 4         online
3b     SAS          12 Gbps   5, 6, 7         online
4a     SAS          12 Gbps   5, 6, 7         online
4b     SAS          12 Gbps   2, 3, 4         online


Port: See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type is FC (Fibre Channel) for a gateway system.
Link Speed is the HBA port link speed.
Port ID is the identification number of the port.
Connected Number of LUNs is the number of LUNs seen through the port.
Connected Enclosure IDs lists the ID numbers of the shelves connected to the port.
Status is online or offline. Offline means that no LUNs are seen by the port.

Enable Monitoring of Multipath Configuration


To enable multipath configuration monitoring, use the disk multipath option set monitor command. disk multipath option set monitor When multipath configuration monitoring is enabled, failures in paths to disk devices trigger alerts. The system gives an alert if a LUN (for Gateway) or a disk drive (for ES20 shelf) has a single path or if a path fails. If multipath configuration changes are made after monitoring is enabled, the changes are not recognized by the monitoring feature. Disable (reset) and then enable (set) monitoring after making configuration changes.
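For example, to refresh monitoring after a configuration change:

# disk multipath option reset monitor
# disk multipath option set monitor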

Disable Monitoring of Multipath Configuration


To disable multipath configuration monitoring, use the disk multipath option reset monitor command.

disk multipath option reset monitor

When multipath configuration monitoring is disabled, path failures and single-pathed LUNs do not trigger alerts.

Show Monitoring of Multipath Configuration


To show whether multipath configuration monitoring is enabled or disabled, use the disk multipath option show command. disk multipath option show


Show Multipath Status


Show configuration and running status for all paths to the disks in the specified enclosure. By default, show information for all enclosures.

disk multipath status [port-id]

The output may vary greatly depending on whether the command is run on a Gateway system or a system with an expansion shelf.

For an expansion shelf:

# disk multipath status
Port   Hops   Status    Disk
----   ----   -------   ----------
3a     1      Active    2.1 - 2.16
       2      Standby   3.1 - 3.16
3b     2      Standby   2.1 - 2.16
       1      Active    3.1 - 3.16
----   ----   -------   ----------

For Gateway:

# disk multipath status
Port   Target WWNN               Hops   Target WWPN               LUN   Disk   Status
----   -----------------------   ----   -----------------------   ---   ----   -------
3a     50:06:01:61:10:20:95:ad   1      50:06:01:61:1f:20:95:ad   0     dev1   Active
                                                                        dev2   Active
3b     50:06:01:61:10:20:95:af   1      50:06:01:61:1f:20:95:af   0     dev1   Standby
                                                                        dev2   Standby
----   -----------------------   ----   -----------------------   ---   ----   -------

Port is the port number on the HBA. Looking at the back of a Gateway system, the slots are numbered from right to left, and the ports (on a dual-port Fibre Channel HBA) are given the letter "a" for the upper port and "b" for the lower. Thus the rightmost slot has port 1a (the upper port) and 1b (the lower port); the slot to the left of it has ports 2a (upper) and 2b (lower); and so on.

Hops is the number of cable jumps to reach the destination.


Target WWNN is the World Wide Node Name for the target array.
Target WWPN is the World Wide Port Name for the target port.
LUN displays the Logical Unit Numbers visible for the specified system disks (or drives).
Disk is the Disk ID.
Status is the running status of the path. Possible values: Active, Standby, Failed, Disabled.

Show Multipath History


To show path event history for the past day, use the disk multipath show history command.

disk multipath show history

For an expansion shelf:

# disk multipath show history
Time                Port   Target       Target Serial No.   Disk Serial No.   Event
                           (Enc.Disk)
-----------------   ----   ----------   -----------------   ---------------   ------
03/08/07 12:30:04   3a     2.1          IMS584600001602     KRVN67ZAKLU9WH    Active
-----------------   ----   ----------   -----------------   ---------------   ------

For Gateway:

# disk multipath show history
Time                Port   Target WWPN               LUN   Serial No.       Event
-----------------   ----   -----------------------   ---   --------------   ------
03/08/07 12:30:04   1a     50:06:01:61:10:20:95:af   0     KRVN67ZAKLU9WH   Active
-----------------   ----   -----------------------   ---   --------------   ------

Time is the time when an event occurred.
Port is the initiator of a path, identified by PCI slot and HBA port number.
Target WWPN is the target of a path, identified by WWPN.
Target (Enc.Disk) is the target of a path, identified by enclosure and disk.
LUN is the Logical Unit Number.
Target Serial No. is the serial number of the shelf controller.
Disk Serial No. is the serial number of the disk.
Event is the type of event: Active, Standby, Failed, Disabled.

Show Multipath Statistics


Show statistics for all paths of all disks.

disk multipath show stats

For an expansion shelf:

# disk multipath show stats
Port   enc   Read       Read       Write      Write
             Requests   Failures   Requests   Failures
----   ---   --------   --------   --------   --------
3a     2     123456     0          123456     0
       3     0          0          0          0
3b     2     0          0          0          0
       3     123456     0          123456     0
----   ---   --------   --------   --------   --------

For a second expansion shelf, statistics are shown per disk:

# disk multipath show stats
Port   disk   Read       Read       Write      Write
              Requests   Failures   Requests   Failures
----   ----   --------   --------   --------   --------
3a     2.1    123456     0          123456     0
       2.2    0          0          0          0
       ...
3b     2.1    0          0          0          0
       2.2    123456     0          123456     0
       ...
----   ----   --------   --------   --------   --------

For Gateway:

# disk multipath show stats
Port   Target WWPN               LUN   Disk   Status    Read       Read       Write      Write
                                                        Requests   Failures   Requests   Failures
----   -----------------------   ---   ----   -------   --------   --------   --------   --------
3a     50:06:01:61:10:20:95:ad   0     dev1   Active    123456     0          123456     0
                                 2     dev2   Active    123456     0          123456     0
3b     50:06:01:61:10:20:95:af   0     dev1   Standby   0          0          0          0
                                 2     dev2   Standby   0          0          0          0
----   -----------------------   ---   ----   -------   --------   --------   --------   --------

enc is the enclosure ID.
Port is the port number, identified by PCI slot ID and port number on the HBA.
Target WWPN is the Port WWN of the target.
LUN is the Logical Unit Number.
Disk is the Disk ID.
Status is the running status of the path. Possible values: Active, Standby, Failed, Disabled.
Read Requests is the number of read requests issued since the last reset. A 64-bit number.
Read Failures is the number of read request failures that have occurred since the last reset. A 64-bit number.
Write Requests is the number of write requests issued since the last reset. A 64-bit number.
Write Failures is the number of write request failures that have occurred since the last reset. A 64-bit number.

Clear Multipath Statistics


Clear the statistics of all paths to the specified disk. By default, clear statistics for all disks in all enclosures. disk multipath reset stats


SECTION 5: File System and Data Protection


Data Layout Recommendations

16

This chapter gives data layout recommendations for Data Domain systems.

Introduction
Data Domain sells a number of platforms that provide an ideal disk-based environment for efficiently storing backups and archived data. These appliances are easy to set up and install, and set the standard for storage efficiency through a combination of deduplication and compression technologies. While these appliances are easy to install, configure, and manage, questions arise as to how best to organize the data stored on them to benefit maximally from their use. It is common for a user to wonder how well the data is being compressed, and several tools are provided to answer this question. But when questions arise as to how effective the compression is on specific data sets or types, some simple organization at the outset can help simplify troubleshooting down the line. This chapter outlines some of these recommendations. Following these recommendations when the appliance is first configured makes determining the compression characteristics of data sets much easier. It also simplifies backup and recovery processes by clearly separating various data types so they can be quickly identified and accessed.

Issue
The primary reason customers are interested in Data Domain systems is to make the most effective use of their storage footprint. It is important to be able to measure and understand these compression effects and to know for certain what is compressing well and what isn't. By using the directory structure on the Data Domain system, it is easier to observe and troubleshoot these issues.

Background
The Data Domain system is an appliance that presents three types of interfaces to the data center environment: NFS via IP and Ethernet, CIFS (Microsoft file sharing) via IP and Ethernet, and Virtual Tape Library emulation via Fibre Channel. These are well understood, industry-standard access mechanisms that are simple to set up and use. The appliance also has a small set of configuration and monitoring tools accessible via either the command line or the web-based GUI. This chapter focuses on those commands used to report on the deduplication and compression effects that characterize the system.

Reporting on compression
The reason directory organization is an important consideration on a Data Domain system is that one administrative command reports how well the compression capabilities of a DDR are being utilized. That command is filesys show compression <directory>. The documentation for this command reads:

filesys show compression [path] [last {n hours | n days}]

In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global and then local) plus the overhead space needed for meta-data. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to be the same as the amount displayed in the Pre-compression line of the filesys show space command, which does not include system overhead. The Original Bytes line gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may be different on a replication destination than on a replication source for the same files or file system. On the destination, internal handling of replicated meta-data and unwritten regions in files leads to the difference. The value for Meta-data includes an estimate for data that is in the Data Domain System internal index and is not updated when the amount of data on the Data Domain System decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the Meta-data line of the filesys show space command.

The display is similar to the following:

# filesys show compression /backup/usr
Total files: 6,018; bytes/storage_used: 10.7
Original Bytes: 6,599,567,913,746
Globally Compressed: 992,690,774,605
Locally Compressed: 608,225,239,283
Meta-data: 7,329,091,080

It is recommended that the optional parameter "last 24 hours" be used, since this reports on the data most recently backed up and gives the most accurate measure of how recent backups are compressing. Without this optional parameter, the compression reported is the overall compression experienced during the lifetime of the filesystem. When the system is first placed into service, much of the data is seen as new, so the early compression is generally lower than it will be later. Over time compression improves and should reach a near-steady state, which the "last 24 hours" option makes it possible to monitor.
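For example, a query scoped to a single data set might look like the following (the path /backup/NFS/Oracle is illustrative, not a path the system creates for you):

# filesys show compression /backup/NFS/Oracle last 24 hours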

General guidelines for monitoring compression:


Use filesys show compression last 24 hours to get the compression for the last day's backup.

Use filesys show compression last 7 days to get a rough idea of the compression for the last week. This command is more useful for finding the backup dataset size for a week.

Use df to get the real compression numbers for the DDR.

By separating the data stored on the Data Domain system into separate subdirectories, the overall compression effects can be observed and measured using the command:

# filesys show compression

All compressed data on a Data Domain system is stored on the /backup filesystem. Therefore, all recommended organization takes place below this level.
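For example, assuming hypothetical /backup/NFS and /backup/VTL subdirectories have been created, each area's compression can be compared side by side:

# filesys show compression /backup/NFS last 7 days
# filesys show compression /backup/VTL last 7 days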

Considerations
Several approaches exist for organizing the data:

1. Client source of data
2. Category of data - NFS vs. CIFS vs. VTL
3. Application type

It's not really important which of these are used or combined, as long as enough organization is provided to determine the compression characteristics of specific areas of storage. At the same time, it is important to avoid so much organization that it gets in the way of effectively using the Data Domain system. If too many directories are created, setting up backup and recovery policies becomes more complicated, which leads to more management and more opportunities for error. So a careful balance needs to be maintained. An example of a way to lay out the directory structure is given in the figure Directory Structure Example on page 206.


Figure 17 Directory Structure Example

Further explanation and discussion of the figure

The first level of organization separates the data by which style of access is used to read/write the data on the Data Domain system. The next level separates out the major sources of backup data sent to the Data Domain system. In some circumstances, breaking this backup data into one additional level of organization can help show how the data from major applications is handled and compressed. Be aware that, when using the command filesys show compression <directory name>, specifying a <directory name> that has sub-directories shows a compression summary for all the sub-directories as well. To get the most granular information, specify the lowest relevant <directory name> in the tree whenever possible.


NFS issues
The Network File System was originally developed by Sun Microsystems and is the de facto standard today for sharing filesystem information across various flavors of UNIX platforms. All major UNIX derivatives, including Solaris, AIX, HP-UX, Linux, and FreeBSD, support this method of access over Ethernet.

Filesystem organizations
The example shown in the Directory Structure Example figure shows a separation of backup data into two types: home directories and Oracle data. It is not uncommon for two separate backup policies to exist for this situation: an enterprise backup application that backs up all user home directories, and the use of Oracle's RMAN utility to back up Oracle database information. Further separating the Oracle archivelog files from the rest of the database also provides the ability to monitor how the two portions compress independently. Keeping these directories separate allows administrators to know how space is being used and to adjust the retention policies accordingly. A general-purpose best practice is to isolate database logfiles from the database data and control files wherever possible. Logfiles generally do not compress particularly well, since they frequently contain data patterns never seen before, so keeping them separated allows their possibly negative effect on overall compression to be measured. For large environments with significantly different databases, an additional level of decomposition can be added either above or below the database/logfile separation.
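As a sketch of how such a layout might be created from a UNIX client before the first backup (the mount point /mnt/ddr and the mounting of the top-level /backup export are hypothetical; the directory names follow the example above):

# mount dd580a:/backup /mnt/ddr
# mkdir -p /mnt/ddr/NFS/HomeDirs
# mkdir -p /mnt/ddr/NFS/Oracle/data /mnt/ddr/NFS/Oracle/archivelogs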

Mount options
Since each of these subdirectories is also available as an NFS export, it is not unreasonable to take advantage of this fact and make only those directories available to the specific servers performing that type of backup. This provides improved security to the overall environment. Example of a UNIX /etc/vfstab or /etc/fstab file:

dd460a:/backup/NFS/HomeDirs /backup/target rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/data /backup/Oracle-data rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/archivelogs /backup/Oracle-logs rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0

On the Data Domain system the "nfs add client" command can be used to restrict export access of the mount points. Using "nfs show clients" we can see this in action:


# nfs show clients
path                   client                    options
---------------------  ------------------------  --------------------------------------
/backup/vm             172.28.0.205              rw,no_root_squash,no_all_squash,secure
/backup/vm             172.28.1.1                rw,no_root_squash,no_all_squash,secure
/backup/vm             172.28.1.2                rw,no_root_squash,no_all_squash,secure
/backup/vm             192.168.28.31             rw,no_root_squash,no_all_squash,secure
/backup/vm             blade1-vm-data.se.local   rw,no_root_squash,no_all_squash,secure
/backup/vm             blade2-vm-data.se.local   rw,no_root_squash,no_all_squash,secure
/backup/misc_backups   gen1.se.local             rw,no_root_squash,no_all_squash,secure
/backup/sample_data    *                         ro,no_root_squash,no_all_squash,secure
/backup/app_os_images  192.168.28.50             rw,no_root_squash,no_all_squash,secure
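For example, to restrict a hypothetical Oracle data export to the single database server that backs it up (the host name is illustrative, and the command is a sketch assuming the nfs add form nfs add <path> <client>):

# nfs add /backup/NFS/Oracle/data dbserver1.se.local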

CIFS issues
The Common Internet File System is used by Microsoft Windows products to share filesystem information across a LAN. The approach described above for NFS applies equally to CIFS, with an appropriate substitution of terms: NFS mounts become CIFS shares, Oracle becomes SQL Server or Exchange Server, and so on.

VTL issues
The tape image files for all VTL library definitions are stored under the /backup/vtc directory. By default, all tape images defined and created are stored in the Default directory (/backup/vtc/Default) unless other VTL "pools" are utilized. When creating tape definitions (part of VTL commissioning), the administrator can optionally assign tapes to various pools and give each pool a name. These pools are implemented as subdirectories under /backup/vtc, which keeps the various tapes grouped and separated so they can be managed, and most notably, replicated as separate entities. It is therefore a good idea to use the pool mechanism to keep collections of tapes used for different purposes separated and organized. Since they are in separate subdirectories, the compression effects of each separate pool can be determined using the command:

# filesys show compression /backup/vtc/<pool name>

You can also use the command:

# vtl tape show pool <poolname> summary
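For example, for a hypothetical pool named OraclePool:

# filesys show compression /backup/vtc/OraclePool
# vtl tape show pool OraclePool summary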


OST issues
The best practice recommendation is to create one LSU on the DD system for optimal interaction with NetBackup's capacity management and intelligent resource selection algorithms. Use the ost lsu show command to display all the logical storage units. If an lsu name is given, display all the images in the logical storage unit. If compression is specified, the logical storage unit or images' original, globally compressed, and locally compressed sizes are also displayed.

ost lsu show [compression] [lsu-name]

Example output for the commands:

Without an LSU specified, the command shows summary information for all the LSUs.

# ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1:
Total files: 4; bytes/storage_used: 206.6
Original Bytes: 437,850,584
Globally Compressed: 2,149,216
Locally Compressed: 2,113,589
Meta-data: 6,124

When an LSU is specified, the command shows information for the given LSU.

# ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 9.1
Original Bytes: 8,872
Globally Compressed: 8,872
Locally Compressed: 738
Meta-data: 236

Archive implications
Archived data tends to remain stored on the Data Domain system for much longer periods than backup data. It is also not uncommon for the data to only be written a single time to the appliance, which results in reduced opportunities for the deduplication technology to have the same benefit as seen for traditional backups. Keeping the archive data separate allows its effects on overall compression to be observed and accounted for.


Very large environments


In very large environments where multiple Data Domain systems are required to meet the backup needs, similar guidelines still hold, except that the data is now spread across several systems. Multiple "/backup" root directories are now involved, so it is reasonable to spread the data-descriptive directories across separate appliances. For instance, one appliance might be used for all NFS traffic, another for all CIFS traffic, and so forth. Of course, it's even more important to ensure that the I/O load is effectively spread across all appliances, so in some circumstances it may be reasonable to have a "/backup/NFS" folder on more than one appliance.

IMPORTANT NOTE!
Note Keep in mind that deduplication operates only within a single Data Domain system. This means that data spread across several systems will not be deduplicated against each other. If you have a large environment consisting of multiple Data Domain systems, it is important that the same data be sent to the same appliance every time. If a failure prevents this and a single backup has to be sent to an alternate appliance, compression can be significantly affected. Taking the manual step of moving this backup to its original destination after the failure is corrected may be necessary, depending on the degree to which the compression is degraded.

Summary
By applying some early organization to the directory structure configured on the Data Domain system, future storage management and troubleshooting issues can be simplified and often avoided. This paper outlines some of the reasons for doing this and recommendations that can be followed. Each site is unique, so these recommendations should be understood in spirit, with the detailed deployment adapted to the specific circumstances in which the Data Domain system will be used.

Additional Notes on the Filesys Show Compression command


The Filesys Show Compression command is a reporting tool provided on the Data Domain system that gives an estimate of the compression experienced by data written to the DDR. The information provided is a summary of per-file data collected when each file was last written. Subsequent changes to the filesystem, e.g. deletion of previously matching data, can significantly change the real compression behavior without being reflected in the report.


Let's look at an example. We write a 2MB file to the DDR and observe that it experiences a 5X compression. We immediately write the same file again to a different location. It is natural to expect that the second copy will be highly deduplicated, so let's say it gets 200X compression. Filesys Show Compression <File1> will report 5X and Filesys Show Compression <File2> will report 200X. We then delete <File1>. Filesys Show Compression <File2> will still show 200X, even though the effective compression of the remaining copy is now the 5X value that the first copy received. Herein lies the potential for confusion. There are other, less significant factors that can affect the numbers and offer more opportunities for the exact numbers to be off. Therefore, the exact numbers reported by Filesys Show Compression are less interesting than the comparative numbers displayed when various separate directories are reported, or the trends observed over time. As is obvious from the example above, any large-scale deletion can have an effect, sometimes significant, on the reported numbers that may need to be accounted for. Only the system administrator will know about such deletions, which may be explicitly executed, or done in the background through the expiration process built into all enterprise backup software.


File System Management

17

The filesys command


The filesys command allows you to display statistics, capacity, status, and utilization of the Data Domain System file system. The command also allows you to clear the statistics file and to start and stop the file system processes. The clean operations of the filesys command reclaim physical storage within the Data Domain System file system.

Note All Data Domain system commands that display the use of disk space or the amount of data on disks compute and display amounts using base 2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.

1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes

Statistics and Basic Operations


The following operations manage file system statistics and status displays and start and stop file system processes.

Start the Data Domain System File system Process


To start the Data Domain System file system, allowing Data Domain System operations to begin, use the filesys enable operation. Administrative users only. filesys enable


Stop the Data Domain System File system Process


To stop the Data Domain System file system, which stops Data Domain System operations (including cleaning), use the filesys disable operation. Administrative users only. filesys disable

Stop and Start the Data Domain System File system


To disable and enable the Data Domain System file system in one operation, use the filesys restart command. Administrative users only. filesys restart

Delete All Data in the File system


To delete all data in the Data Domain System file system and re-initialize the file system, use the filesys destroy operation. The operation also removes Replicator configuration settings. Deleted data is not recoverable. The basic command takes about one minute. The and-zero option writes zeros to all disks, which can take many hours. Administrative users only.

filesys destroy [and-zero]

The display includes a warning similar to the following:

# filesys destroy
The 'filesys destroy' command irrevocably destroys all data in the '/backup' data collection, including all virtual tapes, and creates a newly initialized (empty) file system. The 'filesys destroy' operation will take about a minute. File access is disabled during this process.
Are you sure? (yes|no|?) [no]:

Note When filesys destroy is run on a system with retention-lock enabled:
1. All data is destroyed, including retention-locked data.
2. All filesys options are returned to default; this means retention-lock is not enabled and the min-retention-period as well as max-retention-period options are set back to default values on the newly created filesystem.

After a filesys destroy, all NFS clients connected to the system may need to be remounted.


Fastcopy
To copy a file or directory tree from a Data Domain System source directory to another destination on the Data Domain System, use the filesys fastcopy operation. See Snapshots on page 231 for snapshot details.

filesys fastcopy [force] source src-path destination dest-path

src-path The location of the directory or file that you want to copy. The first part of the path must be /backup. Snapshots always reside in /backup/.snapshot. Use the snapshot list command to list existing snapshots.

dest-path The destination for the directory or file being copied. The destination cannot already exist.

force Allows the fastcopy to proceed without warning in the event the destination exists. The force option is useful for scripting, because it is not interactive. filesys fastcopy force causes the destination to be an exact copy of the source even if the two directories had nothing in common before.

Use Case: Users may want or need to use fastcopy force if they are scripting fastcopy operations to simulate cascaded replication, the major use case for the option. It is not needed for interactive use, because regular fastcopy warns if the destination exists and then re-executes with the force option if allowed to proceed.

Note If the destination has retention-locked files, fastcopy and fastcopy force will fail, aborting the moment they encounter retention-locked files.

For example, to copy the directory /user/bsmith from the snapshot scheduled-2007-04-27 and put the bsmith directory into the user directory under /backup:

# filesys fastcopy source /backup/.snapshot/scheduled-2007-04-27/user/bsmith destination /backup/user/bsmith

Like a standard UNIX copy, filesys fastcopy works through making the destination equal to the source, but not as of a particular point in time. If you change either folder while copying, there is no guarantee that the two are, or ever were, equal.
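As a sketch of the scripted force usage described above (the destination path is illustrative):

# filesys fastcopy force source /backup/user/bsmith destination /backup/user/bsmith-replica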

Display File system Space Utilization


The display shows the amount of space available for and used by Data Domain System file system components.


The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain System. Virtual data is the amount of data sent to the Data Domain System from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead. The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain System accepts no more data from backup servers. The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain System.

The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.

Display

To display the space available to and used by file system components, use the filesys show space operation or click File system in the left panel of the Data Domain Enterprise Manager. Values are in gigabytes to one decimal place.

filesys show space

The display is similar to the following:

# filesys show space
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -   117007.4           -      -                -
/backup: post-comp     9511.5     7170.5      2341.0    75%            257.8
/ddvar                   98.4       37.3        56.1    40%                -
------------------   --------   --------   ---------   ----   --------------

* Estimate based on last cleaning of 2007/11/20 14:48:26.

Note GiB = Gibibyte, the base 2 equivalent of Gigabyte.


Display File system Status


To display the state of the file system process, use the filesys status operation. The display gives a basic status of enabled or disabled with more detailed information for each basic status.

filesys status

The display is similar to the following:

# filesys status
The filesystem is enabled and running

If the file system was shut down with a Data Domain System command, such as filesys disable, the display includes the command. For example:

# filesys status
The filesystem is disabled and shutdown. [filesys disable]

Display File system Uptime


To display the amount of time that has passed since the file system was last enabled, use the filesys show uptime operation. The display is in days, hours, and minutes.

filesys show uptime

The display is similar to the following:

# filesys show uptime
Filesys has been up 47 days, 23:28

Display Compression - For Files


To display the amount of compression for a single file, multiple files, or a file system, use the filesys show compression command. Optionally, display compression for a given number of hours or days. In general, the more often a backup is done for a particular file or file system, the higher the compression. Note that the display on a busy system may not return for one to two hours. Other factors may also influence the display. Call Data Domain Technical Support to analyze displays that seem incorrect. filesys show compression [filename] [last {n hours | n days}] [no-sync]

In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global and then local) plus the overhead space needed for meta-data. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to be the same as the amount displayed in the Pre-compression line of the filesys show space command, which does not include system overhead.


The Original Bytes gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may be different on a replication destination than on a replication source for the same files or file system. On the destination, internal handling of replicated meta-data and unwritten regions in files lead to the difference. The value for Meta-data includes an estimate for data that is in the Data Domain System internal index and is not updated when the amount of data on the Data Domain System decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed with the filesys show space command, Meta-data line.

The display is similar to the following:

# filesys show compression /backup/naveen/ last 2 d
Total files: 4; bytes/storage_used: 4.2
Original Bytes: 4,486,393,430
Globally Compressed (g_comp): 2,965,916,936
Locally Compressed (l_comp): 1,054,560,528
Meta-data: 9,697,288

Display Compression - Summary


To display a summary of the amount of compression over the last 7 days, use the filesys show compression command.

filesys show compression [summary | daily | daily-detailed] {[last <n> {hours | days | weeks | months}] | [start <date> [end <date>]]}

The output is as follows:
# filesys show compression
From 2007-11-07 12:00 To 2007-11-14 12:00

              Pre-Comp   Post-Comp   Global-Comp   Local-Comp    Compression
                 (GiB)       (GiB)        Factor       Factor     Factor (%)
-----------   --------   ---------   -----------   ----------   -------------
Current:      114961.8      7348.8          4.9x         3.2x   15.6x (93.6%)
Written:*
 Last 7 day     5583.4       562.2          6.6x         1.5x    9.9x (89.9%)
 Last 24 hr      269.6        16.6          8.4x         1.9x   16.3x (93.8%)
-----------   --------   ---------   -----------   ----------   -------------
* Does not include the effects of pre-comp file deletes/truncates since the last cleaning on 2007/11/09 14:48:26.

Key:
Pre-Comp = Data written before compression
Post-Comp = Storage used after compression
Compression Factor = pre-comp / post-comp
Compression % = ((pre-comp - post-comp) / pre-comp) * 100
Global-Comp Factor = pre-comp / (size after de-dupe)
Local-Comp Factor = (size after de-dupe) / post-comp

Display Compression - Daily


To display the amount of compression daily over the last 4 full weeks and the current partial week, use the filesys show compression daily command. (This is 29 to 34 days depending on the current day of the week; the period always begins on a Sunday.)

filesys show compression [summary | daily | daily-detailed] {[last <n> {hours | days | weeks | months}] | [start <date> [end <date>]]}

The output has the following form:

# filesys show compression daily
From 2007-10-15 13:00 To 2007-11-14 12:00

The daily output is a calendar-style table with a column for each day of the week (Sun through Sat) plus a Weekly total column. For each day the table shows the Date and the day's Pre-Comp (GiB), Post-Comp (GiB), and compression Factor values, with one row of days per week in the period. The display ends with the same Current/Written summary, deletes/truncates footnote, and key as the filesys show compression summary display.

Clean Operations
The filesys clean operation reclaims physical storage occupied by deleted objects in the Data Domain file system. When application software expires backup or archive images, and when the images are not present in a snapshot, the images are not accessible or available for recovery from the application or from a snapshot. However, the images still occupy physical storage. Only a filesys clean operation reclaims the physical storage used by files that are deleted and that are not present in a snapshot.

During the clean operation, the Data Domain System file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic.


Data Domain recommends running a clean operation after the first full backup to a Data Domain System. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.

A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends running the clean operation at least once a week. If you want to increase file system availability and if the Data Domain System is not short on disk space, consider changing the schedule to clean less often. A Data Domain system that is full may need multiple clean operations to clean 100% of the file system, especially when one or more external shelves are attached. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ... ), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100. With collection replication, the clean operation does not run on the destination. With directory replication, the clean operation does not run on directories that are replicated to the Data Domain System (where the Data Domain System is a destination), but does run on other data that is on the Data Domain System. Note Any operation that shuts down the Data Domain System file system, such as the filesys disable command, or that shuts down the Data Domain System, such as a system power-off or reboot, stops the clean operation. The clean does not restart when the system and file system restart. Either manually restart the clean or wait until the next scheduled clean operation. Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.

Start Cleaning
To manually start the clean process, use the filesys clean start operation. The operation uses the current setting for the scheduled automatic clean operation and cleans up to 34% of the total space available for data on a DD560 or DD460 system. If the system is less than 34% full, the operation cleans all data. Administrative users only. filesys clean start


For example, the following command runs the clean operation and reminds you of the monitoring command. When the operation finishes, a message goes to the system log giving the amount of free space available.

# filesys clean start
Cleaning started. Use filesys clean watch to monitor progress.

Stop Cleaning
To stop the clean process, use the filesys clean stop operation. Stopping the process means that all work done so far is lost. Starting the process again means starting over at the beginning. If the clean process is slowing down the rest of the system, consider using the filesys clean set throttle operation to reset the amount of system resources used by the clean process. The change in the use of system resources takes place immediately. Administrative users only.

filesys clean stop

Change the Schedule


To change the date and time when clean runs automatically, use the filesys clean set schedule operation. The default time is Tuesday at 6 a.m. (tue 0600). The operation is available only to administrative users.

Daily runs the operation every day at the given time.

Monthly starts on a given day or days (from 1 to 31) at the given time.

Never turns off the clean process and does not take a qualifier.

With the day-name qualifier, the operation runs on the given day(s) at the given time. A day-name is three letters (such as mon for Monday). Use a dash (-) between days for a range of days. For example: tue-fri.

Time is 24-hour military time. 2400 is not a valid time. mon 0000 is midnight between Sunday night and Monday morning.

The most recent invocation of the scheduling operation cancels the previous setting.

The command syntax is:

filesys clean set schedule daily time
filesys clean set schedule monthly day-numeric-1 [,day-numeric-2,...] time
filesys clean set schedule never
filesys clean set schedule day-name-1 [,day-name-2,...] time

For example, the following command runs the operation automatically every Tuesday at 4 p.m.:

# filesys clean set schedule tue 1600

To run the operation more than once in a month, set multiple days in one command. For example, to run the operation on the first and fifteenth of the month at 4 p.m.:

# filesys clean set schedule monthly 1,15 1600

Set the Schedule or Throttle to the Default


To set the clean schedule to the default of Tuesday at 6 a.m. (tue 0600), the default throttle of 50%, or both, use the filesys clean reset operation. The operation is available only to administrative users. filesys clean reset {schedule | throttle | all}

Set the Clean Throttle


To set clean operations to use a lower level of system resources when the Data Domain System is busy, use the filesys clean set throttle operation. At a percentage of 0 (zero), cleaning runs very slowly or not at all when the system is busy. A percentage of 100 allows cleaning to use system resources in the usual way. The default is 50. When the Data Domain System is not busy with backup or restore operations, cleaning runs at 100% (uses resources as would any other process). Administrative users only.

filesys clean set throttle percent

For example, to set the clean operation to run at 30% of its possible speed:

# filesys clean set throttle 30

Update Statistics
To update the estimated Cleanable GiB numbers that show in the output from filesys show space, use the filesys clean update-stats operation. With a full file system, the update operation can take up to 12 hours. Administrative users only.

filesys clean update-stats

Display All Clean Parameters


To display all of the settings for the clean operation, use the filesys clean show config operation. filesys clean show config

The display is similar to the following:

# filesys clean show config
50 Percent Throttle
Filesystem cleaning is scheduled to run "Tue" at "0600".

Display the Schedule


To display the current date and time for the clean operation, use the filesys clean show schedule operation.

filesys clean show schedule

The display is similar to the following:

# filesys clean show schedule
Filesystem cleaning is scheduled to run Tue at 0600

Display the Throttle Setting


To display the throttle setting for cleaning operations, use the filesys clean show throttle operation.

filesys clean show throttle

The display is similar to the following:

# filesys clean show throttle
100 Percent Throttle

Display the Clean Operation Status


To display the active/inactive status of the clean operation, use the filesys clean status operation. When the clean operation is running, the command displays progress.

filesys clean status

The display is similar to the following:

# filesys clean status
Cleaning started at 2007/04/06 10:21:51: phase 6 of 10
64.6% complete, 2496 GiB free; time: phase 1:06:32, total 8:53:21


Monitor the Clean Operation


To monitor an ongoing clean operation, use the filesys clean watch operation. The output is the same as output from the filesys clean status command, but continuously updates. Enter CTRL-C to stop monitoring the progress of a clean operation. The operation continues, but the reporting stops. Use the filesys clean watch command again to restart monitoring.

filesys clean watch

Compression Options
A Data Domain system compresses data at two levels: global and local. Global compression compares received data to data already stored on disks. Data that is new is then locally compressed before being written to disk. Command options allow changes at both compression levels.

Local Compression
A Data Domain System uses a local compression algorithm developed specifically to maximize throughput as data is written to disk. The default algorithm allows shorter backup windows for backup jobs, but uses more space. Local compression options allow you to choose slower performance that uses less space, or you can set the system for no local compression.

Changing the algorithm affects only new data and data that is accessed as part of the filesys clean process. Current data remains as is until a clean operation checks the data. To enable the new setting, use the filesys disable and filesys enable commands.

Set Local Compression


To set the compression algorithm, use the filesys option set local-compression-type operation. The setting is for all data received by the system.

filesys option set local-compression-type {none | lz | gzfast | gz}

lz The default algorithm that gives the best throughput. Data Domain recommends the lz option.

gzfast A zip-style compression that uses less space for compressed data, but more CPU cycles. Gzfast is the recommended alternative for sites that want more compression at the cost of lower performance.

gz A zip-style compression that uses the least amount of space for data storage (10% to 20% less than lz), but also uses the most CPU cycles (up to twice as many as lz).


none Do no data compression.
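For example, to switch a system to gzfast and activate the new setting using the disable/enable sequence described earlier:

# filesys option set local-compression-type gzfast
# filesys disable
# filesys enable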

Reset Local Compression


To reset the compression algorithm to the default of lz, use the filesys option reset local-compression-type operation. filesys option reset local-compression-type

Display the Algorithm


To display the current algorithm, use the filesys option show local-compression-type operation.

filesys option show local-compression-type

Global Compression
DD OS 4.0 and later releases use a global compression algorithm called type 9 as the default. Earlier releases use an algorithm called type 1 (one) as the default.

A Data Domain system using type 1 global compression continues to use type 1 when upgraded to a new release.

A Data Domain system using type 9 global compression continues to use type 9 when upgraded to a new release.

A DD OS 4.0.3.0 or later Data Domain system can be changed from one type to another if the file system is less than 40% full.

Directory replication pairs must use the same global compression type.

Set Global Compression


To change the global compression setting, use the filesys option set global-compression-type command.

filesys option set global-compression-type {1 | 9}

To change the setting (to type 1, for example) and activate the change, use the following commands:

# filesys option set global-compression-type 1
# filesys disable
# filesys enable


Reset Global Compression


To remove a manually set global compression type, use the filesys option reset global-compression-type command. The file system continues to use the current type. [Only when a filesys destroy command is entered does the type used change to the default of type 9. Caution: the 'filesys destroy' command irrevocably destroys all data in the '/backup' data collection, including all virtual tapes, and creates a newly initialized (empty) file system.] filesys option reset global-compression-type

Display the Type


To display the current global compression type, use the filesys option show global-compression-type command. filesys option show global-compression-type

Replicator Destination Read/Write Option


The read/write setting of the file system on a Replicator destination Data Domain System is read-only. With some backup software, the file system must be reported as writable for restoring or vaulting data from the destination Data Domain System. The commands in this section change and display the reported setting of the destination file system. The actual state of the file system remains as read-only.

Before changing the reported setting, use the filesys disable command. After changing the setting, use the filesys enable command. When using CIFS on the Data Domain System, use the cifs disable command before changing the reported state and use the cifs enable command after changing the reported state.

Report as Read/Write
Use the filesys option enable report-replica-as-writable command on the destination Data Domain System to report the file system as writable. Some backup applications must see the replica as writable to do a restore or vault operation from the replica. filesys option enable report-replica-as-writable
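For example, the full sequence on the destination system, following the disable/enable steps described above:

# filesys disable
# filesys option enable report-replica-as-writable
# filesys enable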


Report as Read-Only
Use the filesys option disable report-replica-as-writable command on the destination Data Domain System to report the file system as read-only. filesys option disable report-replica-as-writable

Return to the Default Read-Only Setting


Use the filesys option reset report-replica-as-writable command on the destination Data Domain System to reset reporting to the default of the file system as read-only. filesys option reset report-replica-as-writable

Display the Setting


Use the filesys option show report-replica-as-writable command on the destination Data Domain System to display the current reported setting. filesys option show report-replica-as-writable

Tape Marker Handling


Backup software from some vendors inserts markers (called tape markers, tag headers, or other names) in all data streams (both file system and VTL backups) sent to a Data Domain system. Markers can significantly degrade data compression on a Data Domain system. The filesys option ... marker-type commands allow a Data Domain system to handle specific marker types while maintaining compression at expected levels. Note When backing up a network attached storage device using NDMP (not the Data Domain System NDMP feature), the backup application is not in control of the data stream and does not insert tape markers. In such cases, the Data Domain System tape marker feature is not needed for either file system or VTL backups.

Set a Marker Type


Use the filesys option set marker-type command to have a Data Domain system deal with markers inserted into backup data by some backup software.

The setting is system-wide and applies to all data received by a Data Domain system. If a Data Domain system is set for a marker type and data is received that has no markers, compression and system performance are not affected.

If a Data Domain system is set for a marker type and data is received with markers of a different type, compression is degraded for the data with different markers.

filesys option set marker-type {cv1 | eti1 | hpdp1 | nw1 | tsm1 | tsm2 | none}

The options are:


cv1 for CommVault Galaxy with VTL and file system backups.

eti1 for HP NonStop systems using ETI-NET EZX/BackBox.

hpdp1 for HP DP versions 5.1, 5.5, and 6.0 with VTL and file system backups.

nw1 for Legato NetWorker with VTL.

tsm1 for IBM Tivoli Storage Manager on media servers with small endian processor architecture, such as x86 Intel or AMD.

tsm2 for IBM Tivoli Storage Manager on media servers with big endian processor architecture, such as SPARC or IBM mainframe. PowerPC can be configured as either big or small endian. Check with your system administrator if you are not sure about the media server architecture configuration.

none for data with no markers (none is also the default setting).

After changing the setting, enter the following two commands to enable the new setting:

# filesys disable
# filesys enable
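For example, to configure a system for CommVault Galaxy markers and activate the change:

# filesys option set marker-type cv1
# filesys disable
# filesys enable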

Reset to the Default


Use the filesys option reset marker-type command to return the marker setting to the default of none. filesys option reset marker-type

Display the Marker Setting


Use the filesys option show marker-type command to display the current marker setting. filesys option show marker-type


Snapshots

18

The snapshots command manages file system snapshots. A snapshot is a read-only copy of the Data Domain System file system from the top directory: /backup. Snapshots are useful for avoiding version skew when backing up volatile data sets, such as tables in a busy database, and for retrieving earlier versions of a directory or file that was deleted. If the Data Domain System is a source for collection replication, snapshots are replicated. If the Data Domain System is a source for directory replication, snapshots are not replicated; snapshots must be created separately on a directory replication destination. Snapshots are created in the system directory /backup/.snapshot. Each directory under /backup also has a .snapshot directory with the name of each snapshot that includes the directory. The filesys fastcopy command can use snapshots to copy a file or directory tree from a snapshot to the active file system.

Create a Snapshot
To create a snapshot, use the snapshot create operation.

snapshot create name [retention {date | period}]

Choose a descriptive name. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example, 2009.05.22. A retention period is a number of days, weeks or wks, or months or mos with no space between the number and the days, weeks, or months. For example, 6wks. The months or mos period is always 30 days. With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as the creation. For example, when a snapshot is created at 8:48 a.m. on April 27, 2007:

# snapshot create test22 retention 6wks
Snapshot "test22" created and will be retained until Jun 8 2007 08:48.


Note The maximum number of snapshots allowed to be stored on a system is 100. If the number reaches 100, the system generates an alert. If your system becomes filled with snapshots, you can resolve this by expiring snapshots and then running filesys clean.

List Snapshots
To list existing snapshots, use the snapshot list option. The display gives the snapshot name, pre-compression amount of data in the snapshot, the creation date, the retention date, and the status. Status is either blank or Expired. An expired snapshot remains available until the next file system clean operation. Use the snapshot expire command to set a future expiration date for an expired, but still available, snapshot.

snapshot list

For example:

# snapshot list
Name                  Pre-Comp (GB)   Create Date        Retain Until       Status
--------------------  -------------   -----------------  -----------------  -------
SS_FULL_1                     948.1   Feb  1 2007 22:16  Aug  2 2007 07:33  expired
SS_INCR_1                     944.4   Feb  1 2007 23:09  Aug  2 2007 11:16  expired
SS_INCR_2                     938.7   Feb  2 2007 00:31  Aug  2 2007 13:09  expired
SS_FULL_2                     939.9   Mar  2 2007 00:48  Aug  2 2007 09:52  expired
DAILY_1                       942.8   Mar 12 2007 01:03  Aug  2 2007 07:33  expired
DAILY_2                       940.7   Mar 13 2007 02:24  Aug  2 2007 07:33  expired
WEEKLY_1                      937.8   Apr 12 2007 02:51                     expired
DAILY_3                       937.3   Apr 13 2007 03:40
scheduled-2007-05-05          944.6   May  5 2007 13:08  Aug  1 2007 13:08
scheduled-2007-07-07          944.5   Jul  7 2007 13:09  Aug  7 2007 13:09
scheduled-2007-08-02          943.9   Aug  2 2007 13:11  Sep  1 2007 13:11
--------------------  -------------   -----------------  -----------------  -------

Set a Snapshot Retention Time


To set or reset the retention time of an existing snapshot, use the snapshot expire operation.

snapshot expire name [retention {date | period | forever}]

The name is the name of an existing snapshot. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example, 2009.05.22.


A retention period is a number of days, weeks or wks, or months or mos with no space between the number and the days, weeks, or months. For example, 6wks. The months or mos period is always 30 days. The value forever means that the snapshot does not expire. With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as the snapshot expire command was entered. For example:

# snapshot expire tester23 retention 5wks
Snapshot "tester23" will be retained until Jun 1 2007 09:26.

Expire a Snapshot
To immediately expire a snapshot, use the snapshot expire operation with no options. An expired snapshot remains available until the next file system clean operation. snapshot expire name (See also filesys clean.)

Rename a Snapshot
To change the name of a snapshot, use the snapshot rename operation.

snapshot rename name new-name

For example, to change the name from snap12-20 to snap12-21:

# snapshot rename snap12-20 snap12-21
Snapshot snap12-20 renamed to snap12-21.

Snapshot Scheduling
The commands above capture a single one-time snapshot at the point in time when the command is executed. The commands below arrange for a series of snapshots to be taken at a regular series of times in the future. Such a series of snapshots is called a snapshot schedule, or schedule for short. We therefore speak of adding a snapshot schedule to the set of all snapshot schedules. Note It is strongly recommended that snapshot schedules always explicitly specify a retention time. The default retention time is 14 days. If no retention time is specified, all snapshots will be retained for 14 days, consuming valuable resources.


Note There can be multiple snapshot schedules active at the same time. Note If multiple snapshots are scheduled to occur at the same time, only one will be retained. However, which one is retained is indeterminate, thus only one snapshot should be scheduled for a given time.

Add a Snapshot Schedule


To add a snapshot schedule to the Data Domain System, use the snapshot add schedule operation, explained under Syntax below.

snapshot add schedule

Syntax
There are several possible syntaxes:

snapshot add schedule <name> [days <days>] time <time>[,<time> ...] [retention <period>]

The default for days is daily and the user can specify a list of hours.

snapshot add schedule <name> [days <days>] time <time> [every <mins>] [retention <period>]

The default for days is daily. The user can also specify the interval in mins.

snapshot add schedule <name> [days <days>] time <time>[-<time>] [every <hrs | mins>] [retention <period>]

The default for days is daily. When every is omitted it defaults to every 1hr.

Where:

time can be of the form:
- 10:10
- 1010
- 10:00-2300

NOTE: Time is expressed in 24hrs format (not am/pm) and ":" is optional.

days can be of the form:

mon,tue : For Monday, Tuesday every week
mon-fri : For Monday through Friday every week
daily : For every day of the week
1,2 : For days in the month
1-3 : For 1,2,3 days in the month

last: Last day of the month

period can be of the form:
5days
2mos
3yrs

Names of Snapshots Created by a Schedule:

The naming convention for scheduled snapshots is the word scheduled followed by a four-digit year, a two-digit month, a two-digit day, a two-digit hour, and a two-digit minute. All elements of the name are separated by a dash ( - ). For example: scheduled-2007-04-27-13-41. The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule might have the names scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.

Additional notes:

The default retention time for a scheduled snapshot is 14 days.

Snapshots reside in the directory /backup/.snapshot/

The days-of-week are one or more three-letter day abbreviations, such as tue for Tuesday. Use a dash ( - ) between days to denote a range. For example, mon-fri creates a snapshot every day Monday through Friday. The time uses a 24 hour clock that starts at 00:00 and goes to 23:59. The format in the command is a three or four digit number with an optional colon ( : ) between hours and minutes. For example, 4:00 or 04:00 or 0400 sets the time to 4:00 a.m., and 14:00 or 1400 sets the time to 2:00 p.m. The retention period is a number plus days, weeks or wks, or months or mos with no space between the number and the days, weeks, or months tag. For example, 6wks. The months or mos period is always 30 days. For example, to schedule a snapshot every Monday and Thursday at 2:00 a.m. with a retention of two months (the schedule name here is arbitrary):

# snapshot add schedule mon_thu_2am days mon,thu time 02:00 retention 2mos
Snapshots are scheduled to run "Mon, Thu" at "0200".
Snapshots are retained for "60" days.

Further Examples:
1. Every day at 8:00pm

add schedule every_day_8_pm days daily time 20:00


OR

add schedule every_day_8_pm days mon-sun time 20:00

Note The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule will have names like scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.

a. Every midnight

add schedule every_midnight days daily time 00:00 retention 3 days

OR

add schedule every_midnight days mon-sun time 00:00 retention 3 days

2. Every weekday at 6:00am

add schedule wkdys_6_am days mon-fri time 06:00 retention 4 days

OR

add schedule wkdys_6_am days mon,tue,wed,thu,fri time 06:00 retention 4 days

3. Every weekend sun at 10:00am

add schedule every_sunday_10_am days sun time 10:00 retention 2 mos

a. Every sunday midnight

add schedule every_sunday_midnight days sun time 00:00 retention 2 mos

4. Every 2 hrs

add schedule every_2_hours days daily every 2hrs retention 3 days

a. Every hour

add schedule every_hour days daily every 1hrs retention 3 days

b. Every 2 hrs 15mins past the hour

add schedule every-2h-15-past days daily time 00:15-23:15 every 2 hrs retention 3 days

c. Every 2 hrs between 8:00am-5:00pm on weekdays.

add schedule wkdys-every-2-hrs-8a_to_5p days mon-fri time 08:00-17:00 every 2 hrs retention 3 days

5. A specific day of the week at a specific time (e.g., every week on Mondays and Tuesdays at 8:00am)

add schedule ev-wk-mon-and-tu-8-am days mon,tue time 08:00 retention 3 mos

6. Every specific day of a month at a specific time (e.g., every 2nd day in the month at 10:15am)

add schedule ev_mo_2nd_day_1015a days 2 time 10:15 retention 3 mos

7. Every last day in a month at 11:00pm

add schedule ev_mo_last_day_11pm days last time 23:00 retention 2 yrs

a. Beginning of every month

add schedule ev_mo_1st_day_1st_hr days 1 time 00:00 retention 2 yrs

8. Every 15mins

add schedule ev_15_mins days daily time 00:00-23:00 every 15mins retention 5 days

9. Every week day at 10:30am and 3:30pm

add schedule ev_weekday_1030_and_1530 days mon-fri time 10:30,15:30 retention 2 mos

Modify a Snapshot Schedule


To modify an already-existing snapshot schedule, use the snapshot modify schedule operation, with the same syntax as the snapshot add schedule operation. There are several possible syntaxes:

snapshot modify schedule <name> [days <days>] time <time>[,<time> ...] [retention <period>]

The default for days is daily and the user can specify a list of hours.

snapshot modify schedule <name> [days <days>] time <time> [every <mins>] [retention <period>]

The default for days is daily. The user can also specify the interval in mins.

snapshot modify schedule <name> [days <days>] time <time>[-<time>] [every <hrs | mins>] [retention <period>]


The default for days is daily. When every is omitted, it defaults to every 1 hr.
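For example, a hypothetical modification that moves the wkdys_6_am schedule from the examples above to 5:00 a.m. and lengthens its retention (the values shown are illustrative only):

# snapshot modify schedule wkdys_6_am days mon-fri time 05:00 retention 7days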

Remove All Snapshot Schedules


To reset to the default of no snapshot schedules, use the snapshot reset schedule operation.

snapshot reset schedule

Display a Snapshot Schedule


To display a given snapshot schedule, use the snapshot show schedule <name> operation.

snapshot show schedule <name>
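For example, to display the schedule named every_day_8_pm created in the examples above:

# snapshot show schedule every_day_8_pm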

Display all Snapshot Schedules


To display a list of all snapshot schedules currently in effect, use the snapshot show schedule operation without an argument.

snapshot show schedule

For example:

# snapshot show schedule
Snapshots are scheduled to run "daily" at "0700".
Snapshots are scheduled to run "daily" at "1900".
Snapshots are retained for "60" days.

Delete a Snapshot Schedule


To delete a specific snapshot schedule, use the snapshot del schedule <name> operation.

snapshot del schedule <name>

Delete all Snapshot Schedules


To delete all snapshot schedules, use the snapshot del schedule operation with the argument all.

snapshot del schedule all


Note that there are two ways to delete all scheduled snapshots:

snapshot del schedule all

or

snapshot reset schedule



Retention Lock


The Retention Lock Feature


The retention lock feature allows the user to keep selected files from being modified and deleted for a specified retention period of up to 70 years. Once a file is committed to be a retention-locked file, it cannot be deleted until its retention period is reached, and its contents cannot be modified. The retention period of a retention-locked file can be extended but not reduced. The access control information of a retention-locked file may be updated.

The retention lock feature can only be enabled if there is a retention lock license. Enabling the retention lock feature affects only the ability to commit non-retention-locked files to be retention-locked files and the ability to extend the retention period of retention-locked files. Any retention-locked file is always protected from modification and premature deletion, regardless of whether there is a retention lock license and whether the retention lock feature is enabled.

Once retention lock has ever been enabled on a Data Domain system, you cannot rename non-empty folders or directories on that system (although you can rename empty ones).

Note A file must be explicitly committed to be a retention-locked file through client-side file commands before the file is protected from modification and premature deletion. Most archive applications and selected backup applications will issue these commands when appropriately configured. Applications that do not issue these commands will not trigger the retention lock feature.

Note The retention lock feature supports a maximum retention period of 70 years and does not support the "retain forever" option offered by certain archive applications. Also, certain archiving applications may impose a different limit (such as 30 years) on retention period, so please check with the appropriate vendor.


Note A file must be explicitly committed to be a retention-locked file through client-side file commands before it is protected from modification and premature deletion. These commands may be issued directly by the user or automatically by applications that support the retention lock feature. Applications that do not issue these commands will not trigger the retention lock feature.

Note The "retention period" referred to in this section differs from the retention period for snapshots. The retention period for the retention lock feature specifies the minimum period of time a retention-locked file is retained, whereas the retention period for snapshots specifies the maximum length of time snapshot data is retained.

Enable the Retention Lock Feature


To enable the retention lock feature, use the filesys retention-lock enable command.

DDOS# filesys retention-lock enable

Disable the Retention Lock Feature


To disable the retention lock feature, use the filesys retention-lock disable command.

DDOS# filesys retention-lock disable

Set the Minimum and Maximum Retention Periods


To set the minimum retention period, use the filesys retention-lock option set min-retention-period command.

DDOS# filesys retention-lock option set min-retention-period

To set the maximum retention period, use the filesys retention-lock option set max-retention-period command.

DDOS# filesys retention-lock option set max-retention-period

The period is specified in a similar way as for snapshot retention, requiring a number followed by units. The units are any of the following:
min
hr
day

mo
year

The period must not be more than 70 years; any period larger than 70 years results in an error. The limit of 70 years may be raised in a subsequent release. By default, the min-retention-period is 12 hours and the max-retention-period is 5 years. These default values may be revised in a subsequent release. For example, to set the min-retention-period to 24 months:

DDOS# filesys retention-lock option set min-retention-period 24 mo
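Similarly, the max-retention-period could be raised to, say, 10 years; the value here is illustrative only:

DDOS# filesys retention-lock option set max-retention-period 10 year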

Reset the Minimum and Maximum Retention Periods


To reset both the minimum and maximum retention periods to their default values, use the filesys retention-lock option reset command. The default min-retention-period is 12 hours and the default max-retention-period is 5 years.

DDOS# filesys retention-lock option reset

Show the Minimum and Maximum Retention Periods


To show the minimum and maximum retention periods, use the filesys retention-lock option show command.

DDOS# filesys retention-lock option show

Reset Retention Lock for Files on a Specified Path


To reset retention lock for all files on a specified path (that is, to allow all files on that path to be modified or deleted, given the appropriate access rights), use the filesys retention-lock reset command. For example, to reset the retention lock on all files in /backup/dir1:

DDOS# filesys retention-lock reset /backup/dir1

Resetting retention lock raises an alert and logs the names of the retention-locked files that have been reset. On receiving such an alert, the user should verify that the particular reset operation is intended.

Show Retention Lock Status


To show retention lock status, use the filesys retention-lock status command. The possible values of retention lock status are: enabled, disabled, or previously enabled.

DDOS# filesys retention-lock status

Client-Side Retention Lock File Control


Note The commands listed in this section are for the client-side interface, not the Data Domain Operating System CLI (Command Line Interface). To go beyond setup and configuration of the retention lock feature on the Data Domain system and actually control the retention locking of individual files, it is necessary to use the client-side interface.

Create Retention-Locked File and Set Retention Date


The user creates a file in the usual way and then sets the last access time (atime) of the file to the desired retention date of the file. If the atime is set to a value that is larger than the current time plus the configured minimum retention period, the file is committed to be a retention-locked file. Its retention date is set to the smaller of the atime value and the current time plus the configured maximum retention period. Setting the atime for a non-retention-locked file to a value below the current time plus the configured minimum retention period is ignored without error.

Setting the atime can be accomplished with the (Unix) command:

ClientOS# touch -a -t [atime] [filename]

The format of [atime] is:

[[CC]YY]MMDDhhmm[.ss]

Example: Suppose the current date/time is December 18th, 2007, at 1 p.m. (that is, 200712181300), and suppose the minimum retention period is 12 hours. Adding the minimum retention period of 12 hours to the current date/time gives 200712190100. Thus, if the atime for a file is set to a value greater than 200712190100, the file becomes retention-locked:

ClientOS# touch -a -t 200912312230 SavedData.dat

Note The file has to be completely written to the Data Domain system before it is committed to be a retention-locked file.

Extend Retention Date:


To extend the retention date of a retention-locked file, the user sets its atime to a value larger than the current retention date. If the new value is less than the current time plus the configured minimum retention period, the atime update is ignored without error. Otherwise, the retention date is set to the smaller of the new value and the current time plus the configured maximum retention period.
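For example, assuming the file SavedData.dat from the previous section is currently locked until the end of 2009, the following would push its retention date out to the end of 2010 (both dates are illustrative):

ClientOS# touch -a -t 201012312230 SavedData.dat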


Identify Retention-Locked Files and List Retention Date:


To determine whether a file is a retention-locked file, the user attempts to set the atime of the file to a value smaller than its current atime. The attempt fails with a permission-denied error if and only if the file is a retention-locked file. The retention date for a retention-locked file is its atime value, which can be listed with the following command:

ClientOS# ls -l --time=atime [filename]
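As a concrete sketch of the probe described above (the timestamp is illustrative; any value smaller than the file's current atime works):

ClientOS# touch -a -t 200701010000 SavedData.dat

If this command returns a permission-denied error, the file is retention-locked. Be aware that if the file is not retention-locked, the command succeeds and actually changes the atime.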

Delete an Expired Retention-Locked File:


The user invokes the standard file delete operation on the retention-locked file to be deleted. The command is typically:

ClientOS# rm [filename]

or

ClientOS# del [filename]

Note If the retention date of the retention-locked file has not expired, the delete operation results in a permission-denied error. The user needs to have the appropriate access rights to delete the file, independent of the retention lock feature.

Retention Lock Sample Procedure:


This is an example of using the retention lock feature.

Using Data Domain Operating System commands:

1. Add the retention lock license:
DDOS# license add ABCD-EFGH-IJKL-MNOP

2. Enable retention lock:
DDOS# filesys retention-lock enable

3. Display the status of the retention lock license:
DDOS# license show

4. Display the status of the retention lock feature:
DDOS# filesys retention-lock status

5. Set the minimum retention period for the Data Domain system:
DDOS# filesys retention-lock option set min-retention-period 96 hr

6. Set the maximum retention period for the Data Domain system:
DDOS# filesys retention-lock option set max-retention-period 30 year

7. Reset both minimum and maximum retention periods to their default values:
DDOS# filesys retention-lock option reset

The min and max retention periods have now been reset to their defaults: 12 hours and 5 years, respectively.

8. Show the maximum and minimum retention periods:
DDOS# filesys retention-lock option show

Now using client operating system commands on the client system. Suppose the current date/time is December 18th, 2007, at 1 p.m. (that is, 200712181300). Adding the minimum retention period of 12 hours gives 200712190100. Thus, if the atime for a file is set to a value greater than 200712190100, the file becomes retention-locked.

9. Put a retention lock on the existing file SavedData.dat by setting its atime to a value greater than the current time plus the minimum retention period:
ClientOS# touch -a -t 200912312230 SavedData.dat

10. Extend the retention date of the file:
ClientOS# touch -a -t 202012121230 SavedData.dat

11. Identify retention-locked files and list the retention date:
ClientOS# touch -a -t 202012121200 SavedData.dat
ClientOS# ls -l --time=atime SavedData.dat

12. Delete an expired retention-locked file (assuming the retention date of the file has expired, as determined in the previous step):
ClientOS# rm SavedData.dat


Now using Data Domain Operating System commands:

13. Disable the retention lock feature:
DDOS# filesys retention-lock disable

Until retention lock has been re-enabled, it is not possible to place a retention lock on files. However, any files that were previously retention-locked remain so.

Notes on Retention Lock:


Retention Lock and Replication
Both Directory Replication and Collection Replication replicate the locked or unlocked state of files. That is, files that are retention-locked in the source will be retention-locked in the destination. However:

Collection replication replicates min and max retention periods to the destination system. Directory replication does not replicate min and max retention periods to the destination system.

Replication resync will fail if the destination is not empty and retention lock is currently or was previously enabled on either the source or destination system.

Retention Lock and Fastcopy


Fastcopy does not copy the locked or unlocked state of files. Files that are retention-locked in the source are not retention-locked in the destination. If you try to fastcopy to a destination that has retention-locked files, the fastcopy operation will abort the moment it encounters retention-locked files at the destination.

Retention Lock and Filesys Destroy


When filesys destroy is run on a system with retention lock enabled:
1. All data is destroyed, including retention-locked data.
2. All filesys options are returned to their defaults. This means retention lock is disabled, and the min-retention-period and max-retention-period options are set back to their default values on the newly created file system.


Replication - CLI


The replication command sets up and manages the Data Domain Replicator for replicating data between Data Domain Systems. The Replicator is a licensed product. Contact Data Domain for license keys. Use the license add command to add one key to each Data Domain System in the Replicator configuration.

Collection Replication
Collection replication replicates the complete /backup directory from one Data Domain System (a source that receives data from backup systems) to another Data Domain System (a destination). Each Data Domain System is dedicated as a source or a destination and each can be in only one replication pair. The destination is a read-only system except for receiving data from the source. With collection replication:

A destination Data Domain System can be mounted as read-only for access from other systems.
A destination Data Domain System removed from a collection pair (with the replication break command) cannot be brought back into the pair or be used as a destination for another source until the file system is emptied with the filesys destroy command. Note that the filesys destroy command erases all Replicator configuration settings.
A destination Data Domain System removed from a collection pair becomes a stand-alone Data Domain System that can be used as a source for replication.
With collection replication, all user accounts and passwords are replicated from the source to the destination. Any changes made manually on the destination are overwritten after the next change is made on the source. Data Domain recommends making changes only on the source.

Directory Replication
Directory replication provides replication at the level of individual directories. Each Data Domain System can be the source or the destination for multiple directories and can also be a source for some directories and a destination for others. During directory replication, each Data Domain System can also perform normal backup and restore operations. Replication command options with

directory replication may target a single replication pair (source and destination directories) or may target all pairs that have a source or destination on the Data Domain System. Each replication pair configured on a Data Domain system is called a context. With directory replication:

The maximum number of contexts allowed on a DD1xx, DD4xx, or DD5xx system is twenty. The maximum on a DD690 system is sixty.
Be sure that the destination Data Domain system has enough network bandwidth and disk space to handle all traffic from the originators. A destination Data Domain System must have available storage capacity that is at least the size of the expected maximum size of the source directory.
When directory replication is initialized or when using the replication resync operation, the total number of replicated source files for all contexts can be no more than one million with DD4xx, DD530, and DD510 Data Domain systems and no more than two million with DD560 and larger Data Domain systems.
A single destination Data Domain system can receive backups from both CIFS clients and NFS clients as long as separate directories are used for CIFS and NFS. Do not mix CIFS and NFS data under the same directory.
Source or destination directories may not overlap.
A destination directory that does not already exist is created automatically when replication is initialized. After replication is initialized, ownership and permissions of the destination directory are always identical to those of the source directory.
In the replication command options, a specific replication pair is always identified by the destination.

Throttle options for limiting the bandwidth used by replication:


Apply to all replication pairs and all network interfaces on a system. Each throttle setting affects all replication pairs and network interfaces equally.
Affect only outbound network traffic.
Calculate the proper TCP buffer size for replication usage, using bandwidth and delay settings together.

Using Context
Except for the replication add operation, all replication commands that can use a destination variable can take either the complete destination specification or a context number. Context numbers appear in the output from a number of commands, such as replication status.


Look for the number in a command output's first column, which has the heading CTX. To use the context number, preface the number with rctx://. For example, to display statistics for the destination labeled as context 2, use the following command:

# replication show stats rctx://2

Configure Replicator
When configuring replication, please note the following two things:

Note 1. When entering the path, do not use the mount point you see on your media servers. For example, if the media server shows the path as /ddata1/dir1, the path is actually /backup/dir1 on the appliance. The /ddata1 is your NFS mount point, and on the appliance all the directories you have created off your mount point are under the /backup directory.

Note 2. Before setting up replication, ensure that the hostname configured on each appliance is on the network and that the appliances can see each other across the network. If all appliances are connected to their network switches, you will not have a problem, but if you have direct connections from media server to Data Domain appliance, you need to be careful about what your hostname resolves to.

Example: Suppose you do not connect all the LAN cards on your appliances to a switch, but instead have cross-connected them directly to the media servers, and you have only one interface on the network (the GUI manager). You need to change the hostname to that IP address on both boxes.

To configure a Replicator pair, use the replication add operation on both the source and destination Data Domain Systems. Administrative users only.

replication add source source destination destination

The source and destination host names must be exactly the same as the names returned by the hostname command on the source and destination Data Domain Systems. When a Data Domain system is at or near full capacity, the command may take 15 to 20 seconds to finish.

For collection replication:
The destination directory must be empty.
Enter the filesys disable command on both the source and destination.
On the destination only, enter the filesys destroy command.


Start the source and destination variables with col://. For example, enter a command similar to the following on the source and destination Data Domain Systems:

replication add source col://hostA destination col://hostB

Enter the filesys enable command on both the source and destination.

For directory replication:
The Data Domain System file system must be enabled.
The source directory must exist.
The destination directory should be empty.
Start the source and destination variables with dir:// and include the directory that is the replication target. For example, enter a command similar to the following on the source and destination Data Domain Systems:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/hostA/dir2

When the host name for a source or destination does not correspond to the network name through which the Data Domain Systems will communicate, use the replication modify connection-host command on the other system to direct communications to the correct network name.
A sub-directory that is under a source directory in a replication context cannot be used in another replication context. Any directory can be in only one context at a time.

Replicating VTL Tape Cartridges and Pools


Replicating VTL tape cartridges (or pools) simply means replicating directories that contain VTL tape cartridges (or pools). There has been some confusion over pool replication, which is nothing but directory replication of directories that contain pools, and acts no differently.

All these types of directory replication are the same (except for the destination name limitation below) when configuring replication and when using the replication command set. Examples in this chapter that use dir:// are also valid for pool://. (To avoid exposing the full directory names to the VTL cartridges, we created the UNI pool as a shorthand [UNI stands for User to Network Interface].) Replicating VTL pools and tape cartridges does not require the VTL license on the destination Data Domain system.

Destination name limitation: The pool name must be unique on the destination, and the destination cannot include levels of directories between the destination hostname and the pool name. For example, a destination of pool://hostB/hostA/pool2 is not allowed.


Start the source and destination variables with pool:// and include the pool that is the replication target. For example, enter a command similar to the following on both Data Domain Systems:

Version of the command using pool:
replication add source pool://hostA/pool2 destination pool://hostB/pool2

Version of the command using dir:
replication add source dir://hostA/backup/vtc/pool destination dir://hostB/backup/vtc/pool2

Start Replication
To start replication between a source and destination, use the replication initialize operation on the source. The command checks that the configuration and connections are correct and returns error messages if any problems appear. If the source holds a lot of data, the initialize operation can take many hours. Consider putting both Data Domain Systems in the Replicator pair in the same location with a direct link to cut down on initialization time. A destination variable is required. Administrative users only.

replication initialize destination

For a successful initialization with directory replication:

The source directory must exist. The destination directory must be empty.

For successful initialization with collection replication:


1. Run the filesys destroy command on the destination.
2. Configure replication on the source and on the destination.
3. Run the filesys enable command on the destination.
4. Run the replication initialize command on the source.

A sketch of this sequence with hypothetical host names follows.
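As a sketch, assume a hypothetical source hostA and destination hostB (the host names are illustrative, and filesys destroy requires the file system to be disabled first, as described under Configure Replicator):

On the destination (hostB):
# filesys disable
# filesys destroy
# replication add source col://hostA destination col://hostB
# filesys enable

On the source (hostA):
# replication add source col://hostA destination col://hostB
# replication initialize col://hostB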

Test environments at Data Domain give the following guidelines for estimating the time needed for replication initialization. Note that the following are guidelines only and may not be accurate in specific production environments. Directory Replication Initialization

Over a T3, 100ms WAN, performance is about 40 MiB/sec. of pre-compressed data, which gives data transfer of: 40 MiB/sec. = 25 seconds/GiB = 3.456 TiB/day


Note MiB=mebibytes, the base 2 equivalent of megabytes. GiB=gibibytes, the base 2 equivalent of gigabytes. TiB=tebibytes, the base 2 equivalent of terabytes.

Over a gibibit (the base 2 equivalent of gigabit) LAN, performance is about 80 MiB/sec. of pre-compressed data, which gives data transfer of about double the rate for a T3 WAN.

Collection Replication Initialization


Over a WAN, performance depends on the line speed. Over a gibibit LAN, performance is about 70 MiB/sec. of compressed data.

Suspend Replication
To temporarily halt the replication of data between source and destination, use the replication disable operation on either the source or the destination. On the source, the operation stops the sending of data to the destination. On the destination, the operation stops serving the active connection from the source. If the file system is disabled on either Data Domain System when replication is disabled, replication remains disabled even after the file system is restarted. Administrative users only.

The replication disable command is for short-term situations only. A filesys clean operation may proceed very slowly on a replication context when that context is disabled, and cannot reclaim space for files that are deleted but not yet replicated. Use the replication break command to permanently stop replication and to avoid slowing filesys clean operations.

replication disable {destination | all}

Note Using the command "replication break" on a collection replication replica or recovering originator will require a "filesys destroy" on that machine before the file system can be enabled on it again.

Resume Replication
To restart replication that is temporarily halted, use the replication enable operation on the Data Domain System that was temporarily halted. On the source, the operation resumes the sending of data to the destination. On the destination, the operation resumes serving the active connection from the source. If the file system is disabled on either Data Domain System when replication is enabled, replication is enabled when the file system is restarted. Administrative users only.

replication enable {destination | all}


Note If the source Data Domain system received large amounts of new or changed data during the halt, resuming replication may significantly slow down filesys clean operations.

Remove Replication
To remove either the source or destination Data Domain System from a Replicator pair or to remove all Replicator configurations from a Data Domain system, use the replication break operation. A destination variable or all is required.

Always run the filesys disable command before the break operation and the filesys enable command after.
With collection replication, a destination is left as a stand-alone read/write Data Domain System that can then be used as a source.
With collection replication, a destination cannot be brought back into the replication pair or used as a destination for another source until the file system is emptied with the filesys destroy command.
With directory replication, a destination directory must be empty to be used again (whether with the original source or with a different source); alternatively, replication resync must be used.

replication break {destination | all}

Note Using the command "replication break" on a collection replication replica or recovering originator will require a "filesys destroy" on that machine before the file system can be enabled on it again.
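As a sketch for directory replication, removing the context identified by a hypothetical destination (the host and path are illustrative) while following the disable/enable rule above:

# filesys disable
# replication break dir://hostB/backup/dir2
# filesys enable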

Reset Authentication between the Data Domain Systems


To reset authentication between a source and destination, use the replication reauth operation on both the source and the destination. Messages similar to Authentication keys out of sync or Key out of sync signal the need for a reset. Reauthorization is primarily used when replacing a source Data Domain System. See Procedure: Replace a Directory Source - New Name on page 275. A destination variable is required. Administrative users only.

replication reauth destination


Move Data to a New Source


To move data from a surviving destination to a new source, use the replication recover operation on the new source. Administrative users only.

replication recover destination

With collection replication, first use the filesys disable and filesys destroy operations on the new source.
With directory replication, the target directory on the source must be empty. See Procedure: Set Up and Start Many-to-One Replication on page 275.
Do not use the operation on a destination. If the replication break command was run earlier, the destination cannot be used to recover a source.
A destination variable is required.
Also see Procedure: Replace a Directory Source - New Name on page 275 for an example of using the recover option when replacing a source Data Domain System.

Use the replication watch command to display the progress of the recovery process.

Recover from an aborted recovery


"Abort recover" is used to recover from a failed recovery. This command is only executed on the destination. Once the command is executed on the destination, the user can reconfigure replication on the source and restart recovery. To recover from a failed recovery, use the replication abort recover command. replication abort recover destination

Resynchronize Source and Destination


To resynchronize replication when directory replication is broken between a source and destination, use the replication resync command. (Both source and destination must already be configured.) Do not use the command with collection replication.

replication resync destination

A replication resynchronization is useful when converting from collection replication to directory replication, and when a directory replication destination runs out of space while the source still has data to replicate. See Procedure: Recover from a Full Replication Destination on page 277 for an example of using the command when a directory replication destination runs out of space.

Note If you try to replicate to a Data Domain system that has retention lock enabled, and the destination isn't empty, replication resync won't work.

Convert from Collection to Directory Replication


To convert an existing collection replication pair to directory replication, use the replication resync command. See Procedure: Convert from Collection to Directory on page 277 for the complete conversion process.

replication resync destination

Abort a Resync
To stop an ongoing resync operation, use the replication abort resync command on both the source and destination directory replication Data Domain systems.

replication abort resync destination

Change a Source or Destination Hostname


When replacing a system and using a new name for the replacement system, use the replication modify operation on the other side of the replication pair. The new-host-name must be exactly the same as displayed by the hostname command on the system with the new hostname. If the replication pair has a throttle setting, the setting applies with the new destination. If you are changing the hostname on an existing source Data Domain system, use the replication modify operation on the destination. Do not use the command if you want to change the hostname on an existing destination; call Data Domain Technical Support before changing the hostname on an existing destination. When using the replication modify command, always run the filesys disable command first and the filesys enable command after. Administrative users only.

replication modify destination {source-host | destination-host} host-name

For example, if the local destination dest-orig.ca.company.com is moved from California to New York, run a command similar to the following on both the source and destination:

# replication modify dir://ca.company.com/backup/dir2 destination-host ny.company.com


Connect with a Network Name


A source Data Domain system connects to the destination Data Domain System using the destination name as returned by the hostname command on the destination. If the destination host name does not resolve to the correct IP address for the connection, use the modify connection-host option to give the correct name to use for the connection. The connection-host name can also be a numeric IP address. When specifying a connection-host, an optional port number can also be used. The connection-host option may be required when a connection passes through a firewall, and is required when connecting to an alternate listen-port on the destination. The option may be needed after adding a new source/destination pair or after renaming either a source or a destination.

replication modify destination connection-host host-name

The following example is run on the source to inform the source that the destination host ny.company.com has a network name of ny2.company.com. Note that the destination variable for the context does not change and is still ny.company.com/backup/dir2.

# replication modify dir://ny.company.com/backup/dir2 connection-host ny2.company.com

Change a Destination Port


The default listen-port for a destination Data Domain System is 2051. Use the replication modify command on a source to change the port to which the source sends data. A destination can have only one listen port. If multiple sources use one destination, each source must send to the same port.

replication modify destination connection-host host-name [port port]

The following example is run on the source to inform the source that the destination host ny.company.com has a listen-port of 2161. Then use the replication option set listen-port command on the destination to set an alternate listen-port.

On the source:
# replication modify dir://ny.company.com/backup/dir2 connection-host ny.company.com port 2161

On the destination:
# replication option set listen-port 2161


Change the Port on a Destination


To change the port from which the destination receives data from sources, use the replication option set listen-port operation on the destination system. A destination Data Domain system can have only one listen-port. If a destination has multiple sources, all sources must send to the same port. On a source, use the replication modify command with the port option to use a destination port other than the default. The default port is 2051.

replication option set listen-port port

Throttling
Add a Scheduled Throttle Event
To change the rate of network bandwidth used by replication, use the throttle add operation. The default network bandwidth use is unlimited.

replication throttle add sched-spec rate

The sched-spec must include:

One or more three-letter days of the week (such as mon, tue, or wed) or the word daily (to set the schedule every day of the week).
A time of day in 24-hour military time.

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, 2000KiB. The default rate is bits per second. In the rate variable:

bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change. For example, the following command limits replication to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m.:

# replication throttle add mon thu 0600 20KiB

Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default rate with no scheduled changes is to run as fast as possible at all times. The add operation may change the current rate. For example, if on Monday at Noon, the current rate is 20 KiB, and the schedule that set the current rate started on mon 0600, a new schedule change for Monday at 1100 at a rate of 30 KiB (mon 1100 30KiB) makes the change immediately. Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).

Set a Temporary Throttle Rate


To set a throttle rate that lasts until the next scheduled change or until a system reboot, use the throttle set current operation. A temporary rate cannot be set if the replication throttle set override command is in effect.

replication throttle set current rate

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, for 2000 kibibytes, use 2000KiB. The default rate is bits per second. In the rate variable:

bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set current 2000KiB

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).

Delete a Scheduled Throttle Event


To remove one or more throttle schedule entries, use the throttle del operation.

replication throttle del sched-spec


The sched-spec must include:


One or more three-letter days of the week (such as mon, tue, or wed) or the word daily to delete all entries for the given time.
A time of day in 24-hour military time.

For example, the following command removes an entry for Mondays at 1100:

# replication throttle del mon 1100

The command may change the current rate. For example, assume that on Monday at noon the current rate is 30 KiB (kibibytes, the base 2 equivalent of KB or kilobytes), and the schedule that set the current rate started on mon 1100. If you now delete the scheduled change for Monday at 1100 (mon 1100), the replication rate immediately changes to the previous scheduled change, such as mon 0600 20KiB.

Set an Override Throttle Rate


To set a throttle rate that overrides scheduled rate changes, use the throttle set override operation. The rate stays at the override level until another override command is entered.

replication throttle set override rate

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, 2000KiB. The default rate is bits per second. In the rate variable:

bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set override 2000KiB

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).


Reset Throttle Settings


To reset any or all of the throttle settings, use the throttle reset operation.

replication throttle reset {current | override | schedule | all}

A reset of current removes the rate set by the replication throttle set current command. The rate returns to a scheduled rate or to the default if no rate is scheduled.
A reset of override removes the rate set by the replication throttle set override command. The rate returns to a scheduled rate or to the default if no rate is scheduled. The default network bandwidth use is unlimited.
A reset of schedule removes all scheduled change entries. The rate remains at a current or override setting, if either is active, or returns to the default of unlimited.
A reset of all removes any current or override settings and removes all scheduled change entries, returning the system to the default, which is unlimited.

Throttle Reset Options


To reset system bandwidth to the default of unlimited and delay to the default of none, use the replication option reset operation. Use the filesys disable command before making changes and use the filesys enable command after making changes.

replication option reset {bandwidth | delay | listen-port}

TOE versus Throttling:


This affects systems with optional 10GbE cards acting as a replication source or collection replication destination. TOE (TCP offload engine) and replication throttling are not supported at the same time. If a system has TOE enabled on a 10GbE card and the customer enables replication throttling, TOE is disabled automatically. If throttling is enabled and the customer enables TOE afterwards, TOE is disabled again at the next scheduled throttling time or at the next manual replication throttle set command. Whenever TOE is disabled automatically, a log message is written saying "disable TOE on <network interface> - it may impact network performance on 10G interfaces."


Scripted Cascaded Directory Replication


A cascaded replication topology is one where directory replication can logically exist between three Data Domain systems in a serialized fashion. For example, DD-A replicates a directory to DD-B, which then replicates the directory to DD-C. However, since a given directory on a system cannot be configured as both a destination and a source replication context simultaneously, the directory contents on DD-B must be copied from the destination context to a separate source context before they can be replicated to DD-C. This additional copy step is made efficient with the use of the 'fastcopy force' command, which ensures that the target directory is identical to the source directory upon completion and leverages the underlying deduplication capabilities to eliminate unnecessary data movement.

The complete configuration of this replication topology requires an external script (not supplied by Data Domain) to trigger the 'fastcopy force' command. Deciding when to trigger the 'fastcopy force' could be based on a timed schedule, for instance when the backup is completed and the replication to the intermediate node is anticipated to be complete. This can be refined to include a call to 'replication sync' on the first node to ensure that the contents on the intermediate node are up to date before calling 'fastcopy force' on the intermediate node. A downside to this approach is that it delays the start of the replication to the final node. To mitigate this, it is possible to call 'fastcopy force' early (that is, prior to the replication to the intermediate node being completed), and then call it periodically, stopping after a final iteration once 'replication sync' returns. Bear in mind that there is additional overhead associated with the multiple 'fastcopy force' calls in this case, increasing as the number of files in the directory increases.

In the event DD-A requires recovery, replication recover can be used to recover data from DD-B. In the event DD-B requires recovery, the simplest method is to use replication resync from DD-A to DD-B. Another option to recover DD-B, which might be attractive when the available link speed from DD-C to DD-B is significantly greater than from DD-A to DD-B, is to use replication recover from DD-C to DD-B, then use fastcopy on DD-B to re-populate the destination directory for the DD-A -> DD-B context, followed by a replication resync from DD-A to DD-B.
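The following is a minimal sketch of such an external script, not a supported implementation. It assumes SSH command access to the systems as an administrative user, that DD-A replicates /backup/stage to DD-B, that a separate context replicates /backup/outbound from DD-B to DD-C, and that fastcopy accepts the source, destination, and force arguments shown; the host names, paths, and exact fastcopy syntax are illustrative assumptions:

#!/bin/sh
# Hypothetical driver for cascaded replication, run from a management host.

# Wait until everything DD-A has received so far is replicated to DD-B.
ssh sysadmin@dd-a "replication sync dir://dd-b/backup/stage"

# Copy the replicated directory into the context that feeds DD-C.
# 'fastcopy force' makes the target identical to the source and relies on
# deduplication to avoid moving data that is already present on DD-B.
ssh sysadmin@dd-b "filesys fastcopy source /backup/stage destination /backup/outbound force"

# The existing DD-B -> DD-C context now picks up the refreshed contents
# of /backup/outbound and replicates them to DD-C.

As discussed above, this sequence can be run on a timed schedule, or the fastcopy step can be repeated periodically and stopped once 'replication sync' returns.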

Procedure: Set Replication Bandwidth and Network Delay


Using bandwidth and network-delay settings together, replication calculates the proper TCP buffer size for replication usage. This should be needed only for high-latency, high-bandwidth WANs where the default TCP setting is not good enough to provide the best throughput.

Caution:

If you set bandwidth or delay, you MUST set both.
Bandwidth and delay must be set on both sides of the connection.
For a destination with multiple sources, use the values with the maximum product.

Here is the procedure:


1. Prepare: Find the actual bandwidth for each server and the actual network delay values for each server (for example, by using the ping command).

2. Disable replication on all servers:
replication disable all

3. For each server, wait until replication status reports disconnected:
replication status

4. For each server, set the bandwidth to its actual value, in bytes per second:
replication option set bandwidth value

Note The "replication option set" of bandwidth and network delay needs to be executed only once on any Data Domain system, even with multiple replication server contexts. The setting is global to the box.

5. For each server, set the network delay to its actual value, in milliseconds:
replication option set delay value

6. Re-enable replication on all servers:
replication enable all

A worked example with illustrative values follows.
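As a worked sketch, suppose each system sits on a 100 Mbps link, which is 12,500,000 bytes per second, with a measured round-trip delay of 100 milliseconds (both values are illustrative):

# replication disable all
# replication status
# replication option set bandwidth 12500000
# replication option set delay 100
# replication enable all

Run the same sequence on both sides of the connection, since bandwidth and delay must be set on both.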

Display Bandwidth and Delay Settings


To display the current bandwidth and delay settings, use the replication option show operation. If the current setting is the default of none, the operation returns to a command prompt with no setting information.

replication option show {destination | all}

Display Replicator Configuration


CTX The context number for directory replication or a 0 (zero) for collection replication.
Source The Data Domain system that receives data from backup applications.
Destination The Data Domain system that receives data from the replication source Data Domain system.
Connection Host and Port A source Data Domain system connects to the destination Data Domain system using the destination name as returned by the hostname command on the destination, or by using a destination name or IP address and port given with the replication modify connection-host command.


The destination host name may not resolve to the correct IP address for the connection when connecting to an alternate interface on the destination or when a connection passes through a firewall.
Enabled The replication process is yes (enabled and available to replicate data) or no (disabled and not available to replicate data).

Display

To display the configuration parameters, use the show config operation.

replication show config [destination | all]

The display with no destination variable or all option is similar to the following:

# replication show config all
CTX  Source                               Destination                          Connection Host and Port  Enabled
---  -----------------------------------  -----------------------------------  ------------------------  -------
1    dir://host2.company.com/backup/dir2  dir://host3.company.com/backup/dir3  host3.company.com         Yes
2    dir://host3.company.com/backup/dir3  dir://host2.company.com/backup/dir2  host3.company.com         Yes
---  -----------------------------------  -----------------------------------  ------------------------  -------

On the replica, the per-context display is modified to include an asterisk; if at least one context was marked with an asterisk, the footnote "Used for recovery only" is also displayed.

The display with a destination variable is similar to the following. The all option returns a similar display for each context.

# replication show config dir://host3.company.com/backup/dir2
CTX:          2
Source:       dir://host2.company.com/backup/host2
Destination:  dir://host3.company.com/backup/host2


Connection Host:  ccm34.datadomain.com
Connection Port:  (default)
Enabled:          yes

Display Replication History


To display a history of replication, use the replication show history operation. Statistics are generated only once an hour, so the smallest interval that displays is one hour.

replication show history {destination | all} [duration number {hr | min}] [interval number {hr | min}]

Pre-Comp (KB) Remaining The amount of pre-compression data that is not replicated.
Replicated (KB) Pre-Comp The amount of pre-compressed data that is replicated.
Replicated (KB) Network The amount of compressed data sent over the network.
Sync-as-of Time The source automatically runs a replication sync operation every hour and displays the time local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the time stamp in the Time column. A value of unknown appears during replication initialization.

For example:

# replication show history dir://system3/backup/dir2
Date        Time      CTX  Pre-Comp (KB)  Replicated (KB)        Sync-as-of Time
                           Remaining      Pre-Comp    Network
----------  --------  ---  -------------  ----------  ---------  ---------------
2007/05/02  10:55:47  1    0              0           0          Tue May 1 15:39
2007/05/02  11:55:48  1    8,654,332      20,423,648  5,308      Tue May 1 15:39
2007/05/02  12:55:49  1    10,174,480     96,400,921  16,654     Wed May 2 11:55
----------  --------  ---  -------------  ----------  ---------  ---------------


Display Performance

Display Performance
To display current replication activity, use the replication show performance command. The default interval is two seconds. Network (KB/s) is the amount of compressed data per second transfered over the network. replication show performance {destination | all} [interval sec] [count count] For example: # replication show performance rctx://2 05/02 09:00:38 rctx://2 Pre-comp Network (KB/s) (KB/s) --------- --------163469 752 163469 777 170054 756 176351 824

Display Throttle Settings


To display all scheduled throttle entries, rates, and the current rate, use the replication throttle show operation.

replication throttle show [kib]

Note kib=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The kib option displays the rate in kibibytes per second. Without the option, the rate is displayed in bits per second. The display is similar to the following:

# replication throttle show kib
Time   Sun  Mon  Tue  Wed  Thu  Fri  Sat
-----------------------------------------
06:00       90
15:00       200
18:00       500
-----------------------------------------
All units in KiBps (1024 bytes (8192 bits) per second).
Active schedule: Mon, 06:00 at 90 KiBps.


Display Replication Complete for Current Data


To display when data currently available for replication is completely replicated, use the replication sync option on the source Data Domain System. The command output updates periodically, and the command-line cursor does not return until the operation is complete.

replication sync [destination]

The output's current value represents data on the source that is yet to be replicated to the destination. The value represents only data available at the time the command is given. Data received after the command begins is not added to the output. When the current value is equal to or greater than the output's sync_target value, replication is complete for all of the data that was available for replication at the time the command began. For example:

# replication sync
0 files flushed.
current=2832642 sync_target=2941532 head=2841234

To run the same operation with no returned output and with the cursor available immediately (a quiet mode), use the replication sync start form:

replication sync start [destination]

To check on progress when running the operation in quiet mode, use the replication sync status command:

replication sync status [destination]

Display Initialization, Resync, or Recovery Progress


To display the progress of a replication initialization, resync, or recovery operation, use the replication watch command:

replication watch destination

Display Status
To display Replicator configuration information and the status of replication operations, use the replication status operation.

replication status [destination | all]

With no option, the display is similar to the following:

# replication status
CTX  Destination                          Enabled  Connected         Lag
---  -----------------------------------  -------  ----------------  ------
1    dir://host2.company.com/backup/dir2  yes      Thu Jan 12 17:06  00:00
2    dir://host3.company.com/backup/dir3  yes      disconnected      698:32
---  -----------------------------------  -------  ----------------  ------

Enabled The enabled state (yes or no) of replication for each replication pair.
Connected The most recent connection date and time or connection state for a replication pair.
Lag Backup data on a replication source is given a time stamp when the data is received from the originating client. The difference between that time and the time the same data is received by the replication destination is the lag. Lag is not the time needed to complete replication. Lag is a record of how long the most recently replicated data was on the source before being sent to the destination. Lag can immediately drop from a high to a low number if the last record processed was on the source for a long time before being replicated. If data was on the source for less than five minutes before being replicated, or if the source is not sending new data, a generic message of Less than 5 minutes appears. Output from the replication status command shows whether or not any data remains to be sent from the source.

With a destination variable, the display is similar to the following. The all option returns a similar display for each context. The display includes the information above plus the fields described below.

# replication status dir://host2.company.com/backup/dir2
Mode:                     source
Destination:              dir://ccm34.datadomain.com/backup/dir2
Enabled:                  yes
Local filesystem status:  enabled
Connection:               connected since Thu Jan 12 17:06:41
State:                    normal
Error:                    no error
Lag:                      less than 5 minutes
Current throttle:         unlimited

Mode The role of the local system: source or destination.
Local Filesystem Status The enabled/disabled status of the local file system.
Connected Includes both the state and the date and time of the last change in the connection state.
State The state of the replication process.
Error A listing of any errors in the replication process.
Current Throttle The current throttle setting.


Display Statistics

Display Statistics
Replication statistics give the following information: CTX: The context number for directory replication or a 0 (zero) for collection replication. Destination: The replication destination. Network bytes sent: the count of bytes sent over the network. Does not include TCP/IP headers. Does include internal replication control information and metadata, as well as filesystem data. Post-compressed bytes sent: same as network bytes sent Pre-compressed bytes sent: the sum of the size(s) of the file(s) replicated on this context. Note: this includes logical bytes associated with the current file thats being replicated. Post-compressed bytes received: network bytes (as defined above) received. Syncd-as-of-Time: the timestamp of the replication log record most recently executed on the replica. The timestamp indicates when the log record was generated on the originator. Pre-compressed bytes remaining (directory replication only): the sum of the size(s) of the file(s) remaining to be replicated for this context. Note: this includes the *entire* logical size of the current file being replicated, so if a very large file is being replicated, this number may not change for a noticeable period of timeit only changes after the current file finishes. Compression ratio: the ratio of pre-compressed bytes transferred to network bytes transferred. Compressed data remaining (collection replication only): the amount of compressed filesystem data remaining to be sent. Display To display Replicator statistics for all replication pairs or for a specific destination pair, use the replication show stats operation. replication show stats [destination | all] The display is similar to the following: # replication show stats

CTX  Destination               Post-comp      Pre-comp       Post-comp       Sync'ed-as-of     Pre-comp
                               Bytes Sent     Bytes Sent     Bytes Received  Time              Bytes Remaining
---  ------------------------  -------------  -------------  --------------  ----------------  ---------------
1    dir://33.dd.com/backup/c  1,300,752,840  5,005,099,008  2,380,674,376   Mon Mar 17 13:06  0
2    dir://r4.dd.com/backup/r  918,769,652    829,429,248    52,400,012      Mon Mar 17 13:06  0
---  ------------------------  -------------  -------------  --------------  ----------------  ---------------

To display statistics for the destination labeled as context 1, use the following command:
# replication show stats rctx://1
The display is similar to the following:
# replication show stats rctx://1
CTX:                              1
Destination:                      dir://33.company.com/backup/rig14_8
Network bytes sent:               3,904
Pre-compressed bytes sent:        612
Compression ratio:                0.0
Sync'ed-as-of time:               Tue Dec 11 18:30
Pre-compressed bytes remaining:   0

Actual example of show stats all:


Here's some actual output for replication show stats all. In this example, an engineer created a file a bit larger than 1 GB by writing some data, then made 7 copies of it using filesystem fastcopy. The fact that only about 1 GB was actually written shows up in Pre-compressed bytes written to source; Network bytes sent to destination is a bit larger than this, due to metadata exchanged as part of the replication protocol. Pre-compressed bytes sent to destination gives the full ~8 GB, being the sum of the sizes of the 8 files involved. Finally, 7.6 is the ratio between pre-compressed bytes sent and network bytes sent (8,590,163,968 / 1,134,514,576 is approximately 7.6).


Originator:
sym2# replication show stats all
CTX:                                        1
Destination:                                dir://syrah33.datadomain.com/backup/example
Network bytes sent to destination:          1,134,514,576
Pre-compressed bytes written to source:     1,073,741,824
Pre-compressed bytes sent to destination:   8,590,163,968
Pre-compressed bytes remaining:             0
Files remaining:                            0
Compression ratio:                          7.6
Sync'ed-as-of time:                         Wed Apr 2 16:40

Replica:
sym3# replication show stats all
CTX:                                        1
Destination:                                dir://syrah33.datadomain.com/backup/example
Network bytes received from source:         1,134,515,676
Pre-compressed bytes written to source:     1,073,741,824
Pre-compressed bytes sent to destination:   8,590,163,968
Pre-compressed bytes remaining:             0
Files remaining:                            0
Compression ratio:                          7.6
Sync'ed-as-of time:                         Wed Apr 2 16:40

Hostname Shorthand
With all Replicator commands that use a hostname to identify the source or destination, the hostname can be left out if it refers to the local system. Keep the same three slashes ( /// ) that would bracket the hostname if the hostname were included. For example, the replication add command, when given on the source Data Domain system, could be entered in either of the following ways:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2


replication add source dir:///backup/dir2 destination dir://hostB/backup/dir2
The same command given on the destination Data Domain system could be entered in either of the following ways:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
replication add source dir://hostA/backup/dir2 destination dir:///backup/dir2
Use the same format with collection replication. Add a third slash, even though a third slash is not otherwise used with collection replication. For example, the replication add command for collection replication entered on the source could be entered in either of the following ways:
replication add source col://hostA destination col://hostB
replication add source col:/// destination col://hostB
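The shorthand works in any Replicator command that names the local system. As a hedged illustration reusing dir2 from the examples above, checking a context from the destination system itself could omit the local hostname:
replication status dir:///backup/dir2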

Procedure: Set Up and Start Directory Replication


To set up directory replication using Data Domain Systems hostA and hostB for a directory named dir2:

1. Run the following command on both the source and destination Data Domain Systems:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

2. Run the following command on the source. The command checks that both Data Domain Systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain Systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain Systems can communicate.
replication initialize
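To confirm that the new pair is communicating, the context can be checked from the source. This is a hedged example that reuses hostB from this procedure; the output follows the replication status display shown earlier in this chapter:
replication status dir://hostB/backup/dir2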


Procedure: Set Up and Start Collection Replication


For collection replication only, use the filesys disable command on both the source and destination before adding a replication pair, and use the filesys enable command after adding a pair. Start the <source> and <destination> variables with col://. Do not enable the file system on the destination before adding the replication pair; otherwise, replication fails during initialization. See the following example. To set up and start collection replication between two Data Domain Systems, hostA and hostB:

1. Run the following command on both the source and destination Data Domain Systems:
filesys disable

2. Run the following command only on the destination:
filesys destroy

3. Run the following command on both the source and destination Data Domain Systems. See Configure Replicator on page 251 for the details of using the command:
replication add source col://hostA destination col://hostB

4. Run the following command on both the source and destination Data Domain Systems:
filesys enable

5. Run the following command on the source. The command checks that both Data Domain Systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain Systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain Systems can communicate.
replication initialize
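Once initialization is under way, progress can be confirmed from either system. This hedged example uses the all option of the replication status command described earlier, which reports each configured context:
replication status all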

Procedure: Set Up and Start Bidirectional Replication


To set up and start directory replication for dir2 from hostA to hostB and for dir1 from hostB to hostA:

1. Run both of the following commands on hostA and hostB:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
replication add source dir://hostB/backup/dir1 destination dir://hostA/backup/dir1

2. Run the following command on hostA:
replication initialize dir://hostB/backup/dir2


3. Run the following command on hostB:
replication initialize dir://hostA/backup/dir1
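Because each system is both a source and a destination in this configuration, the statistics display shows one context per direction. A hedged check, using the replication show stats form documented earlier, can be run on either host:
replication show stats all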

Procedure: Set Up and Start Many-to-One Replication


To set up and start directory replication for directories from hostA and hostB to hostC:

1. Run the following command on hostA and hostC:
replication add source dir://hostA/backup/dir2 destination dir://hostC/backup/dir2

2. Run the following command on hostB and hostC:
replication add source dir://hostB/backup/dir1 destination dir://hostC/backup/dir1

3. Run the following command on hostA:
replication initialize dir://hostC/backup/dir2

4. Run the following command on hostB:
replication initialize dir://hostC/backup/dir1

Procedure: Replace a Directory Source - New Name


If the source (hostA) for directory replication is replaced or changed out, use the following commands to integrate (with hostB) a new source that uses a new name (hostC).

1. If the new source has any data in the target directories, delete all data from the directories.

2. Run the following commands on the destination:
filesys disable
replication modify dir://hostB/backup/dir2 source-host hostC
replication reauth dir://hostB/backup/dir2
filesys enable

3. Run the following commands on the new source:
replication add source dir://hostC/backup/dir2 destination dir://hostB/backup/dir2
replication recover dir://hostB/backup/dir2


Use the following command to see when the recovery is complete. Note the State entry in the output. State is normal when recovery is done and recovering while recovery is in progress. Also, a messages log file entry, "replication recovery completed", is sent when the process is complete. The byte count may be equal on both sides, but the recovery is not complete until data integrity is verified. The recovering directory is read-only until recovery finishes.

# replication status dir://hostC/backup/dir2
CTX:                       2
Mode:                      source
Destination:               dir://hostC/backup/dir2
Enabled:                   yes
Local filesystem status:   enabled
Connection:                connected since Sat Apr 8 23:38:11
State:                     recovering
Error:                     no error
Destination lag:           less than 5 minutes
Current throttle:          unlimited

Procedure: Replace a Collection Source - Same Name


If the source (hostA) for collection replication is replaced or changed out, use the following commands to integrate (with hostB) a new source that uses the same name as the previous source.

1. If the new source was using the VTL feature, use the following command on the source:
vtl disable

2. Run the following command on the destination and the new source:
filesys disable

3. Run the following command only on the new source to clear all data from the file system:
filesys destroy

4. Run the following command on the destination:
replication reauth

5. Run the following commands on the new source:
replication add source col://hostA destination col://hostB
replication recover

See the previous procedure for checking the progress of the recovery.


Procedure: Recover from a Full Replication Destination


When using directory replication, a destination Data Domain system can become full before a source Data Domain system replicates all of a context to the destination. For example, to recover a context of dir://hostA/backup/dir2:

1. On the source and destination Data Domain systems, run commands similar to the following:
filesys disable
replication break dir://hostB/backup/dir2
filesys destroy
filesys enable

2. On the destination, run a file system cleaning operation:
filesys clean

3. On both the source and destination, add back the original context:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

4. On the source, run a replication resynchronization operation for the target context:
replication resync dir://hostB/backup/dir2
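Resynchronization of a large context can take a long time. One hedged way to follow it, reusing the replication status form shown earlier, is to watch the State field until the context returns to normal:
replication status dir://hostB/backup/dir2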

Procedure: Convert from Collection to Directory


The conversion process started by the replication resync command involves filtering all data from the source Data Domain System to the destination Data Domain System, even though all data is already on the destination. The filtering leads to a longer conversion time than may be expected.

Over a T3, 100ms WAN, performance is about 100 MiB/sec., which gives data transfer of: 100 MiB/sec. = 10 seconds/GiB = 8.6 TiB/day

Note MiB = mebibytes, the base-2 equivalent of megabytes. GiB = gibibytes, the base-2 equivalent of gigabytes. TiB = tebibytes, the base-2 equivalent of terabytes.

Over a gibibit (the base 2 equivalent of gigabit) LAN, performance is about 120 MiB/sec., which gives data transfer of: 120 MiB/sec. = 8.3 seconds/GiB = 10.3 TiB/day


Use the following procedure to convert a collection replication pair (source is hostA, destination is hostB) to directory replication.

1. Run commands similar to the following on both of the collection replication systems:
filesys disable
replication break col://hostB
filesys destroy
filesys enable

2. Run a command similar to the following on both systems:
replication add source dir://hostA/backup destination dir://hostB/backup/hostA

3. On the source, run a replication resynchronization operation:
replication resync dir://hostB/backup/hostA

4. Use the replication watch command to display the progress of the conversion process.

Procedure: Seeding
A Data Domain System that already holds data in its file system can be used as a source Data Domain System for replication. Part of setting up replication with such a Data Domain System is to transfer the current data on the source Data Domain System to the destination Data Domain System. The procedure for the transfer is called seeding. As seeding over a WAN may need large amounts of bandwidth and time, Data Domain provides alternate seeding procedures for the following replication configurations:

One-to-one
    One source Data Domain System replicates data to one destination Data Domain System. Replication can be collection or directory type.
Bidirectional
    A source Data Domain System, such as ddr01, replicates data to the destination ddr02. At the same time, ddr02 is a source for replication to ddr01. Each Data Domain System is a source for its own data and a destination for the other Data Domain System's data. Bidirectional replication can be directory replication only.
Many-to-one
    More than one source Data Domain System replicates data to a single destination Data Domain System. Many-to-one replication can be directory replication only.


One-to-One
For collection replication, the destination Data Domain System file system must be empty. In the following example, ddr01 is the source Data Domain System and ddr02 is the destination.
1. Ship the destination Data Domain System (ddr02) to the source Data Domain System (ddr01) site.
2. Follow the standard Data Domain installation process to install the destination Data Domain System.
3. Connect the Data Domain Systems with a direct link to cut down on initialization time.
4. Boot up the destination Data Domain System. (The source Data Domain System should already be in service.)
5. Enter the following command on both Data Domain Systems:
# filesys disable
6. Enter a command similar to the following on both Data Domain Systems:
# replication add source col://ddr01.company.com destination col://ddr02.company.com
7. Enter the following command on both Data Domain Systems:
# filesys enable
8. On the source, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
# replication initialize col://ddr02.company.com
9. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
10. On the destination, enter the following command:
# system poweroff
11. Move the destination Data Domain System to its permanent location, company2.com in this example.
12. Boot up the destination Data Domain System.


13. On the destination Data Domain System, run the config setup command and make any needed changes. For example, the system hostname is a fully-qualified domain name that may be different in the new location.

14. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
# filesys disable
# replication modify col://ddr02.company.com destination-host ddr02.company2.com
# filesys enable
15. On ddr01, enter commands similar to the following to change the destination hostname:
# filesys disable
# replication modify col://ddr02.company.com destination-host ddr02.company2.com
# filesys enable

For directory replication, the source directory must exist and the destination directory must be empty. In the following example, ddr01 is the source Data Domain System and ddr02 is the destination.
1. Ship the destination Data Domain System (ddr02) to the source Data Domain System (ddr01) site, company.com in this example.
2. Follow the standard Data Domain installation process to physically install ddr02.
3. Connect the Data Domain Systems with a direct link to cut down on initialization time.
4. Boot up ddr02. (The source Data Domain System should already be in service.)
5. Configure ddr02 using the standard Data Domain process.
6. Enter a command similar to the following on both Data Domain Systems:


# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company.com/backup/data01
7. On ddr01, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
# replication initialize dir://ddr02.company.com/backup/data01
8. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
9. On ddr02, enter the following command:
# system poweroff
10. Move ddr02 to its permanent location, company2.com in this example.
11. Boot up the destination Data Domain System.
12. On ddr02, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.
13. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
# filesys disable
# replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
# filesys enable
14. On ddr01, enter commands similar to the following to change the destination host to the new hostname:
# filesys disable
# replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
# filesys enable
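After both systems know the new destination hostname, the modified context can be re-checked from the source. This is a hedged illustration that assumes the context identifier now carries the new destination hostname from step 13:
# replication status dir://ddr02.company2.com/backup/data01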


Bidirectional
With bidirectional replication, the seeding process uses three Data Domain Systems: one permanent Data Domain System at each customer site and one temporary Data Domain System that is physically moved from one site to another. Bidirectional replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first permanent Data Domain System that is replicated, ddr02 for the second permanent Data Domain System that is replicated, and ddr-temp for the Data Domain System that is moved from one site to another. Bidirectional replication is done in eight phases:

1. Copy source data from the first permanent Data Domain System (ddr01) to the temporary Data Domain System (ddr-temp).
2. Move ddr-temp to the site of the second permanent Data Domain System (ddr02).
3. Transfer the ddr01 source data from ddr-temp to ddr02.
4. Set up and start replication between ddr01 and ddr02 for ddr01 source data.
5. Copy the ddr02 source data to ddr-temp.
6. Move ddr-temp back to the ddr01 site.
7. Transfer the ddr02 source data to ddr01.
8. Set up and start replication between ddr02 and ddr01 for ddr02 source data.

Copy source data from the first Data Domain System (ddr01):
1. Ship the temporary Data Domain System (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain Systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01

7. On ddr01, enter a command similar to the following:
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys destroy
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain System:
1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.

Transfer the ddr01 source data from ddr-temp to ddr02.


1. Set up replication with ddr-temp as the source and ddr02 as the destination. Enter a command similar to the following on both ddr-temp and ddr02. Note that the added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company2.com/backup/temp/data01 destination dir://ddr02.company2.com/backup/temp/data01
2. On ddr-temp, enter a command similar to the following to transfer data to ddr02:
# replication initialize dir://ddr02.company2.com/backup/temp/data01
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-temp and ddr02, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr02.company2.com/backup/temp/data01
# filesys destroy
# filesys enable

Set up and start replication between ddr01 and ddr02 for data from ddr01. Note that the temp directory is NOT used for either the source or the destination.
1. Enter a command similar to the following on both ddr01 and ddr02 to set up replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company2.com/backup/data01
2. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr02, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr02 remains on ddr02 and is not replicated again.
# replication initialize dir://ddr02.company2.com/backup/data01
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.


4. If ddr-temp has space for the current ddr01 data and space for the ddr02 data, leave ddr-temp as is. Take into account that any common data between the two data sets gets compressed on ddr-temp, using less space. If ddr-temp does not have enough space for both sets of data, mount or map the ddr-temp directory /backup from another system and delete /temp.

Copy the ddr02 source data to ddr-temp. ddr-temp should still be installed at the ddr02 site and communicating with ddr02.
1. Enter a command similar to the following on both Data Domain Systems. Note the use of the added temp directory for both the source and the destination.
# replication add source dir://ddr02.company2.com/backup/temp/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following:
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of source data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02
# filesys destroy
# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain System:
1. Move ddr-temp back to the ddr01 site.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.


3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the current location.

Transfer the ddr02 source data from ddr-temp to ddr01.
1. Set up replication with ddr-temp as the source and ddr01 as the destination. Enter a command similar to the following on both ddr-temp and ddr01. Note that the added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company.com/backup/temp/data02 destination dir://ddr01.company.com/backup/temp/data02
2. On ddr-temp, enter a command similar to the following to transfer the ddr02 source data to ddr01:
# replication initialize dir://ddr01.company.com/backup/temp/data02
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr01.company.com/backup/temp/data02
# filesys destroy
# filesys enable

Set up and start replication between ddr02 and ddr01 for data from ddr02. Note that the temp directory is NOT used for either the source or the destination.
1. Enter a command similar to the following on both ddr02 and ddr01 to set up replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr01.company.com/backup/data02


2. On ddr02, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr01, in this example /backup/data02. Backup application data that was transferred from ddr-temp to ddr01 remains on ddr01 and is not replicated again.
# replication initialize dir://ddr01.company.com/backup/data02
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. On ddr02, mount or map the directory /backup from another system and delete /temp.
5. On ddr01, mount or map the directory /backup from another system and delete /temp.

Many-to-One
With many-to-one replication, the seeding process uses a temporary Data Domain System to receive data from each source Data Domain System site. The temporary Data Domain System is physically moved from one source site to another and then moved to the destination Data Domain System site. Many-to-one replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first Data Domain System that is replicated, ddr02 for the second Data Domain System that is replicated, ddr-dest for the single destination Data Domain System, and ddr-temp for the Data Domain System that is moved from site to site. Many-to-one replication is done in six phases for the example in this section:

1. Copy source data from the first source Data Domain System (ddr01) to the temporary Data Domain System (ddr-temp).
2. Move ddr-temp to the second source Data Domain System (ddr02) site.
3. Copy source data from ddr02 to ddr-temp.
4. Move ddr-temp to the site of the destination Data Domain System (ddr-dest).
5. Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.
6. Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest.

Copy source data from the first Data Domain System (ddr01):


1. Ship the temporary Data Domain System (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain Systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following:
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys destroy
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain System to the second (ddr02) source site:
1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.

4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.

Copy source data from the second source Data Domain System (ddr02):
1. Enter a command similar to the following on ddr-temp and ddr02. Note the use of an added temp directory for the destination.
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following:
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02
# filesys destroy
# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain System to the destination (ddr-dest) site:
1. Move ddr-temp to the ddr-dest site, company3.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr-dest and ddr-temp with a direct link to cut down on initialization time.


4. Boot up ddr-temp. (ddr-dest should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.

Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest:
1. Set up a replication context with ddr-temp as the source and ddr-dest as the destination. Enter a command similar to the following on both ddr-temp and ddr-dest. Note that the added temp directory is used for both sources and destinations.
# replication add source dir://ddr-temp.company3.com/backup/temp destination dir://ddr-dest.company3.com/backup/temp
2. On ddr-temp, enter a command similar to the following to transfer the ddr01 and ddr02 source data to ddr-dest:
# replication initialize dir://ddr-dest.company3.com/backup/temp
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-dest and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-dest.company3.com/backup/temp
# filesys destroy
# filesys enable

Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest. Note that the temp directory is NOT used for either the sources or the destinations.
1. Enter a command similar to the following on both ddr01 and ddr-dest to set up ddr01 replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-dest.company3.com/backup/data01


2. Enter a command similar to the following on both ddr02 and ddr-dest to set up ddr02 replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-dest.company3.com/backup/data02
3. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr-dest, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data01
4. On ddr02, enter a command similar to the following to initialize replication. As in the previous step, the process transfers only metadata and new backup application data. The metadata goes to the specified location on ddr-dest, in this example /backup/data02.
# replication initialize dir://ddr-dest.company3.com/backup/data02
5. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
6. On ddr-dest, mount or map the directory /backup from another system and delete the temporary directory.

Migration
The migration command copies all data from one Data Domain system to another and may also copy replication contexts (configurations). Use the command when upgrading to a larger capacity Data Domain system. Migration is usually done in a LAN environment. See the procedures at the end of this section for using migration with a Data Domain system that is part of a replication pair.

All data under /backup is always migrated and exists on both systems after migration.
After migrating replication contexts, the migrated contexts still exist on the migration source. After migrating a context, break replication for that context on the migration source.
Do not run backup operations to a migration source during a migration operation.
A migration destination does not need a replication license unless the system will use replication.

The migration destination must have a capacity that is the same size as or larger than the migration source.
The migration destination must have an empty file system.
Any setting of the system's replication throttle feature also applies to migration. If the migration source has throttle settings, use the replication throttle set override command to set the throttle to the maximum (unlimited) before starting migration.
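As a hedged sketch of that last point, assuming the override form accepts the keyword unlimited as its rate argument, clearing the throttle before a migration might look like:
# replication throttle set override unlimited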

Set Up the Migration Destination


To prepare a Data Domain system to be a migration destination, use the migration receive operation. Administrative users only. Use the operation:

Only on the migration destination.
Before entering the migration send command on the migration source.
After running the filesys disable and filesys destroy operations on the destination.

The command syntax is:
migration receive source-host src-hostname
For example, to prepare a destination for migration from a migration source named hostA:
# filesys disable
# filesys destroy
# migration receive source-host hostA
Note When preparing the destination, DO NOT run the filesys enable command.

Start Migration from the Source


To start migration, use the migration send operation on the migration source. Administrative users only. Use the operation:

Only on the migration source.
Only when no backup data is being sent to the migration source.
After entering the migration receive command on the migration destination.

The command syntax is:
migration send obj-spec-list destination-host dest-hostname


The obj-spec-list is /backup for systems that do not have a replication license. With replication, the obj-spec-list is one or more contexts from the migration source. After migrating a context, all data from the context is still on the source system, but the context configuration is only on the migration destination. A context in the obj-spec-list can be:

The destination string as defined when setting up replication. Examples are:
dir://hostB/backup/dir2
col://hostB
pool://hostB/pool2

The context number as shown in output from the replication status command. For example:
rctx://2

The keyword all, which migrates all contexts from the migration source to the destination.

Backup jobs to the Data Domain system should be stopped during the first migration phase, as write access is blocked during that phase. Backup jobs can be resumed during the second phase. The first phase takes a maximum of 30 minutes for a Data Domain system with a full /backup file system. Use the migration watch command to track the first migration phase.

New data written to the source is marked for migration until you enter the migration commit command. New data written to the source after a migration commit command is not migrated. Note that write access to the source is blocked from the time a migration commit command is given until the migration process finishes. The migration send command stays open until a migration commit command is entered.

In the following examples, remember that all data on the migration source is always migrated, even when a single directory replication context is specified in the command.

To start migration of data only (no replication contexts, even if replication contexts are configured) to a migration destination named hostC, use a command similar to the following:
# migration send /backup destination-host hostC

To start a migration that includes a collection replication context (replication destination string) of col://hostB:
# migration send col://hostB destination-host hostC
To start migration with a directory replication context of dir://hostB/backup/dir2:
# migration send dir://hostB/backup/dir2 destination-host hostC
To start migration with two replication contexts using context numbers 2 and 3:
# migration send rctx://2 rctx://3 destination-host hostC


To migrate all replication contexts:



# migration send all destination-host hostC

Create an End Point for Data Migration


The migration commit command limits migration to data received by the source at the time the command is entered. You can enter the command and limit the migration of new data at any time after entering the migration send command. All data on the source Data Domain system at the time of the commit command (including data newly written since the migration started) is migrated to the destination Data Domain system. Data Domain recommends entering the commit command after all backup jobs for the context being migrated are finished. Write access to the source is blocked after entering the migration commit command and during the time needed to complete migration. After the migration process finishes, the source is opened for write access, but new data is no longer migrated to the destination. After the commit, new data for the contexts migrated to the destination should be sent only to the destination. Administrative users only.
migration commit

Display Migration Progress


To track the initial phase of migration (when write access is blocked), use the migration watch operation. The command output shows the percent completed.
migration watch

Stop the Migration Process


To kill a migration that is in progress, use the migration abort operation. The operation stops the migration process and returns the Data Domain system to its previous state. If the migration source Data Domain system is part of a replication pair, replication is re-started. Run the command on the migration source and the migration destination. Administrative users only.
Note A migration abort leaves the password on the destination system the same as the password on the migration source.
migration abort
Note Using the migration abort command on a migration destination requires a filesys destroy on that machine before the file system can be enabled on it again.
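Putting the notes above together, recovering a destination after an abort would look something like the following sketch (the abort itself must also be run on the migration source):
# migration abort
# filesys destroy
# filesys enable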


Display Migration Statistics


To display migration statistics during the migration process, use the migration show stats operation.
migration show stats
Migration statistics have the following columns:

Bytes Sent
    The total number of bytes sent from the migration source. The value includes backup data, overhead, and network overhead. On the destination, the value includes overhead and network overhead. Use the value (and the next value, Bytes Received) to estimate network traffic generated by migration.
Bytes Received
    The total number of bytes received at the destination. On the destination, the value includes data, overhead, and network overhead. On the source, the value includes overhead and network overhead. Use the value (and the previous value) to estimate network traffic generated by migration.
Received Time
    The date and time of the most recent records received.
Processed Time
    The date and time of the most recent records processed.

For example:
# migration show stats
Destination  Bytes Sent    Bytes Received  Received Time     Processed Time
-----------  ------------  --------------  ----------------  ----------------
hostB        153687473704  1974621040      Fri Jan 13 09:37  Fri Jan 13 09:37
-----------  ------------  --------------  ----------------  ----------------

Display Migration Status


To display the current status of migration, use the migration status operation.
migration status
For example:


# migration status
CTX:                        0
Mode:                       migration source
Destination:                hostB
Enabled:                    yes
Local file system status:   enabled
Connection:                 connected since Tue Jul 17 15:20:09
State:                      migrating 3/3 60%
Error:                      no error
Destination lag:            0
Current throttle:           unlimited
Contexts under migration:   dir://hostA/backup/dir2

Procedure: Migrate between Source and Destination


To migrate data from a source, hostA, to a destination, hostB (ignoring replication contexts):
1. On hostB (the migration destination):
# filesys disable
# filesys destroy
# migration receive source-host hostA
2. On hostA (the source), run the following command:
# migration send /backup destination-host hostB
3. On either host, run the following command to display migration progress:
# migration watch
4. At the appropriate time for your site, create a migration end point. The three phases of migration may take many hours. During that time, new data sent to the source is also marked for migration. To allow backups with the least disruption, use the following command after the three migration phases finish. On hostA:
# migration commit
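While the copy runs, either host can also estimate the network traffic generated so far. This hedged example simply reuses the statistics display described earlier:
# migration show stats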

Procedure: Migrate with Replication


To migrate data and a context from a source, hostA, to a destination, hostC, when hostA is also a directory replication source for hostB:


1. On hostC (the migration destination), run the following commands:
# filesys disable
# filesys destroy
# migration receive source-host hostA
2. On hostA (the migration and replication source), run the following command. Note that the command also disables the file system.
# migration send dir://hostB/backup/dir2 destination-host hostC
3. On the source migration host, run the following command to display migration progress:
# migration watch
4. First on hostA and then on hostC, run the following command. Note that the command also disables the file system.
# migration commit
5. On hostB (the replication destination), run commands similar to the following to change the replication source to hostC:
# filesys disable
# replication modify dir://hostB/backup/dir2 source-host hostC
# filesys enable


SECTION 6: Data Access Protocols


NFS Management
The nfs command manages NFS clients and displays NFS statistics and status.


A Data Domain System exports the directories /ddvar and /backup. /ddvar contains Data Domain System log files and core files. Add clients from which you will administer the Data Domain System to /ddvar. /backup is the target for data from your backup servers. The data is compressed before being stored. Add backup servers as clients to /backup. If you choose to add a client to /backup and to /ddvar, consider adding the client as read-only to /backup to guard against accidental deletions of data.

Quicker Start Guide for NFS


Administrators more familiar with Windows than UNIX may find it a bit tricky to create the initial directory structure in a UNIX environment. This section outlines some steps that make this easier. It is assumed that root access is available on the UNIX system, and that the Data Domain system is set up and on the network, with NFS configured as outlined in the DD OS Quick Start Guide.

Shorthand steps:
In this example:
bee = initial client UNIX system
kay = second client UNIX system, which requires secure access to the Data Domain system
ddsys = Data Domain system
All three systems are defined appropriately so that their IP addresses resolve correctly.
1) Ensure '/backup' can be seen as an export:
bee# showmount -e ddsys
Export list for ddsys:
/backup *



2) Create a directory on 'bee' to mount '/backup' from 'ddsys' onto:
bee# mkdir /mnt-ddsys
3) Mount the directory:
bee# mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 ddsys:/backup /mnt-ddsys
NOTE: On Sun Solaris, use "llock" instead of "nolock". The other parameters are explained in the man page for your particular UNIX platform.
4) Create the desired subdirectory:
bee# mkdir /mnt-ddsys/NBU-mediasvr1
5) If desired, set the correct ownership and mode on the directory:
bee# chown bkup-operator /mnt-ddsys/NBU-mediasvr1
bee# chmod 700 /mnt-ddsys/NBU-mediasvr1
6) Done; now dismount:
bee# umount /mnt-ddsys
bee# rmdir /mnt-ddsys
This example creates a new sub-directory that allows full access only by the 'bkup-operator' userid. If this is not required and access should be available to any user on 'kay', then set the mode to 777 instead of 700.
Now go to the Data Domain system and create an export entry so that only the system "kay" can access the sub-directory just created on the Data Domain system.
1) Access the Data Domain system command line, usually using "ssh", and log in as an administrator (usually "sysadmin").
2) Create the desired export:
sysadmin@ddsys# nfs add /backup/NBU-mediasvr1 kay
For security purposes, the '/backup' directory should be reachable only by the specific clients required to create sub-directories following the methods above. If '/backup' is left exported to everyone, then any workstation can mount that directory and have a full view of all sub-directories below it. Therefore, it is a good idea to restrict this access:
sysadmin@ddsys# nfs del /backup *
sysadmin@ddsys# nfs add /backup <list of admin hosts>
If "Permission denied" is returned by any of these commands, check:
a) mount command: client and "secure" export setting on the Data Domain system
sysadmin@ddsys# nfs show clients


b) creating the sub-directory: "squash" settings on the Data Domain system
sysadmin@ddsys# nfs show clients

Example output of "nfs show clients":
sysadmin@ddsys# nfs show clients
path            client         options
--------------  -------------  -----------------------------------------
/backup         192.168.28.30  (rw,no_root_squash,no_all_squash,secure)
/ddvar          b2-rh-nb2      (rw,no_root_squash,no_all_squash,secure)
/backup/oracle  192.168.28.50  (rw,no_root_squash,no_all_squash,secure)

Add NFS Clients


To add NFS clients that can access the Data Domain System, use the nfs add export client-list nfs-options operation. Add clients for administrative access to /ddvar. Add clients for backup operations to /backup. A client added to a subdirectory under /backup has access only to that subdirectory. The client-list can have a comma, a space, or both between list entries. To give access to all clients, the client-list can be an asterisk (*).
nfs add {/ddvar | /backup[/subdir]} client-list [(nfs-options)]
The client-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com.
The nfs-options list can have a comma, a space, or both between entries. The default NFS options for an NFS client are: rw, no_root_squash, no_all_squash, and secure. The list accepts the following options:

ro
    Read-only permission.
rw
    Read and write permissions.
root_squash
    Map requests from uid/gid 0 to the anonymous uid/gid.
no_root_squash
    Turn off root squashing.
all_squash
    Map all user requests to the anonymous uid/gid.
no_all_squash
    Turn off the mapping of all user requests to the anonymous uid/gid.
secure
    Require that requests originate on an Internet port that is less than IPPORT_RESERVED (1024).
insecure
    Turn off the secure option.

NFS Management

303

Remove Clients

anonuid=id
    Set an explicit user-ID for the anonymous account. The id is an integer bounded from -65535 to 65535.
anongid=id
    Set an explicit group-ID for the anonymous account. The id is an integer bounded from -65535 to 65535.

For example, to add an NFS client with an IP address of 192.168.1.02 and read/write access to /backup with the secure option:
# nfs add /backup 192.168.1.02 (rw,secure)
Netmasks, as in the following examples, are supported:
# nfs add /backup 192.168.1.02/24 (rw,secure)
# nfs add /backup 192.168.1.02/255.255.255.0 (rw,secure)
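Hostname wildcards work the same way. As a hedged illustration using the domain form noted above (the domain name is a placeholder), read/write access could be granted to every host in a domain:
# nfs add /backup *.yourcompany.com (rw,secure)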

Remove Clients
To remove NFS clients that can access the Data Domain System, use the nfs del export client-list operation. A client can be removed from access to /ddvar and still have access to /backup. The client-list can contain IP addresses, hostnames, and an asterisk (*), and can be comma-separated, space-separated, or both.
nfs del {/ddvar | /backup[/subdir]} client-list
For example, to remove an NFS client with an IP address of 192.168.1.02 from /ddvar access:
# nfs del /ddvar 192.168.1.02

Enable Clients
To allow access for NFS clients to a Data Domain System, use the nfs enable operation.
nfs enable

Disable Clients
To disable all NFS clients from accessing the Data Domain System, use the nfs disable operation.
nfs disable


Reset Clients to the Default


To return the list of NFS clients that can access the Data Domain System to the factory default, use the nfs reset clients operation. The factory default is an empty list. No NFS clients can access the Data Domain System when the list is empty. The operation is available to administrative users only.
nfs reset clients

Clear the NFS Statistics


To clear the NFS statistics counters and reset them to zero, use the nfs reset stats operation.
nfs reset stats

Display Active Clients


The list of active clients shows all clients that have been active in the past 15 minutes and the mount path for each client.

Display

To display active NFS clients, use the nfs show active operation.
nfs show active
The display is similar to the following:
# nfs show active
NFS Active Clients
path      client
--------  ----------------------
/ddvar    jsmith.yourcompany.com
/backup   djones.yourcompany.com
--------  ----------------------

Display Allowed Clients


The list of NFS clients allowed to access the Data Domain System shows the mount path and the NFS options for each client.


Display

To display all NFS clients, use the nfs show clients operation, or click NFS in the left panel of the Data Domain Enterprise Manager.
nfs show clients
The display is similar to the following:
# nfs show clients
NFS Client List
path      client   options
--------  -------  -----------------------------------------
/ddvar    jsmith   (rw,root_squash,no_all_squash,secure)
/backup   djones   (rw,no_root_squash,no_all_squash,secure)
--------  -------  -----------------------------------------

Display Statistics
To display NFS statistics for a Data Domain System, use the nfs show stats operation.
nfs show stats
The following example shows relevant entries, but not all possible entries:
# nfs show stats
NFS statistics:
NFSPROC3_NULL          : 0        [0]
NFSPROC3_GETATTR       : 327      [0]
NFSPROC3_SETATTR       : 30       [0]
NFSPROC3_LOOKUP        : 66       [24]
NFSPROC3_ACCESS        : 455      [0]
NFSPROC3_READLINK      : 0        [0]
NFSPROC3_READ          : 0        [0]
NFSPROC3_WRITE         : 6080507  [0]
NFSPROC3_CREATE        : 10       [0]
NFSPROC3_MKDIR         : 0        [0]
NFSPROC3_SYMLINK       : 0        [0]
NFSPROC3_MKNOD         : 0        [0]
NFSPROC3_REMOVE        : 0        [0]
NFSPROC3_RMDIR         : 0        [0]
NFSPROC3_RENAME        : 11       [1]
NFSPROC3_LINK          : 0        [0]
NFSPROC3_READDIR       : 0        [0]
NFSPROC3_READDIRPLUS   : 0        [0]
NFSPROC3_FSSTAT        : 0        [0]
NFSPROC3_FSINFO        : 0        [0]
NFSPROC3_PATHCONF      : 0        [0]
NFSPROC3_COMMIT        : 0        [0]
Total Requests         : 6081406

FH statistics:
There are currently (2) exported filesystems.
Stats for export point [/backup]:
File system Type = SFS
Number of cached entries = 28
Number of file handle lookups = 6083544 (cache miss = 28)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 1
Stats for export point [/ddvar]:
File system Type = UNIX
Number of cached entries = 0
Number of file handle lookups = 0 (cache miss = 0)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 0

Display Detailed Statistics


The nfs show detailed-stats operation displays statistics used by Data Domain support staff for troubleshooting.
nfs show detailed-stats

Display Status
To display NFS status for a Data Domain System, use the nfs status operation.
nfs status
The display looks similar to the following:


# nfs status
The NFS system is currently active and running
Total number of NFS requests handled = 6160900

Display Timing for NFS Operations


To display information about the time needed for NFS operations, use the nfs show histogram operation. Administrative users only.
nfs show histogram
The column headers are:

Op
    The name of the NFS operation.
mean-ms
    The mathematical mean time, in milliseconds, for completion of the operations.
stddev
    The standard deviation for time to complete operations, derived from the mean time.
max-s
    The maximum time, in seconds, taken for a single operation.
<10ms
    The number of operations that took less than 10 ms.
100ms
    The number of operations that took between 10 ms and 100 ms.
1s
    The number of operations that took between 100 ms and 1 second.
10s
    The number of operations that took between 1 second and 10 seconds.
>10s
    The number of operations that took over 10 seconds.


CIFS Management


The cifs command manages CIFS (Common Internet File System) backups and restores from and to Windows clients, and displays CIFS statistics and status. CIFS system messages on the Data Domain System go to a CIFS log directory. The location is:
/ddvar/log/windows
Note When configuring a destination Data Domain System as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data on the destination).

CIFS Access
A CIFS client can map to two shares on a Data Domain System. Use the cifs add command (see Add a Client on page 311) to make a share available to a client. A client is typically a Windows workstation, not a user.

/ddvar is the share for administrative tasks, such as looking at a log file. /backup is the share used by a Windows backup account for data storage and retrieval.

Any user that logs in to a Data Domain System is put into one of two groups. The user group is limited to commands that display statistics and status. The admin group can make configuration changes and use the display commands.

If the Data Domain System and a user account are in the same domain (or in a related trusted domain), the user can log in to the Data Domain System through a client that is known to the Data Domain System. If the user has no matching local account on the Data Domain System, the user is part of the user group. If the user has a matching local account on the Data Domain System and the local account is part of the admin group, the user is logged in as part of the admin group.


If the Data Domain System is in a workgroup, a user can log in to the Data Domain System through a client that is known to the Data Domain System. The user must have a matching account (name and password) added to the Data Domain System as a local user account (see Add a User below). The user is logged in as part of the group specified for the local account, user or admin.

For access to the Data Domain System command line interface, use the SSH (or Telnet, if enabled) utility to log in to the Data Domain System, or use a web browser to connect to the Data Domain Enterprise Manager graphical user interface.

Note Permissions changes made to /backup or /ddvar from a CIFS administrative account may cause unexpected limitations in access to the Data Domain System and may not be reversible from the CIFS account. By default, folders are created with permission bits of 755 and files with permission bits of 744.

Add a User
To add a user, use the command user add user-name. The command asks for a password and confirmation, or you can include the password as part of the command. Users added to the Data Domain System can have a privilege level of admin or user. The default is user.

user add user-name [password password] [priv admin | user]

All user accounts on a Data Domain System act as CIFS local (built-in) accounts, which means that the user name can access data in /backup on the Data Domain System, and the user name can log in to the Data Domain System and use the Data Domain System command set for managing the system. See the Data Domain System command adminaccess for the available access protocols.

For example, to add a user with a name of backup22, a password of usr256, and user privilege:

# user add backup22 password usr256

For a Windows client that needs file access to a Data Domain System, enter a command similar to the following from a command prompt on the Windows client (usually a Windows media server). The example below maps /backup from Data Domain System rstr02 to drive H on the Windows system and gives user backup22 access to /backup:

> net use H: \\rstr02\backup /USER:rstr02\backup22

For administrative access from Windows users in the same domain as the Data Domain system, see Allow Access from Windows on page 110.


Add a Client
Each Windows backup server that will do backup and restore operations with a Data Domain System must be added as a backup client. To add a backup client that hosts a backup user account, use the cifs add /backup command.

Each Windows machine that will host an administrative user for a Data Domain System must be added as an administrative client. Administrative clients use the /ddvar directory on a Data Domain System. To add a Windows machine that hosts an administrative user account as a client on the Data Domain System, use the cifs add /ddvar command.

List entries can be comma-separated, space-separated, or both. To give access to all clients, the client-list can be an asterisk (*).

cifs add /backup client-list
cifs add /ddvar client-list

The client-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com.

For example, to add a client named srvr24 that will do backups and restores with the Data Domain System:

# cifs add /backup srvr24

Netmasks, as in the following examples, are supported:

# cifs add /backup 192.168.1.02/24
# cifs add /backup 192.168.1.02/255.255.255.0

Secured LDAP with Transport Layer Security (TLS)


Active-directory domains may be set up for secured LDAP sessions using TLS, or with the Security Options property "Domain Controller: LDAP server signing requirements" set to "Require signing". Before joining a Data Domain System to such an active-directory domain, take the following actions:

1. Access the Data Domain System so that you can copy a file to it. Data Domain recommends one of the following:
Use the FTP utility to log in as the user sysadmin.
Join the Data Domain System to a workgroup:
- Use the Data Domain System command cifs set authentication to join the Data Domain System to a workgroup.
- On the Data Domain System, add as a client a system in the workgroup that has access to the Certificate Authority certificate.
- Map the Data Domain System directory /ddvar on the Windows client.


2. Copy the CA certificate to the location /ddvar/releases/cacerts on the Data Domain System and give the certificate file the name ca.cer.
3. If you earlier set authentication to the workgroup mode, use the cifs reset authentication command on the Data Domain System to return to the default of no mode.
4. On the Data Domain System, run the following command:
# cifs option set start-tls enabled

With the CA certificate on the Data Domain System, use the cifs set authentication command to join the Data Domain System to the active-directory domain. See Set the Authentication Mode on page 316.

CIFS Commands
The cifs command enables and disables access, sets the authentication mode, and displays status and statistics. All cifs operations are available only to administrative users.

Enable Client Connections


To allow CIFS clients to connect to a Data Domain System, use the cifs enable operation. cifs enable

Disable Client Connections


To block CIFS clients from connecting to a Data Domain System, use the cifs disable operation. cifs disable

Remove a Backup Client


To remove a Windows backup client, use the cifs del /backup operation. List entries can be comma-separated, space-separated, or both.

cifs del /backup client-list

For example, to remove the backup client srvr24:

# cifs del /backup srvr24

Remove an Administrative Client


To remove a Windows administrative client, use the cifs del /ddvar operation. List entries can be comma-separated, space-separated, or both.

cifs del /ddvar client-list

For example, to remove the administrative client srvr22:

# cifs del /ddvar srvr22

Remove All CIFS Clients


To remove all of the CIFS clients from a Data Domain System, use the cifs reset clients operation. cifs reset clients

Set a NetBIOS Hostname


To change the NetBIOS hostname of the Data Domain System, use the cifs set nb-hostname operation. The default NetBIOS name is the first component of the fully-qualified hostname used by the Data Domain System. If you are using domain authentication, the nb-name cannot be over 15 characters long. Use the cifs show config command to see the current NetBIOS name.

cifs set nb-hostname nb-name

For example, to give a Data Domain System the name rstr12 for NetBIOS use:

# cifs set nb-hostname rstr12

Remove the NetBIOS Hostname


To remove the NetBIOS hostname of the Data Domain System, use the cifs reset nb-hostname operation.

cifs reset nb-hostname

Create a Share on the Data Domain System


The default shares on a Data Domain system are /ddvar and /backup. To create more shares, use the cifs share create operation.


cifs share create share-name path path {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names | comment comment}

share-name Use a descriptive name for the share.

path The path to the target directory.

max-connections The maximum number of connections to the share that are allowed at one time.

client-list A comma-separated list of the clients that are allowed to access the share. Other than the comma delimiter, there should not be any whitespace (blank, tab) characters. The list must be enclosed in double quotes.
Some valid client lists are:
"host1,host2"
"host1,10.24.160.116"
Some invalid client lists are:
"host1 "
"host1 ,host2"
"host1, 10.24.160.116"
"host1 10.24.160.116"

browsing The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.

writeable Make the share writeable (enabled, the default) or not writeable (disabled).

user-names A comma-separated list of user names. Other than the comma delimiter, any whitespace (blank, tab) characters are treated as part of the user name, because a Windows user name can have a space character anywhere in the name. The list must be enclosed in double quotes. All users from the client-list can access the share unless you give one or more user names. With one or more names, only the listed names can access the share. Group names can occur in the list of user names. Group names must have an at (@) symbol before them. Group names and user names should be separated only by commas, not spaces. There can be spaces inside the name of a group, but there should not be spaces between groups.
Some valid user name lists are:
"user1,user2"
"user1,@group1"
" user-with-one-leading-space,user2"
"user1,user-with-two-trailing-spaces  "
"user1,@CHAOS\Domain Admins"

comment A descriptive comment about the share.

For example:

# cifs share create dir2 path /backup/dir2 clients * users dsmith,jdoe

Note As of the DD OS 4.5.0.0 release, DD OS supports the following MMC (Microsoft Management Console) features:
- Share management, except for browsing when adding a share and changing the default Offline setting of manual.
- Session management.
- Open file management, except for deleting files.
- Local users and groups can be displayed, but not added, changed, or removed.

Delete a share
To delete a share, use the cifs share destroy operation. cifs share destroy share-name
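For example, to delete the share dir2 created in the earlier example:

# cifs share destroy dir2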

Enable a Share
To enable a share, use the cifs share enable operation. cifs share enable share-name

Disable a Share
To disable a share, use the cifs share disable operation. cifs share disable share-name
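For example, to take the share dir2 from the earlier example offline temporarily and then bring it back:

# cifs share disable dir2
# cifs share enable dir2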

Modify a Share
To modify a share, use the cifs share modify operation.


cifs share modify share-name {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names}

share-name The name of the share to modify.

max-connections The maximum number of connections to the share that are allowed at one time.

client-list A list of clients that can access the share. All existing clients for the share are overwritten with the new client-list. The list can be client names or IP addresses. With more than one entry in the list, enclose the list in double quotes ("") and separate the entries with commas (not spaces). For example:

# cifs share modify backup clients "a,b,c,d"

browsing The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.

writeable Make the share writeable (enabled, the default) or not writeable (disabled).

user-names All users from the client-list can access the share unless you give one or more user names. With one or more names, only the listed names can access the share. The list must be enclosed in double quotes.

Set the Authentication Mode


The Data Domain System can use the authentication modes active-directory, domain, or workgroup. Use the cifs set authentication operations to choose or change a mode. Each mode has a separate syntax.

The active-directory mode joins a Data Domain System to an active-directory-enabled domain. The realm must be a fully-qualified name. Data Domain recommends not specifying a domain controller. When not using a domain controller, first specify a WINS server. The Data Domain System must meet all active-directory requirements, such as a clock time that differs from the domain controller by no more than five minutes. See Procedure: Time Servers and Active Directory Mode on page 326 for information about time servers. Optionally, include multiple domain controllers or all (*). The domain controller list entries can be comma-separated, space-separated, or both.

cifs set authentication active-directory realm {[dc1 [dc2 ...]] | *}

Note Before joining an active-directory domain that uses secure LDAP sessions with TLS, see Secured LDAP with Transport Layer Security (TLS) on page 311.


The domain mode puts the Data Domain System into an NT4 domain. Include a domain name and, optionally, a primary domain controller, or backup and primary domain controllers, or all (*).

cifs set authentication domain domain [[pdc [bdc]] | *]

The workgroup mode means that the Data Domain System verifies user passwords.

cifs set authentication workgroup wg-name
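For example, following the syntax above with a hypothetical NT4 domain named NTDOM and a hypothetical workgroup named backupwg:

# cifs set authentication domain NTDOM *
# cifs set authentication workgroup backupwg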

Remove an Authentication Mode


To set authentication to the default of workgroup, use the cifs reset authentication operation. cifs reset authentication

Add an IP Address/NetBIOS hostname Mapping


To add an IP address/NetBIOS hostname mapping to the lmhosts file, use the cifs hosts add ipaddr host-list operation. One IP address can have multiple host names.

cifs hosts add ipaddr host-list

For example, to add the IP address for the machine srvr22:

# cifs hosts add 192.168.10.25 srvr22
Added "srvr22" -> "192.168.10.25" mapping to hosts list.

Remove All IP Address/NetBIOS hostname Mappings


To remove all IP address/NetBIOS hostnames from the lmhosts file, use the cifs hosts reset operation. cifs hosts reset

Remove an IP Address/NetBIOS hostname Mapping


To remove an IP address/NetBIOS hostname mapping from the lmhosts file, use the cifs hosts del ipaddr operation.

cifs hosts del ipaddr

For example, to remove the IP address 192.168.10.25:

# cifs hosts del 192.168.10.25
Removed mapping 192.168.10.25 -> srvr22.


Resolve a NetBIOS Name


To display the IP address used for any NetBIOS name on the WINS server, use the cifs nb-lookup operation. The CIFS feature must already be enabled.

cifs nb-lookup net-bios-name

For example, to display the IP address for the machine srvr22:

# cifs nb-lookup srvr22
querying srvr22 on 192.168.1.255
192.168.1.14 morgan<00>

Identify a WINS server


To identify a WINS server for resolving NetBIOS names to IP addresses, use the cifs set wins-server operation. cifs set wins-server ipaddr For example, to use a WINS server with the IP address of 192.168.1.12: # cifs set wins-server 192.168.1.12

Remove the WINS server


To remove the WINS server IP address, use the cifs reset wins-server operation.

cifs reset wins-server

Set Authentication to the Active Directory Mode


To set authentication to the active directory mode, use the command:

cifs set authentication active-directory <realm> {[<dc1> [<dc2> ...]] | *}

The <realm> must be a fully-qualified name. Data Domain recommends not specifying a domain controller; use "*" in most cases. The system must meet all active-directory requirements, such as a clock time that differs from the domain controller by no more than five minutes. The domain controllers can be a list of addresses or names that are comma-separated, space-separated, or both.

When you run cifs set authentication active-directory, the command prompts for a user account. You can enter a user on YourCompany.com, or you can enter a user in a domain that is trusted by YourCompany.com. The trusted-domain user must have permission to create accounts in the YourCompany.com domain.


When you enter the command cifs set authentication active-directory, the Data Domain system automatically adds a host entry to the DNS server, so it is not necessary to pre-create the DNS host entry for the Data Domain system. If you set a NetBIOS hostname (using cifs set nb-hostname), the entry is created for the NetBIOS hostname; otherwise, the entry is created for the system hostname. See also the command cifs option set organizational-unit, which is used in conjunction with cifs set authentication active-directory.

Set CIFS Options


Set Organizational Unit
The command to set this option is:

cifs option set organizational-unit <value>

Set the OU (organizational unit) to the desired OU. This gives the ability to add the Data Domain system to any OU in the AD (Active Directory), not just the default OU, which is "Computers". Two commands are used together: use cifs option set to set the desired OU, then use cifs set authentication active-directory to join the domain. For example:

cifs option set organizational-unit "Computers/Servers/ddsys units"
cifs set authentication active-directory YourCompany.com

Note If the Data Domain system machine account was already created and is already in the default "Computers" OU or in another OU, then joining the domain again does not move the computer account to the OU that you specified, because it is already in a different OU.

Allow Trusted Domain Users


To allow user access from domains that are trusted by the domain that includes the Data Domain system, use the cifs option set allowtrusteddomains command. The default is disabled. cifs option set allowtrusteddomains {enabled | disabled}
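For example, to allow trusted domain users:

# cifs option set allowtrusteddomains enabled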


Allow Administrative Access for a Windows Domain Group


To allow administrative access to a Data Domain System from a Domain Group (a group that exists on a Windows domain controller), use the cifs option set dd admin group operation. You can use the operation to map a Data Domain System default group number to a Windows group name that is different than the default group name. cifs option set dd admin groupn [windows grp-name]

The default Data Domain System group dd admin group1 is mapped to the Windows group Domain Admins. The default Data Domain System group dd admin group2 is mapped to a Windows group named Data Domain that you create on a Windows domain controller. Access is through SSH, Telnet, and FTP. CIFS administrative access must be enabled with the adminaccess command.
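For example, following the synopsis above, a hypothetical mapping of dd admin group2 to a Windows group named StorageAdmins (a group you would first create on the Windows domain controller) might look like:

# cifs option set dd admin group2 windows StorageAdmins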

Set CIFS Logging Levels


You can set the level of messages that go to the CIFS-related log files under /ddvar/log/windows. Use the cifs option set loglevel command.

cifs option set loglevel value

The value is an integer from 0 (zero) to 10 (ten). Zero is the default system value that sends the least-detailed level of messages. As an example, for more detailed messages:

# cifs option set loglevel 3
Set "loglevel" to "3"

Increase Memory to Allow More User Accounts


When using domain or active directory mode authentication on a Data Domain system, adding 50,000 or more user accounts may cause memory allocation errors. Use the cifs option set command to increase memory available for user accounts.

cifs option set winbindd-mem-limit value

The value is an integer from 52428800 to 1073741824. The default is 52428800. For example, to double the space for user names:

# cifs option set winbindd-mem-limit 104857600
Set "winbindd-mem-limit" to "104857600"


Set the Maximum Transmission Size


To set the maximum packet transmission size that is negotiated for Data Domain System reads and writes, use the cifs option set maxxmit command. cifs option set maxxmit value The value is an integer from 16384 to 65536. The default is 65536, which usually gives the best performance.
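For example, to lower the negotiated maximum to a hypothetical value of 32768 (any integer in the allowed range can be used):

# cifs option set maxxmit 32768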

Control Anonymous User Connections


To allow or disallow anonymous user access from known clients, use the cifs option set restrict-anonymous command. The default is disabled, which allows anonymous users.

cifs option set restrict-anonymous {enabled | disabled}
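For example, to disallow anonymous users:

# cifs option set restrict-anonymous enabled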

Increase Memory for SMBD Operations


To increase memory for SMBD operations, use the cifs option set smbd-mem-limit command. Some backup applications open more SMBD sessions and connections if the Data Domain system does not process SMBD operations (such as a large number of file deletions) as fast as expected. The new connections further slow down operations. Increasing memory for SMBD avoids such a loop. cifs option set smbd-mem-limit value The value is an integer from 52428800 to 1073741824. The default is 52428800.
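For example, to double the default memory limit (mirroring the winbindd-mem-limit example earlier):

# cifs option set smbd-mem-limit 104857600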

Allow Certificate Authority Security


To allow a Data Domain System to work with an active-directory domain that is set up for secured LDAP sessions using TLS or with the Security Options, Domain Controller: LDAP server signing requirements Property set to Require signing, use the cifs option set start-tls operation after copying the CA certificate to a Data Domain System. cifs option set start-tls {enabled | disabled}

Reset CIFS Options


To reset a CIFS option to the default, use the cifs option reset command. cifs option reset name For example:

# cifs option reset loglevel

Display CIFS Options


To display the CIFS options that are available from the cifs command, use the cifs option show command. cifs option show

Display
Display CIFS Statistics
To display CIFS statistics for total operations, reads, and writes, use the cifs show stats operation.

cifs show stats

For example:

# cifs show stats
SMB total ops : 31360
SMB reads     : 165
SMB writes    : 62

Display Active Clients


To display Windows clients that are currently active, use the cifs show active operation.

cifs show active

The display is similar to the following and shows which shares are accessed from a client machine and what data transfer may be happening (Locked files).

# cifs show active
PID  Username  Group  Machine
----------------------------------------------------------
568  sysadmin  admin  srvr24 (192.168.1.5)
566  sysadmin  admin  srvr22 (192.168.1.6)

Service  pid  machine  Connected at
---------------------------------------------------
ddvar    566  srvr22   Tue Jan 13 12:11:03 2004
backup   568  srvr24   Tue Jan 13 12:09:44 2004
IPC$     566  srvr22   Tue Jan 13 12:10:55 2004
IPC$     568  srvr24   Tue Jan 13 12:09:36 2004
backup   566  srvr22   Tue Jan 13 12:10:59 2004

Locked files:
Pid  DenyMode    Access   R/W     Oplock  Name
-------------------------------------------------------------
566  DENY_WRITE  0x20089  RDONLY  NONE    /loopback/setup.iso  Tue Jan 13 12:11:53 2004
566  DENY_ALL    0x30196  WRONLY  NONE    /loopback/RH8/psyche-i386-disc1.iso  Tue Jan 13 12:12:23 2004

Display All Clients


The display of all Windows clients that have access to the default /backup data share and /ddvar administrative share lists the access path for each client. Each Windows backup server that will do backup and restore operations has a path starting with /backup. Each Windows client that will host an administrative user has the path /ddvar. Use the cifs share show command to show client access information for custom shares.

Use the cifs show clients operation or click CIFS in the left panel of the Data Domain Enterprise Manager to see all clients.

cifs show clients

The display is similar to the following:

# cifs show clients
path     client
-------  ---------
/backup  all
/backup  srvr24.yourcompany.com
/ddvar   srvr24.yourcompany.com
-------  ---------

Display the CIFS Configuration


The CIFS configuration display begins with the authentication mode, gives details unique to each mode, lists a WINS server if one is configured, and lists NetBIOS hostnames.


Use the cifs show config operation or click CIFS in the left panel of the Data Domain Enterprise Manager to display CIFS configuration details.

cifs show config

For example:

# cifs show config
Mode       Workgroup  WINS Server  NB Hostname
---------  ---------  -----------  -----------
Workgroup  WORKGROUP  192.168.1.7  server26

Display Detailed CIFS Statistics


To display statistics for each individual type of SMB operation, use the cifs show detailed-stats operation. cifs show detailed-stats

Display All IP Address/NetBIOS hostname Mappings


To display all IP address/NetBIOS hostname mappings in the lmhosts file, use the cifs hosts show operation.

cifs hosts show

The command output is similar to the following:

# cifs hosts show
Hostname Mappings:
192.168.10.25 -> srvr22

Display CIFS Users


To display a list of CIFS users, enter the cifs troubleshooting list-users operation.

cifs troubleshooting list-users

For example:

# cifs troubleshooting list-users
Username       UID  GID
-------------  ---  ---
ddr4\sysadmin  100   50
ddr4\jsmith    101  100
-------------  ---  ---

Display CIFS Status


To display the status of CIFS access to the Data Domain System, use the cifs status operation.

cifs status

For example:

# cifs status
CIFS is enabled and running.

Display Shares
To display all shares or an individual share on a Data Domain system, use the cifs share show command. cifs share show [share-name]
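For example, to display the share dir2 created in the earlier example:

# cifs share show dir2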

Display CIFS Groups


To display a list of CIFS groups, enter the cifs troubleshooting list-groups operation.

cifs troubleshooting list-groups

For example:

# cifs troubleshooting list-groups
Groupname   GID
----------  ---
ddr4\admin   50
ddr4\users  100
----------  ---

Display CIFS User Details


To display the details of a CIFS user, enter the cifs troubleshooting user operation.

cifs troubleshooting user {name | uid | SID}

For example:

# cifs troubleshooting user jsmith
--------------------------
User      ddr4\jsmith
User ID   101
SID       <NONE>
Group     ddr\user
Group ID  100
--------------------------

Display CIFS Group Details


To display the details of a CIFS group, enter the cifs troubleshooting group operation.

cifs troubleshooting group {groupname | gid | SID}

For example:

# cifs troubleshooting group 100
--------------------------
Group     ddr4\users
Group ID  100
SID       <NONE>
--------------------------

Procedure: Time Servers and Active Directory Mode


When using active directory mode for CIFS access, the Data Domain System clock time can differ from the domain controller by no more than five minutes. Use the Data Domain System ntp command (see Time Servers and the NTP Command on page 83) to synchronize the clock with a time server.

Note The ntp command cannot synchronize the Data Domain System with a time server if the time difference is greater than 1000 seconds. Before following either of the procedures below, manually set the clock on the Data Domain System to within 1000 seconds of the time server.

Synchronizing from a Windows Domain Controller


When synchronizing through a Windows domain controller:

The domain controller must get time from an external source.


NTP must be configured on the domain controller. To configure NTP, see the documentation for the Windows software version and service pack that is running on your domain controller. The following example is for Windows 2003 SP1 (use your ntp-server-name):

C:\>w32tm /config /syncfromflags:manual /manualpeerlist:ntp-server-name
C:\>w32tm /config /update
C:\>w32tm /resync

After NTP is configured on the domain controller, run the following commands on the Data Domain System using your domain-controller-name:

# ntp add timeserver domain-controller-name
# ntp enable

Synchronizing from an NTP Server


When synchronizing directly from a standard NTP server, use the following commands on the Data Domain System. Substitute your ntp-server-name:

# ntp add timeserver ntp-server-name
# ntp enable

Procedure: Add a Share on the CIFS Client


Adding a share requires operations on the CIFS client and on the Data Domain System. The CIFS client could be a UNIX CIFS Client or a Windows CIFS Client.

Adding a Share on a UNIX CIFS Client

On the Data Domain System, add the list of clients that can access the share. For example:

# cifs add /backup srvr24 srvr25

On a CIFS client, browse to \\ddr\backup and create the share directory, such as dir2.
On the CIFS client, set share directory permissions or security options.
On the Data Domain System, create the share and add users that will come from the clients added earlier. For example:

DDOS# cifs share create dir2 path /backup/dir2 clients * users domain\user5,domain\user6


Adding a Share on a Windows CIFS Client (MMC)


The Windows client is called MMC (Microsoft Management Console).

On the Data Domain system: Make sure CIFS is enabled with the cifs status command.

On the Windows Client: (It may be useful to log on using Start...All Programs...Accessories...(Accessibility)...Remote Desktop, as shown in the following figure.)

Figure 18: Log in using Remote Desktop.


1. Log in as administrator.

Figure 19: Log in as administrator.

2. Go to My Computer -> Control Panel -> Administrative Tools -> Computer Management.
3. Right-click 'Computer Management (Local)'.


Figure 20: Computer Management

4. Select 'Connect to another computer...'.
5. Specify the name or IP address of a Data Domain System.


Figure 21: Select Computer

6. From here one can see the shares, sessions, etc.

Figure 22: Shares


7. For example, create a share as read only:

Figure 23: New File Share

a. In the new window, right-click 'Shares' and select 'New File Share...'.


Figure 24: c:\backup\newshare

b. Enter the path as c:\backup\newshare and click Next.


Figure 25: Select Administrators have full access.

c. Select "Administrators have full access; other users have read-only access".


Figure 26: Click Finish.

d. Click "Finish".


Figure 27: Newshare now appears.

e. The newshare folder now appears in the Computer Management screen.
8. Shared sessions and shared open files can be managed similarly, through the folders Sessions and Open Files in the left panel of the Computer Management screen.

File Security With ACLs (Access Control Lists)


Note It is important to understand that once NTFS ACLs are enabled, disabling them at a later time requires a lengthy metadata conversion process. Therefore, unless there is reason to anticipate that ACLs will be disabled at a later point, it is recommended that you do not disable NTFS ACLs once they are enabled. When NTFS ACLs are disabled via the cifs option set ntfs-acls disabled command, the Data Domain System generates an ACL that approximates the UNIX permissions, regardless of the presence of a previously set NTFS ACL. If it is determined (by Data Domain Support or otherwise) that NTFS ACLs must be disabled, then the data that has NTFS ACLs associated with it should be moved or copied (to remove the NTFS ACLs). It is recommended that you contact Data Domain Support prior to disabling NTFS ACLs.

The DDFS (Data Domain File System) has the ability to store granular and complex permissions (DACLs - Discretionary ACLs) that can be set on files and folders in the Windows filesystem.

The DDFS also supports storage and retrieval of audit ACLs (SACLs - Security ACLs). However, neither enforcing the audit ACL (SACL) nor generating audit events is implemented.

How to set ACL Permissions/Security


Granular and complex permissions (DACL)
Granular and complex permissions (DACL) can be set on any file or folder object within the DDFS file system, either by using Windows OS commands such as cacls, xcacls, xcopy, and scopy, or through the CIFS protocol using the Windows Explorer GUI (Properties -> Security -> Advanced -> Permissions), as shown in Figure 28 and Figure 29.

Figure 28: Windows Explorer GUI (Properties -> Security)


Figure 29: Windows Explorer GUI (Properties -> Security -> Advanced -> Permissions)

The DACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.

Audit ACL (SACL)


Audit ACL (SACL) can be set on any object in the DDFS, either through commands, or through the CIFS protocol using the Windows Explorer GUI (Properties -> Security -> Advanced -> Auditing). This is shown in Figure 30.


Figure 30: Windows Explorer GUI (Properties -> Security -> Advanced -> Auditing)

The SACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.

Owner SID
The owner SID can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI (Properties -> Security -> Advanced -> Owner). This is shown in Figure 31.


Figure 31: Windows Explorer GUI (Properties -> Security -> Advanced -> Owner)

Windows-based backup/restore tools such as ntbackup can be used on DACL- and SACL-protected files, to back up those files to the Data Domain System and restore them from it. For more information on ACLs and their use, see the Windows Operating System documentation.

ntfs-acls and idmap-type


There are two new CIFS options that control ACL support and ID (account) mapping behavior:

ntfs-acls: This option has the possible values "enabled" and "disabled". The default value is "enabled". When the option is set to "enabled", ACL support is enabled; otherwise it is disabled. When ACL support is disabled, the system presents limited ACL support as in prior releases of DD OS, where only ACLs that can be represented in UNIX permission bits can be set.

idmap-type: This option has the possible values "rid" and "none". The default value is "rid". When the option is set to "rid", the SAMBA idmap rid/tdb mapping is used, which is also the mapping scheme in release 4.4. When the option is set to "none", all CIFS users are mapped to a local UNIX user named 'cifsuser' belonging to the local UNIX group 'users'. This option can only be set to "none" when ACL support is enabled.


Both options can only be set when CIFS is not enabled. If CIFS is running, disable CIFS services first before setting these options.

Whenever the idmap type is changed, file system metadata conversion may need to be performed for correct file access. Without any conversion, users may not be able to access the data. A tool is available to perform the metadata conversion; run it with the following command on the Data Domain system:

dd-aclutil -m <root directory where userid/groupid are to be changed>

Note When CIFS ACLs are disabled via cifs option set ntfs-acls disabled, the Data Domain System generates an ACL that approximates the UNIX permissions, regardless of the presence of a previously set CIFS ACL.

Procedure to Turn on ACLs:


(As of 4.5.1, ACL support is turned on automatically, and this procedure is no longer needed.)

If this is a new installation:


1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs option set idmap-type none
4. cifs enable (Allow CIFS clients to connect.)

If this is an existing installation, with pre-existing CIFS data residing on the system:
1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs enable (Allow CIFS clients to connect.)
4. Create ACLs on existing files, as explained under the section "How to set ACL Permissions/Security" above.


Open STorage (OST)

23

The ost command allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage. That is, Data Domain's OST command set provides a user interface to Symantec's OpenStorage, which is itself an API between NetBackup and disk storage. NetBackup docs are available on the web at http://entsupport.symantec.com.

The ost command allows the creation and deletion of logical storage units on the storage server, and the display of space utilization for the same.

OpenStorage is a Data Domain licensed feature. There is one license for the "basic" OpenStorage feature of backing up and restoring image data. A replication license is also required for optimized duplication, for both the source and destination Data Domain systems.

Definitions:

LSU (Logical Storage Unit): The logical storage unit (LSU) represents an abstraction of physical storage. For Data Domain, an LSU is a ddfs directory.

Storage Server: OpenStorage defines a storage server as an entity that writes data to and reads data from disk storage. For Data Domain, a storage server is a Data Domain system.

Image: An OpenStorage image is an entire backup data set, a single fragment from a single backup data set, or multiple fragments from multiple backup data sets. The OpenStorage application writes an image to a single LSU on a single storage server. For Data Domain's purposes, OpenStorage image data is stored in a ddfs file.

The OpenStorage API does not have the capability to create and delete LSUs. This functionality is available only via the Data Domain system. Hence the user interface includes CLIs to manage the LSUs. LSUs are created under the /backup/ost directory. The ost directory is a flat namespace: all LSUs are created under this directory. The enable command creates the ost directory and exports this directory for the OpenStorage plugin.


For performance and status monitoring, the Data Domain system also manages active OpenStorage (plugin) connections. An OpenStorage connection between a plugin and a DDR requires authentication. When enabling OpenStorage on the DDR, a user name must be supplied. The user name is created using the current user add command. All OST LSUs and images are created using this user's credentials (that is, uid and gid). For performance reasons, the Data Domain system limits the number of active connections to 32.

When OpenStorage is disabled on the Data Domain system, existing OpenStorage LSUs and their images remain. Image data can be accessed once OpenStorage is re-enabled. If OpenStorage is disabled, an error is returned to subsequent OpenStorage operations. Any active operation already in the pipeline continues until completion.

There may be circumstances when a customer wants to remove all LSUs and images, for which purpose the ost destroy command exists. This command asks the user for the sysadmin password; otherwise it is not carried out.

Overview: steps to enable OST on the DDR


For detailed descriptions of the commands, see subsequent sections.
1. Add the OST license.
2. Add the OST user.
3. Enable the OST feature.
4. Create Logical Storage Units.

Add the OST license.


Add the OST license using the license add command:

license add license-code

The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only. For example:

# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.

Further details on licenses can be found in the chapter on Configuration Management.


Add the ost user - set the ost user to user-name


To set the ost user to user-name, use the ost set user-name user-name command. (This command can be executed while ost is enabled.)

Note This is the username/password that will be used as the NetBackup credentials to connect to this DDR. These credentials must be added to each NetBackup media server that connects to this DDR. Refer to the OpenStorage chapter in the NetBackup Shared Storage Guide for this step.

ost set user-name user-name

For example:

# ost set user-name ost
OST user set to ost.
Previous user: none set

Reset the ost user back to the default (no user set)
To reset the ost user back to the default (no user set), use the ost reset user-name command. (This command can be executed while ost is enabled.) ost reset user-name

Display the current ost user


To show the current ost user, use the ost show user-name command. ost show user-name
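For example (the same output appears in the sample workflow at the end of this chapter):

# ost show user-name
OST user: ost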

Enable the OST feature


To allow storage server capabilities for the Data Domain system, use the ost enable command.

Note This command requires a valid user account. Before doing an ost enable, an ost user must be set using the ost set user-name user-name command. If no user is set, ost is not enabled, and an error message appears.

The ost enable command creates and exports the /backup/ost directory. Administrative users only.

ost enable

For example:

# ost enable
OST enabled.

If the user changes, it takes effect at the next 'ost enable'. If the uid and gid change, all images and LSUs are changed at the next 'ost enable'.

Disable the OST feature


To disable storage server capabilities for the Data Domain system, use the ost disable command. This command requires a valid user account. Administrative users only.

ost disable

Show the current status (enabled or disabled) for ost


The ost status operation shows the current status (enabled or disabled) for ost. For example:

# ost status
OST status: enabled

Create an LSU (logical storage unit) with the given LSU-name


The ost lsu create lsu-name operation creates the logical storage unit with the given lsu-name. Administrative users only.

ost lsu create lsu-name

After a filesys destroy, an ost disable and an ost enable should be done before doing an ost lsu create; otherwise an error results. (Caution: the filesys destroy command irrevocably destroys all data in the /backup data collection, including all virtual tapes, and creates a newly initialized (empty) file system.)
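For example, to create an LSU named lsu66 (the same hypothetical name used in the delete example below):

# ost lsu create lsu66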

Delete an LSU

The ost lsu delete lsu-name operation deletes all images in the logical storage unit with the given lsu-name. Corresponding NetBackup Catalog entries must be manually removed (expired). A prompt asks for the sysadmin's password, which must be entered in order to proceed. Administrative users only.

ost lsu delete lsu-name

For example, to empty the LSU lsu66 of all its contents:

# ost lsu delete lsu66

Delete all images and LSUs on the Data Domain system


To delete all images and LSUs on the Data Domain system, use the ost destroy command. Corresponding NetBackup Catalog entries must be manually removed (expired). A prompt asks for a sysadmin password, which must be entered in order to proceed. Administrative users only. ost destroy

Display LSU / or all the LSUs on the Data Domain system


Use the ost lsu show command to display all the logical storage units. If an lsu-name is given, the command displays all the images in that logical storage unit. If compression is specified, the original, globally compressed, and locally compressed sizes of the logical storage unit or images are also displayed.

ost lsu show [compression] [lsu-name]

Note Use ctrl-c to interrupt any of the ost lsu show commands, whose output can be very long.

Examples:

# ost lsu show
List of LSUs:
LSU_NBU
LSU_NBU1
LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
LSU_TM1
TEST

# ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1::
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]

SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1: Total files: 4; bytes/storage_used: 206.6
Original Bytes: 437,850,584
Globally Compressed: 2,149,216
Locally Compressed: 2,113,589
Meta-data: 6,124
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
Original Bytes: 69,198,492,217
Globally Compressed: 507,018,955
Locally Compressed: 409,057,135
Meta-data: 1,411,828
[ rest not shown ... ]

SE@jp1## ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 9.1
Original Bytes: 8,872
Globally Compressed: 8,872
Locally Compressed: 738
Meta-data: 236
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 1.0
Original Bytes: 114,842,092
Globally Compressed: 114,842,092
Locally Compressed: 112,106,468
Meta-data: 382,576
[ rest not shown ... ]

Show ost statistics for the Data Domain system


The ost show stats operation shows ost statistics for the Data Domain system.

ost show stats [interval seconds]

Note This command is different from the ost show stats interval seconds command, which shows a different set of ost stats.

The ost show stats operation shows the following ost statistics for the Data Domain system since the last ost enable command:
Output of an earlier show stats command
Number of bytes written to ost images contained in logical storage units
Number of bytes read from ost images contained in logical storage units
Number of ost images created in logical storage units
Number of ost images deleted from logical storage units

For each statistic displayed, the number of errors encountered for that operation is displayed next to it in brackets. Example:

# ost show stats
07/23 12:01:05
OST statistics:
OSTGETATTR      : 4    [0]
OSTLOOKUP       : 13   [9]
OSTACCESS       : 0    [0]
OSTREAD         : 0    [0]
OSTWRITE        : 329  [0]
OSTCREATE       : 2    [0]
OSTREMOVE       : 0    [0]
OSTREADDIR      : 0    [0]
OSTFSSTAT       : 20   [0]
FILECOPY_START  : 0    [0]
FILECOPY_ABORT  : 0    [0]
FILECOPY_STATUS : 0    [0]
OSTQUERY        : 11   [0]
OSTGETPROPERTY  : 14   [0]

                      Count        Errors
-------------------   ----------   ------
Image creates         2            0
Image deletes         0            0
Total bytes written   10,756,096   0
Total bytes read      0            0
Other                 0            0
-------------------   ----------   ------

Show ost statistics for the Data Domain system over an interval
The ost show stats interval seconds operation shows various ost statistics for the Data Domain system.

ost show stats interval seconds

This command displays OST statistics, namely, the number of Kibibytes read and written per the given interval of time.

Note This command is different from the ost show stats command, which shows a different set of ost stats.

For example:

# ost show stats interval 1
07/23 12:03:35
Write KB/s   Read KB/s
----------   ---------
87,925       0
69,474       0
84,080       0
76,410       0
4,339        0
2,380        0
17,281       0
21,854       0
27,018       0
26,682       0
21,899       0
11,667       0
25,236       0
21,898       0
25,700       0
12,972       0
07/23 12:03:54
Write KB/s   Read KB/s
----------   ---------
15,796       0
27,414       0
27,893       0
18,388       0
3,245        0
27,194       0

Display an ost histogram for the Data Domain system


The ost show histogram operation shows an ost histogram for the Data Domain system.

ost show histogram

This command displays the OST stats and histogram. This is for performance analysis: latencies of ost operations. Example:

# ost show histogram
Operation         mean-ms  stddev  max-s  <10ms  100ms  1s  10s  >10s
----------------------------------------------------------------------
OSTGETATTR        0.0      0.0     0.0    46     0      0   0    0
OSTLOOKUP         0.1      0.0     0.0    88     0      0   0    0
OSTACCESS         0.4      0.1     0.0    8      0      0   0    0
OSTREAD           0.0      0.0     0.0    0      0      0   0    0
OSTWRITE          0.0      0.0     0.0    0      0      0   0    0
OSTCREATE         1.0      0.1     0.0    14     0      0   0    0
OSTREMOVE         0.0      0.0     0.0    0      0      0   0    0
OSTREADDIR        0.3      0.2     0.0    17     0      0   0    0
OSTFSSTAT         0.0      0.0     0.0    5011   0      0   0    0
FILECOPY_START    0.0      0.0     0.0    0      0      0   0    0
FILECOPY_ABORT    0.0      0.0     0.0    0      0      0   0    0
FILECOPY_STATUS   0.0      0.0     0.0    0      0      0   0    0
OSTQUERY          0.0      0.0     0.0    2710   0      0   0    0
OSTGETPROPERTY    0.8      3.9     0.2    2713   1      0   0    0


Clear all ost statistics


To clear all ost statistics, use the ost reset stats command. Administrative users only. (This command can be executed while ost is enabled.) ost reset stats

Display ost connections


To display the maximum number of allowed connections and the list of current active connections, use the ost show connections command.

ost show connections

For example:

# ost show connections
Max connections: 32
Active clients:
-------
zion.datadomain.com

Display statistics on active optimized duplication operations


To show the status of all the current active inbound and outbound optimized duplication operations, use the ost show image-duplication active command.

ost show image-duplication active

If active operations exist, the following information is displayed:
Name of the file.
Total number of logical bytes to transfer.
Number of logical bytes already transferred.
Number of real bytes transferred.

For example:

# ost show image-duplication active
07/24 18:11:54
Inbound image name
zion.datadomain.com_1184802025_C2_F1:1184802025:jp1_policy1:4:1::
Logical bytes received              1,800,000
Real bytes received                 900,000
Outbound image name
zion.datadomain.com_1184802025_C1_F1:1184802025:jp1_policy1:4:1::
Logical bytes to transfer           4,000,000
Logical bytes already transferred   2,000,000
Real bytes transferred              1,000,000

Sample workflow sequence:


As an illustration, the following shows a sequence of commands and their outputs which might be seen by a typical user:

# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.
SE@jp1## ost set user-name ost
OST user set to ost.
Previous user: none set
SE@jp1## ost show user-name
OST user: ost
SE@jp1## ost enable
OST enabled.
SE@jp1## ost status
OST status: enabled
SE@jp1## ost show connections
Max connections: 32
Active clients:
-------
zion.datadomain.com
SE@jp1## ost lsu create LSU_NBU3
Created LSU LSU_NBU3
SE@jp1## ost lsu show
List of LSUs:
LSU_NBU1
LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
SE@jp1## ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1::
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]
SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
Original Bytes: 69,198,492,217
Globally Compressed: 507,018,955
Locally Compressed: 409,057,135
Meta-data: 1,411,828
LSU_NBU1: Total files: 54; bytes/storage_used: 49.5
Original Bytes: 24,647,055,768
Globally Compressed: 1,441,351,596
Locally Compressed: 493,870,761
Meta-data: 4,536,592
[ rest not shown ... ]
SE@jp1## ost lsu show compression LSU_NBU2
List of images in LSU_NBU2 and their compression info:
zion.datadomain.com13542_1182889273_C1_HDR:1182889273:PrequalPolicy:4:1:::
Total files: 1; bytes/storage_used: 11.5
Original Bytes: 17,064
Globally Compressed: 17,064
Locally Compressed: 1,218
Meta-data: 264
zion.datadomain.com13542_1182889273_C1_F1:1182889273:PrequalPolicy:4:1:::
Total files: 1; bytes/storage_used: 993.8
Original Bytes: 4,227,773,676
Globally Compressed: 12,917,108
Locally Compressed: 4,219,441
Meta-data: 34,508
[ rest not shown ... ]
SE@jp1## ost lsu delete LSU_NBU2
Please enter sysadmin password to confirm this command:
The 'ost lsu delete' command will delete all images in the lsu. Are you sure? (yes|no|?) [no]: y
ok, proceeding.
LSU LSU_NBU2 destroyed.
SE@jp1## ost lsu delete LSU_NBU_ARCHIVE
Please enter sysadmin password to confirm this command:
LSU LSU_NBU_ARCHIVE destroyed.


Virtual Tape Library (VTL) - CLI

24

The Data Domain VTL features are divided into two chapters. This chapter covers the CLI (Command Line Interface). For information on the GUI (Graphical User Interface), see the other chapter, entitled Virtual Tape Library (VTL) - GUI. The Data Domain VTL feature allows backup applications to connect to and manage a Data Domain System as though the Data Domain System were a stand-alone tape library. All of the functionality supported with tape is available with a Data Domain System. Also, as with a physical stand-alone tape library, the movement of data from a system using VTL to a physical tape must be managed by backup software, not by the Data Domain system. Virtual tape drives are accessible to backup software in the same fashion as physical tape devices. Devices appear to backup software as SCSI tape drives. A virtual tape library appears to software as a SCSI robotic device accessed through standard driver interfaces. The VTL feature:

Communicates between a backup server and a Data Domain System through a Fibre Channel interface. The Data Domain System must have a Fibre Channel interface card in the PCI card array.
Is compatible with all Data Domain DD400 and above (DD500, DD600, etc.) series Data Domain Systems.
Supports the tape drive model IBM LTO-1.
Supports the tape library personalities StorageTek L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use.

Note Use tape and library drivers that are supplied by your backup software vendor and that support the IBM LTO-1 drive and StorageTek L180 library. The RESTORER-L180 works with the same drivers as the StorageTek L180.

The number of recommended concurrent virtual tape drive instances is platform dependent and is the same as the number of recommended streams between a Data Domain system and a backup server. Note that the number is system-wide and includes all streams from all sources, such as VTL, NFS, and CIFS. See Data Streams Sent to a Data Domain system on page 5 for platform limits.


Supports 16 libraries (16 concurrently active virtual tape library instances). Access to VTLs and tape drives can be managed with the Access Grouping feature. See Access Groups (for VTL Only) on page 377.
Supports up to 64 tape drives (64 concurrently active virtual tape drive instances).
Supports up to 100,000 tapes (cartridges) of up to 800 GiB for an individual tape (Gibibytes, the base 2 equivalent of Gigabytes).
Includes a pool feature for replication of tapes by defined pools. See Pools on page 387 and the VTL command output examples in this chapter. See Replicating VTL Tape Cartridges and Pools on page 252 for replication details.
Includes internal Data Domain system data structures for each virtual data cartridge. The structures have a fixed amount of space that is optimized for records of 16 KiB (Kibibytes, the base 2 equivalent of Kilobytes) or larger. Smaller records use the space at the same rate per record as larger records, leading to a virtual cartridge being marked as full when the amount of data is less than the defined size of the cartridge.

Note Data Domain strongly recommends that backup software be set up to use a minimum record (block) size of 64 KiB or larger. Larger sizes usually give faster performance and better data compression. If you change the size after initial configuration, data written with the original size becomes un-readable.

- Supports replication between Data Domain Systems. A source Data Domain System exports received virtual tapes (each tape is seen as a file) into a virtual vault and leaves the tapes in the vault. On the destination, each tape (file) is always in a virtual vault.
- Does not protect virtual tapes from a Data Domain System filesys destroy command. The command deletes all virtual tapes.
- Handles data received by a Data Domain System during a power loss so that backup software sees the data in the same way as with tape drives in a power loss situation. The strategy your backup software uses to protect data during a loss of power to tape drives gives the same results with a loss of power to a Data Domain System.
- Responds to the mtx status command from a 3rd-party physical storage system in the same way as would a tape library. If the Data Domain System virtual library has registered any change since the last contact from the 3rd-party physical storage system, the first use of the mtx status command returns incorrect results. Use the command a second time for valid results.
- Supports simultaneous use of tape library and file system (NFS/CIFS/OST) interfaces.
- Is a licensed feature for a Data Domain System. Contact your Data Domain representative for licensing details.


Compatibility Matrix
For specific backup software and hardware configurations tested and supported by Data Domain, see the VTL matrices at the Data Domain Support web site: https://support.datadomain.com/compat_matrix.php

Enable VTLs
To start the VTL process and enable all libraries and drives, use the vtl enable option. Administrative users only.

vtl enable

Create a VTL
To create a virtual tape library, use the vtl add operation. The VTL process must be enabled (use the vtl enable command) to allow the creation of a library. Administrative users only. If incorrect values are entered for any of the command variables, a list of valid values is displayed.

vtl add vtl_name [model model] [slots num_slots] [caps num_caps]

vtl_name  A name of your choice.
model  A tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update.
num_slots  The number of slots in the library. The number of slots must be equal to or greater than the number of drives. The maximum number of slots for all VTLs on a Data Domain System is 10000. The default is 20 slots.
num_caps  The number of cartridge access ports (CAPs). The default is 0 (zero) and the maximum is 10 (ten).

For example, to create a VTL with 25 slots and two cartridge access ports:

# vtl add VTL1 model L180 slots 25 caps 2

After adding a VTL, client systems may not see the VTL. To make an unseen VTL visible, try the following:

- Do a rescan operation on the client. This is the least disruptive action.
- Use the vtl reset hba command on the Data Domain System. Active backup sessions may be disrupted and fail.
- Use the vtl disable and vtl enable commands on the Data Domain System. Disabling and enabling take longer than the vtl reset hba command, so active backup sessions are very likely to fail.
- Reboot the Data Domain System or the client or both. Active backup sessions fail.

Delete a VTL
To remove a previously created virtual tape library, use the vtl del option. If the library name is not valid, a list of valid library names is displayed. Administrative users only.

vtl del vtl_name
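For example, to remove the hypothetical library VTL1 created in the earlier example:

# vtl del VTL1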

Disable VTLs
To disable all VTL libraries and shut down the VTL process, use the vtl disable option. Administrative users only.

vtl disable

Broadcast new VTLs and VTL Changes


When new VTLs or changes to VTLs are not seen by clients, the vtl reset hba command is one way to alert clients. (In some cases, new or changed LUNs are not seen.) The command is faster and less disruptive than using the vtl disable and vtl enable commands (which also alert clients about new VTLs and changes), but it may still cause active backup sessions to fail. A rescan operation on the client is recommended when multiple clients access the Data Domain System.

vtl reset hba


Create New Drives


To create a new virtual drive for a VTL, use the vtl drive add option. Administrative users only.

vtl drive add vtl_name [count num_drives] [model model]

num_drives  The number of tape drives to add to the library. The maximum number of drives for all VTLs on a Data Domain System is 64 (64 concurrently active virtual tape drive instances), no matter how many VTLs it has.
model  A tape library model name. The currently supported model names are L180 and RESTORER-L180.

Note The maximum number of libraries possible is 16.
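For example, a minimal sketch (assuming the hypothetical library VTL1 from the earlier example) that adds two drives:

# vtl drive add VTL1 count 2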

Remove Drives
Use the vtl drive del option to remove drives from a VTL. Administrative users only.

vtl drive del vtl_name drive drive_number [count num_to_del]

drive_number  The first drive to delete.
num_to_del  Allows you to delete more than one drive at a time, starting with drive_number.
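For example, a hypothetical command that deletes two drives from VTL1, starting at drive 3:

# vtl drive del VTL1 drive 3 count 2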

Use a Changer
Each VTL library has exactly one media changer, although it can have several tape drives. The word device refers to changers and tape drives. A changer has a model name (for example, L180). Each changer can have a maximum of 1 LUN (Logical Unit Number). The following CLI commands use changers or display information about them:

# vtl group create
# vtl group del
# vtl group modify
# vtl group use
# vtl group show


Display a Summary of All Tapes


To display a summary of all tapes on a Data Domain system, use the vtl tape show all summary option.

vtl tape show all summary

The display for the summary option gives the following types of information with values appropriate for your system:

# vtl tape show all summary
... processing tapes...
VTL Tape Summary
----------------
Total number of tapes:      5
Total pools:                1
Total size of tapes:        500.0 GiB
Total space used by tapes:  113.7 GiB
Average Compression:        20x

Total number of tapes is the number of tapes configured in the scope that was requested in the command, be it a system, a pool, etc. Total pools is the number of default and user-defined tape pools. A Data Domain system always has one default tape pool. Total size of tapes is the total capacity of all configured tapes in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). Total space used by tapes is the amount of data sent to all tapes (before compression). Average Compression is the average of the compression value for all tapes that hold data. If data is stored elsewhere on the Data Domain system and then identical data is stored on tapes, the tape compression value can be very high, as the data on the virtual tapes takes up no new disk space.

To display a summary of information for a particular pool or vault, use the vtl tape show <device> summary option:

vtl tape show pool pool-name summary
vtl tape show vault vtl-name summary


Create New Tapes


To create new tapes, use the vtl tape add option. All new tapes go into the virtual vault. Administrative users only.

Note On a destination Data Domain System, manually creating a tape is not permitted.

vtl tape add barcode [capacity capacity] [count count] [pool pool]

barcode  The 8-character barcode must start with six numeric or upper-case alphabetic characters (from the set {0-9, A-Z}) and end in a two-character tag of L1, LA, LB, or LC for the supported LTO-1 tape type, where:

- L1 represents a tape of 100 GiB capacity
- LA represents a tape of 50 GiB capacity
- LB represents a tape of 30 GiB capacity
- LC represents a tape of 10 GiB capacity

(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If capacity is included, it overrides the two-character tag.) The numeric characters immediately to the left of L set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100. A few representative sample barcodes:

- 000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
- AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
- AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
- AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name and cannot increment.
- AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
- 000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name and cannot increment.
- 5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name and cannot increment.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.


Automatic incrementing of the barcode when creating more than one tape works as follows: start at the 6th character position, just before the L. If that character is a digit, increment it. If an overflow occurs (9 to 0), move one position to the left: if that character is a digit, increment it; if it is alphabetic, stop.

Data Domain recommends only creating tapes with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
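For instance, a hypothetical run that exercises the overflow rule:

# vtl tape add AAA099L1 count 2

This creates AAA099L1 and then AAA100L1: the 9 in the 6th position overflows to 0, the 9 to its left also overflows, and the 0 in the 4th position is incremented.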

capacity  The number of gigabytes of size for each tape (overrides the barcode capacity designation). The upper limit is 800. For the efficient reuse of Data Domain System disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less.
count  The number of tapes to create. The default is 1 (one).
pool  Put the tapes into a pool. The pool is Default if none is given. A pool must already exist to use this option. Use the vtl pool add command to create a pool.

For example, to create 5 tapes starting with a barcode of TST010L1:

# vtl tape add TST010L1 count 5

Import Tapes
To move existing tapes from the vault to a slot, drive, or cartridge access port (CAP), use the vtl import option. Administrative users only.

Rules for the number of tapes imported: the number of tapes that you can import at one time is limited by:

- The number of empty slots. (In no case can you import more tapes than the number of currently empty slots.)
- The number of slots that are empty and that are not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved. A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)

To sum up, the number of tapes you can import equals the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin:

   # of empty slots
 - # of tapes that came from slots (we reserve the slot of each)
 - # of tapes of unknown origin (we reserve a slot for each)
 -------------------------
 = # of tapes you can import
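As a hypothetical worked example: with 10 empty slots, 2 tapes in drives that came from slots, and 1 tape of unknown origin in a drive, you can import 10 - 2 - 1 = 7 tapes.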

If a tape is in a pool, you must use the pool option to identify the tape. Use the vtl tape show vtl-name command to display currently available slots; the same command displays the slots that are currently used. Use the vtl tape show vault command to display barcodes for all tapes in the vault. Use backup software commands from the backup server to move VTL tapes to and from drives.

vtl import vtl_name barcode barcode [count count] [pool pool] [element {slot | drive | cap}] [address addr]

For example, to import 5 tapes starting with a barcode of TST010L1 into the library VTL1:
# vtl import VTL1 barcode TST010L1 count 5

Default values: element defaults to slot and address defaults to 1. Therefore the above command is equivalent to:
# vtl import VTL1 barcode TST010L1 count 5 element slot address 1

Examples of importing:
Import 3 tapes to a CAP:
# vtl import vtl2 barcode HHH000L1 count 3 element cap address 1
... imported 3 tape(s)...
Processing tapes....
Barcode   Pool     Location    Type   Size     Used (%)         Comp  ModTime
--------  -------  ----------  -----  -------  ---------------  ----  -------------------
HHH000L1  Default  vtl2 cap 1  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1  Default  vtl2 cap 2  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH002L1  Default  vtl2 cap 3  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
--------  -------  ----------  -----  -------  ---------------  ----  -------------------

Import from vault to slots 31 and 32, then display only those two barcodes:

# vtl import vtl2 barcode HHH000L1 count 2 element slot address 31
... imported 2 tape(s)...
# vtl tape show vtl2 barcode HHH00*L1 count 2
Processing tapes....
Barcode   Pool     Location      Type   Size     Used (%)         Comp  ModTime
--------  -------  ------------  -----  -------  ---------------  ----  -------------------
HHH000L1  Default  vtl2 slot 31  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1  Default  vtl2 slot 32  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
--------  -------  ------------  -----  -------  ---------------  ----  -------------------

VTL Tape Summary
----------------
Total number of tapes:      2
Total pools:                1
Total size of tapes:        200 GiB
Total space used by tapes:  0.0 GiB

Export Tapes
Remove tapes from a slot, drive, or cartridge access port (CAP). Use the vtl tape show vtl-name command to match slots and barcodes. The removed tapes revert to the vault. Address is the number of the slot, drive, or cartridge access port. To export tapes, use the command:

vtl export vtl_name {slot | drive | cap} address [count count]

For example, to export 5 tapes starting from slot 1 from the library VTL1:

# vtl export VTL1 slot 1 count 5

Remove Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes, use the vtl tape del option. The tapes must be in the vault, not in a VTL. Use the vtl tape show vault command to display barcodes. If count is used, the command removes that number of tapes in sequence starting at barcode.

If a tape is in a pool, you must use the pool option to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.


Note On a destination Data Domain System, manually removing a tape is not permitted.

vtl tape del barcode [count count] [pool pool]

For example, to remove 5 tapes starting with a barcode of TST010L1:

# vtl tape del TST010L1 count 5

Move Tape
Only one tape can be moved at a time, from one slot/drive/CAP to another. To move a tape, use the vtl tape move command:

vtl tape move vtl-name source {slot | drive | cap} src-address destination {slot | drive | cap} dest-address
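For example, a hypothetical move (the library name and addresses are assumptions) that follows the syntax above, taking the tape in slot 3 of VTL1 and loading it into drive 1:

# vtl tape move VTL1 source slot 3 destination drive 1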

Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window, reached from anywhere the Search Tapes button appears, for example Virtual Tape Libraries...VTL Service...Libraries...click the Search Tapes button. A window appears, allowing the user to search for tapes by Location, Pool, and/or Barcode. Count gives the number of tapes, from a given starting tape, that the user wishes to view; it only makes sense when the Barcode field is filled in.

Set a Private-Loop Hard Address


Some backup software requires all private-loop targets to have a hard address (loop ID) that does not conflict with another node. Use the vtl option set loop-id command to set a hard address for a Data Domain system. The range for value is 0 - 125. For a new value to take effect, disable and enable VTL or reboot the system.

vtl option set loop-id value

For example, to set a value of 5 and have the value take effect:

# vtl option set loop-id 5
# vtl disable
# vtl enable


Reset a Private-Loop Hard Address


To reset the private-loop hard address to the system default of 1 (one), use the vtl option reset loop-id command.

vtl option reset loop-id

Enable Auto-Eject
Use the vtl option enable auto-eject operation to cause any tape that is put into a cartridge access port (CAP) to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the CAP.

vtl option enable auto-eject

Note With auto-eject enabled, a tape moved from any element to a CAP will be ejected to the vault unless an ALLOW_MEDIUM_REMOVAL was issued to the library to prevent the removal of the medium from the CAP to the outside world.

Enable Auto-Offline
Backup software and some diagnostic tools may not take a tape offline before trying to move the tape out of a drive; the backup or diagnostic operations can then hang. If your site experiences such behavior, you can use the vtl option enable auto-offline command to automatically take a tape offline when a move operation is generated.

vtl option enable auto-offline

Disable Auto-Eject
Use the vtl option disable auto-eject operation to allow a tape in a cartridge access port to remain in place.

vtl option disable auto-eject

Disable Auto-Offline
Use the vtl option disable auto-offline command to disable automatically taking a tape offline when a move operation is generated.

vtl option disable auto-offline

Display the Auto-Offline Setting


To display the current setting for the auto-offline option, use the vtl option show auto-offline operation.

vtl option show auto-offline

Display the Private-Loop Hard Address Setting


To display the most recent setting of the loop ID value (which may or may not be the current in-use value), use the vtl option show loop-id command.

vtl option show loop-id

Display VTL Status


To display the status of the VTL process, use the vtl status option.

vtl status

The display is similar to the following:

# vtl status
VTL admin_state: enabled, process_state: running

VTL admin_state can be enabled or disabled. process_state can be:

- running  The system is enabled and active.
- starting  The vtl enable command is bringing up the VTL process.
- stopping  The vtl disable command is shutting down the VTL process.
- stopped  The VTL process is disabled.
- timing out  The VTL process crashed and is attempting an automatic restart.
- stuck  After a number of automatic VTL process restarts failed, the process was not able to shut down normally, and attempts to kill the process failed.

Display VTL Configurations


To display configuration details for all or a single virtual tape library, use the vtl show config option.

vtl show config [vtl_name]
The display is similar to the following:

# vtl show config
Library Name  Library Model  Drive Model  Slots/Caps
------------  -------------  -----------  ----------
VTL1          10001          1            120
------------  -------------  -----------  ----------

Display All Tapes


To display information about tapes on a Data Domain system, use the vtl tape show option. The Used(%) column shows the amount of data sent to the tape by the backup client, not the amount of actual disk space used by compressed data.

vtl tape show {all | vault | vtl-name | pool pool} [summary] [count count] [barcode barcode] [sort-by {barcode | modtime | capacity | usage | percentfull} [ascending | descending]]

The display is similar to the following:

# vtl tape show all
... processing tapes...
Barcode   Pool     Location      Type   Size       Used(%)           Comp  ModTime
--------  -------  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1  Default  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1  Default  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1  Default  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1  Default  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1  Default  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43

The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool. The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault. The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.

Display Tapes by VTL


To display information about all tapes in a VTL, use the vtl tape show vtl_name option. The display for the vtl-name option includes a slot number in the Location column. The Size and Used columns show the amount of data sent to the tape by the backup client, not the amount of actual disk space used by compressed data.

vtl tape show vtl_name

# vtl tape show VTL1
... processing tapes...
Barcode   Pool     Location      Type   Size       Used(%)           Comp  ModTime
--------  -------  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1  Default  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1  Default  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1  Default  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1  Default  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1  Default  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43

The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool. The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault. The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.

Display All Tapes in the Vault


To display all tapes that are in the virtual vault, use the vtl tape show vault option.

vtl tape show vault

When using count and barcode together, you can use wildcards in the barcode to have the count be valid. An asterisk (*) matches any character in that position and all further positions. A question mark (?) matches any character in that position. For example, the following command displays three tapes starting with barcode ABC00:

# vtl tape show vault count 3 barcode ABC00*L1

Display Tapes by Pools


To display information about tapes in pools, use the vtl tape show pool option.

vtl tape show pool pool-name

The display is similar to the following:

# vtl tape show pool pl22
... processing tapes...
Barcode   Pool  Location      Type   Size       Used(%)           Comp  ModTime
--------  ----  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1  pl22  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1  pl22  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1  pl22  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1  pl22  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1  pl22  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43

The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool. The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault. The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.

Display VTL Statistics


To display statistics for all or a single virtual tape library, use the vtl show stats option. The statistics are updated every two seconds. Use the <Ctrl>-c key combination to stop the command. The vtl_name variable is case sensitive.

vtl show stats <vtl> [drive {<drive-list> | all}] [port {<port-list> | all}] [interval <secs>] [count <count>]

If the optional drive list and port list are both absent, the command output is the total traffic stats of all the devices on all the VTL ports. If the drive list and/or port list is specified, the command output is the detailed stats information of the specified devices that are accessible on the specified VTL ports. The default drive list and port list is all.

To show a summary, use:

vtl show stats vtl-name

To show the detailed stats information for all the devices that are accessible, use:

vtl show stats vtl-name drive all port all

The display is similar to the following:

# vtl show stats VTL1
04/17 14:41:27
Drive  Port  ops/s  Read KiB/s  Write KiB/s  Soft Errors  Hard Errors
-----  ----  -----  ----------  -----------  -----------  -----------
1      1a    250    112972      75493        2            0
       1b    0      0           0            0            0
2      1b    76     9150        76490        0            1
-----  ----  -----  ----------  -----------  -----------  -----------

Note KiB = Kibibyte, the base 2 equivalent of KB, Kilobyte.

The Drive column gives a list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives. The Port column gives a list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a. The ops/s column gives the number of operations per second currently or recently achieved by the port. The Read KiB/s column gives the number of Kibibytes per second read by the port. The Write KiB/s column gives the number of Kibibytes per second written by the port.

The Soft Errors column gives the number of errors that the system recovered from. No preventative measures or maintenance actions are necessary for soft errors. If thousands of soft errors occur in a short period of time, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from.

The Hard Errors column gives the number of errors that the system was unable to recover from. Hard errors should not normally be seen. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, go to the Data Domain Enterprise Manager GUI for the system, click the Log Files link in the left menu bar, and click the file vtl.info to open and view it. In addition, it may be helpful to view the files kern.info and kern.error through the CLI (see the chapter Log File Management).

Display Tapes Using Sorting and Wildcards

# vtl tape show vault barcode AAA00*L1 sort-by percentfull
Processing tapes....
Barcode   Pool     Location  Type   Size   Used (%)          Comp  ModTime
--------  -------  --------  -----  -----  ----------------  ----  -------------------
AAA001L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:45:28
AAA002L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:46:40
AAA003L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:48:04
AAA004L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  6x    2007/09/22 18:48:39
AAA005L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  9x    2007/09/22 18:49:17
AAA006L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:49:56
AAA007L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:51:02
AAA008L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:53:52
AAA009L1  Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:56:38
AAA000L1  Default  vault     LTO-1  1 GiB  0.1 GiB (13.10%)  19x   2007/09/27 18:00:18
--------  -------  --------  -----  -----  ----------------  ----  -------------------

VTL Tape Summary
----------------
Total number of tapes:      10
Total pools:                1
Total size of tapes:        10 GiB
Total space used by tapes:  8.7 GiB
Average Compression:        4.5x

Procedure: Manually Export a Tape


To manually export a tape, use the vtl tape show library-name command to display the drives in use for a library, and then export a tape from a drive:

vtl tape show library-name
vtl export library-name drive drive-name

For example:

# vtl tape show libr01
Barcode   Pool     Location      Type   Size       Used(%)          Comp  ModTime
--------  -------  ------------  -----  ---------  ---------------  ----  -------------------
NNN000L1  Default  vtl2 drive 1  LTO-1  100.0 GiB  0.0 GiB (0.00%)  0x    2007/04/04 08:42:27
--------  -------  ------------  -----  ---------  ---------------  ----  -------------------

VTL Tape Summary
----------------
Total number of tapes:      1
Total pools:                1
Total size of tapes:        100.0 GiB
Total space used by tapes:  0.0 GiB
Average Compression:        0.0x

# vtl export libr01 drive 1
... exported 1 tapes...

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.

Procedure: Retrieve a Replicated Tape from a Destination


Replicating tapes from a source to a destination requires a replication license on both systems. Visualize retrieving a replicated tape from a destination system as physically removing the tape from the source system's VTL and moving the tape to the destination system's VTL. From the point of view of backup software, one tape physically cannot be in two places at the same time.

Backup application behavior for handling replicated tapes varies. To minimize unexpected behavior or error conditions, virtual tapes should remain imported in the destination libraries only for as long as needed. After importing a replicated tape at the destination, follow your backup application's procedures to utilize the replicated tape and then export the tape from the destination library. The objective is to ensure that at any time, only one instance of a replicated tape is visible to the backup application.

The following generic procedure allows you to configure a VTL for replication and retrieve data from a virtual tape that was replicated to a destination Data Domain System. See Replicating VTL Tape Cartridges and Pools on page 252 for further replication detail, and consult your backup application documentation for specific backup procedures.

1. On the source Data Domain system, create the VTL and tapes. Use the vtl add command.
2. Perform and verify one or more backups to the source Data Domain system.
3. Configure replication for the pool to be replicated (for example: /backup/vtc/Default or /backup/vtc/pool-name) using the replication add command.
4. Verify that any tapes targeted for replication from the destination reside in the vault and not in a library. Use the vtl tape show command.
5. Initialize replication for the targeted pool using the replication initialize command. Wait for initialization to complete.
6. As required, perform additional backups to the source. Wait for outstanding backups to complete.
7. Identify the tapes that you need to retrieve from the destination system and have the list available at the destination location.
8. On the source, enter the command replication sync for the target pool to ensure that the source tape and destination tape are consistent. Wait for the command to complete.
9. If the replicated tapes to be retrieved at the destination are still accessible at the source, export the tapes from the source system and, using the backup application, inventory the source VTL.
10. On the destination, create a VTL if one does not already exist. Use the vtl add command. The destination VTL configuration does not have to match the library on the source Data Domain System.
11. Import the tape or tapes to the library using the vtl import command. The replicated tapes should now reside in the destination VTL. From the backup application, inventory the destination VTL. For some configurations or backup application versions, you may need to import the catalog (the backup application database) to use replicated tapes.
12. Read the tapes from the destination system's VTL in the same way that you would read tapes from a library on the source, and perform required backup application operations such as cloning to physical tape.
13. After using the replicated tapes, export the tapes from the destination using the vtl export command.
14. If necessary, import the replicated tapes on the source system using the vtl import command. The replicated tapes should now reside in the source system's VTL.
15. From the backup application, inventory the destination VTL.

Access Groups (for VTL Only)


A VTL Access Group (a.k.a. group, VTL group, or Access Group) is a collection of initiator WWPNs or aliases (see The vtl initiator Command) and the devices they are allowed to access. The Data Domain VTL Access Groups feature allows clients to access only selected LUNs (devices, which are media changers or virtual tape drives) on a Data Domain system. A client that is set up for Access Groups can access only the devices in its groups. Groups:

- A GROUP is a container which consists of initiators and devices (drives or media changer).
- An initiator can be a member of only one GROUP.
- A GROUP can contain multiple initiators.
- A device can be a member of as many groups as desired, but a device cannot be a member of the same GROUP more than once.
- GROUP names are case-insensitive, can be 256 characters in length, and consist of characters from the range A-Za-z0-9_-. The names Default, TapeServer, all, summary, and vtl are reserved and cannot be created, deleted, or have initiators or devices assigned to them.
- A GROUP can contain 92 initiators.
- A maximum of 128 GROUPs is allowed.
- A GROUP can be renamed.

Devices:

- A device can be a member of as many GROUPs as needed, but it occurs only once in a given GROUP.
- It is the device name (or id) that is used to determine membership in a GROUP, not the LUN assigned. A device may have a different LUN assigned in each GROUP it is a member of.
- When adding a device to a group, the FC ports that the device should be visible on can also be specified. Port names are two characters: a digit representing the physical slot the HBA resides in and a character representing the port on the HBA; 3a would be port a on the HBA in slot 3. Acceptable port names are: none, all, or a list of port names separated by commas (3a,4b for example).

To use Access Grouping:


- Create a VTL on the Data Domain system. See Create a VTL on page 359.
- Enable the VTL with the vtl enable command.
- Add a group with the vtl group create command (see below).
- Add an initiator with the vtl initiator set alias command (see below).
- Map a client as an Access Grouping initiator (see below).
- Create an Access Group. See the commands in this section and Procedure: Create an Access Group on page 384.

Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations.

The vtl group Command (Access Group)


Note Group = vtl group = vtl Access Group = Access Group. Anything in VTL with group in it means a vtl Access Group.


A VTL Access Group (a.k.a. group, vtl group, or Access Group) is a collection of initiator WWPNs or aliases (see The vtl initiator Command) and the devices they are allowed to access. This set of commands deals with the group container. Populating the container with initiators and devices is done with the vtl initiator and vtl group commands. When setting up Access Groups on a Data Domain system:

A given device may appear in more than one group when using features such as the Shared Storage Option (SSO).

Create an Access Group

To create an Access Group, use the command:

vtl group create group_name

For example:

# vtl group create moe

This creates a group container of name group_name. Group_name must be unique, must not be longer than 256 characters, and can only contain the characters 0-9a-zA-Z_-. Up to 128 groups may be created. TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.)

Remove an Access Group


vtl group destroy group_name

For example:

# vtl group destroy moe

This removes the group container group_name. Group_name must be empty; see vtl initiator reset alias and vtl group del.

Rename an Access Group


vtl group rename src_group_name dst_group_name

For example:

# vtl group rename moe curly

Renaming allows changing a group's name without the laborious process of first deleting and then re-adding all initiators and devices. Dst_group_name must not exist and must conform to the name restrictions described under Create an Access Group; src_group_name must exist. A rename will not interrupt any active sessions.


Add to an Access Group


Use the vtl group add command to add devices to an Access Group. Each instance of the command can add one device or a list of drives to a group. To group multiple devices for a single group, use the command once for each device or provide a drive list.

vtl group add vtl-name {all | changer | drive drive-list} group group_name [lun lun] [primary-port {all | none | port-list}] [secondary-port {all | none | port-list}]

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option adds all devices in the vtl-name. (The drive-name is a virtual tape drive as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names, which include a space between the word drive and the number.)

A drive-list is accepted, instead of just a drive name or drive number. The drive-list is a comma-separated list of drive numbers. A drive number can be a single number or a range of two numbers separated by a hyphen (-). The drive numbers are integers starting from 1. If the drive-list contains more than one drive, the lun, if specified, is used as the starting LUN and then incremented for each drive. If a LUN that has already been used is encountered, the next one is used.

A group is a collection of initiator WWPNs or aliases (see The vtl initiator Command) and the devices they are allowed to access. The optional lun is the LUN number that the Data Domain system returns to the initiator. The maximum LUN number accepted when creating an Access Group is 255. A LUN number can be used only once for an individual group. The same LUN number can be used with multiple groups.

The option primary-port specifies a set of ports that the device will be visible on (the primary ports). If the option is omitted, or if all is provided, the device is visible on all ports. If none is provided, the device is visible on none of the ports.

The option secondary-port allows the user to specify a second set of ports this device is visible on when the vtl group use secondary command is executed; vtl group use primary falls back to the primary port list. (See also the Switch Virtual Devices between Primary & Secondary Port List section below in this chapter.) If secondary-port is not specified, it defaults to the value of primary-port.


The port-list is a comma-separated list of physical port numbers. A port number is a string in the form of <numeric_number><alphabet>, where the numeric_number denotes the PCI slot and the alphabet denotes the port on a PCI card. Examples are 1a, 1b, or 2a, 2b. It is illegal to provide a port number that does not currently exist on the system.

Because the command accepts a list of virtual devices, it may fail before completing in its entirety. In this case, the changes on the devices that have been processed are undone. All other rules remain the same. (The group must first be created by vtl group create, no duplicate LUNs can be assigned within a group, etc.) The new Access Groups are saved in the registry.

For example, the following two commands add drive 3 and drive 4 (note the space in each name) to the group group22, with a LUN number of 22 for drive 4:

# vtl group add vtl01 drive drive 3 group group22
# vtl group add vtl01 drive drive 4 group group22 lun 22

Delete from an Access Group


Use the vtl group del command to delete one, all, or a list of devices from an individual group. The drive-list is a comma-separated list.

vtl group del vtl-name {all | changer | drive drive-list} group group_name

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option deletes all devices in the vtl-name that are grouped for the group. The drive-name is a virtual tape drive as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. A group is a collection of initiator WWPNs or aliases (see The vtl initiator Command) and the devices they are allowed to access.
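For example, a hypothetical command (reusing the vtl01 and group22 names from the example above) that removes drives 3 and 4 from the group:

# vtl group del vtl01 drive 3,4 group group22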

Modify an Access Group


Use the vtl group modify command to modify one or all Access Groups for an individual group.

vtl group modify vtl-name {all | changer | drive drive-list} [lun lun] [primary-port {all | none | port-list}] [secondary-port {all | none | port-list}]


The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are grouped for the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The initiator is a Data Domain system client that you have mapped as an initiator on the Data Domain system. Use the vtl initiator show command to list known initiators.

The changeable fields are the LUN assignment, primary ports, and secondary ports. If any field is omitted, the current value remains unchanged. A LUN assignment modification applies only to a single drive; providing all or a list of drives is illegal. Some changes can result in the current Access Group being removed from the system, causing the loss of any current sessions, and a new Access Group being created. The registry is updated with the changed Access Groups.
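For example, a hypothetical modification, following the syntax above, that reassigns drive 3 of vtl01 to LUN 30 and changes its port lists (the port names are assumptions):

# vtl group modify vtl01 drive 3 lun 30 primary-port 1a,1b secondary-port 2a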

Display Access Group information


Use the vtl group show command to display configured Access Groups by VTL name, by group name, or all. Note that the syntax is slightly different in the case of a VTL, where the keyword vtl is needed.

vtl group show {all | vtl vtl-name | group-name}

The output of vtl group show vtl vtl-name reflects the use of groups rather than initiators:

# vtl group show vtl ccm2a
Device         Group  LUN  Primary  Secondary  In-use
                           Ports    Ports      Ports
-------------  -----  ---  -------  ---------  ------
ccm2a changer  Moe    6    1a,1b    2b         1a,1b
ccm2a changer  Larry  4    1a,1b    2b         1a,1b
ccm2a drive 5  Curry  6    1a,1b    2a         1a
-------------  -----  ---  -------  ---------  ------

The output of vtl group show group-name also reflects the use of groups rather than initiators:

# vtl group show Moe
Group  Device         LUN  Primary  Secondary  In-use
                           Ports    Ports      Ports
-----  -------------  ---  -------  ---------  ------
Moe    ccm2a changer  6    1a,1b    2b         1a,1b
       ccm2b changer  7    2a       1b         1b
       ccm2c drive 1  0    1a       1a         1a
-----  -------------  ---  -------  ---------  ------

The output of vtl group show all is different again:

# vtl group show all
Group: curly
Initiators: None
Devices: None

Group: group2
Initiators:
Initiator Alias  Initiator WWPN
---------------  -----------------------
moe              00:00:00:00:00:00:00:04
---------------  -----------------------
Devices:
Device Name   LUN  Primary Ports  Secondary Ports  In-use Ports
------------  ---  -------------  ---------------  ------------
VTL1 changer  0    all            all              all
VTL1 drive 1  1    all            all              all
------------  ---  -------------  ---------------  ------------

UPGRADE NOTE: If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it is assumed the system has been recently upgraded. In this case a group is created with the same name as each initiator, and that initiator is added to the newly created group.

In release 4.4.x or later, the LUN masking feature from 4.3 and earlier is replaced by the Access Groups feature. If LUN masking was configured, the upgrade process from 4.3 to 4.4 converts the LUN masking configuration to an access group that is applied to all VTL Fibre Channel ports and that has the initiator's WWNN as a member. In the same way, the default LUN mask in 4.3 is no longer available in 4.4 and later. For devices in the default mask in 4.3, you must create an access group in 4.4 and move the devices into the group for initiators to see the targets.

Switch Virtual Devices between Primary & Secondary Port List


The vtl group use command can be used to switch between the primary and secondary port list in a VTL library.

vtl group use <group-name> vtl vtl-name {all | changer | drive drive-list} {primary | secondary}
vtl group use group group-name {primary | secondary}

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are grouped for the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive.

The port list that the virtual device is visible on is the in-use port list, whether that is the primary or secondary port list. The lists are persistently saved in the registry so that this configuration can be restored after a DDR reboot or a VTL crash/restart. A group is a collection of initiator WWPNs or aliases (see The vtl initiator Command) and the devices they are allowed to access.
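For example, a hypothetical switch of every device grouped for group2 to its secondary port list, using the second syntax form above:

# vtl group use group group2 secondary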

Procedure: Create an Access Group


1. Start the VTL process and enable all libraries and drives.

# vtl enable

2. Create a virtual tape library. For example, to create a VTL called VTL1 with 25 slots and two cartridge access ports:

# vtl add VTL1 model L180 slots 25 caps 2

3. Create a new virtual drive for the tape library VTL1. As the first drive assigned to library VTL1, the system assigns the drive the name VTL1 drive 1.

# vtl drive add VTL1

4. Broadcast VTL changes so they are visible to clients. (Warning: this may cause active backup sessions to fail, so it is best done when there are no active backup sessions.)

# vtl reset hba

5. Create an empty group group2 as a container.

# vtl group create group2

6. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe.

# vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04

7. Put the initiator moe into the group group2.

# vtl group add group2 initiator moe

8. List the Data Domain system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client.

# vtl initiator show
Initiator                Group   WWNN                     Port  Status
-----------------------  ------  -----------------------  ----  -------
moe                      group2  00:00:00:00:00:00:00:04  1a    Online
                         group2  00:00:00:00:00:00:00:05  1b    Online
01:01:01:01:01:01:01:01  n/a     21:00:00:e0:8c:11:33:04  1a    Online
                                 00:00:00:00:00:00:7a:bf  1b    Offline
-----------------------  ------  -----------------------  ----  -------

Initiator                Vendor / Product ID / Revision
-----------------------  ------------------------------------
moe                      Emulex LP10000 FV1.91A5 DV8.0.16.27
01:01:01:01:01:01:01:01  Emulex LP10000 FV1.91A5 DV8.0.16.27
-----------------------  ------------------------------------

9. Create an Access Group entry. This puts VTL1 drive 1 in group2 and, by doing so, allows any initiator in group2 to see VTL1 drive 1.

# vtl group add VTL1 drive 1 group group2

10. Use the vtl group show command to display VTLs and device numbers.

# vtl group show vtl ccm2a
Device         Group  LUN  Primary  Secondary  In-use
                           Ports    Ports      Ports
-------------  -----  ---  -------  ---------  ------
ccm2a drive 1  Moe    6    1a,1b    1a,1b      1a,1b
-------------  -----  ---  -------  ---------  ------

The vtl initiator Command


An initiator is any Data Domain system client's HBA world-wide port name (WWPN). Initiator-name is an alias that maps to a client's world-wide port name (WWPN). Add an initiator alias before adding a VTL Access Group that ties together the VTL devices and the client.

After mapping a client as an initiator, and before adding an Access Group for the client, the client cannot access any data on the Data Domain system.


After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.

Add an Initiator (= add WWPN = set alias)


Use the vtl initiator set alias command to give a client an initiator name on a Data Domain system.

vtl initiator set alias initiator-name wwpn wwpn

This sets the alias initiator_name for the WWPN wwpn. An alias is optional but much easier to use than a full WWPN. If an alias is already defined for the provided WWPN, it is overwritten. The creation of an alias has no effect on any groups the WWPN may already be assigned to. An initiator_name may be up to 256 characters long, may contain only characters from the set 0-9a-zA-Z_-, and must be unique among the set of aliases. A total of 128 aliases are allowed.

The initiator-name is an alias that you create for Access Grouping. Data Domain suggests using a simple, meaningful name. The wwpn is the world-wide port name of the Fibre Channel port on the client system. Use the vtl initiator show command on the Data Domain system to list the Data Domain system's known clients and WWPNs. The wwpn must use colons ( : ).

The following example uses the client name and port number as the alias to avoid confusion with multiple initiators that may have multiple ports:

# vtl initiator set alias client22_2a wwpn 21:00:00:e0:8c:11:33:04

Delete an Initiator (reset alias)


Use the vtl initiator reset alias command to delete a client initiator alias from the Data Domain system. All Access Groups for the initiator must be deleted before deleting the initiator.

vtl initiator reset alias initiator-name

This resets (deletes) the alias initiator_name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. (To remove an initiator from a group, use vtl group del.)

For example:

# vtl initiator reset alias client22

Display Initiators
Use the vtl initiator show command to list one or all named initiators and their WWPNs.

vtl initiator show [initiator initiator-name | port port_number]

For example:

# vtl initiator show
Initiator                Group   Status   WWNN                     WWPN                     Port
-----------------------  ------  -------  -----------------------  -----------------------  ----
21:00:00:e0:8b:9d:3a:a5  group2  Online   20:00:00:e0:8b:9d:3a:a5  21:00:00:e0:8b:9d:3a:a5  6a
                                 Offline  20:00:00:e0:8b:9d:3a:a5  21:00:00:e0:8b:9d:3a:a5  6b
-----------------------  ------  -------  -----------------------  -----------------------  ----

Initiator                Symbolic Port Name
-----------------------  ------------------
21:00:00:e0:8b:9d:3a:a5
-----------------------  ------------------

Pools
The Data Domain pool feature for VTL allows replication by groups of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see Replicating VTL Tape Cartridges and Pools on page 252.

- A pool name can be a maximum of 32 characters.
- A pool with one of the restricted names all, vault, or summary cannot be created or deleted.
- A pool can be replicated no matter where individual tapes are located. Tapes can be in the vault, a library, or a drive.
- You cannot move a tape from one pool to another.
- Two tapes in different pools on one Data Domain system can have the same name. A pool sent to a replication destination must have a pool name that is unique on the destination.
- Data Domain system pools are not accessible by backup software.
- No VTL configuration or license is needed on a replication destination when replicating pools.
- Data Domain recommends only creating tapes with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.

Add a Pool
Use the vtl pool add command to create a pool. The pool-name cannot be all, vault, or summary, and can be a maximum of 32 characters.

vtl pool add pool-name
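For example, to create the hypothetical pool pl22 used in the display example below:

# vtl pool add pl22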

Delete a Pool
Use the vtl pool del command to delete a pool. The pool must be empty before the deletion. Use the vtl tape del command to empty the pool.

vtl pool del pool-name
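For example, assuming the hypothetical pool pl22 has been emptied:

# vtl pool del pl22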

Display Pools
Use the vtl pool show command to display pools.

vtl pool show {all | pool-name}

For example, to display the tapes in pl22:

# vtl pool show pl22
... processing tapes...
Barcode   Pool  Location  Type   Size      Used      Compression
--------  ----  --------  -----  --------  --------  -----------
A00000L1  pl22  VTL1      LTO-1  100.0 GB  100.0 GB  20x
A00004L1  pl22  VTL1      LTO-1  100.0 GB  0.0 GB    0x
A00001L1  pl22  VTL1      LTO-1  100.0 GB  100.0 GB  10x
A00003L1  pl22  VTL1      LTO-1  100.0 GB  0.0 GB    0x

The vtl port Command


The vtl port commands allow the user to enable or disable all the Fibre-Channel ports in port-list, or to show various VTL information in a per-port format.


Enable HBA ports


vtl port enable port-list

Enables all the Fibre-Channel ports in port-list.
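For example, a hypothetical command that enables both ports on the HBA in slot 1, using the comma-separated port-list format described earlier in this chapter:

# vtl port enable 1a,1b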

Disable HBA ports


vtl port disable port-list

Disables all the Fibre-Channel ports in port-list. It is not an error to disable a currently disabled port or to enable a currently enabled port. It is an error to include a non-existent port in port-list.

Show VTL information in per-port format


The Data Domain vtl port show commands show VTL information in a per-port format.

# vtl port show summary

Shows the following information:

- Port -- the physical port number.
- Connection Type
- Link Speed
- Port ID
- Enabled -- the port operational state.
- Status -- shows whether the port is up and capable of handling traffic.

The output is similar to:

Port  Connection Type  Link Speed  Port ID  Enabled  Status
----  ---------------  ----------  -------  -------  -------
6a    Loop             4 Gbps      e8       Yes      Online
6b    N-Port                                Yes      Offline
----  ---------------  ----------  -------  -------  -------

Note GiBps = Gibibytes per second, the base 2 equivalent of GBps, Gigabytes per second.

# vtl port show hardware


Shows the following information.


Model
Firmware
WWNN
WWPN

The output is similar to:

Port  Model    Firmware  WWNN                     WWPN
----  -------  --------  -----------------------  -----------------------
1a    QLE2462  3.03.19   21:00:00:e0:8b:1b:dc:10  20:00:00:e0:8b:1b:dc:10
1b    QLE2462  3.03.19   21:01:00:e0:8b:3b:dc:10  20:01:00:e0:8b:3b:dc:10

# vtl port show stats [ port { <port list> | all } ] [ interval <secs> ] [ count <count> ]

This command shows a summary of the statistics of all the drives in all the VTLs on all the ports where the drives are visible. If the optional port list is absent, the command output is the total traffic stats of all the devices on all the VTL ports. If the port list is specified, the command output is the detailed stats information of the devices that are accessible on the specified VTL ports.
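For example, a sketch of an invocation (with hypothetical values) that reports total traffic statistics every 5 seconds, 10 times:

# vtl port show stats interval 5 count 10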

# vtl port show stats port all

This command shows detailed stats information for all the drives in all the VTLs on all the ports where the drives are visible.

# vtl port show detailed-stats

Shows the following information:

Control Commands -- non read/write commands
Write Commands -- number of WRITE commands
Read Commands -- number of READ commands
In -- number of megabytes written
Out -- number of megabytes read
Link Failures -- count of link failures
LIP Count -- number of LIPs
Sync Losses -- number of times sync loss was detected
Signal Losses -- number of times loss of signal was detected
Prim Seq Proto Errors -- count of errors in primitive sequence protocol
Invalid Tx Words -- number of invalid tx words
Invalid CRCs -- number of frames received with bad CRC

The output is similar to:

Port  Control   Write     Read      In (KiB)  Out (KiB)
      Commands  Commands  Commands
----  --------  --------  --------  --------  ---------
1a    32        10        5         1024      1024
1b    42        10        5         1024      1024

Link      LIP    Sync    Signal  Prim Seq Proto  Invalid   Invalid
Failures  Count  Losses  Losses  Errors          Tx Words  CRCs
--------  -----  ------  ------  --------------  --------  -------
0         2      0       0       0               0         0
0         0      0       0       0               0         0

Note KiB = KibiByte, the base 2 equivalent of KB, KiloByte.


Backup/Restore Using NDMP


The NDMP (Network Data Management Protocol) feature allows direct backup and restore operations between an NDMP Version 2 data server (such as a Network Appliance filer with the ndmpd daemon turned on), and a Data Domain System. NDMP software on the Data Domain System acts, through the command line interface, to provide Data Management Application (DMA) and NDMP server functionality for the filer. The ndmp command on the Data Domain System manages NDMP operations.

Add a Filer
To add to the list of filers available to the Data Domain System, use the ndmp add operation. The user name is a user on the filer and is used by the Data Domain System when contacting the filer. The password is for that user name on the filer. With no password given, the command prompts for one. Note that an add operation for a filer name that already exists replaces the complete entry for that filer name. A password can include any printable character. Administrative users only.

ndmp add filer_name user username [password password]

For example, to add a filer named toaster5 using a user name of back2 with a password of pw1212:

# ndmp add toaster5 user back2 password pw1212

Remove a Filer
To remove a filer from the list of servers available to the Data Domain System, use the ndmp delete operation. Administrative users only.

ndmp delete filer_name

For example, to delete a filer named toaster5:

# ndmp delete toaster5


Backup from a Filer


To back up data from a filer to a file on a Data Domain System, use the ndmp get operation. Administrative users only.

ndmp get [incremental level] filer_name:src_path dst_path

filer_name The name of the filer that holds the information for the backup operation.
src_path The directory on the filer to back up.
dst_path The destination file for the backup data on the Data Domain System.
incremental level The numeric level for an incremental backup, a number between 0 (zero) and 9. Any level greater than 0 backs up only changes since the latest previous backup of the same src_path with a lower-numbered level. Using the get operation without the incremental option is the same as a level 0, or full, backup.

For example, the following command opens a connection to a filer named toaster5 and backs up all data under the directory /vol/vol0. The data is stored in a file located at /backup/toaster5/week0 on the Data Domain System.

# ndmp get toaster5:/vol/vol0 /backup/toaster5/week0

The following incremental backup backs up changes since the last full backup.

# ndmp get incremental 1 toaster5:/vol/vol0 \
/backup/toaster5/week0.day1

Restore to a Filer
To restore data from a Data Domain System to a filer, use one of the ndmp put operations. Note that a filer may report a successful restore even when one or more files failed restoration. For details, always review the LOG messages sent by the filer. Administrative users only.

ndmp put src_file filer_name:dst_path
ndmp put partial src_file subdir filer_name:dst_path

partial Restore a particular directory or file from within a backup file on the Data Domain System. Give the path to the file or subdirectory.
src_file The file on the Data Domain System from which to do a restore to a filer. The src_file argument must always begin with /backup.
filer_name The NDMP server to which to send the restored data.


dst_path The destination for the restored data on the NDMP server.

Some filers require that subdir be relative to the path used during the ndmp get that created the backup. For example, if the get operation was for everything under the directory /a/b/c in a tree of /a/b/c/d/e, then the put partial subdirectory argument should start with /d. On some filers, dst_path must end with subdir.

The following command restores data from the Data Domain System file /backup/toaster5/week0 to /vol/vol0 on the filer toaster5.

# ndmp put /backup/toaster5/week0 toaster5:/vol/vol0

The following command restores the file .../jsmith/foo from the week0 backup.

# ndmp put partial jsmith/foo /backup/toaster5/week0 toaster5:/vol/vol0/jsmith/foo

Remove Filer Passwords


To remove all filer entries, including the associated user names and passwords stored on the Data Domain System, and to write zeros to the disk areas that held them, use the ndmp reset filers operation. Administrative users only.

ndmp reset filers

Stop an NDMP Process


To stop an NDMP process on the Data Domain System, use the ndmp stop operation. The pid is the PID (process ID) number shown for the process in the ndmp status display. A stopped process is cancelled. To restart an operation, begin it again with the get or put commands. Administrative users only.

ndmp stop pid
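For example, to stop the process with PID 715 shown in the ndmp status example later in this chapter:

# ndmp stop 715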

Stop All NDMP Processes


To stop all NDMP processes on a Data Domain System, use the ndmp stop all operation. Administrative users only.

ndmp stop all

Check for a Filer


To check that a filer is known and display a filer authentication token, use the ndmp test operation.

ndmp test filer_name
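For example, using the filer name from the earlier examples:

# ndmp test toaster5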

Display Known Filers


To display all filers available to the Data Domain System, use the ndmp show filers operation. Administrative users only.

ndmp show filers

For example:

# ndmp show filers
filer    name:password
-------  -------------
filer1   root:******
filer2   root:******
toaster  root:******

Display NDMP Process Status


To display the status of current NDMP processes on the Data Domain System, use the ndmp status operation. The operation labels each process with an identification number. Administrative users only.

ndmp status

The display looks similar to the following and shows the process ID, the command that is currently running, and the total number of mebibytes (MiB) transferred. The following example shows the command entered twice in a row. Note that MiB Copied shows the progress of the operation.

# ndmp status
PID  MiB Copied  Command
---  ----------  -------------------------------------------------
715  3267        get filer1:/vol/vol0/etc /backup/filer1/dumpfile1

# ndmp status
PID  MiB Copied  Command
---  ----------  -------------------------------------------------
715  4219        get filer1:/vol/vol0/etc /backup/filer1/dumpfile1

Note MiB = Mebibytes = the binary equivalent of Megabytes.



SECTION 7: GUI - Graphical User Interface


Enterprise Manager
Graphical User Interface


Through the browser-based Data Domain Enterprise Manager graphical user interface, you can do the initial system configuration, make a limited set of configuration changes, and display system status, statistics, and settings. The supported browsers for web-based access are Netscape 7 and above, Microsoft Internet Explorer 6.0 and above, Firefox 0.9.1 and above, Mozilla 1.6 and above, and Safari 1.2.4. The console first asks for a login and then displays the Data Domain System Summary page (see Figure 32 on page 400). Some of the individual displays on various pages have a Help link to the right of the display title. Click on the link to bring up detailed online help about the display.

To bring up the interface:
1. Open a web browser.
2. Enter a path such as http://rstr01/ for Data Domain System rstr01 on a local network.
3. Enter a login name and password.


Figure 32: Summary screen

On the Data Domain System Summary screen:


The bar at the top displays the Data Domain System host name.
The grey bar immediately below the host name displays the file system status, the number of current alerts, and the system uptime.
The Current Status and Space Graph tabs toggle the display. Figure 32 shows Current Status. See Display the Space Graph on page 402 for the Space Graph display and explanation.
The left panel lists the pages available in the interface. Click on a link to display a page. Below the list, find the current login, a logout button, and a link to Data Domain Support.
The main panel shows current alerts and the space used by Data Domain System file system components.
A line at the bottom of the page displays the Data Domain System software release and the current date.

The page links in the left panel display the output from Data Domain System commands that are detailed throughout this manual.

Configuration Wizard gives the same system configuration choices as the config setup command. See Login and Configuration on page 14.
System Stats opens a new window and displays continuously updated graphs showing system usage of various resources. See Display Detailed System Statistics on page 73.
Group Manager opens a window that allows basic system monitoring for multiple Data Domain Systems. See Monitor Multiple Data Domain Systems on page 405.
Autosupport shows current alerts, the email lists for alerts and autosupport messages, and a history of alerts. See Display Current Alerts on page 131, Display the Email List on page 132, Display the Autosupport Email List on page 137, and Display the Alerts History on page 132.
Admin Access lists every access service available on a Data Domain System, whether or not the service is enabled, and lists every hostname allowed access through each service that uses a list. See Display Hosts and Status on page 113.
CIFS displays CIFS configuration choices and the CIFS client list.
Disks shows statistics for disk reliability and performance and lists disk hardware information. See Display Disk Reliability Details on page 185, Display Disk Performance Details on page 183, and Display Disk Type and Capacity Information on page 178.
File System displays the amount of space used by Data Domain System file system components. See Display File System Space Utilization on page 215.
Licenses shows the current licenses active on the Data Domain System. See Display Licenses on page 125.
Log Files displays information about each system log file.
Network displays settings for the Data Domain System Ethernet ports. See Display Interface Settings on page 101 and Display Ethernet Hardware Information on page 102.
NFS lists client machines that can access the Data Domain System. See Display Allowed Clients on page 305.
SNMP displays the status of the local SNMP client and SNMP configuration information.
Support allows you to create a support bundle of log files and lists existing bundles. See Collect and Send Log Files on page 139.
System shows system hardware information and status.
Replication lists configured replication pairs and replication statistics.
Users lists the users currently logged in and all users that are allowed access to the system. See Display Current Users on page 117 and Display All Users on page 118.

Display the Space Graph


The Data Domain Enterprise Manager displays a graph of data from the spacelog file.

Data Collection The total amount of disk storage in use on the Data Domain System. Look at the left vertical axis of the graph.
Data Collection Limit The total amount of disk storage available for data on the Data Domain System. Look at the left vertical axis of the graph.
Pre-compression The total amount of data sent to the Data Domain System by backup servers. Pre-compressed data on a Data Domain System is what a backup server sees as the total uncompressed data held by a Data Domain System-as-storage-unit. Look at the left vertical axis of the graph.
Compression factor The amount of compression the Data Domain System has done with all of the data received. Look at the right vertical axis of the graph for the compression ratio.

Two activity boxes below the graph allow you to change the data displayed on the graph. The vertical axis and horizontal axis change as you change the data set.

The activity box on the left below the graph allows you to choose which data shows on the graph. Click the check boxes for Data Collection, Data Collection Limit, Pre-compression, or Compression factor to remove or add data.
The activity box on the right below the graph allows you to change the number of days of data shown on the graph.

Display When first logging in to the Data Domain Enterprise Manager or when you click on the Home link in the left panel of the Data Domain Enterprise Manager, the Space Graph tab is on the far right of the right panel. Click the words Space Graph to display the graph. Figure 33 shows an example of the display with all four types of data included. In the example, the Data Collection and Data Collection Limit values show as constants because of the relatively large scale needed for Pre-compression on the left axis.


Figure 33: Space graph

Removing one or more types of data can give useful information as the axis scales change. For example, Figure 34 shows the graph for the same Data Domain System and the same data collection as in Figure 33 on page 403. The difference is that the Pre-compression check box in the left-side activity box at the bottom of the display was clicked to remove pre-compression data from the graph. (The scale of Compression Factor at right remains unchanged.)


Figure 34: Graph without pre-compression data

The left axis scale in Figure 34 on page 404 is such that the Data Collection and Data Collection Limit lines give useful information. Comparing each of the three lines with the other two is also informative. Data Collection (the amount of disk space used) at one point rises nearly to the Data Collection Limit, which means that the system was running out of disk space. A file system cleaning operation on about May 30 (see the scale along the bottom of the graph) cleared enough disk space for operations to continue.


The Data Collection line rises with new data written to the Data Domain System and falls steeply with every file system clean operation. Note that the Compression factor line falls with new data and rises with clean operations. The graph also displays a vertical grey bar for each time the system runs a file system cleaning process. The minimum width of the bar on the X axis is six hours. If the cleaning process runs for more than six hours, the width increases to show the total time used by the process.

Monitor Multiple Data Domain Systems


The Group Manager feature of the Data Domain Enterprise Manager displays information for multiple Data Domain Systems. In the left panel of the Data Domain Enterprise Manager, click on Group Manager. See Figure 35.

Figure 35: Group Manager link

The Group Manager display gives information about multiple Data Domain Systems. Figure 36 on page 406 is an example. See Figure 37 on page 407 for adding systems to the display.


Figure 36: Multi-monitor window

Manage Hosts Click to bring up a screen that allows adding Data Domain Systems to or deleting Data Domain Systems from the display. See Figure 37 on page 407 for details. The Total Pre-compression and Total Data amounts are the combined amounts of data for all displayed systems (five Data Domain Systems in the example).
Update Now Click to update the main table of information and the status for each Data Domain System displayed.
Status Displays OK in green or the number of alerts in red for each Data Domain System.
Restorer Displays the name of each Data Domain System monitored. Click on a name to see more information about a Data Domain System. See Figure 38 on page 408 for an example.
Pre-compression GiB The amount of data sent to the Data Domain System by backup software.
Data GiB The amount of disk space used on the Data Domain System.
% Used A bar graph of the amount of disk space used for compressed data.
Compression The amount of compression achieved for all data on the Data Domain System.


Figure 37 shows the Manage Hosts window for adding and deleting systems from the main display. Enter either hostnames or IP addresses for the Data Domain Systems that you want to monitor.

Click the Save button to save changes. Click the Cancel button to return to the main display with no changes.

Figure 37: Add to or delete from the display


Figure 38 shows the display after clicking on a name in the Data Domain System column. Connect to GUI brings up the login screen for the monitored system if the GUI is enabled on the monitored system. Whichever protocol the current GUI (the one hosting the display) is using, HTTP or HTTPS, is also used to connect to the GUI on the monitored system.

Figure 38: System details


Virtual Tape Library (VTL) - GUI


For general information on VTL or VTL CLI, see the chapter Virtual Tape Library (VTL) - CLI.

From the main DDR GUI page, click on the VTL link at lower left in the sidebar to bring up the VTL GUI. The VTL GUI main page is shown in Figure 39.

Figure 39: VTL GUI Main Page


The VTL GUI gives the user the advantage of approaching tape storage from four different points of view:

Virtual Tape Libraries
Access Groups
Physical Resources
Pools

These are the Stack Menu choices at left in the Side Panel, and they are visible at all times. (The Stack Menu is so called because it is like a stack of individual menus, any one of which can be brought to the top and made visible by clicking on it.) The panel at right is called the Main Panel or Information Panel. This panel displays information about whatever menu item is selected in the tree menu in the Side Panel at left. The Action Buttons perform actions on the objects selected either in the Main Panel or the Side Panel.

The Refresh button in the top bar (the icon is two arrows) can be used if changes were made (for example, through the CLI) that are not showing up in the GUI. The button is always visible. The Help button in the top bar (the icon is a question mark) can be clicked from any screen to give context-sensitive online help about that screen. The Logout button in the top bar (the icon is a padlock) can be clicked to log out from the Data Domain system.

Note For a step-by-step example of how to create and use a VTL Library, see the section near the middle of this chapter entitled Procedure: Use a VTL Library / Use an Access Group.

Note Context-sensitive online help can be reached by clicking the question mark (?) icons. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide.

Virtual Tape Libraries


The following sections approach tape storage from the point of view of Virtual Tape Libraries.

Enable VTLs
To start the VTL process and enable all libraries and drives, navigate as follows: Menu > Virtual Tape Libraries > VTL Service... Virtual Tape Library Service pulldown... choose Enable. Enabling VTL Service may take a few minutes. When service is enabled, the pulldown says Enabled. (Clicking it allows you to choose Disable.)


Administrative users only.

Disable VTLs
To disable all VTL libraries and shut down the VTL process: Menu > Virtual Tape Libraries > VTL Service > Virtual Tape Library Service pulldown... choose Disable. Disabling VTL Service may take a few minutes. When service is disabled, the pulldown says Disabled. (Clicking it allows you to choose Enable.) Administrative users only.
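The VTL process can also be enabled and disabled from the command line; see the vtl enable and vtl disable commands in the Virtual Tape Library (VTL) - CLI chapter:

# vtl enable
# vtl disable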

Create a VTL
To create a virtual tape library, do as follows:

Menu > Virtual Tape Libraries > VTL Service > Libraries > Create Library button. Enter the following:

Library Name A name of your choice, between 1 and 32 characters long. (This field is required.)
Number of Drives Valid values are between 0 and 64. (This field is optional.)
Number of Slots The number of slots in the library. The number of slots must be equal to or greater than the number of drives, and must be at least 1. The maximum number of slots for all VTLs on a Data Domain System is 10000. The default is 20 slots. (This field is optional.)
Number of CAPs The number of cartridge access ports. The default is 0 (zero) and the maximum is 10 (ten). (This field is optional.)
Changer Model Name Choose from the drop-down menu. This is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update. (This field is optional.)

After the above choices are made, click the OK button.

The VTL process must be enabled (see Enable VTLs just above) to allow the creation of a library. Administrative users only.

Delete a VTL
To remove a previously created virtual tape library, navigate as follows:


Menu > Virtual Tape Libraries > VTL Service > Libraries > Delete Library button. On the popup box, choose which library or libraries to delete by checking the boxes.

Select Library (This field is required.)

Click OK. A popup will ask you to confirm. Click OK on the popup.

VTL Drives
The VTL Drives page has columns of information on Drive, Vendor, Product, Revision, Serial #, and Status.

Drive This column gives a list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives.
Vendor The manufacturer or vendor of the drive, for example IBM.
Product The product name of the drive, for example ULTRIUM-TD1.
Revision The revision number of the drive product, for example 4561.
Serial # The serial number of the drive product, for example 6666660001.
Status If there is a tape loaded, this column shows the barcode of the loaded tape. If there is no tape loaded in this drive, the status is shown as empty.

When you click on an individual drive, additional Drive Statistics are provided on each Port of that drive, namely: ops/s, Read KiB/s, Write KiB/s, Soft Errors, and Hard Errors.

Port This column gives a list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a.
ops/s The number of operations per second currently or recently being achieved by the port.
Read KiB/s The number of kibibytes per second read by the port.

Note KiB = Kibibyte, the base 2 equivalent of KB, Kilobyte.


Write KiB/s The number of kibibytes per second written by the port.
Soft Errors This column gives the number of errors that the system recovered from. Nothing needs to be done about these; no preventative measures or maintenance actions are necessary. If there are thousands of soft errors in a short period of time, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from.


Hard Errors This column gives the number of errors that the system was unable to recover from. Hard errors should not occur in normal operation. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, go to the Data Domain Enterprise Manager GUI for the system and click the Log Files link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.

In addition, a count (Port Count) of the total number of ports on that drive is given.

Create New Drives


To create a new virtual drive for a VTL, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > Select a library by clicking it... Expand the library by clicking the + sign to the left of it... Drives... Create Drive button... choose a VTL from the Location pulldown, enter a number of drives, and click OK. The maximum number of total drives for all VTLs on a Data Domain System is 64. Administrative users only.

Enter the following information:

Location Choose from the drop-down menu. The name of the library, between 1 and 32 characters. (This field is required.)
Number of Drives The number of tape drives in the library. The maximum number of drives for all VTLs on a Data Domain System is 64; for example, a Data Domain System with three VTLs could have a maximum of 64 drives in total. (This field is required.)
Model Name Choose from the drop-down menu. This is a drive model name. IBM-LTO-1 is a valid choice. (This field is optional.)

Note The maximum number of libraries possible is 16.

Remove Drives
Administrative users only. To remove drives, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > Select a library by clicking it... Expand the library by clicking the + sign to the left of it... Drives... Delete Drive button... check which drives to delete. You can use the links to select All or None. Click OK. Click OK again to confirm.

Select Drives Check the boxes for the drives to delete. (This field is required.)
Select - All - None - "All" checks the boxes for all drives. "None" unchecks all the boxes.


Use a Changer
Each VTL Library has exactly one media changer, although it can have several tape drives. The word device refers to changers and tape drives. A changer has a model name (for example, L180). Each changer can have a maximum of one LUN (Logical Unit Number). Changers can be navigated to in the VTL GUI as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > Select a library by clicking it... Expand the library by clicking the + sign to the left of it... Changer.

Display a Summary of All Tapes


To display a summary of all tapes on a Data Domain system: VTL stack menu > Virtual Tape Libraries... VTL Service > Libraries. The Libraries Summary display shows information about Libraries and about Tapes.

Libraries:
The Library column shows the name of the library being viewed.
The # of Drives column shows the number of drives in the library as currently configured.
The # of Slots column shows the number of slots in the library as currently configured.
The # of CAPs column shows the number of CAPs in the library as currently configured.

Tapes:
The Location column gives the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool.
The # of Tapes column gives the number of tapes in each pool.
The Total Size column gives the total configured data capacity of the tapes in that pool in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes).
The Total Space Used column displays the amount of space used on the virtual tapes in that pool.
The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.

Information at different levels is found by clicking different levels of the menu hierarchy: VTL Service, Libraries, Changer, Drives, Tapes, Vault, Pools, etc.


Create New Tapes


To create new tapes: Menu > Virtual Tape Libraries > VTL Service > Vault... Create Tapes button. After entering the desired values below, click OK. All new tapes go into the virtual vault. Administrative users only.

Note If replication is configured, then on a destination Data Domain System, manually creating a tape is not permitted.

Pool Name Choose from the drop-down menu. This is the pool that the tapes will be put into. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Number of Tapes The number of tapes to create. The default is 1 (one). Creating a large number of tapes causes the system to take a long time to carry out this action. (This field is required.)
Starting Barcode The barcode influences the number of tapes and the tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the barcode). See Barcode below. (This field is required.)
Tape Capacity The size of each tape in gigabytes (overrides the barcode capacity designation). Valid values are between 1 and 800. For the efficient reuse of Data Domain System disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less. (This field is optional.)

Note If Tape Capacity is specified, it overrides Barcode.

Barcode
Barcode influences the number of tapes and tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the Barcode), as follows.

barcode The 8-character barcode must start with six numeric or upper-case alphabetic characters (that is, from the set {0-9, A-Z}) and end in a two-character tag of L1, LA, LB, or LC for the supported LTO-1 tape type, where:

L1 represents a tape of 100 GiB capacity,
LA represents a tape of 50 GiB capacity,
LB represents a tape of 30 GiB capacity, and
LC represents a tape of 10 GiB capacity.


(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If a capacity is included, it is used and overrides the two-character tag.) The numeric characters immediately to the left of L set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100.

A few representative sample barcodes:

000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name and not increment.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name and not increment.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name and not increment.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.

Automatic incrementing of the barcode when creating more than one tape works as follows: start at the 6th character position, just before the L. If that character is a digit, increment it. If an overflow occurs (9 to 0), move one position to the left; if that character is a digit, increment it as well; if it is alphabetic, incrementing stops.

Data Domain recommends only creating tapes with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
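As a worked example of the incrementing rule (using the barcode that also appears in the import example below): creating 3 tapes starting at barcode TST010L1 produces TST010L1, TST011L1, and TST012L1, each with the 100 GiB capacity indicated by the L1 tag.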

Import Tapes
Move existing tapes from the vault into a slot, drive, or cartridge access port. If a tape is in a pool, you must use the pool option to identify the tape. Administrative users only. Note The number of tapes you can import is limited--see Rules for number of tapes imported, immediately below this Note.


Rules for number of tapes imported:


The number of tapes that you can import at one time is limited by:

The number of empty slots. (In no case can you import more tapes than the number of currently empty slots.)
The number of slots that are empty and not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved. A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)

To sum up, the number of tapes you can import equals the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin:

  # of empty slots
- # of tapes that came from slots (a slot is reserved for each)
- # of tapes of unknown origin (a slot is reserved for each)
------------------------------------------------------------
= # of tapes you can import

The pool option is required if the tapes are in a pool. Use the vtl tape show <vtl-name> command to display the total number of slots for a VTL, and the same command to display the slots that are currently used. Use backup software commands from the backup server to move VTL tapes to and from drives.

Note element slot and address 1 are defaults; therefore:

vtl import VTL1 barcode TST010L1 count 5

is equivalent to:

vtl import VTL1 barcode TST010L1 count 5 element slot address 1

To move existing tapes from the vault to a slot, drive, or cartridge access port, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > Select a library by clicking it... Expand the library by clicking the + sign to the left of it... Tapes... Import Tape button. At this point, a list of available tapes appears. (If no tapes appear, you may need to Create Tapes, or search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.)
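A worked example with hypothetical numbers: with 10 empty slots, 2 tapes in drives that came from slots, and 1 tape in a drive of unknown origin, you can import 10 - 2 - 1 = 7 tapes.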


Check the checkboxes for the tapes to be imported. Click the OK Button. Click the OK Button again to confirm.

The fields are:

Pool Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode Barcode is for searching. (This field is optional.)
Count The number of tapes returned by the search. (This field is optional.)
Select Tapes Using checkboxes. (This field is required.)
Select - All - None - "All" checks the boxes for all tapes. "None" unchecks all the boxes.
Device Slot, Drive, or CAP. (This field is required.)
Tapes Per Page The number of results on the search page.
Start Address (This field is optional.)

Export Tapes
To export tapes, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > Select a library by clicking it... Expand the library by clicking the + sign to the left of it... Tapes... Export Tape button. The dialog box for Export Tapes is similar to that for Import Tapes, but without the Select Destination fields at the bottom of the screen. At this point, a list of available tapes appears. (If no tapes appear, you may need to search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be exported. Click OK. Click OK again to confirm.

The fields are:

Pool Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode For searching. (This field is optional.)
Count The number of tapes returned by the search. (This field is optional.)
Select Tapes Using checkboxes. (This field is required.)
Select - All - None - "All" checks the boxes for all tapes. "None" unchecks all the boxes.
Device Slot, Drive, or CAP. (This field is required.)
Tapes Per Page The number of results on the search page.
Start Address (This field is optional.)

Export Tapes can also be reached by selecting a specific library.

Remove Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes: Menu > Virtual Tape Libraries > VTL Service > Vault... Delete Tapes button... check the boxes of the tapes you want to delete... click OK... click OK again to confirm. (The screen for Delete Tapes is effectively the same as that for Export Tapes.)

Count is used only for the number of tapes returned by a search. In order to delete the tapes, their boxes must be checked. The tapes must be in the vault, not in a VTL. If a tape is in a pool, you may have to use the pool to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.

Note In the case of replication, on a destination Data Domain System, manually removing a tape is not permitted.

The fields are:

Pool Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode For searching. (This field is optional.)
Count The number of tapes returned by the search. (This field is optional.)
Select Tapes Using checkboxes. (This field is required.)
Select - All - None - "All" checks the boxes for all tapes. "None" unchecks all the boxes.
Tapes Per Page The number of results on the search page.


Move Tape
Only one tape can be moved at a time, from one slot, drive, or CAP to another. (The screen for Move Tape is effectively the same as that for Import Tapes.) To move a tape: Menu > Virtual Tape Libraries > VTL Service > Libraries > choose a library > click Move Tape button > select which tape to move using the checkboxes... choose a destination Drive, Slot, or CAP... enter a destination Start Address... click OK.

Start Address is the number of the Drive, Slot, or CAP. Valid values are numbers. (This field is required.)

The fields are:

Pool Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode For searching. (This field is optional.)
Count The number of tapes returned by the search. (This field is optional.)
Select One Tape Using a checkbox. (This field is required.)
Device Slot, Drive, or CAP. (This field is required.)
Tapes Per Page The number of results on the search page.
Start Address (This field is optional.)

Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window. This is reached from anywhere the Search Tapes button appears, for example: Virtual Tape Libraries... VTL Service... Libraries... click Search Tapes button. The Search Tapes dialog box appears, allowing the user to search for tapes by Location, Pool, and/or Barcode.

The fields are:

Location Choose from the drop-down menu. The pulldown allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode For searching. (This field is optional.)

Count The number of tapes returned by the search. (This field is optional.)
Tapes Per Page The number of results on the search page. (This field is optional.)

The asterisk wild-card character can be used in Barcode at the beginning or end of a string to search for a range of tapes.

Set Option/Reset Option (loop-id and auto-eject)


The Set Option and Reset Option buttons allow the user to set loop-id, reset loop-id, display loop-id, enable auto-eject, and disable auto-eject. This is explained further in the following paragraphs.

Set loop-id (a Private-Loop Hard Address)


Some backup software requires all private-loop targets to have a hard address (loop ID) that does not conflict with another node. To set a hard address for a Data Domain system: VTL stack menu... Virtual Tape Libraries... VTL Service > Set Option... set loop-id to the desired value... click Set Options. The range for the value is 0 - 125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the Data Domain system. (This field is optional.)

Reset loop-id (a Private-Loop Hard Address)


To reset the private-loop hard address to the Data Domain system default of 1 (one): VTL stack menu... Virtual Tape Libraries... VTL Service > Reset Option... check the loop-id box... click Reset Options. The range for the value is 0 - 125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the system. (This field is optional.)

Display loop-id (the Private-Loop Hard Address Setting)


Display the most recent setting of the loop ID value (which may or may not be the current in-use value) as follows: VTL stack menu... Virtual Tape Libraries... VTL Service > Set Option. The top box shows the current value of loop-id.

Loop ID A hard address that does not conflict with another node. The range for Loop ID is 0 - 125.


Set/Enable Auto-Eject
Enable auto-eject to cause any tape that is put into a cartridge access port to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the cartridge access port (CAP). VTL stack menu... Virtual Tape Libraries... VTL Service > Set Option... change auto-eject to enabled... click Set Options.

Note With auto-eject enabled, a tape moved from any element to a CAP will be ejected to the vault unless an ALLOW_MEDIUM_REMOVAL was issued to the library to prevent the removal of the medium from the CAP to the outside world.

Reset/Disable Auto-Eject
Disable auto-eject to allow a tape in a cartridge access port to remain in place, as follows: VTL stack menu... Virtual Tape Libraries... VTL Service > Set Option... change auto-eject to disabled... click Set Options. Alternatively, you can reset auto-eject to its default value of disabled, as follows: VTL stack menu... Virtual Tape Libraries... VTL Service > Reset Option... check the auto-eject box... click Reset Options.

Display VTL Status


Display the status of the VTL process as follows: VTL stack menu... Virtual Tape Libraries... VTL Service. At the top of the screen, see the status in the Virtual Tape Library Service pulldown menu.

VTL admin_state - Can be enabled or disabled.
process_state - Can be any of the following:
running - The system is enabled and active.
starting - The VTL process is being started.
stopping - The VTL process is being shut down.
stopped - The VTL process is disabled.
timing out - The VTL process crashed and is attempting an automatic restart.
stuck - After a number of failed automatic restarts, the VTL process was not able to shut down normally and attempts to kill the process failed.


Display All Tapes


To display information about tapes on a Data Domain system, there are two methods: VTL stack menu... Virtual Tape Libraries... VTL Service > Libraries... Search Tapes, or: VTL stack menu... Virtual Tape Libraries... VTL Service > Libraries... (choose a library)... Tapes. Both methods return the same information:

The Barcode column identifies each tape by its barcode.
The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool.
The Location column displays whether tapes are in a user-created library (and which drive, CAP, or slot number) or in the virtual vault.
The Type column displays the type of tape being used (for example, LTO-1).
The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes).
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape.
The Compression column displays the amount of compression done to the data on a tape.
The Last Modified column gives the most recent modification time.

Display Summary Information About Tapes in a VTL


To display summary information about all tapes in a VTL: VTL stack menu > Virtual Tape Libraries... VTL Service > Libraries... (choose a library). The display for a given VTL shows information about the Library and about Tape Distribution.

Library:
The Library column shows the name of the library being viewed.
The # of Drives column shows the number of drives in the library as currently configured.
The # of Slots column shows the number of slots in the library as currently configured.
The # of CAPs column shows the number of CAPs in the library as currently configured.

Tape Distribution:
The Device column labels the row information as referring to Drives, Slots, and CAPs.
The # of Loaded column shows the number of Drives, Slots, and CAPs that are loaded.
The # of Empty column shows the number of Drives, Slots, and CAPs that are empty.
The Total column shows the total number of Drives, Slots, and CAPs.

Display Summary Information About the Tapes in a Vault


To display summary information about all tapes that are in the virtual vault: VTL stack menu... Virtual Tape Libraries... VTL Service > Vault. For the vault, information is shown on: Total Tape Count, Total Size of Tapes, Total Tape Space Used, Average Compression, Pool Names, and Pool Count (number of pools).

Tapes:
The Total Tape Count gives the total number of tapes in the vault.
The Total Size of Tapes, in GiB (Gibibytes, the binary equivalent of Gigabytes).
The Total Tape Space Used, in GiB.
The Average Compression.

Pools:
The Pool Name of each pool in the vault.
The Pool Count of the total number of pools in the vault.

Display All Tapes in a Vault


The VTL GUI user can display all tapes in the vault using the Search Tapes dialog box: Virtual Tape Libraries... VTL Service... Vault... click Search Tapes button. The Search Tapes dialog box appears. Without entering search criteria, simply click the Search button, and it will search for all tapes in the vault.

The fields are:

Location The pulldown allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode For searching. (This field is optional.)
Count The number of tapes returned by the search. (This field is optional.)
Tapes Per Page The number of results on the search page. (This field is optional.)

Access Groups (for VTL Only)


Note Group = VTL group = VTL Access Group = Access Group. Anything in VTL with group in it means a VTL Access Group.

A VTL Access Group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. The Data Domain VTL Access Groups feature allows clients to access only selected LUNs (devices, which are media changers or virtual tape drives) on a system. Stated more simply, an Access Group is a group of initiators and devices that can see and access each other. The initiators are identified by their WWPNs or aliases. The devices are drives and changers. A client that is set up for Access Groups can access only devices that are in its Access Groups. To use Access Grouping:

Create a VTL on the system. See Create a VTL on page 411.
Enable the VTL.
Add a group (see below).
Add an initiator (see below).
Map a client as an Access Grouping initiator (see below).
Create an Access Group. See Create an Access Group and Procedure: Use an Access Group below.

Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations. This set of actions deals with the group container. Populating the container with initiators and devices is done with VTL Initiator and VTL group. When setting up Access Groups on a Data Domain system:

Usually each Data Domain System device (media changer or drive) can be in a maximum of 1 Access Group; however, with multi-initiator, devices may appear in more than one group when using features such as the Shared Storage Option (SSO).


Create an Access Group


To create an Access Group: VTL Stack Menu... Access Groups... Groups... Create Group. Creates a group container named group_name. Group_name must be unique, must not be longer than 32 characters, and can only contain the characters "0-9a-zA-Z_-". Up to 128 groups may be created.

Group Name 1-32 characters. (This field is required.)

TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.)

Remove an Access Group


To remove an Access Group entirely: VTL Stack Menu... Access Groups... Groups... Delete Groups... check a group to delete... click OK... click OK again to confirm. Removes (that is, deletes) the group container group_name.

Group (This field is required.)

To remove/delete a group, you must first empty it. See Delete From an Access Group below.

Rename an Access Group


To rename an Access Group: VTL Stack Menu... Access Groups... Groups... click on a group... click Rename Group button... enter a new group name... click OK.

Group Name 1-32 characters. (This field is required.)

TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.) Renaming allows changing a group's name without going through the laborious process of first deleting and then re-adding all initiators and devices. The new group name must not already exist and must conform to the name restrictions under VTL Group Add. A rename will not interrupt any active sessions.

Add to an Access Group


To add to an Access Group,


VTL Stack Menu... Access Groups... Groups... click on a group... click Add Initiators or Add LUNs... check the boxes for the items you want to add... click OK... click OK again to confirm.

Add Initiators:
Group Choose from the drop-down menu. (This field is optional.)
Select Initiator (This field is required.)

Add LUNs:
Group Choose from the drop-down menu. (This field is optional.)
Library Name Choose from the drop-down menu. (This field is optional.)
Starting LUN A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)
Devices (This field is required.)
Primary Ports The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.

Usually primary and secondary ports are different. For example, typical usage might be to make 5a and 6a primary ports, and 5b and 6b secondary ports.

Delete from an Access Group


To delete from an Access Group: VTL Stack Menu... Access Groups... Groups... click on a group... click Remove Initiators or Delete LUNs... check the boxes of the items you want to delete... click OK... click OK to confirm.

Remove Initiators:
Group Choose from the drop-down menu. (This field is optional.)
Select Initiator (This field is required.)

Delete LUNs:
Group Choose from the drop-down menu. (This field is optional.)
Library Name Choose from the drop-down menu. (This field is optional.)
Device (This field is required.)
Select - All - None - "All" checks the boxes for all devices. "None" unchecks all the boxes.


Modify an Access Group


To modify an Access Group: VTL Stack Menu... Access Groups... Groups... click on a group... click Modify LUNs... choose the modifications to make... click OK... click OK to confirm. At least one device must be selected. The changeable fields are LUN assignment, primary ports, and secondary ports. If any field is omitted, the current value remains unchanged. Some changes can result in the current Access Group being removed from the system, causing the loss of any current sessions, and a new Access Group being created. The registry is updated with the changed Access Groups.

Modify LUNs:
Group Choose from the drop-down menu. (This field is optional.)
Library Name Choose from the drop-down menu. (This field is optional.)
Starting LUN A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)
Devices (This field is required.)
Select - All - None - "All" checks the boxes for all devices. "None" unchecks all the boxes.
Primary Ports The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.

Display Access Group information


To display Access Group information: VTL Stack Menu... Access Groups... Groups... click on a group. The information displayed covers LUNs and Initiators.

LUNs - for each LUN, the following is shown:
LUN A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers.
Library Shows the name of the library being viewed.
Device Devices are changers and drives.
Data Domain Operating System User Guide

428

Access Groups (for VTL Only)

In-Use Ports. This shows which ports are currently in use and which are secondary for the Access Group. Primary Ports. The primary ports on which the devices are visible to initiators within the group. Secondary Ports. The secondary ports which the devices within the group may be visible on after using the Set In-Use Ports button. Secondary ports provide a quick means for administrators to apply Access Group access to secondary ports in the event of primary port(s) failure; this may be done without permanently modifying the Access Group.

A LUN count of the total number of LUNs is also shown. Initiators - for each initiator, the following is shown: The initiator-name is an alias that you create for Access Grouping. The WWPN is the World-Wide Port Name of the Fibre Channel port in the media server(s).

An Initiator count of the total number of initiators is also shown.

UPGRADE NOTE:
If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it assumes the system has recently been upgraded. In this case, a group is created with the same name as each initiator, and that initiator is added to the newly created group.

After upgrading to 4.4.x from 4.3.x or earlier, the LUN masking configuration no longer works. As a result, the initiator does not see any LUNs from the Restorer. In release 4.4.x and later, the LUN MASKING feature is replaced by the ACCESS GROUPS feature. If LUN masking was configured, the upgrade process creates an access group that has the initiator's WWNN as a member but no LUNs. The solution is to add all LUNs to this access group so that the initiator and LUNs can see each other. This can be done either through the GUI with any browser or through the command line. [In the same way, the Default LUN mask in 4.3.x is no longer available in 4.4.x. If devices are in the Default mask, the Default LUN mask disappears after an upgrade to 4.4.x, and a new access group must be created for the initiators to see the targets.]
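For the command-line route, a minimal sketch follows. The group name upgrade_grp, the library name VTL1, and the exact argument layout are assumptions for illustration; verify the syntax against the vtl group commands in the VTL CLI chapter for your release.

  # Assumed syntax - add the changer and drives of library VTL1
  # to the access group created by the upgrade.
  vtl group add upgrade_grp vtl VTL1 changer lun 0
  vtl group add upgrade_grp vtl VTL1 drive 1-4 lun 1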

Switch Virtual Devices between Primary & Secondary Port List


To switch virtual devices between the primary and secondary port lists: VTL Stack Menu...Access Groups...Groups...click on a group...click Set In-Use Ports...choose a device by checking its box...change which of the two radio buttons is selected, Primary Ports or Secondary Ports...click OK.

Notice that the port listed in the In-Use Ports column has changed to the secondary port (or the primary port, if that was the one selected). (The error "At least one value must be selected" refers to devices: choose a device by checking its box.)

Group. Choose from the drop-down menu. (This field is optional.)
Library Name. Choose from the drop-down menu. (This field is optional.)
Devices. (This field is required.) Select - All - None. "All" checks the boxes for all drives; "None" unchecks all the boxes.
Primary Ports or Secondary Ports. (This field is required.)

Procedure: Use a VTL Library / Use an Access Group


1. Start the VTL process and enable all libraries and drives: Menu...Virtual Tape Libraries...VTL Service...Virtual Tape Library Service pulldown...choose Enable. Enabling the VTL Service may take a few minutes. When the service is enabled, the pulldown says Enabled.

2. Create a virtual tape library. For example, create a VTL called VTL1 with 32 slots, 4 drives, and 2 cartridge access ports (CAPs): Menu...Virtual Tape Libraries...VTL Service...Libraries...Create Library button. Enter the following: Library Name - VTL1. Number of Drives - 4. Number of Slots - 32. Number of CAPs - 2. Changer Model Name - L180. After the above choices are made, click the OK button.

3. Create the virtual drives for the tape library VTL1: Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Drives...Create Drive button. Enter the following: Location - VTL1. Number of Drives - 4. Model Name - IBM-LTO-1. After the above choices are made, click the OK button.

4. Create an empty group, group2, as a container: VTL Stack Menu...Access Groups...Groups...Create Group. Enter the following: Group Name - group2. Click the OK button.

5. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click the Set Initiator Alias button at top right. Enter the following: WWPN - 00:00:00:00:00:00:00:04. Alias - moe. Click the OK button.

6. Put the initiator moe into the group group2: VTL Stack Menu...Access Groups...Groups...click on group group2...click Add Initiators. Enter the following: Group - choose group2 from the pulldown menu. Alias - check the box for moe. Click the OK button.

7. View the initiator moe, in order to see the system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client. VTL Stack Menu...Physical Resources...Physical Resources...Initiators...moe.

8. Add LUNs to the Access Group group2: put VTL1 drive 1 through drive 4 and the changer in group2. This allows any initiator in group2 to see VTL1 drive 1 through drive 4 and the changer. VTL Stack Menu...Access Groups...Groups...click on group group2...click Add LUNs. Enter the following: Group - choose group2 from the pulldown menu. Library Name - choose VTL1 from the pulldown menu. Select Devices - check the boxes for drive 1, drive 2, drive 3, drive 4, and the changer. Click the OK button. Click OK again to confirm.

9. View the changes to group2: VTL Stack Menu...Access Groups...Groups...click on group group2.
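The same setup can be scripted from the command line. The following is a hedged sketch: the command names follow the VTL and access-group CLI referenced elsewhere in this guide, but the exact arguments (subcommand names, drive lists, LUN numbering) are assumptions for illustration; confirm them against the CLI chapters for your release.

  vtl enable                                    # start the VTL process
  vtl add VTL1 model L180 slots 32 caps 2       # create the library
  vtl drive add VTL1 count 4 model IBM-LTO-1    # create four drives
  vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04
  vtl group add group2 initiator moe            # put moe into group2
  vtl group add group2 vtl VTL1 changer lun 0   # changer at LUN 0
  vtl group add group2 vtl VTL1 drive 1-4 lun 1 # drives at LUNs 1-4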

Physical Resources
Initiators
Note The terms initiator name and initiator alias mean exactly the same thing and are used interchangeably. An initiator is any Data Domain system client's HBA world-wide port name (WWPN). The name of the initiator is an alias that maps to a client's world-wide port name (WWPN). For convenience, you can optionally add an initiator alias before adding the VTL Access Group that ties together the VTL devices and the client.

Until you add an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.

Add an Initiator (= Add WWPN = Set Initiator Alias)


To give a client an initiator name on a Data Domain system: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click the Set Initiator Alias button at top right...add a WWPN and an alias for it.

This sets the alias for the WWPN. An alias is optional, but much easier to use than a full WWPN. If an alias is already defined for the given WWPN, it is overwritten. Creating an alias has no effect on any groups to which the WWPN may already be assigned. The alias of an initiator can be changed later. An initiator name may be up to 32 characters long, may contain only characters from the set 0-9, a-z, A-Z, underscore (_), and hyphen (-), and must be unique among the set of aliases. A total of 128 aliases are allowed.

WWPN - the world-wide port name of the Fibre Channel port on the client system. The WWPN must use colons ( : ).
Alias - an alias that you create for Access Grouping. The name can have up to 32 characters. Data Domain suggests using a simple, meaningful name.

Change an Existing Initiator Alias


To change an existing initiator alias: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator...click the Set Initiator Alias button at top right...enter a new Alias...click OK. Alias. (This field is required.)

Delete an Initiator (Reset Initiator Alias)


To delete a client initiator alias from the Data Domain system: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator...click the Reset Initiator Alias button at top right...click OK to clear the alias, thereby deleting the initiator. Alias. (This field is required.)

This removes the alias; the initiator can then be referred to only by its WWPN. That is, this resets (deletes) the alias initiator-name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. Note All Access Groups for the initiator must be deleted before deleting the initiator.
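The equivalent CLI calls are sketched below; moe and the WWPN are illustrative values, and the exact subcommand names are assumptions to be checked against the VTL CLI chapter.

  # Assumed syntax - create, then remove, an initiator alias.
  vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04
  vtl initiator reset alias moe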

Display Initiators
To list one or all named initiators and their WWPNs, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources. Information is shown on Initiators and on Ports.

Initiators:
Initiator Name - the alias that you create for Access Grouping.
WWPN - the world-wide port name of the Fibre Channel port on the client system.
Online Ports - each port is shown as Online or Offline.

Ports:
Port - the physical port number.
Port ID.
Enabled - the port operational state, that is, whether Enabled or Disabled.
Status - whether Online or Offline, that is, whether or not the port is up and capable of handling traffic.

Add an Initiator to an Access Group


To add an initiator to an Access Group, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click on an initiator to select it...click the Set Group button...choose a group by clicking the corresponding radio button...click OK. Group. (This field is required.)

Remove an Initiator from an Access Group


To remove an initiator from an Access Group, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click on an initiator to select it...click the Set Group button...click the radio button for None...click OK. None. (This field is required.)

HBA Ports
VTL HBA Ports allow the user to enable or disable all the Fibre Channel ports in a port list, or to show various VTL information in a per-port format.

Enable HBA ports


Enable Fibre Channel ports: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...Enable Ports button. Check the boxes for the ports you want to enable. Click OK. Click OK again. Ports to Enable. (This field is required.)

You may see no ports that can be enabled, which may mean that all your ports are already enabled. To check the list of ports that are Enabled, click Disable Ports; you can then Cancel out of Disable Ports.

Disable HBA ports


Disable Fibre Channel ports: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...Disable Ports button. Check the boxes for the ports you want to disable. Click OK. Click OK again. Ports to Disable. (This field is required.)

You may see no ports that can be disabled, which may mean that all your ports are already disabled. To check the list of ports that are Disabled, click Enable Ports; you can then Cancel out of Enable Ports.
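From the command line, ports can be enabled or disabled with the vtl port commands; the sketch below assumes the port naming used elsewhere in this guide (for example 5a, 5b), so confirm the exact syntax against the CLI chapter.

  # Assumed syntax - take port 5b out of service, then restore it.
  vtl port disable 5b
  vtl port enable 5b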

Show VTL information on all ports


To show VTL information on all ports, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources. A list of ports, and information about them, appears below the list of initiators. The following is shown:
Port - the physical port number.
Port ID.
Enabled - the port operational state, that is, whether Enabled or Disabled.
Status - whether Online or Offline, that is, whether or not the port is up and capable of handling traffic.

Show more detailed information on all ports


To show slightly more detailed information on all ports: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports. This shows information on Port Hardware and on Ports.

Under Port Hardware, the following is shown:
Port - the physical port number.
Model - the model number of the port.
Firmware - the firmware version of the port.
WWNN - the World Wide Node Name of the port.
WWPN - the World Wide Port Name of the port.

Under Ports, the following is shown:
Port - the physical port number.
Connection Type.
Link Speed.
Port ID.
Enabled - the port operational state.
Status - shows whether the port is up and capable of handling traffic.

Note Gbps = Gigabits per second.

Show very detailed information on a single port


To show very detailed information on a single port: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...click on a single port. This shows information about that single port in four groups: Port Hardware, Port, Port Statistics, and Port Detailed Statistics.

Under Port Hardware, the following is shown:
Port - the physical port number.
Model - the model number of the port.
Firmware - the firmware version of the port.
WWNN - the World Wide Node Name of the port.
WWPN - the World Wide Port Name of the port.

Under Port, the following is shown:
Port - the physical port number.
Connection Type.
Link Speed.
State - Enabled or Disabled - the port operational state.
Status - Online or Offline - shows whether the port is up and capable of handling traffic.

Under Port Statistics, the following is shown:
Port number.
# of Control Commands - non-read/write commands.
# of Read Commands - number of READ commands.
# of Write Commands - number of WRITE commands.
In (MiB) - number of mebibytes written.
Out (MiB) - number of mebibytes read.

Under Port Detailed Statistics, the following is shown:
# of Error PrimSeqProtocol - count of errors in the Primitive Sequence Protocol.
# of Link Fail - count of link failures.
# of Invalid CRC - number of frames received with a bad CRC.
# of Invalid TxWord - number of invalid transmit words.
# of LIP - number of LIPs (loop initialization primitives).
# of Loss Signal - number of times loss of signal was detected.
# of Loss Sync - number of times loss of sync was detected.

Note MiB = mebibyte, the base-2 equivalent of MB (megabyte).
Note KiB = kibibyte, the base-2 equivalent of KB (kilobyte).

Pools
The Data Domain pools feature for VTL allows replication by pools of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see the chapter on replication and its section Replicating VTL Tape Cartridges and Pools on page 252.

A pool name can be a maximum of 32 characters. A pool with one of the restricted names all, vault, or summary cannot be created or deleted.
A pool can be replicated no matter where individual tapes are located: tapes can be in the vault, a library, or a drive.
You cannot move a tape from one pool to another.
Two tapes in different pools on one Data Domain system can have the same name. A pool sent to a replication destination must have a pool name that is unique on the destination.
Data Domain system pools are not accessible by backup software.
No VTL configuration or license is needed on a replication destination when replicating pools.
Data Domain recommends creating tapes only with unique barcodes. Duplicate barcodes in the same tape pool create an error; although no error occurs for duplicate barcodes in different pools, they may cause unpredictable behavior in backup applications and can lead to operator confusion.


Add a Pool
To create a pool, navigate as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...Create Pool...enter a Pool Name...click OK. The pool name cannot be all, vault, or summary, and can have a maximum of 32 characters. (This field is required.)

You can also create a pool under Pools, as follows: VTL stack menu...Pools...Pools...Create Pool...enter a Pool Name...click OK.
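A command-line sketch for the same task follows; the pool name backup_pool and the tape arguments are illustrative, and the exact option names should be checked against the VTL CLI chapter.

  # Assumed syntax - create a pool, then create 10 tapes in it.
  vtl pool add backup_pool
  vtl tape add A00000L1 capacity 100 count 10 pool backup_pool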

Delete a Pool
To delete a pool, do the following. The pool must be empty before deletion. To empty the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...click on the pool you want to empty...click Delete Tapes. Click Select: All or Select all items found. Click OK. Click OK again. Then, to delete the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...click on the pool you want to delete...click Delete Pool. Click OK. Click OK again. Select a Pool. (This field is required.)

Display Pools
To display pools: VTL stack menu...Pools. Or, as an alternative: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault.
The Location column gives the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool.
The # of Tapes column gives the number of tapes in each pool.
The Total Size column gives the total configured data capacity of the tapes in that pool in GiB (gibibytes, the base-2 equivalent of GB, gigabytes).
The Total Space Used column displays the amount of space used on the virtual tapes in that pool.
The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.

Display Summary Information about a Single Pool


To display a single pool: VTL stack menu...Pools...Pools...select a pool by clicking on it.
The Total Tape Count gives the number of tapes in that pool.
The Total Size of Tapes gives the total configured data capacity of the tapes in that pool in GiB (gibibytes, the base-2 equivalent of GB, gigabytes).
The Total Tape Space Used displays the amount of space used on the virtual tapes in that pool.
The Average Compression displays the average amount of compression achieved on the data on the tapes in that pool.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.

Display All Tapes in a Pool


The VTL GUI user can display all tapes in a pool using the Search Tapes dialog box: Virtual Tape Libraries...VTL Service...Libraries...click the Search Tapes button. The Search Tapes dialog box appears. Use the Pool pulldown menu to choose the pool, then click the Search button; the search returns all tapes in that pool. The fields are:
Location. The pulldown allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode. For searching. (This field is optional.)
Count. The number of tapes returned by the search. (This field is optional.)
Tapes Per Page. The number of results on the search page. (This field is optional.)
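The CLI counterpart is sketched below; the pool name backup_pool is illustrative, and the exact form of vtl tape show should be confirmed in the VTL CLI chapter.

  # Assumed syntax - list every tape that belongs to one pool.
  vtl tape show all pool backup_pool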


Replication - GUI

28

For general information on Replication or Replication CLI commands, see the chapter Replication - CLI. The figure below shows the Replication GUI Main Page.

Figure 40 Replication GUI Main Page


Key to figure: Replication GUI Main Page. The callouts (1-15) identify: the Performance Panel, Overview Bar, Open/Close control, Refresh control, Toggle Configuration Panel control, Bar Title, Overview Box, Sort Pairs control, Replication Pairs Bars, an opened Status Panel, the Help Button, the Collection Replication icon, the Directory Replication icon, and the color-coded statuses.

From the main DDR GUI page, click the Replication link at lower left in the sidebar to bring up the Replication GUI. The Replication GUI main page is shown in Figure 40.

Note Context-sensitive online help can be reached by clicking the question-mark (?) icons that appear in various places, for instance on the Status and Configuration boxes. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide.

In unexpanded form, the boxes appear as bars. To expand them into boxes, click the plus sign at the left end of the bar. To go from expanded back to unexpanded, click the minus sign at the left end of the bar.

The Overview box has four sections: Title Bar, Topology Panel (a graphic with an arrow for each replication pair), Performance Panel, and Configuration Panel.

The Title Bar appears at the top of the box. The left end of the Title Bar is a Control Bar with three buttons. The leftmost button (+ or -) is an Expand/Unexpand button: clicking plus (+) expands the bar into a box, and clicking minus (-) returns the box to its unexpanded form, a bar. The middle button (two arrows circling each other) is a Refresh button; note that while refreshing is in progress, a spinning daisy-shaped wheel appears on the Topology Panel near the arrow of the replication pair that has a refresh in progress. The third button on the Control Bar (the icon looks like a gear) is the Configuration button. Clicking it toggles the Configuration Panel between open and closed.

The right end of the Title Bar is a Status Bar, indicating how many replication pairs are in the normal, warning, or error state. Note the colors: green for normal, yellow for warning, red for error, and light gray for a zero value.

The Topology Panel at left is a graphic showing the topology, or configuration, of the overall network related to the selected Data Domain system. It shows the various nodes involved in replication, with arrows between them. A link (or arrow) represents one or more replication pairs: it can be one actual pair, or one folder that contains multiple directory replication pairs. Depending on its status, it is displayed as normal (green), warning (yellow), or error (red). Users can access the pair either by double-clicking the arrow, or by right-clicking it and selecting from the drop-down menu.

The Performance Panel displays three historical charts: pre-compressed written, post-compressed replicated, and post-compressed remaining. Unlike the performance graphs of a replication pair, they present statistics for the selected Data Domain system, that is, aggregated statistics including all replication pairs related to this Data Domain system. The duration (x-axis) is 8 days by default. The y-axis is in gibibytes or mebibytes (the binary equivalents of gigabytes and megabytes).

The Configuration Panel: less frequently used information, such as configuration, can be accessed by clicking the Configuration button (the icon looks like a gear) on the Title Bar. The Configuration Panel contains throttle settings, bandwidth, and network delay. The Throttle, Bandwidth, and Configuration settings are applicable only to the replication pairs whose source is the selected Data Domain system. The Configuration button appears only for actual collection or directory replication pairs.

The Replication Pairs displayed in the Topology Panel are all represented below it as bars. The Replication Pair boxes have almost the same sections as the Overview box (Title Bar, Performance Panel, and Configuration Panel), except that the effect of the Expand (+) button differs: a Replication Bar expands to show either sub-bars or a Status Panel.

Effect of the Expand (+) button:
A parent bar (with children under it) expands to show its child bars.
A leaf bar (with no children under it) expands to show the Status Panel.

Note The icon for collection replication looks like a light gray cylindrical stack of disks.
Note The icon for directory replication looks like a yellow folder.

The Configuration, Status, and General Configuration screens are explained more fully below, in the sections Configuration on page 446, Status on page 447, and General Configuration on page 449.

Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes


The replication GUI consists of two main sections: the Overview Bar/Box and the Replication Pair Bars/Boxes. It is important to understand the difference between the two. In Figure 41, the upper expanded box is an Overview Box and the lower expanded box is a Replication Pair Box.

The Overview Bar or Box shows aggregated information about the selected Data Domain system, that is, summary information about all of that system's inbound replication pairs taken as a whole, and all of that system's outbound replication pairs taken as a whole. The focus is the Data Domain system itself and the inputs to and outputs from it.

The Replication Pair Bar or Box, by contrast, shows the behavior of that replication pair, as opposed to the behavior of the individual Data Domain system. Notice that there is a difference between the Overview Performance Panel and the Replication Pair Performance Panel: the Overview Performance Panel has ReplIn, ReplOut, and DataIn, whereas the Replication Pair Performance Panel has DataIn, Replicated, and Remaining.

Figure 41 Overview Versus Replication Pair


To understand the values referred to in the Performance Panels in the figure Overview Versus Replication Pair on page 444, compare it with the figure Data Domain system Versus Replication Pair on page 445. The Overview Performance Panel in the screenshot describes the system dlh6, and refers to the cross-hatched items on the diagram: dlh6, DataIn, ReplIn, and ReplOut. The Replication Pair Panel in the screenshot describes the replication pair ccm31-dlh6, and refers to the solid dark gray items on the diagram: the pair ccm31-dlh6, DataIn, Replicated, and Remaining.

Figure 42 Data Domain system Versus Replication Pair

Pre-Compression and Post-Compression Data


Some replication data is post-compression, and some is pre-compression, as shown in Table 8 and Table 9.

Table 8 Replication Pair Pre- and Post-Compression Data

Replication Pair ccm31-dlh6   Collection Replication   Directory Replication
Data In                       Pre                      Pre
Replicated                    Post                     Post
Remaining                     Post                     Pre

Table 9 Data Domain system Pre- and Post-Compression Data

Data Domain system dlh6   Pre- or Post-Compression
Data In                   Pre
ReplIn                    Post
ReplOut                   Post


Configuration
This screen monitors and shows the configuration of the system (rather than controlling it). It is reached by clicking the Configuration button (symbol: a gear) on the Overview bar.

Throttle Settings
Throttle Settings throttle back, or restrict, the bandwidth at which data goes over the network, to prevent replication from using up all of the system's resources. The default network bandwidth used by replication is unlimited.

Temporary Override: if an override has been set, it shows here.

Permanent Schedule: the rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second; the default is bits per second. In the rate variable:
bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second

Note Kib = kibibits, the base-2 equivalent of Kb (kilobits). KiB = kibibytes, the base-2 equivalent of KB (kilobytes).

The rate can also be 0 (the zero character) or disabled; in each case, replication is stopped until the next rate change. As an example, replication could be limited to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m. Replication runs at the given rate until the next scheduled change, or until new throttle commands force a change. The default, with no scheduled changes, is to run as fast as possible at all times.

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).

For more information on Throttle Settings, see the Replication - CLI chapter, under Add a Scheduled Throttle Event on page 259.
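A command-line sketch of the example above follows; the exact schedule syntax is an assumption, so verify it under Add a Scheduled Throttle Event in the Replication - CLI chapter.

  # Assumed syntax - limit replication to 20 KiB/s from 6:00 a.m.
  # on Mondays and Thursdays.
  replication throttle add mon 0600 20KiB
  replication throttle add thu 0600 20KiB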

Bandwidth
The value is the actual bandwidth of the underlying network used for replication. It is used to set the internal TCP buffer size for the replication socket. Coupled with "option set delay", the TCP buffer size is calculated and set as bandwidth * delay / 1000 * 1.25. The rate is an integer number of bytes per second. For more information on Bandwidth, see the Replication - CLI chapter, under Procedure: Set Replication Bandwidth and Network Delay on page 263.
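As a worked example of that formula (illustrative numbers only): on a 100 Mb/s link the bandwidth is 12,500,000 bytes per second; with a delay of 50 ms, the buffer size is 12,500,000 * 50 / 1000 * 1.25 = 781,250 bytes, that is, 1.25 times the link's bandwidth-delay product.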

Network Delay
This is the actual network delay value for the system. It is useful when a wide-area network has long delays in the round-trip time between the replication source and destination. The value is an integer in milliseconds. For more information on Network Delay, see the Replication - CLI chapter, under Procedure: Set Replication Bandwidth and Network Delay on page 263.
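Both values are set from the CLI. The sketch below uses the worked numbers above; the option names follow the "option set delay" wording quoted earlier, but treat the exact syntax as an assumption to verify in the procedure on page 263.

  # Assumed syntax - describe the link so the TCP buffer can be sized.
  replication option set bandwidth 12500000   # bytes per second
  replication option set delay 50             # milliseconds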

Listen Port
The default listen port for a destination Data Domain system is 2051. This is the port to which the source sends data. A destination can have only one listen port; if multiple sources use one destination, each source must send to the same port. For more information on the listen port, see the chapter Replication - CLI, under the heading Change a Destination Port on page 258.
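Pointing a replication context at a non-default port is done with the replication modify command; the hostname, directory, and port below are illustrative, and the exact argument names should be verified under Change a Destination Port on page 258.

  # Assumed syntax - send to port 2052 instead of the default 2051.
  replication modify dir://dest.example.com/backup/dir2 connection-host dest.example.com port 2052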

Status
The Status Panel shows only for leaf nodes (which have no sub-pairs underneath them). It is reached by expanding a leaf-node Replication Bar using the Expand (+) button.

Current State
Four states/statuses need to be distinguished from one another: Current State, Status, Local Filesystem Status, and Replication Status.

Current State is the Replication Pair State. Possible Current States are: Initializing, Replicating, Recovering, Resynching, Migrating, Uninitialized, and Disconnected.

Status is as follows: for the first five Current States, the Status is Normal (or Warning in the case of unusual delay). For Uninitialized, the Status is Warning. For Disconnected, the Status is Error.

The table below Current State shows Local Filesystem Status and Replication Status. Local Filesystem Status is the filesystem status for the Source and Destination Data Domain systems; it can take the values Enabled, N/A, or Disabled. Replication Status is the status of that replication context, for the Source and Destination Data Domain systems; it can also take the values Enabled, N/A, or Disabled.

Synchronized as of
Sync-as-of Time: the source automatically runs a replication sync operation every hour, and displays the time local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the timestamp in the Time column. A value of unknown appears during replication initialization. For more information on Synchronized as of, see the chapter Replication - CLI, under the heading Display Replication History on page 266.

Backup Replication Tracker


Using the Backup Replication Tracker, users can track the status of their backup replication. When the user enters a Backup Completion Time and clicks the Track button, the Replication Manager gives the replication completion time for that particular backup. If the replication is not finished, the estimated completion time is given instead. This is useful for finding out the status of each individual backup replication.

The default value is 06 am, today. This is the most common backup completion time, but users can change the value. There are three dropdown boxes for changing the time:
Day dropdown box: Today, Yesterday, 2 days ago, ..., 7 days ago.
Hour dropdown box: 01, ..., 12.
am/pm dropdown box: am, pm.

The modified value is saved after the Track button is clicked. This backup completion time is automatically used for replication status the next time a user logs in or the Refresh button is clicked.

Note UI behavior: when an invalid time is specified in Backup Completion Time, the value of Replication Completion Time is "Not available" (for example, if Today 06 am is specified as the backup time when the current time is 3 am).


General Configuration
Less frequently used information, such as configuration, can be found for any Replication Bar that is a leaf node (has no child bars) by clicking the Configuration button (gear symbol) on the Control Bar and expanding the box. This Configuration - General panel displays the source Data Domain system and directory (for directory replication), the target Data Domain system and directory (for directory replication), and the connection host and port.


Appendix A: Time Zones


Africa

Africa/Abidjan Africa/Accra Africa/Addis_Ababa Africa/Algiers Africa/Asmera
Africa/Bamako Africa/Bangui Africa/Banjul Africa/Bissau Africa/Blantyre
Africa/Brazzaville Africa/Bujumbura Africa/Cairo Africa/Casablanca Africa/Conakry
Africa/Dakar Africa/Dar_es_Salaam Africa/Djibouti Africa/Douala Africa/Freetown
Africa/Gaborone Africa/Harare Africa/Johannesburg Africa/Kampala Africa/Khartoum
Africa/Kigali Africa/Kinshasa Africa/Lagos Africa/Libreville Africa/Lome
Africa/Luanda Africa/Lumumbashi Africa/Lusaka Africa/Malabo Africa/Maputo
Africa/Maseru Africa/Mbabane Africa/Mogadishu Africa/Monrovia Africa/Nairobi
Africa/Ndjamena Africa/Niamey Africa/Nouakchott Africa/Ouagadougou Africa/Porto-Novo
Africa/Sao_Tome Africa/Timbuktu Africa/Tripoli Africa/Tunis Africa/Windhoek

America

America/Adak America/Anchorage America/Anguilla America/Antigua America/Aruba
America/Asuncion America/Atka America/Barbados America/Belize America/Bogota
America/Boise America/Buenos_Aires America/Caracas America/Catamarca America/Cayenne
America/Cayman America/Chicago America/Cordoba America/Costa_Rica America/Cuiaba
America/Curacao America/Dawson America/Dawson_Creek America/Denver America/Detroit
America/Dominica America/Edmonton America/El_Salvador America/Ensenada America/Fort_Wayne
America/Fortaleza America/Glace_Bay America/Godthab America/Goose_Bay America/Grand_Turk
America/Grenada America/Guadeloupe America/Guatemala America/Guayaquil America/Guyana
America/Halifax America/Havana America/Indiana America/Indianapolis America/Inuvik
America/Iqaluit America/Jamaica America/Jujuy America/Juneau America/Knox_IN
America/La_Paz America/Lima America/Los_Angeles America/Louisville America/Maceio
America/Managua America/Manaus America/Martinique America/Mazatlan America/Mendoza
America/Menominee America/Mexico_City America/Miquelon America/Montevideo America/Montreal
America/Montserrat America/Nassau America/New_York America/Nipigon America/Nome
America/Noronha America/Panama America/Pangnirtung America/Paramaribo America/Phoenix
America/Port-au-Prince America/Port_of_Spain America/Porto_Acre America/Puerto_Rico America/Rainy_River
America/Rankin_Inlet America/Regina America/Rosario America/Santiago America/Santo_Domingo
America/Sao_Paulo America/Scoresbysund America/Shiprock America/St_Johns America/St_Kitts
America/St_Lucia America/St_Thomas America/St_Vincent America/Swift_Current America/Tegucigalpa
America/Thule America/Thunder_Bay America/Tijuana America/Tortola America/Vancouver
America/Virgin America/Whitehorse America/Winnipeg America/Yakutat America/Yellowknife

Antarctica

Antarctica/Casey Antarctica/DumontDUrville Antarctica/Mawson Antarctica/McMurdo Antarctica/Palmer
Antarctica/South_Pole

Asia

Asia/Aden Asia/Alma-Ata Asia/Amman Asia/Anadyr Asia/Aqtau
Asia/Aqtobe Asia/Ashkhabad Asia/Baghdad Asia/Bahrain Asia/Baku
Asia/Bangkok Asia/Beirut Asia/Bishkek Asia/Brunei Asia/Calcutta
Asia/Chungking Asia/Colombo Asia/Dacca Asia/Damascus Asia/Dubai
Asia/Dushanbe Asia/Gaza Asia/Harbin Asia/Hong_Kong Asia/Irkutsk
Asia/Ishigaki Asia/Istanbul Asia/Jakarta Asia/Jayapura Asia/Jerusalem
Asia/Kabul Asia/Kamchatka Asia/Karachi Asia/Kashgar Asia/Katmandu
Asia/Krasnoyarsk Asia/Kuala_Lumpur Asia/Kuching Asia/Kuwait Asia/Macao
Asia/Magadan Asia/Manila Asia/Muscat Asia/Nicosia Asia/Novosibirsk
Asia/Omsk Asia/Phnom_Penh Asia/Pyongyang Asia/Qatar Asia/Rangoon
Asia/Riyadh Asia/Saigon Asia/Seoul Asia/Shanghai Asia/Singapore
Asia/Taipei Asia/Tashkent Asia/Tbilisi Asia/Tehran Asia/Tel_Aviv
Asia/Thimbu Asia/Tokyo Asia/Ujung_Pandang Asia/Ulan_Bator Asia/Urumqi
Asia/Vientiane Asia/Vladivostok Asia/Yakutsk Asia/Yekaterinburg Asia/Yerevan

Atlantic

Atlantic/Azores Atlantic/Bermuda Atlantic/Canary Atlantic/Cape_Verde Atlantic/Faeroe
Atlantic/Jan_Mayen Atlantic/Madeira Atlantic/Reykjavik Atlantic/South_Georgia Atlantic/St_Helena
Atlantic/Stanley

Australia

Australia/ACT Australia/Adelaide Australia/Brisbane Australia/Broken_Hill Australia/Canberra
Australia/Darwin Australia/Hobart Australia/LHI Australia/Lindeman Australia/Lord Howe
Australia/Melbourne Australia/NSW Australia/North Australia/Perth Australia/Queensland
Australia/South Australia/Sydney Australia/Tasmania Australia/Victoria Australia/West
Australia/Yancowinna

Brazil

Brazil/Acre Brazil/DeNoronha Brazil/East Brazil/West

Canada

Canada/Atlantic Canada/Central Canada/East-Saskatchewan Canada/Eastern Canada/Mountain
Canada/Newfoundland Canada/Pacific Canada/Saskatchewan Canada/Yukon

Chile

Chile/Continental Chile/EasterIsland

Etc

Etc/GMT Etc/GMT0 Etc/GMT+0 Etc/GMT+1 Etc/GMT+2
Etc/GMT+3 Etc/GMT+4 Etc/GMT+5 Etc/GMT+6 Etc/GMT+7
Etc/GMT+8 Etc/GMT+9 Etc/GMT+10 Etc/GMT+11 Etc/GMT+12
Etc/GMT-0 Etc/GMT-1 Etc/GMT-2 Etc/GMT-3 Etc/GMT-4
Etc/GMT-5 Etc/GMT-6 Etc/GMT-7 Etc/GMT-8 Etc/GMT-9
Etc/GMT-10 Etc/GMT-11 Etc/GMT-12 Etc/GMT-13 Etc/GMT-14
Etc/Greenwich Etc/UCT Etc/UTC Etc/Universal Etc/Zulu

Europe

Europe/Amsterdam Europe/Andorra Europe/Athens Europe/Belfast Europe/Belgrade
Europe/Berlin Europe/Bratislava Europe/Brussels Europe/Bucharest Europe/Budapest
Europe/Chisinau Europe/Copenhagen Europe/Dublin Europe/Gibraltar Europe/Helsinki
Europe/Istanbul Europe/Kiev Europe/Kuybyshev Europe/Lisbon Europe/Ljubljana
Europe/London Europe/Luxembourg Europe/Madrid Europe/Malta Europe/Minsk
Europe/Monaco Europe/Moscow Europe/Oslo Europe/Paris Europe/Prague
Europe/Riga Europe/Rome Europe/San_Marino Europe/Sarajevo Europe/Simferopol
Europe/Skopje Europe/Sofia Europe/Stockholm Europe/Tallinn Europe/Tirane
Europe/Vaduz Europe/Vatican Europe/Vienna Europe/Vilnius Europe/Warsaw
Europe/Zagreb Europe/Zurich

GMT

GMT GMT+1 GMT+2 GMT+3 GMT+4 GMT+5
GMT+6 GMT+7 GMT+8 GMT+9 GMT+10 GMT+11
GMT+12 GMT+13 GMT-1 GMT-2 GMT-3 GMT-4
GMT-5 GMT-6 GMT-7 GMT-8 GMT-9 GMT-10
GMT-11 GMT-12

Indian (Indian Ocean)

Indian/Antananarivo Indian/Chagos Indian/Christmas Indian/Cocos Indian/Comoro
Indian/Kerguelen Indian/Mahe Indian/Maldives Indian/Mauritius Indian/Mayotte
Indian/Reunion

Mexico

Mexico/BajaNorte Mexico/BajaSur Mexico/General

Miscellaneous

Arctic/Longyearbyen CET CST6CDT Cuba EET
EST EST5EDT Egypt Eire Factory
GB GB-Eire Greenwich HST Hongkong
Iceland Iran Israel Jamaica Japan
Kwajalein Libya MET MST MST7MDT
NZ NZ-CHAT Navajo PRC PST8PDT
Poland Portugal ROC ROK Singapore
Turkey UCT UTC Universal W-SU
WET Zulu

Pacific

Pacific/Apia Pacific/Auckland Pacific/Chatham Pacific/Easter Pacific/Efate
Pacific/Enderbury Pacific/Fakaofo Pacific/Fiji Pacific/Funafuti Pacific/Galapagos
Pacific/Gambier Pacific/Guadalcanal Pacific/Guam Pacific/Honolulu Pacific/Johnston
Pacific/Kiritimati Pacific/Kosrae Pacific/Kwajalein Pacific/Majuro Pacific/Marquesas
Pacific/Midway Pacific/Nauru Pacific/Niue Pacific/Norfolk Pacific/Noumea
Pacific/Pago_Pago Pacific/Palau Pacific/Pitcairn Pacific/Ponape Pacific/Port_Moresby
Pacific/Rarotonga Pacific/Saipan Pacific/Samoa Pacific/Tahiti Pacific/Tarawa
Pacific/Tongatapu Pacific/Truk Pacific/Wake Pacific/Wallis Pacific/Yap

system V

systemV/AST4 systemV/AST4ADT systemV/CST6 systemV/CST6CDT systemV/EST5
systemV/EST5EDT systemV/HST10 systemV/MST7 systemV/MST7MDT systemV/PST8
systemV/PST8PDT systemV/YST9 systemV/YST9YDT

US (United States)

US/Alaska US/Aleutian US/Arizona US/Central US/East-Indiana
US/Eastern US/Hawaii US/Indiana-Starke US/Michigan US/Mountain
US/Pacific US/Pacific-New US/Samoa

Aliases:
GMT = Greenwich, UCT, UTC, Universal, Zulu
CET = MET (Middle European Time)
US/Eastern = Jamaica
US/Mountain = Navajo


Index

Symbols
"permission denied" error message 192

A
add, a new shelf to a volume 45
adminaccess command 109
administrative email, display address 123
administrative host, display host name 123
AIX 26
alerts: add an email address 130; command 130; display current 131; display current and history 133; display the email list 132; display the history 132; remove an address from the email list 130; set the email list to the default 131; test the list 130
alias: add 82; command 81; defaults 82; display 82; remove 82
authentication mode for CIFS 316
autonegotiate, set 100
autosupport: command 134; display all parameters 137; display history file 138; display list 137; display schedule 138; remove an email address 135; run the report 135; send a report 135; send command output 136; set all parameters to default 137; set list to the default 135; set the schedule 136; set the schedule to the default 137; test report 134
B
backup, recommendations for full

C
CIFS: add a client 311; add a user 310; add IP address/hostname mappings 317; allow access 110; allow group administrative access 320; allow trusted domain users 319; anonymous user connections 321; certificate authority security 321; configuration set up 22; disable client connections 312; display access settings 114; display active clients 322; display CIFS groups 325; display CIFS users 324; display clients 323; display configuration 323; display group details 326; display IP address/hostname mappings 324; display statistics 322; display status 325; display user details 325; display valid CIFS options that can be set 322; enable client connections 312; hostname change effects 98; identify a WINS server 318; increase memory for more user accounts 320; remove a client 312; remove all clients 313; remove all IP address/hostname mappings 317; remove an administrative client 313; remove one IP address/hostname mapping 317; remove the NetBIOS hostname 313; remove the WINS server 318; reset CIFS options 321; resolve NetBIOS name 318; restrict administrative access 110; secured LDAP with TLS 311; set a NetBIOS hostname 313; set the authentication mode 316; set the logging level 320; set the maximum transmission size 321; shares, add 313; shares, delete 315; shares, display 325; shares, enable/disable 315; shares, modify 315; SMBD memory 321; user access 309
clean: change schedule 222; display amount parameters 223; display schedule 224; display status 224; display throttle 224; monitor operations 225; set schedule to the default 223; set throttle 223; set throttle to the default 223; start 221; stop 222
command output: remote with SSH 114; send output using autosupport command 136
commands listed 10
compression: algorithms 225; set for none 225
config: command 119; command details 119
configuration: basic additions 27; change settings 119; defaults 9; first time 14
context 250
CPU, display load 72, 73

D
data: compression 6; integrity checks 5; migration 291
Data Domain Enterprise Manager: at system installation 13; introduction 8; system administration with 28; system configuration 14, 120
date: display 78; set 66, 78
DDR Manager: monitor multiple systems 405; opening and use 399
default gateway: change 106; display 107; reset 106
DHCP: disable 96; enable 96
disk: add disks and LUNs 36, 175; add enclosure command 36; command 173; command format 35; display performance statistics 183; display RAID status 180; display type and capacity 178; estimate use of space 188; failures and spares 32; flash the running light 175; manage use of space 189; reclaim space 190; reliability statistics 185; rescan 36, 175; set statistics to zero 176; set to failed 174; show status 37, 176; spare when adding an expansion shelf 45; unfail a disk 175
DNS: add server 97; display servers 103
domain name, display 98
duplex, set line use 99

E
enclosure: beacon 38; display hardware status 41; fans, display status 39; port connections, display 40, 193, 194, 195; power supply status 41; temperature, display 39
enclosures, list 37
Enterprise Manager 405
Ethernet, display interface settings 101
expansion shelf: add 32; disk add enclosure command 175; look for new 36

F
fans, display status 39, 76
fastcopy 215
file system: compression algorithms 225; delete all data 214; disable 214; display compression 217; display status 217; display uptime 217; display utilization 215; enable 213; full 192; maximum number of files 191; restart 214
filesys command 213
FTP: add a host 109; disable 111; display user list 113; enable 111; remove a host 110; set user list to empty 111

G
gateway section 1, 61, 127, 171, 201, 299, 397
gateway system: add a LUN 57, 174; command differences 51; installation 54; points of interest 51
GB defined 10
GUI, see DDR Manager

H
halt, see poweroff
hard address, private loop 367, 421
hardware, display status 41
host name: add 99; delete 100; display 100
hourly status message 138
HTTPS, generate a new certificate 113

I
I/O, display load 72, 73
inode reporting 191
installation: DD460g 54; default directories under /ddvar 9; login and configuration 14
interface: autonegotiate 100; change IP address 98; change transfer unit size 97; disable 95; display Ethernet configuration 101; display settings 101; enable 95; overview 7; set line speed 99
IP address, change for an interface 98

K
KB defined 10

L
license: add 124; configuration setup 18; display 125; remove 126; remove feature licenses 126
location: display 124; set 122
log: archive the log 170; command 165; create file bundles 139; list file names 168; remote logging 165; scroll new entries 165; set the CIFS logging level 320; support upload command 139; view all current entries 167
login, first time 14
LUN groups 377, 425
LUN masking: add a client 378, 385; add a LUN mask 388; procedure 384, 430; vtl initiator command 378, 385

M
mail: change server 122; display server 103; display server name 124
maximum transfer unit size, change 97
MB defined 10
migration: set up 291; with replication 296
monitor multiple systems 405
MTU, change size 97

N
name: change 98; display 103
ndmp: add a filer 393; backup operation 394; display known filers 396; display process status 396; remove a filer 393; remove passwords 395; restore operation 394; stop a process 395; stop all processes 395; test for a filer 396
net: failover display 90; failover, add physical interfaces 90; failover, delete virtual interface 91; failover, remove physical interface 90
net command 95
net, display Ethernet hardware settings 102
netmask, change 96
network: configuration set up 19; display statistics 104
network parameters, reset 99
NFS: add client, read/write 303; clear statistics 305; command 301; configuration set up 24; detailed statistics 307; disable client 304; display active clients 305; display allowed clients 305; display statistics 306; display status 307; enable client 304; remove client 304; set client list to default 305
ntp: add a time server 84; delete a time server 84; disable service 83; display settings 85; display status 84; enable service 83; reset to defaults 84; synchronize a Windows domain controller 326
NTP, display server 103
NVRAM, display status 79

P
password, change 116
path name length 191
ping a host 97
pools: add 388; and replication 253; delete 388; display 388, 438, 439; using 387, 437
port connections, display 40, 193, 194, 195
ports, display 70
power supply, display status 41, 77
poweroff 63
private loop, hard address 367, 421
privilege level, change 116

R
RAID: and a failed disk 175; create a new group 45; display detailed information 181; display status 37, 176, 180; groups 32; type in a restorer 6; with gateway restorers 50
reboot hardware 64
remote command output 114
replication: abort a recovery 256; abort a resync 257; change a destination port 259; change a source port 258; change originator name 257; configure 251; context 250; convert to directory from collection 277; directory size 192; display configuration 264; display status 268; display when complete 268; introduced 249; move data to originator 256; pools 253; remove configuration 255; replace collection source 276; replace directory source 275; reset authorization 255; reset bandwidth 262; reset delay 262; resume 254; resynchronize source and destination 256; seeding 278; seeding, bidirectional 282; seeding, many-to-one 287; seeding, one-to-one 279; set bandwidth 263; setup and start bidirectional 274; setup and start collection 274; setup and start directory 273; setup and start many-to-one 275; start 253; statistics 270; suspend 254; throttle override 261; throttle rate 260; throttle reset 262; throttle, add an event 259; throttle, delete an event 260; throttle, display settings 267; use a network name 258
route: add a rule 105; change default gateway 106; command 105; display a route 106; display default gateway 107; display Kernel IP routing table 107; display static routes 106; remove a rule 105; reset default gateway 106

S
serial number, display 71
shutdown, see poweroff
snapshot command 231
SNMP: add community strings 144; add trap hosts 143; delete a community string 144; delete a trap host 143; delete all community strings 144; delete all trap hosts 143; disable 142; display all 145; display community strings 146; display status 145; display the system contact 146; display the system location 146; display trap hosts 145; enable 142; reset all SNMP values 144; reset location 142; reset system location 143; system contact 142; system location 142
software: display version 81; site requirements 14
space management 187
space.log, format 168
SSH: add a public key 112; display the key file 113; display user list 113; remove a key file entry 112; remove the key file 112; set user list to empty 111
statistics: clear NFS 305; disk performance 183; disk reliability 185; display for the network 104; display NFS 306; graphic display 75; NFS detailed 307; set disk to zero 176
status, hourly message 138
support: log file bundles 139; upload command 139
system: change name 98; command 63; display status 76; display uptime 71; display version 81; location 122; location display 124; ports 70; serial number 71

T
TB defined 10
TELNET: add a host 109; disable 111; display user list 113; enable 111; remove a host 110; set user list to empty 111
temperature, display 39, 77
time: display 78; display zone 124; set 66, 78; set zone 123
Tivoli Storage Manager 26
traceroute 106

U
upgrade software 64
uptime, display 71
users: add 115; change a password 116; change a privilege level 116; display all 117, 118; regular 115; remove 115; set list to default 116; sysadmin 115

V
verify process: explanation 6; see when the process is running 73
Virtual Tape Library, see VTL
volume expansion 45
VTL: auto-eject feature 368, 422; broadcast changes 360; create a new drive 361, 413; create a VTL 359, 384, 411, 430; create tapes 363, 415; delete a VTL 360, 411; disable 360, 411; display a tape summary 362, 372, 414, 424; display all tapes 370, 423; display configurations 369; display statistics 373; display status 369, 422; display tapes in the vault 372, 424; enable 359, 410; export tapes 366; features and limitations 357; import tape 364, 417; LUN groups 377, 425; private loop hard address 367, 421; remove a drive 361, 413; remove tapes 366, 419; retrieve a tape from a destination 376; tape information by VTL 371, 372, 423

W
WINS server for CIFS 318
WINS server for CIFS, remove 318