Disclaimer

The information contained in this publication is subject to change without notice. Data Domain, Incorporated makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Data Domain, Incorporated shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Notices

NOTE: Data Domain hardware has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. This Class A digital apparatus complies with Canadian ICES-003. Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense. Changes or modifications not expressly approved by Data Domain can void the user's authority to operate the equipment.

Data Domain Patents

Data Domain products are covered by one or more of the following patents issued to Data Domain: U.S. Patents 6928526, 7007141, 7065619, 7143251, and 7305532. Data Domain has other patents pending.

Copyright

Copyright 2005 - 2008 Data Domain, Incorporated. All rights reserved.
Data Domain, the Data Domain logo, Data Domain Operating System, Data Domain OS, Global Compression, Data Invulnerability Architecture, and all other Data Domain product names and slogans are trademarks or registered trademarks of Data Domain, Incorporated in the USA and/or other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Portions of this product are software covered by the GNU General Public License Copyright 1989, 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Library General Public License Copyright 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Lesser General Public License Copyright 1991, 1999 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Free Documentation License Copyright 2000, 2001, 2002 by Free Software Foundation, Inc. Portions of this product are software Copyright 1999 - 2003 by The OpenLDAP Foundation. Portions of this product are software developed by the OpenSSL Project for use in the OpenSSL
Toolkit (http://www.openssl.org/), Copyright 1998-2005 The OpenSSL Project, all rights reserved. Portions Copyright 1999-2003 Apple Computer, Inc. All rights reserved. Portions of this product are Copyright 1995 - 1998 Eric Young (eay@cryptsoft.com). All rights reserved. Portions of this product are Copyright Ian F. Darwin 1986-1995. All rights reserved. Portions of this product are Copyright Mark Lord 1994-2004. All rights reserved. Portions of this product are Copyright 1989-1997 Larry Wall. All rights reserved. Portions of this product are Copyright Mike Glover 1995, 1996, 1997, 1998, 1999. All rights reserved. Portions of this product are Copyright 1992 by Panagiotis Tsirigotis. All rights reserved. Portions of this product are Copyright 2000-2002 Japan Network Information Center. All rights reserved. Portions of this product are Copyright 1988-2003 by Bram Moolenaar. All rights reserved. Portions of this product are Copyright 1994-2006 Lua.org, PUC-Rio. Portions of this product are Copyright 1990-2005 Info-ZIP. All rights reserved. Portions of this product are under the Boost Software License - Version 1.0 - August 17th, 2003. All rights reserved. Portions of this product are Copyright 1994 Purdue Research Foundation. All rights reserved. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). Portions of this product are Berkeley Software Distribution software, Copyright 1988 - 2004 by the Regents of the University of California, University of California, Berkeley. Portions of this product are software Copyright 1990 - 1999 by Sleepycat Software. Portions of this product are software Copyright 1985-2004 by the Massachusetts Institute of Technology. All rights reserved. Portions of this product are Copyright 1999, 2000, 2001, 2002 The Board of Trustees of the University of Illinois. All rights reserved. Portions of this product are LILO program code, Copyright 1992-1998 Werner Almesberger. All rights reserved.
Portions of this product are software Copyright 1999 - 2004 The Apache Software Foundation, licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0). Portions of this product are derived from software Copyright 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Cold Spring Harbor Laboratory. Funded under Grant P41-RR02188 by the National Institutes of Health. Portions of this product are derived from software Copyright 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Boutell.Com, Inc. Portions of this product relating to GD2 format are derived from software Copyright 1999, 2000, 2001, 2002 Philip Warner. Portions of this product relating to PNG are derived from software Copyright 1999, 2000, 2001, 2002 Greg Roelofs. Portions of this product relating to gdttf.c are derived from software Copyright 1999, 2000, 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to gdft.c are derived from software Copyright 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to JPEG and to color quantization are derived from software Copyright 2000, 2001, 2002 Doug Becker and Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 Thomas G. Lane. This software is based in part on the work of the Independent JPEG Group. Portions of this product relating to WBMP are derived from software Copyright 2000, 2001, 2002 Maurice Szmurlo and Johan Van den Brande. Portions of this product are Apache Tomcat version 5.5.23 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Apache log4j version 1.2.14 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Google Web Toolkit version 1.3.3 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Java Runtime
Environment version 6u1 Copyright 2008 Sun Microsystems, Inc. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.

Data Domain, Incorporated
2421 Mission College Blvd.
Santa Clara, CA 95054-1214 USA
Phone: 408-980-4800 (direct), 877-207-3282 (toll-free)
Fax: 408-980-8620
www.datadomain.com

Data Domain Software Release 4.5.1
April 28, 2008
Part number: 760-0405-0100 Rev. A
Contents
About This Guide . . . xxiii
    High-Level Table of Contents . . . xxiii
    Descriptions of Chapters . . . xxiv
    Conventions . . . xxvi
    Audience . . . xxvi
    Contacting Data Domain . . . xxvii
SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf . . . 1
Chapter 1: Introduction . . . 3
    Documents . . . 4
    Applications that Send Data to a Data Domain System . . . 4
    Data Domain System Models . . . 4
    Data Streams Sent to a Data Domain system . . . 5
    Data Integrity . . . 5
    Data Compression . . . 6
    Restore Operations . . . 7
    Data Domain Replicator . . . 7
    Data Domain System Hardware Interfaces . . . 7
    Licensing . . . 8
    User Interfaces . . . 8
    Multipath . . . 8
    Related Documentation . . . 8
    Initial system Settings . . . 9
    Command Line Interface . . . 10
Chapter 2: Installation . . . 13
    Backup Software Requirements . . . 14
    CIFS Backup Server Timeout . . . 14
    Login and Configuration . . . 14
    Additional Configuration . . . 27
    Administering a Data Domain System . . . 27
        Command Line Interface . . . 28
        Data Domain Enterprise Manager . . . 28
Chapter 3: ES20 Expansion Shelf . . . 31
    RAID groups . . . 32
    Disk Failures . . . 32
    Add a Shelf . . . 32
    Disk Commands . . . 35
        Look for New Disks, LUNs, and Expansion Shelves . . . 36
        Add an Expansion Shelf . . . 36
        Display Disk Status . . . 37
    Shelf (enclosure) Commands . . . 37
        List Enclosures . . . 37
        Identify an Enclosure . . . 38
        Display Fan Status . . . 39
        Display Component Temperatures . . . 39
        Display Port Connections . . . 40
        Display All Hardware Status . . . 41
        Display Power Supply Status . . . 41
        Display HBA Information . . . 41
        Display Statistics . . . 42
        Display Target Storage Information . . . 42
        Display the Layout of SAS Enclosures . . . 42
        Component Relationship and Commands to show it . . . 45
    Volume Expansion . . . 45
        Procedure: Create RAID group on new shelf that has lost disks . . . 45
    RAID Groups, Failed Disks, and Enclosures . . . 46
Chapter 4: Gateway systems . . . 49
    Gateway Types . . . 51
        DD4xxg and DD5xxg series Gateways . . . 51
        DD6xxg Gateways . . . 51
    Commands not valid for Gateway . . . 51
    Commands for Gateway only . . . 51
    Disk Commands at LUN level . . . 52
    Installation . . . 54
        Installation Procedure on DD4xxg and DD5xxg Gateways . . . 54
        Installation Procedure on DD6xxg Gateways . . . 56
    Procedure: Adding a LUN . . . 57
SECTION 2: Configuration - System Hardware, Users, Network, and Services . . . 61
Chapter 5: System Maintenance . . . 63
    The system Command . . . 63
        Shut down the Data Domain System Hardware . . . 63
        Reboot the Data Domain System . . . 64
        Upgrade the Data Domain System Software . . . 64
            To upgrade using HTTP . . . 65
            To upgrade using FTP . . . 65
        Set the Date and Time . . . 66
        Restore system configuration after a head unit replacement (with DD690/DD690G) . . . 66
            Procedure to Swap Filesystems . . . 67
            Upgrading DD690 and DD690g . . . 69
        Create a Login Banner . . . 70
        Reset the Login Banner . . . 70
        Display the Login Banner Location . . . 70
        Display the Ports . . . 70
        Display the Data Domain System Serial Number . . . 71
        Display system Uptime . . . 71
        Display system Statistics . . . 72
        Display Detailed system Statistics . . . 73
        Display system Statistics Graphically . . . 75
        Display system Status . . . 76
        Display Data Transfer Performance . . . 78
        Display the Date and Time . . . 78
        Display NVRAM Status . . . 79
        Display the Data Domain System Model Number . . . 79
        Display Hardware . . . 80
        Display Memory . . . 80
        Display the Data Domain OS Version . . . 81
        Display All system Information . . . 81
    The alias Command . . . 81
        Add an Alias . . . 82
        Remove an Alias . . . 82
        Reset Aliases . . . 82
        Display Aliases . . . 82
    Time Servers and the NTP Command . . . 83
        Enable NTP Service . . . 83
        Disable NTP Service . . . 83
        Add a Time Server . . . 84
        Delete a Time Server . . . 84
        Reset the List . . . 84
        Reset All NTP Settings . . . 84
        Display NTP Status . . . 84
viii Data Domain Operating System User Guide
        Display NTP Settings . . . 85
Chapter 6: Network Management . . . 87
    Ethernet Failover and Net Aggregation - Considerations . . . 87
        Supported Pairs . . . 89
    Ethernet Failover - Set Up Failover Between Ethernet Interfaces . . . 90
        Set up Failover . . . 90
        Remove a Physical Interface from a Failover Virtual Interface . . . 90
        Display Failover Virtual Interfaces . . . 90
        Delete a Virtual Failover Interface . . . 91
        Sample Failover Workflow . . . 91
    Net Aggregation/Ethernet Trunking . . . 92
        Set up link aggregation between Ethernet interfaces . . . 92
        Remove selected physical interfaces from an aggregate virtual interface . . . 93
        Display basic information on the aggregate setup . . . 93
        Remove all physical interfaces from an aggregate virtual interface . . . 94
        Sample Aggregation Workflow . . . 94
    The net Command . . . 95
        Enable an Interface . . . 95
        Disable an Interface . . . 95
        Enable DHCP . . . 96
        Disable DHCP . . . 96
        Change an Interface Netmask . . . 96
        Change an Interface Transfer Unit Size . . . 97
        Add or Change DNS servers . . . 97
        Ping a Host . . . 97
        Change the Data Domain System Hostname . . . 98
        Change an Interface IP Address . . . 98
        Change the Domain Name . . . 98
        Add a Hostname/IP Address to the /etc/hosts File . . . 99
        Reset Network Parameters . . . 99
        Set Interface Duplex Line Use . . . 99
        Set Interface Line Speed . . . 99
        Set Autonegotiate for an Interface . . . 100
        Delete a Hostname/IP address from the /etc/hosts File . . . 100
        Delete All Hostname/IP addresses from the /etc/hosts File . . . 100
        Display Hostname/IP addresses from the /etc/hosts File . . . 100
        Display an Ethernet Interface Configuration . . . 101
        Display Interface Settings . . . 101
        Display Ethernet Hardware Information . . . 102
        Display the Data Domain System Hostname . . . 103
        Display the Domain Name Used for Email . . . 103
        Display DNS Servers . . . 103
        Display Network Statistics . . . 104
        Display All Networking Information . . . 104
    The route Command . . . 105
        Add a Routing Rule . . . 105
        Remove a Routing Rule . . . 105
        Change the Routing Default Gateway . . . 106
        Reset the Default Routing Gateway . . . 106
        Display a Route . . . 106
        Display the Configured Static Routes . . . 106
        Display the Kernel IP Routing Table . . . 107
        Display the Default Routing Gateway . . . 107
    Multiple Network Interface Usability Improvement . . . 108
Chapter 7: Access Control for Administration . . . 109
    Add a Host . . . 109
    Remove a Host . . . 110
    Allow Access from Windows . . . 110
    Restrict Administrative Access from Windows . . . 110
    Reset Windows Administrative Access to the Default . . . 110
    Enable a Protocol . . . 111
    Disable a Protocol . . . 111
    Reset system Access . . . 111
    Add an Authorized SSH Public Key . . . 112
    Remove an SSH Key File Entry . . . 112
    Remove the SSH Key File . . . 112
    Create a New HTTPS Certificate . . . 113
    Display the SSH Key File . . . 113
    Display Hosts and Status . . . 113
    Display Windows Access Setting . . . 114
    Procedure: Return Command Output to a Remote machine . . . 114
Chapter 8: User Administration . . . 115
    Add a User . . . 115
    Remove a User . . . 115
    Change a Password . . . 116
    Reset to the Default User . . . 116
    Change a Privilege Level . . . 116
    Display Current Users . . . 117
    Display All Users . . . 118
Chapter 9: Configuration Management . . . 119
    The config Command . . . 119
        Change Configuration Settings . . . 119
        Save and Return a Configuration . . . 120
        Reset the Location Description . . . 121
        Reset the Mail Server to a Null Entry . . . 121
        Reset the Time Zone to the Default . . . 121
        Set an Administrative Email Address . . . 121
        Set an Administrative Host Name . . . 122
        Change the system Location Description . . . 122
        Change the Mail Server Hostname . . . . . . . . . . 122
        Set a Time Zone for the system Clock . . . . . . . . . . 123
        Display the Administrative Email Address . . . . . . . . . . 123
        Display the Administrative Host Name . . . . . . . . . . 123
        Display the system Location Description . . . . . . . . . . 124
        Display the Mail Server Hostname . . . . . . . . . . 124
        Display the Time Zone for the system Clock . . . . . . . . . . 124
    The license Command . . . . . . . . . . 124
        Add a License . . . . . . . . . . 124
        Display Licenses . . . . . . . . . . 125
        Remove All Feature Licenses . . . . . . . . . . 126
        Remove a License . . . . . . . . . . 126
SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files . . . . . . . . . . 127
Chapter 10: Alerts and System Reports . . . . . . . . . . 129
    Alerts . . . . . . . . . . 130
        Add to the Email List . . . . . . . . . . 130
        Test the Email List . . . . . . . . . . 130
        Remove from the Email List . . . . . . . . . . 130
        Reset the Email List . . . . . . . . . . 131
        Display Current Alerts . . . . . . . . . . 131
        Display the Alerts History . . . . . . . . . . 132
        Display the Email List . . . . . . . . . . 132
        Display Current Alerts and Recent History . . . . . . . . . . 133
        Display the Email List and Administrator Email . . . . . . . . . . 133
    Autosupport Reports . . . . . . . . . . 134
        Add to the Email List . . . . . . . . . . 134
        Test the Autosupport Report Email List . . . . . . . . . . 134
        Send an Autosupport Report . . . . . . . . . . 135
        Remove from the Email List . . . . . . . . . . 135
xii Data Domain Operating System User Guide
        Reset the Email List . . . . . . . . . . 135
        Run the Autosupport Report . . . . . . . . . . 135
        Email Command Output . . . . . . . . . . 136
        Set the Schedule . . . . . . . . . . 136
        Reset the Schedule . . . . . . . . . . 137
        Reset the Schedule and the List . . . . . . . . . . 137
        Display all Autosupport Parameters . . . . . . . . . . 137
        Display the Autosupport Email List . . . . . . . . . . 137
        Display the Autosupport Report Schedule . . . . . . . . . . 138
        Display the Autosupport History . . . . . . . . . . 138
    Hourly system Status . . . . . . . . . . 138
    Collect and Send Log Files . . . . . . . . . . 139
Chapter 11: SNMP Management and Monitoring . . . . . . . . . . 141
    Enable SNMP . . . . . . . . . . 142
    Disable SNMP . . . . . . . . . . 142
    Set the system Location . . . . . . . . . . 142
    Reset the system Location . . . . . . . . . . 142
    Set a system Contact . . . . . . . . . . 142
    Reset a system Contact . . . . . . . . . . 143
    Add a Trap Host . . . . . . . . . . 143
    Delete a Trap Host . . . . . . . . . . 143
    Delete All Trap Hosts . . . . . . . . . . 143
    Add a Community String . . . . . . . . . . 144
    Delete a Community String . . . . . . . . . . 144
    Delete All Community Strings . . . . . . . . . . 144
    Reset All SNMP Values . . . . . . . . . . 144
    Display SNMP Agent Status . . . . . . . . . . 145
    Display Trap Hosts . . . . . . . . . . 145
    Display All Parameters . . . . . . . . . . 145
    Display the system Contact . . . . . . . . . . 146
    Display the system Location . . . . . . . . . . 146
    Display Community Strings . . . . . . . . . . 146
    Display the MIB and Traps . . . . . . . . . . 147
    More about the MIB . . . . . . . . . . 147
        What is a MIB? . . . . . . . . . . 147
        MIB Browser . . . . . . . . . . 147
        Entire MIB Tree . . . . . . . . . . 147
        Top-Level Organization of the MIB . . . . . . . . . . 150
        Mid-Level Organization of the MIB . . . . . . . . . . 151
        The MIB (Current Alerts Section) in Text Form . . . . . . . . . . 151
        Entries in the MIB . . . . . . . . . . 153
        Important Areas of the MIB . . . . . . . . . . 153
            Alerts (.1.3.6.1.4.1.19746.1.4) . . . . . . . . . . 154
            Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) . . . . . . . . . . 154
            Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) . . . . . . . . . . 161
            Replication (.1.3.6.1.4.1.19746.1.8) . . . . . . . . . . 162
Chapter 12: Log File Management . . . . . . . . . . 165
    Scroll New Log Entries . . . . . . . . . . 165
    Send Log Messages to Another system . . . . . . . . . . 165
        Add a Host . . . . . . . . . . 166
        Remove a Host . . . . . . . . . . 166
        Enable Sending Log Messages . . . . . . . . . . 166
        Disable Sending Log Messages . . . . . . . . . . 166
        Reset to Default . . . . . . . . . . 166
        Display the List and State . . . . . . . . . . 167
    Display a Log File . . . . . . . . . . 167
    List Log Files . . . . . . . . . . 168
    Procedure: Understand a Log Message . . . . . . . . . . 169
    Procedure: Archive Log Files . . . . . . . . . . 170
SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath . . . . . . . . . . 171
Chapter 13: Disk Management . . . . . . . . . . 173
    Expand from 9 disks to 15 disks . . . . . . . . . . 174
    Add a LUN . . . . . . . . . . 174
    Fail a Disk . . . . . . . . . . 174
    Unfail a Disk . . . . . . . . . . 175
    Look for New Disks, LUNs, and Expansion Shelves . . . . . . . . . . 175
    Identify a Physical Disk . . . . . . . . . . 175
    Add an Expansion Shelf . . . . . . . . . . 175
    Reset Disk Performance Statistics . . . . . . . . . . 176
    Display Disk Status . . . . . . . . . . 176
        Output Format . . . . . . . . . . 176
        Output Examples . . . . . . . . . . 177
    Display Disk Type and Capacity Information . . . . . . . . . . 178
    Display RAID Status for Disks . . . . . . . . . . 180
    Display the History of Disk Failures . . . . . . . . . . 181
    Display Detailed RAID Information . . . . . . . . . . 181
    Display Disk Performance Details . . . . . . . . . . 183
    Display Disk Reliability Details . . . . . . . . . . 185
Chapter 14: Disk Space and System Monitoring . . . . . . . . . . 187
    Space Management . . . . . . . . . . 187
    Estimate Use of Disk Space . . . . . . . . . . 188
    Manage File system Use of Disk Space . . . . . . . . . . 189
    Display the Space Graph . . . . . . . . . . 190
    Reclaim Data Storage Disk Space . . . . . . . . . . 190
    Maximum Number of Files and Other Limitations . . . . . . . . . . 191
        Number of Files . . . . . . . . . . 191
        Inode Reporting . . . . . . . . . . 191
        Path Name Length . . . . . . . . . . 191
        Directory Size for Directory Replication . . . . . . . . . . 192
    When a Data Domain System is Full . . . . . . . . . . 192
Chapter 15: Multipath . . . . . . . . . . 193
    Multipath Commands for Gateway only . . . . . . . . . . 193
        Suspend or Resume a Port Connection (Gateway only) . . . . . . . . . . 193
        Enable Auto-Failback (Gateway only) . . . . . . . . . . 194
        Disable Auto-Failback (Gateway only) . . . . . . . . . . 194
        Reset Auto-Failback to its Default of enabled (Gateway only) . . . . . . . . . . 194
        Go back to using the optimal path (Gateway only) . . . . . . . . . . 194
        Allow I/O on a specified initiator port (Gateway only) . . . . . . . . . . 194
        Disallow I/O on a specified initiator port (Gateway only) . . . . . . . . . . 195
    Multipath Commands for all systems . . . . . . . . . . 195
        Display Port Connections . . . . . . . . . . 195
        Enable Monitoring of Multipath Configuration . . . . . . . . . . 196
        Disable Monitoring of Multipath Configuration . . . . . . . . . . 196
        Show Monitoring of Multipath Configuration . . . . . . . . . . 196
        Show Multipath Status . . . . . . . . . . 197
        Show Multipath History . . . . . . . . . . 198
        Show Multipath Statistics . . . . . . . . . . 199
        Clear Multipath Statistics . . . . . . . . . . 200
SECTION 5: File System and Data Protection . . . . . . . . . . 201
Chapter 16: Data Layout Recommendations . . . . . . . . . . 203
    Introduction . . . . . . . . . . 203
    Issue . . . . . . . . . . 203
    Background . . . . . . . . . . 203
        Reporting on compression . . . . . . . . . . 204
        Considerations . . . . . . . . . . 205
            NFS issues . . . . . . . . . . 207
                Filesystem organizations . . . . . . . . . . 207
                Mount options . . . . . . . . . . 207
            CIFS issues . . . . . . . . . . 208
            VTL issues . . . . . . . . . . 208
            OST issues . . . . . . . . . . 209
    Archive implications . . . . . . . . . . 209
    Very large environments . . . . . . . . . . 210
    IMPORTANT NOTE! . . . . . . . . . . 210
    Summary . . . . . . . . . . 210
    Additional Notes on the Filesys Show Compression command . . . . . . . . . . 210
Chapter 17: File System Management . . . . . . . . . . 213
    The filesys command . . . . . . . . . . 213
    Statistics and Basic Operations . . . . . . . . . . 213
        Start the Data Domain System File system Process . . . . . . . . . . 213
        Stop the Data Domain System File system Process . . . . . . . . . . 214
        Stop and Start the Data Domain System File system . . . . . . . . . . 214
        Delete All Data in the File system . . . . . . . . . . 214
        Fastcopy . . . . . . . . . . 215
        Display File system Space Utilization . . . . . . . . . . 215
        Display File system Status . . . . . . . . . . 217
        Display File system Uptime . . . . . . . . . . 217
        Display Compression - For Files . . . . . . . . . . 217
        Display Compression - Summary . . . . . . . . . . 218
        Display Compression - Daily . . . . . . . . . . 219
    Clean Operations . . . . . . . . . . 220
        Start Cleaning . . . . . . . . . . 221
        Stop Cleaning . . . . . . . . . . 222
        Change the Schedule . . . . . . . . . . 222
        Set the Schedule or Throttle to the Default . . . . . . . . . . 223
        Set Network Bandwidth Used . . . . . . . . . . 223
        Update Statistics . . . . . . . . . . 223
        Display All Clean Parameters . . . . . . . . . . 223
        Display the Schedule . . . . . . . . . . 224
        Display the Throttle Setting . . . . . . . . . . 224
        Display the Clean Operation Status . . . . . . . . . . 224
        Monitor the Clean Operation . . . . . . . . . . 225
    Compression Options . . . . . . . . . . 225
        Local Compression . . . . . . . . . . 225
            Set Local Compression . . . . . . . . . . 225
            Reset Local Compression . . . . . . . . . . 226
            Display the Algorithm . . . . . . . . . . 226
        Global Compression . . . . . . . . . . 226
            Set Global Compression . . . . . . . . . . 226
            Reset Global Compression . . . . . . . . . . 227
            Display the Type . . . . . . . . . . 227
    Replicator Destination Read/Write Option . . . . . . . . . . 227
        Report as Read/Write . . . . . . . . . . 227
        Report as Read-Only . . . . . . . . . . 228
        Return to the Default Read-Only Setting . . . . . . . . . . 228
        Display the Setting . . . . . . . . . . 228
    Tape Marker Handling . . . . . . . . . . 228
        Set a Marker Type . . . . . . . . . . 228
        Reset to the Default . . . . . . . . . . 229
        Display the Marker Setting . . . . . . . . . . 229
Chapter 18: Snapshots . . . . . . . . . . 231
    Create a Snapshot . . . . . . . . . . 231
    List Snapshots . . . . . . . . . . 232
    Set a Snapshot Retention Time . . . . . . . . . . 232
    Expire a Snapshot . . . . . . . . . . 233
    Rename a Snapshot . . . . . . . . . . 233
    Snapshot Scheduling . . . . . . . . . . 233
        Add a Snapshot Schedule . . . . . . . . . . 234
            Syntax . . . . . . . . . . 234
            Further Examples . . . . . . . . . . 235
        Modify a Snapshot Schedule . . . . . . . . . . 237
        Remove All Snapshot Schedules . . . . . . . . . . 238
        Display a Snapshot Schedule . . . . . . . . . . 238
        Display all Snapshot Schedules . . . . . . . . . . 238
        Delete a Snapshot Schedule . . . . . . . . . . 238
        Delete all Snapshot Schedules . . . . . . . . . . 238
Chapter 19: Retention Lock . . . . . . . . . . 241
    The Retention Lock Feature . . . . . . . . . . 241
        Enable the Retention Lock Feature . . . . . . . . . . 242
        Disable the Retention Lock Feature . . . . . . . . . . 242
        Set the Minimum and Maximum Retention Periods . . . . . . . . . . 242
        Reset the Minimum and Maximum Retention Periods . . . . . . . . . . 243
        Show the Minimum and Maximum Retention Periods . . . . . . . . . . 243
        Reset Retention Lock for Files on a Specified Path . . . . . . . . . . 243
        Show Retention Lock Status . . . . . . . . . . 243
        Client-Side Retention Lock File Control . . . . . . . . . . 244
            Create Retention-Locked File and Set Retention Date . . . . . . . . . . 244
            Extend Retention Date . . . . . . . . . . 244
            Identify Retention-Locked Files and List Retention Date . . . . . . . . . . 245
            Delete an Expired Retention-Locked File . . . . . . . . . . 245
        Retention Lock Sample Procedure . . . . . . . . . . 245
        Notes on Retention Lock . . . . . . . . . . 247
            Retention Lock and Replication . . . . . . . . . . 247
            Retention Lock and Fastcopy . . . . . . . . . . 247
            Retention Lock and Filesys Destroy . . . . . . . . . . 247
Chapter 20: Replication - CLI . . . . . . . . . . 249
    Collection Replication . . . . . . . . . . 249
    Directory Replication . . . . . . . . . . 249
        Using Context . . . . . . . . . . 250
    Configure Replicator . . . . . . . . . . 251
    Replicating VTL Tape Cartridges and Pools . . . . . . . . . . 252
    Start Replication . . . . . . . . . . 253
    Suspend Replication . . . . . . . . . . 254
    Resume Replication . . . . . . . . . . 254
    Remove Replication . . . . . . . . . . 255
    Reset Authentication between the Data Domain Systems . . . . . . . . . . 255
    Move Data to a New Source . . . . . . . . . . 256
    Recover from an aborted recovery . . . . . . . . . . 256
    Resynchronize Source and Destination . . . . . . . . . . 256
    Convert from Collection to Directory Replication . . . . . . . . . . 257
    Abort a Resync . . . . . . . . . . 257
    Change a Source or Destination Hostname . . . . . . . . . . 257
    Connect with a Network Name . . . . . . . . . . 258
    Change a Destination Port . . . . . . . . . . 258
    Change the Port on a Destination . . . . . . . . . . 259
    Throttling . . . . . . . . . . 259
        Add a Scheduled Throttle Event . . . . . . . . . . 259
        Set a Temporary Throttle Rate . . . . . . . . . . 260
        Delete a Scheduled Throttle Event . . . . . . . . . . 260
        Set an Override Throttle Rate . . . . . . . . . . 261
        Reset Throttle Settings . . . . . . . . . . 262
        Throttle Reset Options . . . . . . . . . . 262
        TOE versus Throttling . . . . . . . . . . 262
    Scripted Cascaded Directory Replication . . . . . . . . . . 263
    Procedure: Set Replication Bandwidth and Network Delay . . . . . . . . . . 263
xx Data Domain Operating System User Guide
Display Bandwidth and Delay Settings . . . 264
Display Replicator Configuration . . . 264
Display Replication History . . . 266
Display Performance . . . 267
Display Throttle Settings . . . 267
Display Replication Complete for Current Data . . . 268
Display Initialization, Resync, or Recovery Progress . . . 268
Display Status . . . 268
Display Statistics . . . 270
    Actual example of show stats all . . . 271
Hostname Shorthand . . . 272
Procedure: Set Up and Start Directory Replication . . . 273
Procedure: Set Up and Start Collection Replication . . . 274
Procedure: Set Up and Start Bidirectional Replication . . . 274
Procedure: Set Up and Start Many-to-One Replication . . . 275
Procedure: Replace a Directory Source - New Name . . . 275
Procedure: Replace a Collection Source - Same Name . . . 276
Procedure: Recover from a Full Replication Destination . . . 277
Procedure: Convert from Collection to Directory . . . 277
Procedure: Seeding . . . 278
    One-to-One . . . 279
    Bidirectional . . . 282
    Many-to-One . . . 287
Migration . . . 291
    Set Up the Migration Destination . . . 292
    Start Migration from the Source . . . 292
    Create an End Point for Data Migration . . . 294
    Display Migration Progress . . . 294
    Stop the Migration Process . . . 294
    Display Migration Statistics . . . 295
    Display Migration Status . . . 295
    Procedure: Migrate between Source and Destination . . . 296
    Procedure: Migrate with Replication . . . 296
SECTION 6: Data Access Protocols . . . 299
Chapter 21: NFS Management . . . 301
Quicker Start Guide for NFS . . . 301
    Shorthand Steps . . . 301
Add NFS Clients . . . 303
Remove Clients . . . 304
Enable Clients . . . 304
Disable Clients . . . 304
Reset Clients to the Default . . . 305
Clear the NFS Statistics . . . 305
Display Active Clients . . . 305
Display Allowed Clients . . . 305
Display Statistics . . . 306
Display Detailed Statistics . . . 307
Display Status . . . 307
Display Timing for NFS Operations . . . 308
Chapter 22: CIFS Management . . . 309
CIFS Access . . . 309
    Add a User . . . 310
    Add a Client . . . 311
    Secured LDAP with Transport Layer Security (TLS) . . . 311
CIFS Commands . . . 312
    Enable Client Connections . . . 312
    Disable Client Connections . . . 312
    Remove a Backup Client . . . 312
    Remove an Administrative Client . . . 313
    Remove All CIFS Clients . . . 313
    Set a NetBIOS Hostname . . . 313
    Remove the NetBIOS Hostname . . . 313
    Create a Share on the Data Domain System . . . 313
    Delete a Share . . . 315
    Enable a Share . . . 315
    Disable a Share . . . 315
    Modify a Share . . . 315
    Set the Authentication Mode . . . 316
    Remove an Authentication Mode . . . 317
    Add an IP Address/NetBIOS Hostname Mapping . . . 317
    Remove All IP Address/NetBIOS Hostname Mappings . . . 317
    Remove an IP Address/NetBIOS Hostname Mapping . . . 317
    Resolve a NetBIOS Name . . . 318
    Identify a WINS Server . . . 318
    Remove the WINS Server . . . 318
    Set Authentication to the Active Directory Mode . . . 318
Set CIFS Options . . . 319
    Set Organizational Unit . . . 319
    Allow Trusted Domain Users . . . 319
    Allow Administrative Access for a Windows Domain Group . . . 320
    Set CIFS Logging Levels . . . 320
    Increase Memory to Allow More User Accounts . . . 320
    Set the Maximum Transmission Size . . . 321
    Control Anonymous User Connections . . . 321
    Increase Memory for SMBD Operations . . . 321
    Allow Certificate Authority Security . . . 321
    Reset CIFS Options . . . 321
    Display CIFS Options . . . 322
Display . . . 322
    Display CIFS Statistics . . . 322
    Display Active Clients . . . 322
    Display All Clients . . . 323
    Display the CIFS Configuration . . . 323
    Display Detailed CIFS Statistics . . . 324
    Display All IP Address/NetBIOS Hostname Mappings . . . 324
    Display CIFS Users . . . 324
    Display CIFS Status . . . 325
    Display Shares . . . 325
    Display CIFS Groups . . . 325
    Display CIFS User Details . . . 325
    Display CIFS Group Details . . . 326
Procedure: Time Servers and Active Directory Mode . . . 326
    Synchronizing from a Windows Domain Controller . . . 326
    Synchronizing from an NTP Server . . . 327
Procedure: Add a Share on the CIFS Client . . . 327
    Adding a Share on a UNIX CIFS Client . . . 327
    Adding a Share on a Windows CIFS Client (MMC) . . . 328
File Security with ACLs (Access Control Lists) . . . 336
    How to Set ACL Permissions/Security . . . 337
        Granular and Complex Permissions (DACL) . . . 337
        Audit ACL (SACL) . . . 338
        Owner SID . . . 339
    ntfs-acls and idmap-type . . . 340
    Procedure to Turn on ACLs . . . 341
        If This Is a New Installation . . . 341
        If This Is an Existing Installation, with Pre-Existing CIFS Data Residing on the System . . . 341
Chapter 23: Open STorage (OST) . . . 343
Overview: Steps to Enable OST on the DDR . . . 344
Add the OST License . . . 344
Add the OST User (Set the OST User to user-name) . . . 345
Reset the OST User Back to the Default (No User Set) . . . 345
Display the Current OST User . . . 345
Enable the OST Feature . . . 345
Disable the OST Feature . . . 346
Show the Current Status (Enabled or Disabled) for OST . . . 346
Create an LSU (Logical Storage Unit) with the Given LSU-name . . . 346
Delete an LSU . . . 346
Delete All Images and LSUs on the Data Domain System . . . 347
Display an LSU or All the LSUs on the Data Domain System . . . 347
Show OST Statistics for the Data Domain System . . . 349
Show OST Statistics for the Data Domain System over an Interval . . . 350
Display an OST Histogram for the Data Domain System . . . 351
Clear All OST Statistics . . . 352
Display OST Connections . . . 352
Display Statistics on Active Optimized Duplication Operations . . . 352
Sample Workflow Sequence . . . 353
Chapter 24: Virtual Tape Library (VTL) - CLI . . . 357
Compatibility Matrix . . . 359
Enable VTLs . . . 359
Create a VTL . . . 359
Delete a VTL . . . 360
Disable VTLs . . . 360
Broadcast New VTLs and VTL Changes . . . 360
Create New Drives . . . 361
Remove Drives . . . 361
Use a Changer . . . 361
Display a Summary of All Tapes . . . 362
Create New Tapes . . . 363
Import Tapes . . . 364
    Examples of Importing . . . 365
Export Tapes . . . 366
Remove Tapes . . . 366
Move Tape . . . 367
Search Tapes . . . 367
Set a Private-Loop Hard Address . . . 367
Reset a Private-Loop Hard Address . . . 368
Enable Auto-Eject . . . 368
Enable Auto-Offline . . . 368
Disable Auto-Eject . . . 368
Disable Auto-Offline . . . 368
Display the Auto-Offline Setting . . . 369
Display the Private-Loop Hard Address Setting . . . 369
Display VTL Status . . . 369
Display VTL Configurations . . . 369
Display All Tapes . . . 370
Display Tapes by VTL . . . 371
Display All Tapes in the Vault . . . 372
Display Tapes by Pools . . . 372
Display VTL Statistics . . . 373
Display Tapes Using Sorting and Wildcard . . . 374
Procedure: Manually Export a Tape . . . 375
Procedure: Retrieve a Replicated Tape from a Destination . . . 376
Access Groups (for VTL Only) . . . 377
    The vtl group Command (Access Group) . . . 378
        Create an Access Group . . . 379
        Remove an Access Group . . . 379
        Rename an Access Group . . . 379
        Add to an Access Group . . . 380
        Delete from an Access Group . . . 381
        Modify an Access Group . . . 381
        Display Access Group Information . . . 382
        Switch Virtual Devices between Primary & Secondary Port List . . . 383
    Procedure: Create an Access Group . . . 384
    The vtl initiator Command . . . 385
        Add an Initiator (= Add WWPN = Set Alias) . . . 386
        Delete an Initiator (Reset Alias) . . . 386
        Display Initiators . . . 387
Pools . . . 387
    Add a Pool . . . 388
    Delete a Pool . . . 388
    Display Pools . . . 388
The vtl port Command . . . 388
    Enable HBA Ports . . . 389
    Disable HBA Ports . . . 389
    Show VTL Information in Per-Port Format . . . 389
Chapter 25: Backup/Restore Using NDMP . . . 393
Add a Filer . . . 393
Remove a Filer . . . 393
Backup from a Filer . . . 394
Restore to a Filer . . . 394
Remove Filer Passwords . . . 395
Stop an NDMP Process . . . 395
Stop All NDMP Processes . . . 395
Check for a Filer . . . 396
Display Known Filers . . . 396
Display NDMP Process Status . . . 396
SECTION 7: GUI - Graphical User Interface . . . 397
Chapter 26: Enterprise Manager . . . 399
Graphical User Interface . . . 399
Display the Space Graph . . . 402
Monitor Multiple Data Domain Systems . . . 405
Chapter 27: Virtual Tape Library (VTL) - GUI . . . 409
Virtual Tape Libraries . . . 410
    Enable VTLs . . . 410
    Disable VTLs . . . 411
    Create a VTL . . . 411
    Delete a VTL . . . 411
    VTL Drives . . . 412
    Create New Drives . . . 413
    Remove Drives . . . 413
    Use a Changer . . . 414
    Display a Summary of All Tapes . . . 414
    Create New Tapes . . . 415
    Import Tapes . . . 416
    Export Tapes . . . 418
    Remove Tapes . . . 419
    Move Tape . . . 420
    Search Tapes . . . 420
    Set Option/Reset Option (loop-id and auto-eject) . . . 421
    Display VTL Status . . . 422
    Display All Tapes . . . 423
    Display Summary Information About Tapes in a VTL . . . 423
    Display Summary Information About the Tapes in a Vault . . . 424
    Display All Tapes in a Vault . . . 424
Access Groups (for VTL Only) . . . 425
    Create an Access Group . . . 426
    Remove an Access Group . . . 426
    Rename an Access Group . . . 426
    Add to an Access Group . . . 426
    Delete from an Access Group . . . 427
    Modify an Access Group . . . 428
    Display Access Group Information . . . 428
        UPGRADE NOTE . . . 429
    Switch Virtual Devices between Primary & Secondary Port List . . . 429
    Procedure: Use a VTL Library / Use an Access Group . . . 430
Physical Resources . . . 432
    Initiators . . . 432
        Add an Initiator (= Add WWPN = Set Initiator Alias) . . . 432
        Change an Existing Initiator Alias . . . 433
        Delete an Initiator (Reset Initiator Alias) . . . 433
        Display Initiators . . . 433
        Add an Initiator to an Access Group . . . 434
        Remove an Initiator from an Access Group . . . 434
    HBA Ports . . . 434
        Enable HBA Ports . . . 434
        Disable HBA Ports . . . 434
        Show VTL Information on All Ports . . . 435
        Show More Detailed Information on All Ports . . . 435
        Show Very Detailed Information on a Single Port . . . 436
Pools . . . 437
    Add a Pool . . . 438
    Delete a Pool . . . 438
    Display Pools . . . 438
    Display Summary Information About a Single Pool . . . 439
    Display All Tapes in a Pool . . . 439
Chapter 28: Replication - GUI ... 441
  Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes ... 444
  Pre-Compression and Post-Compression Data ... 445
  Configuration ... 446
    Throttle Settings ... 446
    Bandwidth ... 446
    Network Delay ... 447
    Listen Port ... 447
  Status ... 447
    Current State ... 447
    Synchronized as of ... 448
    Backup Replication Tracker ... 448
  General Configuration ... 449
Appendix A: Time Zones ... 451
Index ... 459
This guide explains the use of Data Domain systems. A high-level Table of Contents is shown, followed by descriptions of the individual chapters, conventions, audience, and contact information.
NFS Management
CIFS Management
Open STorage (OST)
Virtual Tape Library (VTL) - CLI
Backup/Restore Using NDMP

SECTION 7: GUI - Graphical User Interface
25. Enterprise Manager
26. Virtual Tape Library (VTL) - GUI
27. Replication - GUI

Time Zones
Index
Descriptions of Chapters
SECTION 1: Data Domain Systems - Appliance, Gateway, and Expansion Shelf.
The Introduction chapter explains what the Data Domain Systems are and how they work, details features, and gives overviews of configuration tasks, the default configuration, and user interface commands. The Installation chapter gives all configuration steps and information for setting up backup software to use a Data Domain System. The ES20 Expansion Shelf chapter explains how to add and use the Data Domain ES20 disk expansion shelf for increased data storage. The Gateway systems chapter gives installation steps and other information specific to Data Domain Systems that use 3rd-party physical storage disk arrays instead of internal disks or external shelves.
The System Maintenance chapter describes how to manage the background maintenance task that continually checks the integrity of backup images, how to connect to time servers, and how to set up alias commands. The Network Management chapter describes how to manage network tasks such as routing rules, the use of DHCP and DNS, and the setting of IP addresses. The Access Control for Administration chapter describes how to give HTTP, FTP, TELNET, and SSH access from remote hosts. The User Administration chapter explains how to deal with users and passwords. The Configuration Management chapter describes how to examine and modify configuration parameters.
Data Domain Operating System User Guide
The Alerts and System Reports chapter details messages that the Data Domain Operating System (DDOS) sends when monitoring components, and describes the daily system report. The SNMP Management and Monitoring chapter details the use of SNMP operations between a Data Domain System and remote machines. The Log File Management chapter explains how to view, archive, and clear the log file.
SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath.
The Disk Management chapter explains how to monitor and manage disks on a Data Domain System. The Disk Space and System Monitoring chapter gives guidelines for managing disk space on Data Domain Systems and for setting up backup servers to get the best performance. The Multipath chapter explains how to use external storage I/O paths for failover and load balancing across paths.
The Data Layout Recommendations chapter gives recommendations for data layout on Data Domain Systems. The File System Management chapter gives details about file system statistics and capacity. The Snapshots chapter describes how to create and manage read-only copies of the Data Domain file system. The Retention Lock chapter describes how to lock files so that they cannot be changed or deleted. The Replication - CLI chapter details use of the Data Domain Replicator product for replication of data from one Data Domain System to another.
The NFS Management chapter describes how to deal with NFS clients and status. The CIFS Management chapter details the use of Windows backup servers with a Data Domain System. The Open STorage (OST) chapter explains the use of the OST feature. The Virtual Tape Library (VTL) - CLI chapter explains the use of the Virtual Tape Library feature. The Backup/Restore Using NDMP chapter explains how to do direct backup and restore operations between a Data Domain System and an NDMP-type filer.
These chapters detail the use of all graphical user interface commands and operations. Each chapter has headings that are a task-oriented list of the operations detailed in that chapter. For any task that you want to perform, look in the table of contents for the heading that describes the task.
The Enterprise Manager chapter explains how to use the main GUI. The Virtual Tape Library (VTL) - GUI chapter explains how to use the VTL GUI. The Replication - GUI chapter explains how to use the Replication GUI.
Conventions
The following table describes the typographic conventions used in this guide.
Monospace  Commands, computer output, directories, files, and software elements such as command options and parameters. Examples: Find the log file under /var/log. See the net help page for more information.

Italic  New terms, book titles, variables, and labels of boxes and windows as seen on a monitor. Example: The name is a path for the device...

Monospace (bold)  User input; the # symbol indicates a command prompt. Example: # config setup
#  Administrative user prompt.

[ ]  In a command synopsis, brackets indicate an optional argument. Example: log view [filename]

|  In a command synopsis, a vertical bar separates mutually exclusive arguments. Example: net dhcp [true | false]

{ }  In a command synopsis, curly brackets indicate that one of the exclusive arguments is required. Example: adminhost add {ftp | telnet | ssh}
Audience
This guide is for system administrators who are familiar with standard backup software packages and with general backup administration.
Technical support is available 24 hours a day, 7 days a week, at 877-207-DATA (3282) (toll-free) or 408-980-4900 (direct), or by email at support@datadomain.com.
Data Domain, Incorporated 2421 Mission College Blvd., Santa Clara CA 95054 USA Phone 408-980-4800 Direct 877-207-3282 Toll-free Fax 408-980-8620
Introduction
Data Domain Systems are disk-based recovery appliances. A Data Domain System makes backup data available with the performance and reliability of disks at a cost competitive with tape-based storage. Data integrity is assured with multiple levels of data checking and repair. A Data Domain System works seamlessly with your existing backup software. To a backup server, the Data Domain System appears as a file server supporting NFS or CIFS over Gigabit Ethernet, or as a virtual tape library (VTL) over a Fibre Channel connection. Add a Data Domain System to your site as a disk storage device, as defined by your backup software, or as a tape library. Multiple backup servers can share one Data Domain System, and one Data Domain System can handle multiple simultaneous backup and restore operations. For additional throughput and capacity, you can attach multiple Data Domain Systems to one or more backup servers. Figure 1 shows a Data Domain System in a basic backup configuration.

[Figure 1: A basic backup configuration. Data flows from primary storage over Ethernet to a backup server, then over Gigabit Ethernet or Fibre Channel to the Data Domain System, whose layers are NFS/CIFS/VTL/OST access, data verification, the Data Domain file system, Global Compression, and RAID management. A tape system can attach over SCSI/Fibre Channel.]
Referring to Figure 1 on page 3, data flows to a Data Domain System through an Ethernet or Fibre Channel connection. Immediately, data verification processes begin that follow the data for as long as it is on the Data Domain System. In the file system, Data Domain OS Global Compression algorithms prepare the data for storage. Data is then sent to the disk RAID subsystem. The algorithms constantly adjust the use of storage as the Data Domain System receives new data from backup servers. Restore operations flow back from storage, through decompression algorithms and verification consistency checks, and then through the Ethernet connection to the backup servers.
Documents
The main Data Domain system guides are the following:
Data Domain Operating System User Guide
Data Domain System Hardware User Guide
Data Domain ES20 Expansion Shelf Hardware Guide
Data Domain Open Storage (OST) User Guide
Data is sent to the Data Domain System as sequential writes (no overwrites). No compression or encryption is used before sending the data to the Data Domain System.
Note that some storage capacity is used by Data Domain system internal indexes and other product components. The amount of storage used over time for such components depends on the type of data stored and the sizes of files stored. With two otherwise identical systems, one system may, over time, have room for more or less actual backup data than the other if different data sets are sent to each.
[Table relating system memory size (4 GB to 16 GB) to capacity figures; the column headings were lost in extraction.]
Data Integrity
The Data Domain OS Data Invulnerability Architecture protects against data loss from hardware and software failures.
When writing to disk, the Data Domain OS creates and stores self-describing metadata for all data received. After writing the data to disk, the Data Domain OS then creates metadata from the data on the disk and compares it to the original metadata. An append-only write policy guards against overwriting valid data. After a backup completes, a validation process looks at what was written to disk to see that all file segments are logically correct within the file system and that the data is the same on the disk as it was before being written to disk.

In the background, the Online Verify operation continuously checks that data on the disks is still correct and that nothing has changed since the earlier validation process.

Storage in a Data Domain System is set up in a double-parity RAID 6 configuration (two parity drives) with a hot spare in 15-disk systems. Eight-disk systems have no hot spare. Each parity stripe has block checksums to ensure that data is correct. The checksums are constantly used during the Online Verify operation and when data is read from the Data Domain System. With double parity, the system can fix simultaneous errors on up to two disks.

To keep data synchronized during a hardware or power failure, the Data Domain System uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully-charged batteries (the typical state) can retain data for a minimum of 48 hours. When reading data back for a restore operation, the Data Domain OS uses multiple layers of consistency checks to verify that restored data is correct.
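The verify-after-write behavior described above can be sketched in a few lines of Python. This is purely an illustration of the concept, not Data Domain code; the file name and the choice of SHA-256 are assumptions made for the example.

```python
import hashlib
import os
import tempfile

def write_and_verify(path, data):
    # Compute "metadata" (a checksum) for the data as received.
    before = hashlib.sha256(data).hexdigest()
    # Write the data and force it to disk.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # Re-read from disk and recompute the checksum, as the
    # verify-after-write step does, then compare.
    with open(path, "rb") as f:
        after = hashlib.sha256(f.read()).hexdigest()
    return before == after

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "segment.bin")
    print(write_and_verify(target, b"backup segment payload"))  # True
```

The same compare-after-the-fact idea scales from one file to the continuous Online Verify pass over all stored data.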
Data Compression
The Data Domain OS compression algorithms:
Store only unique data. Through Global Compression, a Data Domain System pools redundant data from each backup image. Any duplicated data or repeated patterns from multiple backups are stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system.

Are independent of data format. Data can be structured, such as databases, or unstructured, such as text files. Data can be from file systems or raw volumes. All forms are compressed.
Typical compression ratios are 20:1 on average over many weeks, assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression. Depending on backup volume size, retention period, and rate of change, the amount of compression can vary. The best compression happens with backup volume sizes of at least 10 MiB (the base 2 equivalent of MB). See Display File system Space Utilization on page 215 for details on displaying the amount of user data stored and the amount of space available.
Global Compression functions within a single Data Domain System. To take full advantage of multiple Data Domain Systems, a site that has more than one Data Domain System should consistently back up the same client system or set of data to the same Data Domain System. For example, if a full backup of all sales data goes to Data Domain SystemA, the incremental backups and future full backups for sales data should also go to Data Domain SystemA.
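The store-only-unique-data idea can be made concrete with a toy content-addressed store. This is a simplified sketch, not the actual Global Compression algorithm; the fixed 4 KiB segment size and SHA-256 digests are assumptions chosen for illustration.

```python
import hashlib

class SegmentStore:
    """Toy dedupe store: each distinct segment is kept exactly once."""

    def __init__(self, segment_size=4096):
        self.segment_size = segment_size
        self.segments = {}       # digest -> segment bytes, stored once
        self.logical_bytes = 0   # bytes the backup software thinks it wrote

    def write(self, data):
        for i in range(0, len(data), self.segment_size):
            seg = data[i:i + self.segment_size]
            self.logical_bytes += len(seg)
            # A repeated segment maps to an existing digest: no new storage.
            self.segments.setdefault(hashlib.sha256(seg).digest(), seg)

    def physical_bytes(self):
        return sum(len(s) for s in self.segments.values())

store = SegmentStore()
full_backup = b"A" * 40960        # a highly repetitive 40 KiB "full backup"
store.write(full_backup)          # first full backup
store.write(full_backup)          # second full: every segment is a duplicate
ratio = store.logical_bytes / store.physical_bytes()
print(ratio)                      # 20.0 -- 80 KiB logical in one 4 KiB segment
```

Repeated full backups of slowly changing data behave like the second `write` above, which is why full-plus-incremental schedules yield the high average ratios quoted in the text.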
Restore Operations
With disk backup through the Data Domain System, incremental backups are always reliable and access time for files is measured in milliseconds. Furthermore, with a Data Domain System, you can perform full backups more frequently without the penalty of storing redundant data. With tape backups, a restore operation may rely on multiple tapes holding incremental backups. Unfortunately, the more incremental backups a site has on multiple tapes, the more time-consuming and risky the restore process. One bad tape can kill the restore. From a Data Domain System, file restores go quickly and create little contention with backup or other restore operations. Unlike tape drives, multiple processes can access a Data Domain System simultaneously. A Data Domain System allows your site to offer safe, user-driven, single-file restore operations.
Licensing
The licensed features on a Data Domain System are:
Data Domain Expanded Storage, which allows a user to add an expansion shelf to their system.
Data Domain Open Storage (OST), which allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature.
Data Domain Replicator, which sets up and manages the replication of data between two Data Domain Systems.
Data Domain Retention-Lock, which protects locked files from deletion and modification for up to 70 years.
Data Domain Virtual Tape Library (VTL), which allows backup software to see a Data Domain System as a tape library.
The license command allows you to add new licenses, delete current licenses, or display current licenses. See The license Command on page 124 for command details. Contact your Data Domain representative to purchase licensed features.
User Interfaces
A Data Domain System has a complete command set available to users in a command line interface. Commands allow initial system configuration, changes to individual system settings, and give displays of system states and the state of system operations. The command line interface is available through a serial console or keyboard and monitor attached directly to the Data Domain System, or through Ethernet connections. A web-based graphical user interface, the Data Domain Enterprise Manager, is available through Ethernet connections. Using a Data Domain Enterprise Manager, you can do the initial system configuration, make some configuration updates after initial configuration, and display system states and the state of system operations.
Multipath
Multipath allows external storage I/O paths for failover and load balancing across paths. For more on multipath commands, see the chapter Multipath. See also the Data Domain System Hardware Guide.
Related Documentation
See the Data Domain Quick Start folder for a simplified list of installation tasks.
See the Data Domain Command Reference for Data Domain System command summaries. See the Release Notes for a specific Data Domain software release for late changes and fixes.
If using DNS, one to three DNS servers are identified for IP address resolution.
DHCP is enabled or not enabled for each Ethernet interface, as you choose during installation.
Each active interface has an IP address.
The Data Domain System hostname is set (for use by the network).
The IP addresses are set for the backup servers, SMTP server, and administrative hosts.
An SMTP (mail) server is identified.
For NFS clients, the Data Domain System is set up to export the /backup and /ddvar directories using NFSv3 over TCP.
For CIFS clients, the Data Domain System has shares set up for /backup and /ddvar.

The directories under /ddvar are:

core  The default destination for core files created by the system.
log  The destination for all system log files. See Log File Management on page 165 for details.
releases  The default destination for operating system upgrades that are downloaded from the Data Domain Support web site.
snmp  The location of the SNMP MIB (management information base).
traces  The destination for execution traces used in debugging performance issues.
One or more backup servers are identified as Data Domain System NFS or CIFS clients.
A host is identified for Data Domain System administration.
Administrative users have access to the partition /ddvar. The partition is small and data in the partition is not compressed.
The time zone you select is set.
The initial user for the system is sysadmin with the password that you give during setup. The user command allows you to add administrative and non-administrative users later.
The SSH service is enabled and the HTTP, FTP, TELNET, and SNMP services are disabled. Use the adminaccess command to enable and disable services.
The user lists for TELNET and FTP are empty, SNMP is not configured, and the protocols are disabled, meaning that no users can connect through TELNET, FTP, or SNMP.
A system report runs automatically every day at 3 a.m. The report goes to a Data Domain email address and an address that you give during setup. You can add addresses to the email list using the autosupport command.
An email list for automatically generated system alerts has a Data Domain email address and a local address that you enter during setup. You can add addresses to the list using the alerts command.
The clean operation is scheduled for Tuesday at 6:00 a.m. To review or change the schedule, use the filesys clean commands.
The background verification operation that continuously checks backup images is enabled.
To list Data Domain System commands, enter a question mark (?) at the prompt.
To list the options for a particular command, enter the command with no options at the prompt.
To find a keyword used in a command option when you do not remember which command to use, enter a question mark (?) or the help command followed by the keyword. For example, the question mark followed by the keyword password displays all Data Domain System command options that include password. If the keyword matches a command, such as net, then the command explanation appears.
To display a detailed explanation of a particular command, enter the help command followed by a command name. Use the up and down arrow keys to move through a displayed command, and the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.
The Tab key completes a command entry when that entry is unique. Tab completion works for the first three levels of command components. For example, entering syst(tab) sh(tab) st(tab) displays the command system show stats.
Any Data Domain System command that accepts a list, such as a list of IP addresses, accepts entries as comma-separated, space-separated, or both.
Commands that display the use of disk space or the amount of data on disks compute amounts using the following definitions:
1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes

Note The one exception to displays in powers of 2 is the system show performance command, in which the Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1000 bytes).

The commands are:

adminaccess  Manages the HTTP, FTP, TELNET, and SSH services. See Access Control for Administration on page 109.
alerts  Creates alerts for system problems. Alerts are emailed to Data Domain and to a user-configurable list. See Alerts on page 130.
alias  Creates aliases for Data Domain System commands. See The alias Command on page 81.
autosupport  Generates a system status and health report. Reports are emailed to Data Domain and to a user-configurable list. See Autosupport Reports on page 134.
cifs  Manages Common Internet File System backups and restores on a Data Domain System and displays CIFS status and statistics for a Data Domain System. See CIFS Management on page 309.
config  Shows, resets, copies, and saves Data Domain System configuration settings. See Configuration Management on page 119.
disk  Displays disk statistics, status, usage, reliability indicators, and RAID layout and usage. See Disk Management on page 173.
enclosure  Identifies and displays information about the Data Domain system and about expansion shelves.
filesys  Displays file system status and statistics (see Statistics and Basic Operations on page 213 for details) and manages the clean feature that reclaims physical disk space held by deleted data (see Clean Operations on page 220 for details).
help  Displays a list of all Data Domain System commands and detailed explanations for each command.
license  Displays current licensed features and allows adding or deleting licenses.
log  Displays and administers the Data Domain System log file. See Log File Management on page 165.
ndmp Manages direct backup and restore operations between a Network Appliance filer and a Data Domain System using the Network Data Management Protocol Version 2. See Backup/Restore Using NDMP on page 393.
net  Displays network status and setup information. See Network Management on page 87.
nfs  Displays NFS status and statistics. See NFS Management on page 301 for details.
ntp  Manages Data Domain System access to one or more time servers. The default setting is multicast. See Time Servers and the NTP Command on page 83.
ost  Allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage.
replication  Manages the Replicator for replication of backup data from one Data Domain System to another. See Replication - CLI on page 249.
route  Manages Data Domain System network routing rules. See The route Command on page 105.
snapshot  Manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the top directory: /backup.
snmp  Enables or disables SNMP access to a Data Domain System, adds community strings, and gives contact and location information. See SNMP Management and Monitoring on page 141.
support  Sends log files to Data Domain Technical Support. See Collect and Send Log Files on page 139.
system  Displays Data Domain System status, faults, and statistics; enables, disables, halts, and reboots a Data Domain System (see The system Command on page 63). Also sets and displays the system clock and calendar and allows the Data Domain System to synchronize the clock with an external time server. See Set the Date and Time on page 66.
user  Administers user accounts for the Data Domain System. See User Administration on page 115.
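The power-of-2 unit definitions noted earlier, and the power-of-10 exception for the system show performance command, can be checked numerically. A small Python illustration (not part of the product):

```python
# Binary (power-of-2) units used by most display commands:
KiB = 2 ** 10   # 1,024 bytes
MiB = 2 ** 20   # 1,048,576 bytes
GiB = 2 ** 30   # 1,073,741,824 bytes
TiB = 2 ** 40   # 1,099,511,627,776 bytes

# Decimal (power-of-10) units used only by `system show performance`:
KB, MB, GB = 10 ** 3, 10 ** 6, 10 ** 9

# The gap between the two conventions grows with the unit size:
print(KiB - KB)                    # 24 bytes
print(round((GiB - GB) / GB, 3))   # 0.074, i.e. about 7.4%
```

This is why a figure reported in GiB always looks smaller than the same quantity quoted in decimal gigabytes.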
Installation
Installation and site configuration for a Data Domain System consist of the tasks listed below. After configuration, the Data Domain System is fully functional and ready for backups. For site hardware and backup software requirements, see Data Domain System Hardware Interfaces on page 7. Note Installation and configuration for a Gateway Data Domain System (using 3rd-party physical disk storage systems) is explained in the chapter Gateway systems.
Check the site and backup software requirements.
Set up the Data Domain System hardware, and a serial console or a monitor and keyboard if you are not using an Ethernet interface for configuration. See the Data Domain System Hardware Guide for details.
Log in to the Data Domain System as sysadmin using a serial console, a monitor and keyboard, SSH over an Ethernet interface, or the Data Domain Enterprise Manager through a web browser. To configure the system from a browser, the browser must be able to locate the Data Domain system on the network, which means that the Data Domain system must have an IP address (from DHCP, for example).
Answer the questions asked by the configuration process. The process starts automatically when sysadmin first logs in through the command line interface. To start configuration in the Data Domain Enterprise Manager, click Configuration Wizard. The process requests all of the basic information needed to use the Data Domain System.
Optionally, after completing the initial configuration, follow the steps in Additional Configuration on page 27 to add to the configuration.
Configure the backup software and servers. See the Data Domain Support web site (https://support.datadomain.com), Technical Notes section, for details about configuring a Data Domain System with specific backup servers and software.
To upgrade Data Domain OS software to a new release, see Upgrade the Data Domain System Software on page 64. Note The Data Domain OS is pre-installed on the Data Domain System. You do not need to install software. In emergency situations, such as when a Data Domain System fails to boot up by itself, call Data Domain Technical Support for step-by-step instructions.
If you want detailed background information, see the following web page: http://support.microsoft.com/default.aspx?scid=http://support.microsoft.com:80/support/kb/articles/Q102/0/67.asp&NoWebContent=1
If the SESSTIMEOUT key does not exist, click in the right panel, select New and then DWORD Value, and create a new key named SESSTIMEOUT. Note that the registry is case-sensitive; use all caps for the new key name. Double-click the new (or existing) key and set it to the decimal value 3600.
1. Open a web browser.
2. Enter a path to the Data Domain System. For example: http://rstr01/ for a Data Domain System named rstr01 on a local network.
3. Enter a login name and password. The default password for the sysadmin login is the serial number that appears on the rear panel of the Data Domain System. All characters in a serial number are numeric except the third and fourth; apart from those two characters, every 0 is a zero. See Figure 4 on page 17 for the location. The Data Domain System Summary screen appears.
4. Click on the Configuration Wizard link as shown in Figure 2 on page 15.
Note Most of the installation procedure in this chapter uses the command line interface as an example. However, the Configuration Wizard of the Data Domain Enterprise Manager has the same configuration groups and sets the same configuration parameters. With the Data Domain Enterprise Manager, click on links and fill in boxes that correspond to the command line examples that follow. To return to the list of configuration sections from within one of the sections, click on the Wizard List link in the top left corner of the Configuration Wizard screen. If you earlier set up DHCP for one or more Data Domain System Ethernet interfaces, a number of the config setup prompts display the values given to the Data Domain System from a DHCP server. DHCP servers normally supply values for a number of networking parameters. Press Return during the installation to accept DHCP values. If you do not use DHCP for an interface, determine what you will use for the following values before starting the configuration:
Interface IP addresses.
Interface netmasks.
Routing gateway.
DNS server list (if using DNS).
A site domain name, such as yourcompany.com.
A fully-qualified hostname for the Data Domain System, such as rstr01.yourcompany.com.
You can configure different network interfaces on a Data Domain System to different subnets. When configuring Data Domain System software:
At any prompt, enter a question mark (?) for detailed information about the prompt.
Press Return to accept a displayed value.
Enter either hostnames or IP addresses wherever a prompt mentions a host. Hostnames must be fully qualified, such as srvr22.yourcompany.com.
For any entry that accepts a list, the entries in the list can be comma-separated, space-separated, or both.

When configuration is complete, the system is ready to accept backup data. For NFS clients, the Data Domain System is set up to export the /backup and /ddvar directories using NFSv3 over TCP. For CIFS clients, the Data Domain System has shares set up for /backup and /ddvar.
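As an illustration of the accepted list forms (comma-separated, space-separated, or both), a minimal parser could treat any run of commas and whitespace as a separator. This sketch is hypothetical and is not the DD OS implementation:

```python
import re

def parse_list(arg):
    """Split a comma- and/or space-separated entry list."""
    return [item for item in re.split(r"[\s,]+", arg.strip()) if item]

# All three input styles yield the same entries:
print(parse_list("192.168.1.1,192.168.1.2"))
print(parse_list("192.168.1.1 192.168.1.2"))
print(parse_list("192.168.1.1, 192.168.1.2"))
# each prints ['192.168.1.1', '192.168.1.2']
```

Treating separators this loosely is what lets you mix commas and spaces freely at the configuration prompts.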
The configuration utility has five sections: Licenses, Network, NFS, CIFS, and system. You can configure or skip any section. The command line interface automatically moves from one section to the next. With the Data Domain Enterprise Manager, click on the sections as shown in Figure 3.
1. The first login to the Data Domain System can be from a serial console, keyboard and monitor, through an Ethernet connection, or through a web browser. Log in as user sysadmin. The default password is the serial number from the rear panel of the Data Domain System. See Figure 4 for the location.
Figure 4: Serial number location
From a serial console or keyboard and monitor, log in to the Data Domain System at the login prompt.
From a remote machine over an Ethernet connection, give the following command (with the hostname you chose for the Data Domain System) and then give the default password.

# ssh -l sysadmin host-name
sysadmin@host-name's password:
From a web browser, enter a path to the Data Domain System. For example: http://rstr01/ for a Data Domain System named rstr01 on a local network.
2. When using the command line interface, the first prompt after login gives the opportunity to change the sysadmin password. The prompt appears only once, at the first login to a new system. You can change the sysadmin password immediately at the prompt or later with the user change password command.

To improve security, Data Domain recommends that you change the 'sysadmin' password before continuing with the system configuration.
Change the 'sysadmin' password at this time? (yes|no) [yes]:

3. When using the command line interface, the Data Domain System command config setup starts next.

4. The first configuration section is for licensing. Licenses that you ordered with the Data Domain System are already installed. At the first prompt, enter yes to configure or view licenses. Enter the license characters, including dashes, for each license category. Make no entry and press Enter for categories that you have not licensed.
Licenses Configuration
Configure Licenses at this time (yes|no) [no]: yes
Expanded Storage License Code
Enter your Expanded Storage license code []:
Open Storage (OST) License Code
Enter your Open Storage (OST) license code []:
Replication License Code
Enter your Replication license code []:
Retention-Lock License Code
Enter your Retention-Lock license code []:
VTL License Code
Enter your VTL license code []:

Note To use the optimized duplication feature of OST, the Replication license is also required.

A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending License Settings.
Expanded Storage License:    ABCD-ABCD-ABCD-ABCD
Open Storage (OST) License:  ABCD-ABCD-ABCD-ABCD
Replication License:         ABCD-ABCD-ABCD-ABCD
Retention-Lock License:      ABCD-ABCD-ABCD-ABCD
VTL License:                 ABCD-ABCD-ABCD-ABCD
Do you want to save these settings (Save|Cancel|Retry):

5. The second section is for network configuration. At the first prompt, enter yes to configure network parameters.

NETWORK Configuration
Configure NETWORK parameters at this time (yes|no) [no]:

Note: After configuring the Data Domain System to use DNS, the Data Domain System must be rebooted. Also, if DHCP is disabled for all interfaces and then later enabled for one or more interfaces, the Data Domain System must be rebooted.
Installation
a. The first prompt is for a Data Domain System machine name. Enter a fully-qualified name that includes the domain name. For example: rstr01.yourcompany.com.

Note: With CIFS using domain mode authentication, the first component of the name is also used as the NetBIOS name, which cannot be over 15 characters. If you use domain mode and the hostname is over 15 characters, use the cifs set nb-hostname command to set a shorter NetBIOS name.

Hostname
Enter the hostname for this system (fully-qualified domain name) []:

b. Supply a domain name, such as yourcompany.com, for use by Data Domain System utilities, or accept the displayed domain name taken from the hostname.

Domainname
Enter your DNS domainname []:

c. Configure each Ethernet interface that has an active Ethernet connection. If you earlier set up DHCP for an interface, the IP address and netmask prompts do not appear. You can accept or decline DHCP for each interface. If you enter yes for DHCP and DHCP is not yet available to the interface, the Data Domain System attempts to set up the interface with DHCP until DHCP becomes available. Use the net show settings command to display which interfaces are configured for DHCP. If you are connected over an Ethernet interface and choose not to use DHCP for that interface, the connection is lost when you complete the configuration. At the last prompt, entering Cancel deletes all new values and goes to the next section.

Each interface is a Gigabit Ethernet connection. The same set of prompts appears for each interface.

Ethernet port eth0:
Enable Ethernet port (yes|no) [ ]:
Use DHCP on Ethernet port eth0 (yes|no) [ ]:
Enter the IP address for eth0 [ ]:
Enter the netmask for eth0 [ ]:

When not using DHCP on any Ethernet port, you must specify an IP address for a default routing gateway.

Default Gateway
Enter the default gateway IP address []:
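The 15-character NetBIOS limit from step a can be sketched as follows; `netbios_from_hostname` is a hypothetical helper for illustration, not a DD OS command.

```python
# Sketch of the NetBIOS naming rule from step a: the first label of the
# fully-qualified hostname doubles as the NetBIOS name, which cannot
# exceed 15 characters. (Illustrative helper, not DD OS code.)
def netbios_from_hostname(fqdn: str) -> str:
    first_label = fqdn.split(".")[0]
    if len(first_label) > 15:
        raise ValueError("first hostname label exceeds 15 characters; "
                         "set a shorter name with 'cifs set nb-hostname'")
    return first_label

print(netbios_from_hostname("rstr01.yourcompany.com"))  # rstr01
```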
When not using DHCP on any Ethernet port, you can enter up to three DNS servers for the Data Domain System to use for resolving hostnames into IP addresses. Use a comma-separated or space-separated list. Enter a space for no DNS servers. With no DNS servers, you can use the net hosts commands to inform the Data Domain System of IP addresses for relevant hostnames.

DNS Servers
Enter the DNS Server list (zero, one, two or three IP addresses) []:
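The accepted forms of the DNS Server entry (comma-separated, space-separated, or both, with at most three addresses) can be sketched like this; `parse_dns_list` is an illustrative stand-in, not the actual DD OS parser.

```python
# Illustrative parser for the DNS Server prompt described above:
# commas and spaces both separate entries, a blank entry means no DNS
# servers, and at most three addresses are accepted.
def parse_dns_list(entry: str) -> list:
    servers = entry.replace(",", " ").split()
    if len(servers) > 3:
        raise ValueError("at most three DNS servers are accepted")
    return servers

print(parse_dns_list("10.0.0.1, 10.0.0.2 10.0.0.3"))  # three servers
print(parse_dns_list(" "))                            # [] -> no DNS servers
```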
d. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.
In the settings recap, a Cable column shows whether each interface has a live connection; *** marks a port with no connection:

Cable
-----
***
-----
*** No connection on indicated Ethernet port

Do you want to save these settings (Save|Cancel|Retry):
Note An information box also appears in the recap if any interface is set up to use DHCP, but does not have a live Ethernet connection. After troubleshooting and completing the Ethernet connection, wait for up to two minutes for the Data Domain System to update the interface. The Cable column of the net show hardware command displays whether or not the Ethernet connection is live for each interface.
6. The third section is for CIFS (Common Internet File System) configuration. At the first prompt, enter yes to configure CIFS parameters. The default authentication mode is Active-Directory.

Note: When configuring a destination Data Domain System as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data that is on the destination).

CIFS Configuration
Configure CIFS at this time (yes|no) [no]: yes

a. Select a user-authentication method for the CIFS user accounts that connect to the /backup and /ddvar shares on the Data Domain System.

CIFS Authentication
Which authentication method will this system use (Workgroup|Domain|Active-Directory) [Active-Directory]:

The Workgroup method has the following prompts. Enter a workgroup name, the name of a CIFS workgroup account that will send backups to the Data Domain System, a password for the workgroup account, a WINS server name, and backup server names.

Workgroup Name
Enter the workgroup name for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup User
Enter backup user name:
Backup User Password
Enter backup user password:

Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:
22
The Domain method has the following prompts. Enter a domain name, the name of a CIFS domain account that will send backups to the Data Domain System and, optionally, one or more domain controller IP addresses, a WINS server name, and backup server names. Press Enter with no entry to break out of the prompts for domain controllers.

Domain Name
Enter the name of the Windows domain for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup User
Enter backup user name:
Domain Controller
Enter the IP address of domain controller 1 for this system [ ]:

Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:

The Active-Directory method has the following prompts. Enter a fully-qualified realm name, the name of a CIFS backup account, a WINS server name, and backup server names. Data Domain recommends not specifying a domain controller. When not specifying a domain controller, be sure to specify a WINS server. The Data Domain System must meet all Active-Directory requirements, such as a clock time that differs from the domain controller's by no more than five minutes. Press Enter with no entry to break out of the prompts for domain controllers.

Active-Directory Realm
Enter the name of the Active-Directory Realm for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup User
Enter backup user name:
Domain Controllers
Enter list of domain controllers for this system [ ]:
Enter the WINS server for the Data Domain System to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain System clients. An asterisk (*) is allowed as a wildcard only when used alone to mean all.
Backup Server List
Enter the Backup Server list (CIFS clients of /backup) []:

b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value. The following example uses an authentication mode of Active-Directory.

Pending CIFS Settings
-------------------------------------
Auth Method           Active-Directory
Domain                domain1
Realm                 domain1.local
Backup User           dsmith
Domain Controllers
WINS Server           192.168.1.10
Backup Server List    *
-------------------------------------
Do you want to save these settings (Save|Cancel|Retry):

7. The fourth section is for NFS configuration. At the first prompt, enter yes to configure NFS parameters.

NFS Configuration
Configure NFS at this time (yes|no) [no]: yes

a. Add backup servers that will access the Data Domain System through NFS. You can enter a list that is comma-separated, space-separated, or both. An asterisk (*) opens the list to all clients. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use adminaccess add and nfs add /backup to add backup servers.

Backup Servers
Enter the Backup Server list (NFS clients of /backup) []:
b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending NFS Settings
Backup Server List:
Do you want to save these settings (Save|Cancel|Retry):

8. The fifth section is for system parameters. At the first prompt, enter yes to configure system parameters.

SYSTEM Configuration
Configure SYSTEM Parameters at this time (yes|no) [no]:

a. Add a client host from which you will administer the Data Domain System. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use the commands adminaccess add and nfs add /ddvar to add other administrative hosts.

Admin host
Enter the administrative host []:

b. You can add an email address so that someone at your site receives email for system alerts and autosupport reports. For example, jsmith@yourcompany.com. By default, the Data Domain System email lists include an address for the Data Domain support group. You can later use the Data Domain System commands alerts and autosupport to add more addresses.

Admin email
Enter an email address for alerts and support emails []:

c. You can enter a location description to make it easier to identify the physical machine. For example, Bldg4-rack10. The alerts and autosupport reports display the location.

System Location
Enter a physical location, to better identify this system []:

d. Enter the name of a local SMTP (mail) server for Data Domain System emails. If the server is an Exchange server, be sure that SMTP is enabled.

SMTP Server
Enter the hostname of a mail server to relay email alerts []:
e. The default time zone for each Data Domain System is the factory time zone. For a complete list of time zones, see Time Zones on page 451.

Timezone Name
Enter your timezone name [US/Pacific]:

f. To allow the Data Domain System to use one or more Network Time Protocol (NTP) servers, you can enter IP addresses or server names. The default is to enable NTP and to use multicast.

Configure NTP
Enable Network Time Service? (yes|no|?) [yes]:
Use multicast for NTP? (yes|no|?) [no]:
Enter the NTP Server list [ ]:

g. A listing of your choices appears. Accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending system Settings
---------------------------------
Admin host        pls@yourcompany.com
system Location   Server Room 52327
SMTP Server       mail.yourcompany.com
Timezone name     US/Pacific
NTP Servers       123.456.789.33
---------------------------------
Do you want to save these settings (Save|Cancel|Retry):

Note: For Tivoli Storage Manager on an AIX backup server to access a Data Domain System, you must re-add the backup server to the Data Domain System after completing the original configuration setup. On the Data Domain System, run the following command with the server-name of the AIX backup server:

# nfs add /backup server-name (insecure)

h. Configure the backup servers. For the most up-to-date information about setting up backup servers for use with a Data Domain System, go to the Data Domain Support web site (https://support.datadomain.com/). See the Technical Notes section.
Additional Configuration
The following are common changes that users make to the Data Domain System configuration after installation. All changes to the initial configuration settings are made through the command line interface. Each item below describes the general task and the command used to accomplish it.
Add email addresses to the alerts list and the autosupport list. See Add to the Email List on page 134 for details. alerts add addr1[,addr2,...]
Give access to additional backup servers. See NFS Management on page 301 for details. nfs add /backup srvr1[,srvr2,...]
From a remote machine, add an authorized SSH public key to the Data Domain System. See Add an Authorized SSH Public Key on page 112 for details.

ssh-keygen -d
ssh -l sysadmin rstr01 adminaccess add ssh-keys \
  < ~/.ssh/id_dsa.pub
Add remote hosts that can use FTP or TELNET on the Data Domain System. See Add a Host on page 109 for details. adminaccess add {ftp | telnet | ssh | http} {all | host1[,host2,...]}
Enable HTTP, HTTPS, FTP or TELNET. The SSH, HTTP, and HTTPS services are enabled by default. See Enable a Protocol on page 111 for details. adminaccess enable {http | https | ftp | telnet | ssh | all}
Add a standard user. See User Administration on page 115 for details. user add username
Change a user password. See User Administration on page 115 for details. user change password username
To find instructions for a task, look in the table of contents at the beginning of this guide for the heading that describes the task.

To list the Data Domain System commands and operations, log in to the Data Domain System using SSH (or TELNET if that is enabled) and enter a question mark (?) at the prompt. To see a list of operations available for a particular command, enter the command name. To display a detailed help page for a command, use the help command with the name of the target command. Use the up and down arrow keys to move through a displayed help page. Press q to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.
For a complete explanation of the default Data Domain Enterprise Manager screen, see Graphical User Interface on page 399.
A Data Domain ES20 expansion shelf is a 3U chassis that has 16 disks for increasing the storage capacity of a Data Domain system. The Data Domain OS Data Invulnerability Architecture and all other Data Domain System data integrity features that protect against data loss from hardware and software failures also apply to the ES20 expansion shelf. All Data Domain System data compression technology also applies as does the Data Domain Replicator feature that sets up and manages replication of backup data between two Data Domain Systems. The Replicator sees data on an expansion shelf as part of the volume that resides on the managing Data Domain System.
In related Data Domain System commands, the system and each expansion shelf are each called an enclosure. A system sees all data storage (system and attached shelves) as part of a single volume. A new system installed along with expansion shelves finds the shelves when booted up. Follow the instructions in this chapter to add shelves to the volume and create RAID groups. After adding a shelf to a system with an existing, active file system, a percentage of new data is sent to the new shelf. An algorithm takes into account the amount of space available in the Data Domain file system, in the file system on a previously installed shelf (if one exists), and the probable impact of location on read/write times. Over time, data is spread evenly over all enclosures.
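The leveling behavior described above can be approximated by weighting new data in proportion to each enclosure's free space. This is a hedged sketch of the idea only; the real DD OS algorithm also weighs the probable impact of location on read/write times.

```python
# Hedged sketch of the placement idea: new data is distributed across
# enclosures in proportion to free space, so usage evens out over time.
# The real DD OS algorithm also considers read/write locality.
def placement_weights(free_gib):
    total = sum(free_gib.values())
    return {enc: free / total for enc, free in free_gib.items()}

weights = placement_weights({"enclosure1": 100.0, "enclosure2": 300.0})
print(weights)  # the emptier enclosure2 receives 3x the share of new data
```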
Warning: After adding a shelf to a volume, the volume must always include the shelf to maintain file system integrity. Do not add a shelf and then later remove it, unless you are prepared to lose all data in the volume. If a shelf is disconnected, the volume's file system is immediately disabled. Re-connect the shelf, or transfer the shelf disks to another shelf chassis and connect the new chassis, to re-enable the file system. If the data on a shelf is not available to the volume, the volume cannot be recovered. Without the same disks in the original shelf or in a new shelf chassis, the Data Domain operating system must be re-installed. Contact Data Domain Technical Support for the re-installation procedure.

Note: Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.
All administrative access to an ES20 shelf is done through the controlling Data Domain System command line interface and graphical user interface. Initial configuration tasks, changes to the configuration, and displaying disk usage in a shelf are all done with standard Data Domain System commands as explained in this chapter.
RAID groups
The single volume that includes all disks and shelves in a system is made up of multiple RAID 6 groups, also called disk groups. Each shelf is one RAID group, and the system itself is one RAID group.
The system has a RAID group of 12 data disks, two parity disks, and one spare. Each shelf has a RAID group with 12 data disks and two parity disks; each shelf also has two spares, which are global and are used when needed in a certain order. A RAID group is created on a new shelf with the disk add enclosure command.
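The disk roles just described tally up as follows; this is an illustrative count, consistent with the 15-slot head unit and 16-slot ES20 shelf capacities reported by enclosure show summary.

```python
# Disk roles per enclosure as described above: the head unit has
# 12 data + 2 parity + 1 spare (15 disks); each ES20 shelf has
# 12 data + 2 parity + 2 global spares (16 disks).
def enclosure_layout(is_shelf):
    return {"data": 12, "parity": 2, "spare": 2 if is_shelf else 1}

head = enclosure_layout(is_shelf=False)
shelf = enclosure_layout(is_shelf=True)
print(sum(head.values()), sum(shelf.values()))  # 15 16
```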
Disk Failures
A system and two expansion shelves (three enclosures) have a total of five spare disks. If the number of spare disks needed by an enclosure exceeds the number of spares in that enclosure, the RAID group for that enclosure takes an available spare disk from another enclosure. Warning If no spare disks are available from any enclosure, a shelf can have up to two more failed disks and still maintain the RAID group of 12 data disks. However, if one more disk in a shelf fails (leaving only 11 data disks), the data volume (made up of all the enclosures) fails and cannot be recovered. Always replace any failed disk in any enclosure as soon as possible.
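The spare arithmetic and the failure limit in the warning above can be restated as a small sketch; `total_spares` and `group_survives` are illustrative helpers, not DD OS behavior guarantees.

```python
# Spare-disk arithmetic from the text: the head unit carries one spare
# and each shelf carries two, so a system plus two shelves has five.
def total_spares(num_shelves):
    return 1 + 2 * num_shelves

# With no spares available anywhere, a 12+2 RAID 6 group tolerates at
# most two further failed disks; a third concurrent failure loses the
# data volume.
def group_survives(extra_failed_disks):
    return extra_failed_disks <= 2

print(total_spares(2))    # 5
print(group_survives(3))  # False
```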
Add a Shelf
Physically install shelves by following the installation instructions received with each shelf. After installing shelves and starting the Data Domain System, the following commands display the state of disks and the Data Domain System/shelf connections before the shelves are integrated as a RAID group.
You can check the status of the SAS HBA cards before the shelves are physically connected to the Data Domain System. Enter the disk port show summary command. Each HBA generates one line in the command output. In the example below, the Data Domain System has two HBAs and no shelf cable attached to either card, giving a Status of offline for both HBAs.
# disk port show summary
Port  Connection  Link   Connected      Status
      Type        Speed  Enclosure IDs
----  ----------  -----  -------------  -------
3a    SAS                               offline
4a    SAS                               offline
----  ----------  -----  -------------  -------
After the shelves are physically connected to the Data Domain System, the disk port show summary output includes enclosure IDs and a status of online.

# disk port show summary
Port  Connection  Link   Connected      Status
      Type        Speed  Enclosure IDs
----  ----------  -----  -------------  ------
3a    SAS                2              online
4a    SAS                3              online
----  ----------  -----  -------------  ------
On the system, use the enclosure show summary command to verify that the shelves are recognized.

# enclosure show summary
Enclosure  Model No.          Serial No.        Capacity
---------  -----------------  ----------------  --------
1          Data Domain DD580  1234567890        15 Slots
2          Data Domain ES20   50050CC100100A3A  16 Slots
3          Data Domain ES20   50050CC100100AE6  16 Slots
---------  -----------------  ----------------  --------
You can physically identify which shelf an enclosure number refers to by matching the Serial No (actually the world-wide name of the enclosure) from the enclosure show summary command with the enclosure WWN located on the control panel on the back of the shelf. See Figure 6 for the location.
Enter the disk show raid-info command to show the current RAID status of the disks. All disks should have a State of unknown or foreign. # disk show raid-info
Enter the filesys show space command to display the file system that is seen by the system. # filesys show space
Use the following commands to make the shelf disks available:

1. The new disks are not yet part of a RAID group or part of the Data Domain System volume. Use the disk add enclosure command to add the disks to the volume. The command asks for confirmation and then for the sysadmin password. When adding two shelves, use the command once for enclosure 2 and once for enclosure 3.

# disk add enclosure 2
The 'disk add' command adds all disks in the enclosure to the filesystem. Once the disks are added, they cannot be removed from the filesystem without re-installing the system.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
Please enter sysadmin password to confirm 'disk add enclosure':

Note: On DD6xx systems, the message returned by the disk add enclosure command will be different from the above, and it could take much longer for the first shelf. Typically it should take 3 or 4 minutes for the first shelf, and half a minute for each subsequent shelf.

2. Use the disk show raid-info command to display the RAID groups. Each shelf should show most disks with a State of in use and two disks with a State of spare.

# disk show raid-info

If disks from each shelf are labeled as unused rather than spare, use the disk unfail command for each unused disk. For example, if the two disks 2.15 and 2.16 are labeled unused, enter the following two commands:

# disk unfail 2.15
# disk unfail 2.16

Use the following commands to display the new state of the file system and disks:

# filesys status

Check the file system as seen by the system:

# filesys show space
Estimated compression factor*: 0.8x = 7040.9/(7880.4+0.3+39.2)
* Estimate based on 2007/02/08 cleaning

The disk show raid-info command should show a State of in use or spare for all disks in the shelves.
Disk Commands
With DD OS 4.1.0.0 and later releases, all disk commands that take a disk-id variable must use the format enclosure-id.disk-id to identify a single disk. Both parts of the ID are decimal numbers. A Data Domain System with no shelves must also use the same format for disks on the Data Domain System. A Data Domain System always has the enclosure-id of 1 (one). For example, to check that disk 12 in a system (with or without shelves) is recognized by the DD OS and hardware, use the following command:

# disk beacon 1.12

In DD OS releases previous to 4.1.0.0, output from disk commands listed individual disks with the word disk and a number. For example:

# disk show hardware
Disk    Manufacturer/Model   Firmware   Serial No.       Capacity
------  ------------------   --------   --------------   ----------
disk1   HDS725050KLA360      K2A0A51A   KRFS06RAG9VYGC   465.76 GiB
disk2   HDS725050KLA360      K2AOA51A   KRFS06RAG9TYYC   465.76 GiB

Output now shows the enclosure (Enc) number, a dot, and the disk (Slot) number:

Disk    Manufacturer/Model   Firmware   Serial No.       Capacity
------  ------------------   --------   --------------   ----------
1.1     HDS725050KLA360      K2AOA51A   KRFS06RAG9VYGC   465.76 GiB
1.2     HDS725050KLA360      K2AOA51A   KRFS06RAG9TYYC   465.76 GiB
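The enclosure-id.disk-id convention can be sketched as a small parser; `parse_disk_id` is an illustrative helper, not a DD OS utility.

```python
# The disk-id convention: enclosure-id.disk-id, both decimal, with the
# head unit always enclosure 1, so "disk 12 in the system" is 1.12.
def parse_disk_id(disk_id):
    enclosure, disk = disk_id.split(".")
    return int(enclosure), int(disk)

print(parse_disk_id("1.12"))  # (1, 12) -> head unit, disk 12
print(parse_disk_id("2.15"))  # (2, 15) -> first shelf, disk 15
```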
Command output for a system that has one or more expansion shelves includes entries for all enclosures, disk slots, and RAID groups.

Note: All system commands that display the use of disk space or the amount of data on disks compute and display amounts using base-2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.

1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
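The base-2 units in the note compute directly:

```python
# The binary units used by all disk-space output, computed directly.
units = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}
for name, value in sorted(units.items(), key=lambda item: item[1]):
    print(f"1 {name} = {value:,} bytes")
# A command reporting 1 GiB used means 1,073,741,824 bytes, not 10**9.
```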
List Enclosures
To list known enclosures, model numbers, serial numbers, and capacity (the number of disks in the enclosure), use the enclosure show summary command. For an expansion shelf, the serial number, the chassis serial number, the enclosure WWN (world-wide name), and the OPS Panel WWN are all the same value. See Figure 7 for the physical location of the WWN label on the back panel of the shelf.

enclosure show summary

For example:
Enclosure  Model No.
---------  -----------------
1          Data Domain DD560
2          Data Domain ES20
3          Data Domain ES20
---------  -----------------
3 enclosures present.
Identify an Enclosure
To check that the Data Domain OS and hardware recognize an enclosure, use the enclosure beacon operation. The operation causes the green (activity) LED on each disk in an enclosure to flash green. Use the Control-C key sequence to turn off the operation. Administrative users only.

enclosure beacon enclosure-id
Enclosure numbering starts with the system as enclosure 1 (one). The description for a shelf lists one fan for each power/cooling unit. Level is the fan speed, which depends on the internal temperature and the amount of cooling needed. Status is either OK or Failed.
Port: See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS or FC, depending on the Data Domain system model; SAS for enclosures and FC (Fibre Channel) for a gateway system.
Link Speed: The HBA port link speed.
Connected Enclosure IDs: The number assigned to each shelf. The order in which the shelves are numbered is not important.
Status: online or offline. Offline means that the shelf is not seen by the system. Check the cabling and that the shelf is powered on.
40 Data Domain Operating System User Guide
Port: See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS for expansion shelves and FC (Fibre Channel) for a gateway system.
Link Speed: The HBA port link speed.
Connected Enclosure IDs: The IDs of the shelves that are connected.
Status: online or offline.
Display Statistics
To display statistics useful when troubleshooting HBA-related problems, use the enclosure port show stats operation. The command output is used by Data Domain Technical Support. enclosure port show stats [port-id]
Shelf (enclosure) Commands

Encl  WWN               Serial #
----  ----------------  ----------------
2     50050CC1001019AA  50050CC1001019AA
3     50050CC10010194D  50050CC10010194D
4     50050CC100100FD1  50050CC100100FD1
5     50050CC100101A80  50050CC100101A80
6     50050CC1001019E6  50050CC1001019E6
7     50050CC100101933  50050CC100101933
----  ----------------  ----------------
A physical diagram corresponding to the sample output is shown in the figure: Data Domain system with two dual-port HBAs and six shelves.
Figure 8 Data Domain system with 2 dual port HBAs and six shelves.
Note Enclosure numbers are not static; they may change when the system is rebooted. (The numbers are generated according to when the shelves are detected during system boot.) Thus, in order to determine enclosure cabling, refer to the WWN (World Wide Name) of each enclosure, which is also shown in the output of the enclosure show topology command.
Volume Expansion
Note: Don't add a shelf when there's a disk failure of any kind. Repair any disk failures before adding a shelf.
Procedure: Create RAID group on new shelf that has lost disks
The following procedure shows how to create a RAID group on a new shelf that has lost three or more disks to existing RAID groups.

1. Use the disk show raid-info command to identify which RAID group is using disks in the new shelf. Also note which disk(s) each RAID group is using.

2. In the enclosure for the RAID group that is using one or more disks in the new shelf, replace the bad disks that created the need for a spare outside of the enclosure.

3. In the new shelf, fail a disk used by the enclosure that now has a replacement spare disk. The RAID group should immediately start to rebuild using the new spare in its own enclosure. After the rebuild, fail other disks in the new shelf as needed to move data to other replacement spares in other enclosures.

4. Unfail the disk or disks in the new shelf that were used by the other RAID group(s).

5. Run disk add enclosure for the new shelf.
Always replace failed disks as soon as possible. See Replace Disks in the Hardware Guide.

If disk group 1 or disk group 2 uses the spare disk on the system for reconstruction:

1. Immediately replace all failed disks in all systems so that spares are available.
2. Fail the group 1 or group 2 disk that is on the system.
3. Wait for reconstruction to complete on one of the expansion shelf spares.
4. Unfail the disk on the system, which should return to the state of spare.
If disk group 0 reconstructs a disk using a spare from an expansion shelf:

1. Immediately replace all failed disks in all systems.
2. Fail the disk group 0 disk that is on a shelf.
3. Wait for reconstruction to complete on a system spare.
4. Unfail the failed shelf disk. The disk should return to the state of spare.
Gateway systems
Gateway Data Domain Systems store data in, and restore data from, 3rd-party disk arrays mounted through Fibre Channel connections. Currently, the gateway Data Domain Systems support the following types of connectivity:
Fibre Channel direct-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Fibre Channel SAN-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Note: Generally, all serial interfaces for networking are quoted in bits per second (lowercase b) rather than bytes (uppercase B). See the Gateway Compatibility Matrix on the Data Domain Support web site for the latest updates of certified storage arrays, storage firmware, and SAN topology.

Points to be aware of with a gateway system:
The system supports a single volume with a single data collection. A data collection is all the files stored in a single Data Domain System.
When using a SAN-attached gateway Data Domain System, the SAN must be zoned before the Data Domain System is booted.
The storage array can have single or multiple controllers, and each controller can have multiple ports. The storage array port used for gateway connectivity cannot be shared with other SAN-connected hosts that access the array. Multiple gateway systems can access storage on a single storage array.
The 3rd-party physical disks that provide storage to the gateway should be dedicated to the gateway and not shared with other hosts. 3rd-party physical disk storage is configured into one or more LUNs that are exported to the gateway.
All LUNs presented to the gateway are used automatically when the gateway is booted. Use the Data Domain System commands disk rescan and disk add to see newly added LUNs.
A volume may use any of the disk types supported on the disk array. However, only one disk type can be used for all LUNs in the volume to assure equal performance for all LUNs. All disks in the LUNs must be like drives in identical RAID configurations. Multiple storage array RAID configurations can be used; however, you should select RAID configurations that provide the fastest possible sequential data access for the type of disks used.
A gateway system supports one volume composed of 1 to 16 LUNs. LUN numbers must start at 0 (zero) and be contiguous. The total amount of storage can be no more than a certain maximum; see the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide.
LUNs should be provisioned across the maximum number of spindles available. Vendor-specific provisioning best practices should be used and, if available, vendor-specific tools should be used to create a virtual- or meta-LUN that spans multiple LUNs. If virtual- or meta-LUNs are used, they must follow the configuration parameters defined in this chapter.
For replication between a gateway Data Domain System and other model Data Domain Systems, the total amount of storage on the originator must not exceed the total amount of storage on the destination.
Replication between gateway systems must use storage arrays with similar performance characteristics. The size of destination storage must be equal to or greater than the size of source storage. Configurations do not need to be identical.
The maximum data size for a LUN that a gateway Data Domain system can access is no longer limited to 2 TiB; however, LUNs larger than 10 TiB are not tested. (The "data size" means the size of the LUN presented to the Data Domain system by the 3rd-party physical disk storage.)
The minimum data size for a LUN that a gateway system can access is 400 GiB for the first LUN and 100 GiB for subsequent LUNs. That is, for the initial install the LUN size should be 400 GiB or higher, and if you have only one LUN it must be at least 400 GiB. To use the maximum amount of space on a system, create multiple LUNs and adjust the LUN sizes so that the smallest is at least 100 GiB. The data size means the size of the LUN presented to the Data Domain System by the 3rd-party physical disk storage. The maximum total size of all LUNs accessed by a Data Domain System depends on the system, and is shown in the table Data Domain System Capacities in the Hardware Guide. A smaller volume can be expanded by adding LUNs. A Fibre Channel host bus adapter card in the Data Domain System communicates with the 3rd-party physical storage disk array.
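The LUN-sizing rules above (first LUN at least 400 GiB, subsequent LUNs at least 100 GiB, 1 to 16 LUNs per volume) can be checked before provisioning. The following is an illustrative helper only, not a DD OS command; the function name and the convention of passing sizes in GiB are assumptions for this sketch.

```shell
# Illustrative pre-provisioning check (not a DD OS command).
# Arguments are proposed LUN sizes in GiB; the first argument is the
# first LUN presented to the gateway.
check_lun_layout() {
    count=$#
    if [ "$count" -lt 1 ] || [ "$count" -gt 16 ]; then
        echo "FAIL: need 1 to 16 LUNs (got $count)"; return 1
    fi
    first=$1; shift
    if [ "$first" -lt 400 ]; then
        echo "FAIL: first LUN must be >= 400 GiB (got ${first} GiB)"; return 1
    fi
    for size in "$@"; do
        if [ "$size" -lt 100 ]; then
            echo "FAIL: subsequent LUNs must be >= 100 GiB (got ${size} GiB)"; return 1
        fi
    done
    echo "OK: $count LUN(s) meet the minimum-size rules"
}
```

For example, `check_lun_layout 400 100 100` passes, while `check_lun_layout 300` fails because the first (and only) LUN is below 400 GiB.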
Gateway Types
A gateway system has the same chassis and CPUs as the equivalent model number non-gateway system. See the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide for details.
DD6xxg Gateways
The DD6xx gateway systems have four disks used for file system configuration and location information. The DD6xx disks are not used for file system data storage; all data storage is on the external disk arrays. The system can boot up without LUNs. Note For the DD690g, the maximum number of LUNs is 16. The maximum total limit for all LUNs is the same as the maximum limit with six shelves: 35.47 TB. The maximum data size for a LUN that a gateway Data Domain system can access is no longer limited to 2 TiB; however, LUNs larger than 10 TiB are not tested. See the table Data Domain System Capacities in the Hardware Guide.
disk add dev<dev-id>
Expands the 3rd-party physical disk storage seen by the Data Domain System to include a new LUN. For example:
# disk add dev3

disk rescan
Searches 3rd-party physical disk storage for new or removed LUNs.
disk show raid-info
The following example shows two LUNs available to the Data Domain System. After the "drives are in use" line, the remainder of the drives lines are not valid.
system12# disk show raid-info
Disk    State          Additional Status
-----   ------------   -----------------
1       in use (dg0)
2       in use (dg0)
-----   ------------   -----------------
2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
0 drives are "not in use"
0 drives are "missing/absent"
disk show performance
Displays information similar to the following for each LUN. Cumul. stands for Cumulative.
system12# disk show performance
Disk   Read      Write     Cumul.    Busy
       sects/s   sects/s   MiB/sec
-----  -------   -------   -------   ----
1      46        109       0.075     14 %
2      0         0         0.000     0 %
disk show detailed-raid-info
Displays information similar to the following for each LUN:
system12# disk show detailed-raid-info
Disk Group (dg0) - Status: normal
  Raid Group (ext3):(raid-0)(61.01 GiB) - Status: normal
  Raid Group (ext3_1):(raid-100)(68.64 GiB) - Status: normal
    Slot   Disk   State          Additional Status
    ----   ----   ------------   -----------------
    1      1      in use (dg0)
    ----   ----   ------------   -----------------
  Raid Group (ppart):(raid-0)(3.04 TiB) - Status: normal
  Raid Group (ppart_1):(raid-100)(3.04 TiB) - Status: normal
    Slot   Disk   State          Additional Status
    ----   ----   ------------   -----------------
    1      1      in use (dg0)
    2      2      in use (dg0)
    ----   ----   ------------   -----------------
  Spare Disks
    None
  Unused Disks
    None

disk show hardware
Displays information similar to the following for each LUN. LUN is the LUN number used by the 3rd-party physical disk storage system. Port WWN is the world-wide number of the port on the 3rd-party physical disk storage system through which data is sent to the Data Domain System. Manufacturer/Model includes a label that identifies the manufacturer; the display may include a model ID, RAID type, or other information depending on the vendor string sent by the 3rd-party physical disk storage system. Firmware is the firmware level used by the 3rd-party physical disk storage controller.
Gateway systems
Serial No. is the serial number of the 3rd-party physical disk storage system. Capacity is the amount of data in a volume sent to the Data Domain System. The display ends with a drive count, for example: 2 drives present.

disk status
Displays information similar to the following. After the "drives are operational" line, the remainder of the drives lines are not valid.
system12# disk status
Normal - system operational
1 disk group total
9 drives are operational
Installation
A Data Domain System using 3rd-party physical disk storage must first connect with the 3rd-party physical disk storage and then configure the use of the storage.
3. On the 3rd-party physical storage disk array system, configure LUN masking so that the Data Domain System can see only those LUNs that should be available to it. The Data Domain System writes to every LUN that is available.
4. Connect the Fibre Channel cable to one of the Fibre Channel HBA card ports on the back of the Data Domain System. The cable and the 3rd-party physical disk storage must also be connected to the FC-AL. Up to four cables can be used for basic connectivity and also for multipath.
5. Connect a serial terminal to the Data Domain System. A VGA console does not display the menu mentioned in the next step of this procedure.
6. Press the Power button on the front of the Data Domain System. During the initial system start, the Data Domain System does not know of the available LUNs. The following menu appears with the Do a New Install entry selected:
   New Install
   1. Do a New Install
   2. Show Configuration
   3. Reboot
7. Check that the LUNs available from the connected array system are correct. Use the down-arrow key, select Show Configuration, and press Enter. The configuration menu appears with Show Storage Information selected:
   System Configuration (Before Installation)
   1. Show Storage Information
   2. Show Head Information
   3. Go to Previous Menu
   4. Go to Rescue Menu
   5. Reboot
8. Press Enter to display storage information. Each LUN that is available from the array system appears as a one-line entry in the List of SCSI Disks/LUNs. The Valid RAID DiskGroup UUID List section shows no disk groups until after installation. Use the arrow keys to move up and down in the display.
   Storage Details
   Software Version: 4.5.0.0-62320
   Valid RAID DiskGroup UUID List:
   ID   DiskGroup UUID   Last Attached Serialno
   -------------------------------------------------
   - No diskgroup uuids were found -
List of SCSI Disks/LUNs: (Press ctrl+m for disk size information)
ID   UUID      tgt   lun   loop   wwpn               comments
--   -------   ---   ---   ----   ----------------   --------
1    No UUID   0     0     0      500601603020e212
2    No UUID   0     4     0      500601603020e212
Number of Flash disks: 1
----------------------------------------
Errors Encountered:
----------------------------------------
- No errors to report

9. Press Enter to return to the New Install menu.
10. Use the up-arrow key to select Do a New Install.
11. Press Enter to start the installation. The system automatically configures the use of all LUNs available from the array.
12. Press Enter to accept the Yes selection in the New Install? Are you sure? display. No other user input is required. A number of displays appear during the reboot. Each one automatically times out with the displayed information and the reboot continues.
13. When the reboot completes, the login prompt appears. Log in and configure the Data Domain System as explained in the Installation chapter of this manual, beginning with step 2 on page 18.
3. On the 3rd-party physical storage disk array system, configure LUN masking so that the Data Domain System can see only those LUNs that should be available to it. The Data Domain System writes to every LUN that is available.
4. Connect the Fibre Channel cable from the Fibre Channel Arbitrated Loop (FC-AL) to one of the Fibre Channel HBA card ports on the back of the Data Domain System. The cable and the 3rd-party physical disk storage must also be connected to the FC-AL.
5. Connect a serial terminal to the Data Domain System. A VGA console does not display the menu mentioned in the next step of this procedure.
6. Press the Power button on the front of the Data Domain System.
7. Boot up.
8. Log in as sysadmin.
9. Enter the command: disk rescan
10. To find the device name, enter the command: disk show raid-info
11. Enter the command disk add dev<x>, where dev<x> is the device returned by the previous command, for example dev3.
12. Wait three or four minutes.
13. Enter the command filesys status to verify that the system is up and running.
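Steps 9 through 11 above amount to finding the device that is visible but not yet in use and adding it. The following sketch shows that selection step only; the `disk show raid-info` output is hard-coded sample text here, since on a real system you would capture the command's output instead.

```shell
# Sketch only: pick out the disk ID that raid-info-style output reports
# as "unknown" (visible but not yet added to the volume).
raid_info='Disk  State         Additional Status
1     in use (dg0)
2     in use (dg0)
3     unknown'

# Second whitespace-separated field is the state; "unknown" marks the
# newly visible LUN.
unknown_disks=$(printf '%s\n' "$raid_info" | awk '$2 == "unknown" {print $1}')
echo "Add with: disk add dev$unknown_disks"
```

With the sample output above, this prints the command to run next: disk add dev3.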
1. On the 3rd-party physical disk storage, create the new LUN. Make sure that masking for the new LUN allows the Data Domain System to see the LUN.
2. On the Data Domain System, enter the disk rescan command to find the new LUN.
# disk rescan
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 03
  Vendor: NEXSAN Model: ATAbea(C0A80B0C) Rev: 8035
  Type: Direct-Access ANSI SCSI revision: 04
1 new device(s) found.
The disk show raid-info command then shows all of the previously configured LUNs (as disk 1, disk 2, and so on) and the new LUN as unknown. Also, the new LUN is referenced in the line 1 drive is "not in use". A LUN that was previously used by a different Data Domain system and shows as foreign cannot be added.
# disk show raid-info
Disk    State          Additional Status
-----   ------------   -----------------
1       in use (dg0)
2       in use (dg0)
3       unknown
-----   ------------   -----------------
2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
1 drive is "not in use"
0 drives are "missing/absent"
Note At this point, the new LUN can be removed from 3rd-party physical disk storage with no damage to the Data Domain System file system. The disk rescan command then shows the LUN as removed. After using the disk add command (the next step), you cannot safely remove the LUN.
3. Use the disk add dev<dev-id> command to add the new LUN to the Data Domain System volume. The dev-id is given in the output from the disk show raid-info command.
# disk add dev3
The 'disk add' command adds a disk to the filesystem. Once the disk is added, it cannot be removed from the filesystem without re-installing the Data Domain System. Are you sure? (yes|no|?) [no]: yes Output from the disk show raid-info command should now show the new disk (LUN) as in use. Output from the filesys show space command should include the new space in the Data section.
System Maintenance
The Data Domain System system, ntp, and alias commands allow you to take system-level actions. Examples for the system command are shutting down or restarting the Data Domain System, displaying system problems and status, and setting the system date and time. The alias command allows users to set up aliases for Data Domain System commands. The ntp command manages access to one or more time servers. The support command sends multiple log files to the Data Domain Support organization. Support staff may ask you to use the command when dealing with unusual situations. See Collect and Send Log Files on page 139 for details.
The upgrade operation shuts down the Data Domain System file system and reboots the Data Domain System. (If an upgrade fails, call customer support.) The upgrade operation may take over an hour, depending on the amount of data on the system. After the upgrade completes and the system reboots, the /backup file system is disabled for up to an hour for upgrade processing. Stop any active CIFS client connections before starting an upgrade. Use the cifs show active command on the Data Domain System to check for CIFS activity. Disconnect any client that is active. On the client, enter the command net use \\dd\backup /delete. For systems that are already part of a replication pair: With directory replication, upgrade the destination and then upgrade the source. With collection replication, upgrade the source and then upgrade the destination. With one exception, replication is backwards compatible within release families (all 4.2.x releases, for example) and with the latest release of the previous family (4.3 is compatible with release 4.2, for example). The exception is bi-directional directory replication, which requires the source and destination to run the same release. Do NOT disable replication on either system in the pair.
Note Before starting an upgrade, always read the Release Notes for the new release. DD OS changes in a release may require unusual, one-time operations to perform an upgrade.
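The upgrade-ordering rule above for replication pairs (directory replication: destination first; collection replication: source first) can be captured in a small planning helper. This is purely an illustrative sketch, not a DD OS command; the function name is hypothetical.

```shell
# Illustrative planning helper (not a DD OS command): given the
# replication type, report which side of the pair to upgrade first,
# per the rule described in the upgrade section.
upgrade_first() {
    case "$1" in
        directory)  echo "destination" ;;   # upgrade destination, then source
        collection) echo "source" ;;        # upgrade source, then destination
        *)          echo "unknown replication type"; return 1 ;;
    esac
}
```

For example, `upgrade_first directory` reports "destination", matching the rule that with directory replication the destination is upgraded before the source.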
3. Log in with the Data Domain login name and password that you use for access to the support web page.
4. Download the release recommended by your Data Domain field representative. The file should go to /ddvar/releases on the Data Domain System.
Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.
5. To start the upgrade, log in to the Data Domain System as sysadmin and enter a command similar to the following. Use the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:
# system upgrade 4.0.2.0-30094.rpm
"data storage" = a set of disks that make up a metagroup which houses a file system. This set of disks could be physical disks or LUNs residing in an external storage array in a gateway system. "DD4xxg/DD5xxg" = DD4xx or DD5xx series gateway = DD460g, DD560g, or DD580g.
There are three possible cases:
1. DD690 -> DD690: you own a DD690, have purchased another DD690, and want to use the same storage/data.
2. DD690g -> DD690g: you own a DD690g, have purchased another DD690g, and want to use the same storage/data.
3. DD4xxg/DD5xxg -> DD690g: you own a DD4xx or DD5xx series gateway, have purchased a DD690g, and want to use the same storage/data. For this case, have an SE do step 15 for you.
(As of release 4.5.1, the system headswap command is only available when swapping to DD690/DD690g models.)
4. To determine whether the above conditions are met, run the disk status command. IF the output of disk status is one of the following:
"Error - data storage unconfigured, a complete set of foreign storage attached"
"Error - system non-operational, a complete set of foreign storage attached"
THEN Continue to step 6. (The 'system headswap' command will result in a headswap operation.) ELSE Go back to step 3 and fix the hardware configuration. (Other error messages are shown below.)
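The IF/THEN/ELSE decision above boils down to matching the first line of `disk status` output against the two "foreign storage attached" errors. The following is an illustrative decision helper only, not a DD OS command; the function name is an assumption for this sketch.

```shell
# Illustrative decision helper (not a DD OS command): given the first
# line of 'disk status' output, decide whether the hardware is ready
# for a headswap (continue to step 6) or needs fixing first (step 3).
headswap_ready() {
    case "$1" in
        "Error - data storage unconfigured, a complete set of foreign storage attached"|"Error - system non-operational, a complete set of foreign storage attached")
            echo "continue" ;;
        *)
            echo "fix-configuration" ;;
    esac
}
```

For example, a status line of "Normal - system operational" reports fix-configuration, since a normally operating system does not need (and is not ready for) a headswap.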
5. Considering the three cases: DD690 -> DD690 DD690g -> DD690g DD4xxg/DD5xxg -> DD690g (For this case, have an SE do step #15 for you.)
6. Upgrade the system to the left of the arrow (DD690, DD690g, or DD4xxg/DD5xxg) to the release you want to run. Note: the system to the left of the arrow must be at least at release 4.5.0.0.
7. Install on (or upgrade to the release you want to run) the system to the right of the arrow (DD690 or DD690g).
8. Using the system power off command (not the power switch), power off both systems.
Note Do not power-cycle the system with the power switch, or press the Reset switch, without calling Data Domain Support first. Instead, use the system power off command, for which you don't need to contact Data Domain Support.
9. Move the Fibre Channel cables from the DD4xxg/DD5xxg to the DD690g (or DD690 to DD690, or DD690g to DD690g) and make any necessary SAN/storage management changes.
10. Power on the new gateway and run disk rescan to discover the LUNs.
11. Make sure the LUNs show up as "foreign" in the output of the disk show raid-info command. Then run the system show hardware command to verify that you are seeing the LUNs you expect to see.
12. After verifying that the LUNs are visible to the new gateway as foreign devices, run the system headswap command.
13. The command does the necessary checks, and once the swap is done, the system reboots.
14. After the system comes up, run disk show raid-info again to verify that the new LUNs are part of a disk group and show up as "in use". Wait until this is so.
15. Set the system to ignore NVRAM, using the command: reg set system.IGNORE_NVRAM=1
NOTE: This is a workaround for the 690g only, and it should not be used with any other system! For the DD4xxg/DD5xxg -> DD690g case, have an SE do this step for you!
16. Run filesys enable to bring the file system up.
17. Once the file system is up, run filesys status and filesys show space to verify the health of the file system.
18. If directory replication contexts are present, break all replication contexts and re-add them, then run the replication resync command to resume the original replication contexts.
19. (IMPORTANT) Set the system back to not ignoring NVRAM, using the command: reg set system.IGNORE_NVRAM=0
Note If doing a headswap from a DD4xx/DD5xx-series gateway, the disk group that is created is not dg1, but rather "(dg0(2))". This is a new convention that might be confusing to someone doing this for the first time.
ERROR MESSAGES:
"No file system present, unable to headswap." There is no "data storage" present.
"Incomplete file system, unable to headswap." There is no complete set of "data storage".
"More than one file systems present, unable to headswap." More than one set of "data storage" is present.
"Existing file system incomplete, headswap unnecessary." The existing incomplete "data storage" belongs to the "head unit".
"File system operational, headswap unnecessary." The system is operating normally; no headswap operation is needed.
For more information on system headswap, see the documentation for your particular platform, including the appropriate Field Replacement Unit documents and sections of the Hardware Guide.
For example, you may have a DD690 and expansion shelves running 4.5.0. You install 4.5.1 on the head unit, and it asks for the system headswap command. After the reboot, you find that the head unit is back at 4.5.0. This is expected: the head unit resynchronizes itself with the storage on the expansion shelves, which takes precedence because the stored data resides there.
Port See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number. Link Speed is given in Gbps (gigabits per second). Firmware refers to the Data Domain system HBA firmware version. Hardware Address is a MAC address, WWN, or WWPN/WWNN, as follows: WWN is the world-wide name of the Data Domain system SAS HBA(s) on a system with expansion shelves. WWPN/WWNN is the world-wide port name or node name from the Data Domain system FC HBA on gateway systems.
The system display includes the current time, time since the last reboot (in days and hours), the current number of users, and the average load for file system operations, disk operations, and the idle time. The Filesystem line displays the time that has passed since the file system was last started.
For example: # system show uptime 12:57pm up 9 days, 18:55, 3 users, load average: 0.51, 0.42, 0.47 Filesystem has been up 9 days, 16:26
Display To display detailed system statistics, use the system show detailed-stats operation or click System Stats in the left panel of the Data Domain Enterprise Manager. The time period covered is from the last reboot, except when using interval and count. An interval, in seconds, runs the command every nsecs seconds for the number of times given in count. The first report covers the time period since the last reboot; each subsequent report covers activity in the last interval. The default interval is five seconds. The interval and count labels are optional when giving both an interval and a count. To give only an interval, you can enter a number for nsecs without the interval label. To give only a count, you must enter the count label and a number for count. The start and stop options return per-second averages of statistics over the time between the commands.
system show detailed-stats [start | stop | ([interval nsecs] [count count])]
The display is similar to the following:
# system show detailed-stats
(The output is a wide table. Its columns include CPU0 and CPU1 busy percentages, NFS and CIFS ops/s, disk read and write kiB/s, disk busy percentage, NVRAM read and write kiB/s, in and out kiB/s for each Ethernet interface from eth0 through eth3, and NFS processing counters.)
Display To display general system statistics, click system Stats in the left panel of the Data Domain Enterprise Manager.
Fans displays status for all the fans cooling each enclosure: Description tells where the fan is located in the chassis. Level gives the current operating speed range (low, medium, high) for each fan. The operating speed changes depending on the temperature inside the chassis. See Replace Fans in Hardware Guide to identify fans in the Data Domain System chassis by name and number. All of the fans in an expansion shelf are located inside the power supply units. Status is the system view of fan operations.
Temperature displays the number of degrees that each CPU is below the maximum allowable temperature and the actual temperature for the interior of the chassis. The C/F column displays temperature in degrees Celsius and Fahrenheit. The Status column shows whether or not the temperature is acceptable. If the overall temperature for a Data Domain System reaches 50 degrees Celsius, a warning message is generated. If the temperature reaches 60 degrees Celsius, the Data Domain System shuts down. The CPU numbers depend on the Data Domain System model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 degrees Celsius, the Data Domain System shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 degrees Celsius, the Data Domain System shuts down.
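The chassis thresholds above (warning at 50 degrees Celsius, shutdown at 60) can be sketched as a simple decision function. This is purely illustrative; the Data Domain system enforces these thresholds itself, and the function name is an assumption.

```shell
# Sketch of the chassis-temperature thresholds described above:
# warn at 50 C, shut down at 60 C.  Not a DD OS command.
chassis_temp_action() {
    t=$1   # chassis ambient temperature in degrees Celsius
    if [ "$t" -ge 60 ]; then
        echo "shutdown"
    elif [ "$t" -ge 50 ]; then
        echo "warning"
    else
        echo "ok"
    fi
}
```

For example, the 31 degree Celsius ambient reading in the sample display below falls in the "ok" range.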
Power Supply informs you that all power supplies are either operating normally or that one or more are not operating normally. The message does not identify which power supply or supplies are not functioning (except by enclosure). Look at the back panel of the enclosure and check the LED for each power supply to identify those that need replacement.
Display To display the current hardware status, use the system status operation.
system status
The display is similar to the following:
# system status
Enclosure 1
Fans
Description        Level
---------------    ------
Crossbar fan #1    medium
Crossbar fan #2    medium
Crossbar fan #3    medium
Crossbar fan #4    medium
Rear fan #1        medium
Rear fan #2        medium
---------------    ------
Temperature
Description        C/F
---------------    -------
CPU 0 Actual       -40/-72
CPU 1 Actual       -46/-83
Chassis Ambient    31/88
---------------    -------
Power Supply
The memory size, window size, and number of batteries identify the type of NVRAM card. The errors entry shows the operational state of the card. If the card has one or more PCI or memory errors, an alert email is sent and the daily AM email includes an NVRAM entry. Each battery entry should show 100% charged, enabled. The exceptions are a new system or a replacement NVRAM card; in both cases, the charge may initially be below 100%. If the charge does not reach 100% in three days (or if a battery is not enabled), the card should be replaced.
Display To display the NVRAM information, use the system show nvram operation.
system show nvram
The display is similar to the following:
# system show nvram
NVRAM Card:
component             value
-------------------   ---------------------
memory size           512 MiB
window size           16 MiB
number of batteries   2
errors                0 PCI, 0 memory
battery 1             100% charged, enabled
battery 2             100% charged, enabled
-------------------   ---------------------
Display Hardware
To display the PCI cards and other hardware in a Data Domain System, use the system show hardware operation. The display is useful for Data Domain Support when troubleshooting.
system show hardware
A few sample lines from the display follow:
# system show hardware
Slot   Vendor         Device           Ports
----   ------------   --------------   ------
0      Intel          82546GB GigE     0a, 0b
1      (empty)        (empty)
2      3-Ware         8000 SATA
3      QLogic         QLE2362 2Gb FC   3a
4      (empty)        (empty)
5      Micro Memory   MM-5425CN
6      (empty)        (empty)
----   ------------   --------------   ------
Display Memory
To display a summary of the memory in a Data Domain System, use the system show meminfo operation. The display is useful for Data Domain Support when troubleshooting.
system show meminfo
For example:
# system show meminfo
Memory Usage Summary
Total memory:  7987 MiB
Free memory:   1102 MiB
Total swap:   12287 MiB
Free swap:    12287 MiB
Add an Alias
To add an alias, use the alias add name command operation. Use double quotes around the command if it includes one or more spaces. A new alias is available only to the user who creates the alias. A user cannot create a working alias for a command that is outside of that user's permission level.
alias add name command
For example, to add an alias named rely for the Data Domain System command that displays reliability statistics:
# alias add rely "disk show reliability-data"
Remove an Alias
To remove an alias, use the alias del name operation. alias del name For example, to remove an alias named rely: # alias del rely
Reset Aliases
To return to the default alias list, use the alias reset operation. Administrative users only. alias reset
Display Aliases
To display all aliases and their definitions, use the alias show operation.
alias show
The following example displays the default aliases:
# alias show
date       -> system show date
df         -> filesys show space
hostname   -> net show hostname
ifconfig   -> net config
iostat     -> system show detailed-stats 2
netstat    -> net show stats
nfsstat    -> nfs show statistics
passwd     -> user change password
ping       -> net ping
poweroff   -> system poweroff
reboot     -> system reboot
sysstat    -> system show stats
traceroute -> route trace
uname      -> system show version
uptime     -> system show uptime
who        -> user show active
You have 16 aliases
The sysstat alias can take an interval value for the number of seconds between each display of statistics. The following example refreshes the display every 10 seconds:
# sysstat 10
Time servers set with the ntp add command override time servers from DHCP and from multicast operations. Time servers from DHCP override time servers from multicast operations. The Data Domain system ntp del and ntp reset commands act only on manually added time servers, not on DHCP supplied time servers. You cannot delete DHCP time servers or reset to multicast when DHCP time servers are supplied.
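The precedence described above (manually added servers override DHCP-supplied servers, which override multicast) can be sketched as a simple selection. This helper is illustrative only, not part of the ntp command; the function name and its count-based arguments are assumptions.

```shell
# Sketch of the time-server precedence described above: manual beats
# DHCP, which beats multicast.  Hypothetical helper, not a DD OS command.
ntp_source() {
    manual=$1   # number of servers added with 'ntp add'
    dhcp=$2     # number of DHCP-supplied servers
    if [ "$manual" -gt 0 ]; then
        echo "manual"
    elif [ "$dhcp" -gt 0 ]; then
        echo "dhcp"
    else
        echo "multicast"
    fi
}
```

For example, with two manually added servers and one from DHCP, the manually added servers are used.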
# ntp status NTP Service is currently enabled. Current Clock Time: Fri, Nov 12 2004 16:05:58.777 Clock last synchronized: Fri, Nov 12 2004 16:05:19.983 Clock last synchronized with time server: srvr26.company.com
Network Management
The net command manages the use of DHCP, DNS, and IP addresses, and displays network information and status. The route command manages routing rules.
Note Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost, and any data movement currently using routing is immediately cut off. Data Domain recommends making interface changes only during scheduled maintenance downtime. After making interface changes, you must reconfigure any routing rules and gateways.
A Data Domain system can have up to six physical Ethernet interface ports (eth0, eth1, eth2, eth3, eth4, and eth5). Two or more interfaces (depending on the restrictions below) can be set up as a virtual interface for failover or aggregation. The recommended number of physical interfaces for failover is two. However, you can set up one primary interface and up to five failover interfaces (except with 10 Gb Ethernet cards, which are restricted to one primary and one failover). The recommended number of physical interfaces for aggregation is two. Because ports eth0 and eth1 are reserved for the motherboard, aggregation can use at most 4 (two with 10 Gb Ethernet cards) physical interfaces (eth2, eth3, eth4, eth5) configured in a virtual interface. Aggregation between motherboard interfaces (eth0 and eth1) and optional NIC interfaces is not supported. Each physical interface (eth0, eth1, eth2, eth3, eth4, eth5) can be a part of at most 1 virtual interface. A system can have multiple and mixed failover and aggregation virtual interfaces subject to the restrictions above.
Virtual interfaces must be created from identical physical interfaces (all copper or all fiber or all 1 Gb or all 10 Gb).
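The aggregation restrictions above (members drawn from eth2 through eth5 only, since eth0 and eth1 are reserved for the motherboard, and at most four interfaces) can be checked before running net aggregate add. The following is an illustrative validation sketch only, not a DD OS command, and it does not model the tighter two-interface limit for 10 Gb cards.

```shell
# Illustrative validation of the aggregation restrictions above
# (not a DD OS command).  Members must be eth2-eth5 and number 2 to 4.
valid_aggregate_members() {
    if [ $# -lt 2 ] || [ $# -gt 4 ]; then
        echo "no: need 2 to 4 interfaces"; return 1
    fi
    for ifname in "$@"; do
        case "$ifname" in
            eth2|eth3|eth4|eth5) ;;   # allowed NIC interfaces
            *) echo "no: $ifname cannot be aggregated"; return 1 ;;
        esac
    done
    echo "yes"
}
```

For example, eth2 with eth3 passes, while any grouping that includes eth0 or eth1 fails because motherboard interfaces cannot be aggregated.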
Guidelines for virtual interfaces, for both aggregation and failover (non-10GE and 10GE):
1 Gb with 1 Gb: supported for both aggregation and failover.
10 Gb with 10 Gb: supported for both aggregation and failover.
1 Gb with 10 Gb: does not work for aggregation and is not supported for failover.
The virtual-name must be in the form veth<x> where <x> is a number from 0 (zero) to 3. The physical-name must be in the form eth<x> where <x> is a number from 0 (zero) to 5. Each interface used in a virtual interface must first be disabled with the net disable command. An interface that is part of a virtual interface is seen as disabled by other net commands. All interfaces in a virtual interface must be on the same subnet and on the same LAN or VLAN (or card for 10 Gb). Network switches used by a virtual interface must be on the same subnet. A virtual interface needs an IP address that is set manually. Use the net config command. The first interface given for a virtual interface is the primary interface used, and the other interfaces are backup interfaces. If the primary interface goes down and multiple interfaces are still available, the next interface used is a random choice.
Supported Pairs
Non-10GE Failover eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5, eth2-eth3, eth2-eth4, eth2-eth5, eth3-eth4, eth3-eth5, eth4-eth5.
eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5 (anything with eth0 or eth1).
eth0-eth1, eth0-eth2, eth0-eth3, eth0-eth4, eth0-eth5, eth1-eth2, eth1-eth3, eth1-eth4, eth1-eth5, eth2-eth4, eth2-eth5, eth3-eth4, eth3-eth5.
Set up Failover
To set up failover, use the net failover add command with a virtual interface name in the form veth<x>, where <x> is a number from 0 (zero) to 3. net failover add virtual-ifname interfaces physical-ifnames For example, to create a failover virtual interface named veth1 using the physical interfaces eth2 and eth3: # net failover add veth1 interfaces eth2,eth3 Interfaces for veth1: eth2, eth3
4. Add physical interface eth4 to failover virtual interface veth1:
# net failover add veth1 interfaces eth4
Interfaces for veth1: eth2,eth3,eth4
5. Remove eth2 from the virtual interface veth1:
# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3,eth4
6. Show configured failover virtual interfaces:
# net failover show
Ifname   Hardware Address
------   -----------------
veth0    00:04:23:d4:f1:27
------   -----------------
7. Remove the virtual interface veth1 and release all of its associated physical interfaces:
# net failover reset veth1
Interfaces for veth1:
8. Re-enable the physical interfaces:
# net enable eth2
# net enable eth3
# net enable eth4
9. Show the failover setup:
# net failover show
No interfaces in failover mode.
To create a virtual interface with supplied physical interfaces in a specified mode, use the net aggregate command:

net aggregate add <virtual-ifname> mode {roundrobin | xor-L2 | xor-L3L4} interfaces <physical-ifname-list>

The command creates the virtual interface virtual-ifname in the given mode with the supplied physical interfaces physical-ifname-list. The aggregated links transmit packets out of the Data Domain system. The supported aggregate modes are:

roundrobin: Transmit packets in sequential order from the first available link through the last in the aggregated group.

xor-L2: Transmit based on a hash policy. An XOR of the source and destination MAC addresses generates the hash.

xor-L3L4: Transmit based on a hash policy. An XOR of the source and destination upper-layer (Layer 3 and Layer 4) protocol information generates the hash. This allows traffic to a particular network peer to span multiple slaves, although a single connection does not span multiple slaves. L3 = Layer 3 = IP addresses for the source and destination. L4 = Layer 4 = TCP or UDP ports for the source and destination.
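The XOR hash policies above can be sketched as follows. This is an illustration of how such policies pick a link, modeled on the Linux bonding driver's layer2 and layer3+4 policies that these modes resemble; the function names are hypothetical and this is not the appliance's actual implementation.

```python
# Illustrative sketch of XOR-style link selection in an aggregated group.
# Hypothetical helpers -- not the Data Domain implementation.

def xor_l2_slave(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
    """xor-L2: hash the source and destination MAC addresses."""
    h = 0
    for a, b in zip(src_mac, dst_mac):
        h ^= a ^ b
    return h % n_links

def xor_l3l4_slave(src_ip: int, dst_ip: int,
                   src_port: int, dst_port: int, n_links: int) -> int:
    """xor-L3L4: hash IP addresses (L3) and TCP/UDP ports (L4)."""
    return (src_ip ^ dst_ip ^ src_port ^ dst_port) % n_links
```

Because the hash depends only on the packet's addresses and ports, a given connection always maps to the same link, which is why a single connection does not span multiple slaves while traffic to one peer (many connections) can.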
For example, to enable link aggregation on virtual interface veth1 with physical interfaces eth2 and eth3 in mode xor-L2, use the following command:

# net aggregate add veth1 mode xor-L2 interfaces eth2 eth3
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0f:63:fc  xor-L2            eth4,eth5
------  -----------------  ----------------  ---------------------
4. Delete physical interface eth3 from the aggregate virtual interface veth1:
# net aggregate del veth1 interfaces eth3

5. Show the aggregate setup:
# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0b:d0:61  xor-L2            eth2
6. Add physical interface eth4 to the aggregate virtual interface veth1:
# net aggregate add veth1 mode xor-L2 interfaces eth4

7. Show the aggregate setup:
# net aggregate show
Ifname  Hardware Address   Aggregation Mode  Configured Interfaces
------  -----------------  ----------------  ---------------------
veth1   00:15:17:0b:d0:61  xor-L2            eth2,eth4
8. Remove all physical interfaces from the aggregate virtual interface veth1:
# net aggregate reset veth1
Interfaces for veth1:
#

9. Re-enable the physical interfaces:
# net enable eth2
# net enable eth3
# net enable eth4

10. Show the aggregate setup:
# net aggregate show
No interfaces in aggregate mode.
Enable an Interface
To enable a disabled Ethernet interface on the Data Domain System, use the net enable ifname operation, where ifname is the name of an interface. Administrative users only. net enable ifname For example, to enable the interface eth0: # net enable eth0
Disable an Interface
To disable an Ethernet interface on the Data Domain System, use the net disable ifname operation. Administrative users only.
net disable ifname For example, to disable the interface eth0: # net disable eth0
Enable DHCP
To set up an Ethernet interface to expect DHCP information, use the net config ifname dhcp yes operation. Changes take effect only after a system reboot. Administrative users only.

Note To activate DHCP for an interface when no other interface is using DHCP, the Data Domain System must be rebooted. To activate DHCP for an optional gigabit Ethernet card, either have a network cable attached to the card during the reboot or, after attaching a cable, run the net enable command for the interface.

net config ifname dhcp yes

For example, to set DHCP for the interface eth0:

# net config eth0 dhcp yes

To check the operation, use the net show configuration command. To check that the Ethernet connection is live, use the net show hardware command.
Disable DHCP
To set an Ethernet interface to not use DHCP, use the net config ifname dhcp no operation. After the operation, you must set an IP address for the interface. All other DHCP settings for the interface are retained. Administrative users only. net config ifname dhcp no For example, to disable DHCP for the interface eth0: # net config eth0 dhcp no To check the operation, use the net show configuration command.
Ping a Host
To check that a Data Domain System can communicate with a remote host, use the net ping operation with a hostname or IP address. net ping hostname [broadcast] [count n] [interface ifname] broadcast Allows pinging a broadcast address.
count Gives the number of pings to issue. interface Gives the interface to use: eth0 through eth3. For example, to check that communication is possible with the host srvr24: # net ping srvr24
net config ifname speed {10 | 100 | 1000} For example, to set the line speed to 100 Base-T for interface eth1: # net config eth1 speed 100
A display for interface eth0 looks similar to the following:

# net show config eth0
eth0  Link encap:Ethernet  HWaddr 00:02:B3:B0:8A:D2
      inet addr:192.168.240.187  Bcast:192.168.240.255  Mask:255.255.255.0
      UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:3081076 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1533783 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:100
      RX bytes:3764464 (3.5 Mb)  TX bytes:136647745 (130.3 Mb)
      Interrupt:20 Base address:0xc000
DHCP shows whether or not port characteristics are supplied by DHCP. If a port uses DHCP for configuration values, the display does not have values for the remaining columns. IP address is the address used by the network to identify the port. Netmask is the standard IP network mask. Display Use the net show settings operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Settings. net show settings The display is similar to the following: # net show settings
Ethernet settings:
port  enabled  DHCP  IP address       netmask
----  -------  ----  ---------------  ---------------
eth0  yes      yes   (dhcp-supplied)  (dhcp-supplied)
eth1  no       n/a   n/a              n/a
eth2  yes      no    192.168.10.187   255.255.255.0
eth3  yes      yes   (dhcp-supplied)  (dhcp-supplied)
Cable shows whether or not the port currently has a live Ethernet connection. Display Use the net show hardware operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Hardware State. net show hardware The display looks similar to the following (each line wraps in the example here): # net show hardware
Port  Speed     Duplex   Supp Speeds  Hardware Address   Physical  Cable
----  --------  -------  -----------  -----------------  --------  -----
eth0  100Mb/s   full     10/100/1000  00:02:b3:b0:8a:d2  Copper    yes
eth1  unknown   unknown  10/100/1000  00:02:b3:b0:80:3f  Copper    no
eth2  1000Mb/s  full     10/100/1000  00:07:e9:0d:5a:1a  Copper    yes
eth3  unknown   unknown  10/100/1000  00:07:e9:0d:5a:1b  Copper    no
net show dns

The display looks similar to the following. The last line indicates whether the servers were configured manually or by DHCP.

# net show dns
#  Server
-  -----------
1  192.168.1.3
2  192.168.1.4
-  -----------
Showing DNS servers configured manually.
# route del -host user24

To remove a route with a route specification of 192.168.1.x and a gateway of srvr12:

# route del -net 192.168.1.0 netmask 255.255.255.0 gw srvr12
Display a Route
To display a route used by a Data Domain System to connect with a particular destination, use the route trace operation.

route trace host

For example, to trace the route to srvr24:

# route trace srvr24
Traceroute to srvr24.yourcompany.com (192.168.1.6), 30 hops max, 38 byte packets
1 srvr24 (192.168.1.6) 0.163 ms 0.178 ms 0.147 ms
# route show config
The Route Config list is:
-host user24 gw srvr12
-net 192.168.1.0 netmask 255.255.255.0 gw srvr12
The FTP and TELNET protocols have host-machine access lists that limit access. The SSH protocol is open to the default user sysadmin and to all Data Domain System users added with the user add command. By default, only the SSH protocol is enabled.
Add a Host
To add a host (IP address or hostname) to the FTP or TELNET protocol access lists, use the adminaccess add operation. You can enter a list that is comma-separated, space-separated, or both. To give access to all hosts, the host-list can be an asterisk (*). Administrative users only.

adminaccess add {ftp | telnet | ssh | http} host-list

The host-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For SSH, TCP wrappers are used and the /etc/hosts.allow and /etc/hosts.deny files are updated. For HTTP/HTTPS, Apache's mod_access is used for host-based access control and the /usr/local/apache2/conf/httpd-ddr.conf file is updated.

For example, to add srvr24 and srvr25 to the list of hosts that can use TELNET on the Data Domain System:

# adminaccess add telnet srvr24,srvr25

Netmasks, as in the following examples, are supported:

# adminaccess add ftp 192.168.1.02/24
# adminaccess add ftp 192.168.1.02/255.255.255.0
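The two netmask examples above describe the same network in two notations (prefix length and dotted mask). A quick way to see which hosts a given specification covers is a sketch with Python's standard ipaddress module; this is an illustration of the notation, not part of the appliance:

```python
import ipaddress

# Both forms from the example denote the same /24 network.
# strict=False lets a host address like .2 stand in for the network.
net_cidr = ipaddress.ip_network("192.168.1.2/24", strict=False)
net_mask = ipaddress.ip_network("192.168.1.2/255.255.255.0", strict=False)

assert net_cidr == net_mask   # /24 and /255.255.255.0 are equivalent

# A host inside the range is covered; one outside is not.
print(ipaddress.ip_address("192.168.1.77") in net_cidr)   # True
print(ipaddress.ip_address("192.168.2.77") in net_cidr)   # False
```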
Remove a Host
To remove hosts (IP addresses, hostnames, or asterisk (*)) from the FTP or TELNET access lists, use the adminaccess del operation. You can enter a list that is comma-separated, space-separated, or both. Administrative users only. adminaccess del {ftp | telnet} host-list For example, to remove srvr24 from the list of hosts that can use TELNET on the system: # adminaccess del telnet srvr24
Enable a Protocol
By default, the SSH, HTTP, and HTTPS services are enabled. FTP and TELNET are disabled. HTTP and HTTPS allow users to log in through the web-based graphical user interface. The adminaccess enable operation enables a protocol on the Data Domain System. Note that to use FTP and TELNET, you must also add host machines to the access lists. Administrative users only. adminaccess enable {http | https | ftp | telnet | ssh | all} For example, to enable the FTP service: # adminaccess enable ftp
Disable a Protocol
To disable a service on the Data Domain System, use the adminaccess disable operation. Disabling FTP or TELNET does not affect entries in the access lists. If all services are disabled, the Data Domain System is accessible only through a serial console or keyboard and monitor. Administrative users only. adminaccess disable {http | https | ftp | telnet | ssh | all} For example, to disable the FTP service: # adminaccess disable ftp
User Administration
The Data Domain System user command adds, removes, and displays users and changes user passwords. A Data Domain System has two classes of user accounts. The user class is for standard users, who have access to a limited number of commands; most of the user commands display information. The admin class is for administrative users, who have access to all Data Domain System commands. The default administrative account is sysadmin. You can change the sysadmin password, but you cannot delete the account. Throughout this manual, command explanations include text similar to the following for commands or operations that standard users cannot access: Available to administrative users only.
Add a User
To add a Data Domain System user, use the user add user-name operation. The operation asks for a password and confirmation, or you can include the password as part of the command. Each user has a privilege level of either admin or user. Admin is the default. The only way to change a user's privilege level is to delete the user and then add the user with the other privilege level. Available to administrative users only. A user name must start with an alpha character.

user add user-name [password password] [priv {admin | user}]

Note The user names root and test are default existing names on every Data Domain System and are not available for general use. Use the existing sysadmin user account for administrative tasks.

For example, to add a user with a login name of jsmith, a password of usr256, and administrative privilege:

# user add jsmith password usr256 priv admin
Remove a User
To remove a user from a Data Domain System, use the user del user-name operation. Available to administrative users only.
user del user-name For example, to remove a user with a login name of jsmith: # user del jsmith user jsmith removed
Change a Password
To change a user password, including the password for the sysadmin user, use the user change password user-name operation. The operation asks for the new password and then asks you to re-enter the password as a confirmation. Without the user-name component, the command changes the password for the current user. Available to sysadmin to change any user password and available to all users to change only their own password. user change password [user-name] For example, to change the password for a user with a login name of jsmith: # user change password jsmith Enter new password: Re-enter new password: Passwords matched
For example, to change the privilege level from admin to user for the login name of jsmith: # user change jsmith user
Configuration Management
The Data Domain System config command allows you to examine and modify all of the configuration parameters that are set in the initial system configuration. The license command allows you to add, delete, and display feature licenses. Note The migration command copies all data from one Data Domain system to another. The command is usually used when upgrading from a smaller Data Domain system to a larger Data Domain system. For information on migration, see the chapter Replication - CLI.
Note You can also use the Data Domain Enterprise Manager graphical user interface to change all of the same parameters that are available through the config setup command. In the Data Domain Enterprise Manager, select Configuration Wizard in the top section of the left panel.
# config set timezone new
Ambiguous timezone name, matching ...
    America/New_York
    Canada/Newfoundland
Add a License
To add a feature license, use the license add operation. The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only.
Expanded Storage    Add disks to a DD510 or DD530 system.
Open Storage (OST)  Use a system with the Symantec OpenStorage product.
Replication         Use the Data Domain Replicator for replication of data from one Data Domain System to another.
Retention-Lock      Prevent certain files from being deleted or modified, for up to 70 years.
VTL                 Use a Data Domain System as a virtual tape library.

license add license-code
Display Licenses
The license display shows only those features licensed on the Data Domain System. Administrative users only. ## is the license number of the feature. License Key is the characters of a valid license key. Feature is the name of the licensed feature. Current licensed features are Replicator, for replication from one Data Domain System to another, and the virtual tape library (VTL) feature.

Display To display current licenses and default features, use the license show operation. Each line shows the license code.

license show

For example:

# license show
##  License Key          Feature
--  -------------------  -----------------
Remove a License
To remove a current license, use the license del operation. Enter the license feature name or code (as shown with the license show command). Administrative users only. license del {license-feature | license-code} For example: # license del replication The Replication license is removed.
A Data Domain System uses multiple methods to inform administrators about the status of the Data Domain OS and hardware. The Data Domain System alerts, autosupport, and AM email features send messages and reports to user-configurable lists of email addresses. The lists include an email address for Data Domain support staff who monitor the status of all Data Domain Systems and contact your company when problems are reported. The messages also go to the system log.
The alerts feature sends an email whenever a critical component in the system fails or is known, through monitoring, to be out of an acceptable range. Consider adding pager email addresses to the alerts email list so that someone is informed immediately about system problems. For example, a single fan failure is not critical and does not generate an alert as the system can continue normal operations; however, multiple fan failures can cause a system to begin overheating, which generates an alerts email. Each disk, fan, and CPU in the Data Domain System is monitored. Temperature extremes are also monitored.
The autosupport feature sends a daily report that shows system identification information and consolidates the output from a number of Data Domain System commands. See Run the Autosupport Report on page 135 for details. Data Domain support staff use the report for troubleshooting. Every morning at 8:00 a.m. (local time for your system), the Data Domain System sends an AM email to the autosupport email list. The purpose is to highlight hardware or other failures that are not critical, but that should be dealt with soon. An example would be a fan failure. A failed fan should be replaced as soon as is reasonably possible, but the system can continue operations. The AM email is a copy of output from alerts show current (see Display Current Alerts on page 131) and alerts show history (see Display the Alerts History on page 132) messages about non-critical hardware situations, and some disk space usage numbers.
Non-critical hardware problems generate email messages to the autosupport list. An example is a failed power supply when the other two power supplies are still fine. If the situation is not fixed, the message also appears in the AM email.
Every hour, the Data Domain System logs a short system status message. See Hourly system Status on page 138 for details. The support command sends multiple log files to the Data Domain Support organization.
Alerts
Use the alerts command to administer the alerts feature.
alerts show alerts-list

The display is similar to the following:

# alerts show alerts-list
Alert email list
   autosupport@datadomain.com
   admin12
   jsmith@company.com
# alerts show all
The Admin email is: admin@yourcompany.com
Alerts email
   autosupport@datadomain.com
   admin@yourcompany.com
   admin12
   jsmith@company.com
Autosupport Reports
The autosupport feature automatically generates reports detailing the state of the system. The first section of an autosupport report gives system identification and uptime information. The next sections display output from numerous Data Domain System commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information are included to aid Data Domain in debugging system problems.
SYSTEM_ID=Serial number: 22BM030026
MODEL_NO=DD560
HOSTNAME=dd10.yourcompany.com
LOCATION=Bldg12 room221 rack6
ADMIN_EMAIL=admin@yourcompany.com
UPTIME= 1:17pm up 124 days, 14:31, 2 users, load average: 0.00, 0.00, 0.00
A time is required. 2400 is not a valid time. An entry of 0000 is midnight at the beginning of a day. The never option turns off the report. Set a schedule using any of the other options to turn on the report.

autosupport set schedule [{daily | day1[,day2,...]} time | never]
For example, the following command runs the report automatically every Tuesday at 4 a.m.: # autosupport set schedule tue 0400 The most recent invocation of the scheduling operation cancels the previous setting.
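The time-format rules above (a four-digit HHMM value, where 0000 is midnight and 2400 is invalid) can be sketched as a small validator. This is an illustration of the stated rules, not the appliance's actual parser:

```python
def valid_schedule_time(t: str) -> bool:
    """Accept HHMM strings from 0000 (midnight) through 2359.

    2400 is rejected, matching the rule that 2400 is not a valid time.
    Hypothetical helper, for illustration only.
    """
    if len(t) != 4 or not t.isdigit():
        return False
    hh, mm = int(t[:2]), int(t[2:])
    return hh < 24 and mm < 60

print(valid_schedule_time("0400"))   # True  (4 a.m., as in the example)
print(valid_schedule_time("2400"))   # False (not a valid time)
```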
Nov 12 14:00:00 localhost logger: at 2:00pm up 3 days, 4:42, 59411 NFS ops, 84840 GiB data col. (1%)
Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to monitor and manage network-attached devices such as Data Domain systems. For information specific to the MIB, see the last half of this chapter, beginning at the heading More about the MIB on page 147. Data Domain systems support SNMP versions V1 and V2C. SNMP management requires two primary elements: an SNMP manager and an SNMP agent. An SNMP manager is software running on a workstation from which an administrator monitors and controls the different hardware and software systems on a network. These devices include, but are not limited to, storage systems, routers, and switches. The agent is software running on equipment that implements the SNMP protocol. SNMP defines exactly how an SNMP manager communicates with an SNMP agent. For example, SNMP defines the format of requests that an SNMP device manager sends to an agent and the format of replies the agent returns. The SNMP feature allows a Data Domain System to respond to a set of SNMP get operations from a remote machine. From an SNMP perspective, a Data Domain System is a read-only device with the following exceptions: A remote machine can set the SNMP location, contact, and system name on a Data Domain System. To configure community strings, hosts, and other SNMP variables on the Data Domain System, use the snmp command. With one or more trap hosts defined, a Data Domain System takes the additional action of sending alert messages as SNMP traps, even when the SNMP agent is disabled. Note The SNMP sysLocation and sysContact variables are not the same as those set with the config set location and config set admin-email commands.
However, if the SNMP variables are not set with the SNMP commands, the variables default to the system values given with the config set commands.
Enable SNMP
To enable the SNMP agent on a Data Domain System, use the snmp enable operation. The default port that is opened when SNMP is enabled is port 161. Traps are sent to port 162. Administrative users only. snmp enable
Disable SNMP
To disable the SNMP agent on a Data Domain System, use the snmp disable operation. Ports 161 and 162 are closed. Administrative users only. snmp disable
sysLocation             The system location, as used in the SNMP MIB II system variable sysLocation.
sysContact              The system contact, as used in the SNMP MIB II system variable sysContact.
Trap Hosts              The list of machines that receive SNMP traps generated by the Data Domain System.
Read-only Communities   One or more read-only community strings that enable access to the Data Domain System.
Read-write Communities  One or more read-write community strings that enable access to the Data Domain System.
Display To display all of the SNMP parameters, use the snmp show config operation. Administrative users only.
SNMP Management and Monitoring
snmp show config

The output is similar to the following:

# snmp show config
----------------------  ------------------
SNMP sysLocation        bldg3-rm222
SNMP sysContact         smith@company.com
Trap Hosts              admin10
                        admin11
Read-only Communities   public
                        snmpadmin23
Read-write Communities  private
                        snmpadmin1
----------------------  ------------------
What is a MIB?
Simply put, a MIB (Management Information Base) is a hierarchy of objects. The Data Domain MIB is a hierarchy of objects that define the status and operation of a Data Domain system. The hierarchy is in the form of a table.
MIB Browser
The user may find it worthwhile to download a freeware MIB browser; many can be found by searching on Google. As an example, the iReasoning MIB Browser can be downloaded at http://www.ireasoning.com/mibbrowser.shtml, at the link "Download Free Personal Edition".
More about the MIB

Figure 10: Entire MIB Tree - 1st half
Figure 11: Entire MIB Tree - 2nd half
Tree/subtree The Data Domain MIB

Description: This document describes the Management Information Base for Data Domain Products. The Data Domain enterprise number is 19746. The ASN.1 prefix up to and including the Data Domain, Inc. enterprise is 1.3.6.1.4.1.19746.

The top line is truncated in the image; it is really:
DATA-DOMAIN-MIB.iso.org.dod.internet.private.enterprises.dataDomainMib
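Every object in the Data Domain MIB hangs under the enterprise prefix 1.3.6.1.4.1.19746. Composing a full OID is just appending sub-identifiers to that prefix, as this sketch shows; the helper function is hypothetical, but the prefix and the subtree numbers it uses appear later in this chapter:

```python
# iso.org.dod.internet.private.enterprises.dataDomainMib
DD_ENTERPRISE = "1.3.6.1.4.1.19746"

def dd_oid(*subids: int) -> str:
    """Append sub-identifiers to the Data Domain enterprise prefix."""
    return ".".join([DD_ENTERPRISE, *map(str, subids)])

print(dd_oid(1, 4))   # alerts subtree:        1.3.6.1.4.1.19746.1.4
print(dd_oid(2))      # notifications subtree: 1.3.6.1.4.1.19746.2
```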
The MIB is divided into four top-level entities:
   MIB Conformance
   MIB Objects
   MIB Notifications
   Products
At a middle level, the main subheadings of the MIB are shown in Figure 12 on page 151. On the "Entire MIB Tree" diagrams in Figure 10 on page 148 and Figure 11 on page 149, these are the nodes that divide the MIB into sets of leaf nodes. That is, these are the nodes that have only one set of leaf nodes under them.
-dataDomainMibObjects (1.3.6.1.4.1.19746.1)
  -alerts (1.3.6.1.4.1.19746.1.4)
    -currentAlerts (1.3.6.1.4.1.19746.1.4.1)

-- **********************************************************************
currentAlerts OBJECT IDENTIFIER ::= { alerts 1 }

currentAlertTable OBJECT-TYPE
    SYNTAX  SEQUENCE OF CurrentAlertEntry
    ACCESS  not-accessible
    STATUS  mandatory
    DESCRIPTION "A table containing entries of CurrentAlertEntry."
    ::= { currentAlerts 1 }

currentAlertEntry OBJECT-TYPE
    SYNTAX  CurrentAlertEntry
    ACCESS  not-accessible
    STATUS  mandatory
    DESCRIPTION "currentAlertTable Row Description"
    INDEX { currentAlertIndex }
    ::= { currentAlertTable 1 }

CurrentAlertEntry ::= SEQUENCE {
    currentAlertIndex        AlertIndex,
    currentAlertTimestamp    AlertTimestamp,
    currentAlertDescription  AlertDescription
}

currentAlertIndex OBJECT-TYPE
    SYNTAX  AlertIndex
    ACCESS  read-only
    STATUS  mandatory
    DESCRIPTION "Current Alert Row index"
    ::= { currentAlertEntry 1 }

currentAlertTimestamp OBJECT-TYPE
    SYNTAX  AlertTimestamp
    ACCESS  read-only
    STATUS  mandatory
    DESCRIPTION "Timestamp of current alert"
    ::= { currentAlertEntry 2 }

currentAlertDescription OBJECT-TYPE
    SYNTAX  AlertDescription
    ACCESS  read-only
    STATUS  mandatory
    DESCRIPTION "Alert Description"
    ::= { currentAlertEntry 3 }
-- **********************************************************************
Syntax   Brief description.
Access   Example: read-only.
Status   Examples: mandatory, current.
DefVal   Default Value.
Indexes  For tables, lists indexes into the table. (For objects, lists the object.)
Descr    Description of the field.
Alerts (.1.3.6.1.4.1.19746.1.4)
Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)
Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)
Replication (.1.3.6.1.4.1.19746.1.8)
A section of information on each area is given (see Alerts (.1.3.6.1.4.1.19746.1.4) on page 154, Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 154, Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) on page 161, and Replication (.1.3.6.1.4.1.19746.1.8) on page 162).
Alerts (.1.3.6.1.4.1.19746.1.4)
The Alerts table is a set of containers (variables or fields) that hold the current problems happening in the system. [By contrast, the Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. See also Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 154.] Alerts are the system for communicating problems, Data Domain's version of Notifications. The table currentAlertTable holds many current alert entries at once, with an Index, Timestamp, and Description for each. The Data Domain Alerts are shown in Figure 13 on page 154 and Table 2 on page 154.
Figure 13: Alerts
Name                     Description
-----------------------  -----------------------------------------------
currentAlertTable        A table containing entries of CurrentAlertEntry
currentAlertEntry        currentAlertTable Row Description
currentAlertIndex        Current Alert Row index
currentAlertTimestamp    Timestamp of current alert
currentAlertDescription  Alert Description
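Conceptually, each row of currentAlertTable pairs an index with a timestamp and a description, and rows are addressed by currentAlertIndex. A manager walking the table collects records shaped like the following sketch; the example rows are made up for illustration and this is not real SNMP traffic:

```python
from typing import NamedTuple

class CurrentAlertEntry(NamedTuple):
    """One row of currentAlertTable: index, timestamp, description."""
    index: int
    timestamp: str
    description: str

# Hypothetical rows, as a manager might collect them from the agent.
table = [
    CurrentAlertEntry(1, "2008-03-01 08:00", "Fan module 1 failed"),
    CurrentAlertEntry(2, "2008-03-01 08:05", "Temperature warning"),
]

# Rows are keyed by currentAlertIndex.
by_index = {row.index: row for row in table}
print(by_index[1].description)   # Fan module 1 failed
```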
As a user, the only thing you can do with notifications and alerts is choose to receive them or not. Choosing to receive notifications is called "adding a trap host", that is, adding the name of a host machine to the list of machines that receive notifications when traps are sprung. Choosing not to receive notifications on a given machine is called "deleting a trap host". See the entries Add a Trap Host on page 143, Delete a Trap Host on page 143, and Delete All Trap Hosts on page 143 in this chapter. Notifications vary in severity level, and thus in result. This is shown in Table 3 on page 155.
Table 3: Notification Severity Levels and Results
Result
   An Autosupport email is sent.
   An Alert email is sent.
   The system shuts down.
In addition to the above results, in each case a Notification is sent if supported. The following is an example of how the user might use the MIB Notifications table. Example: A user adds the hostname "panther5" to the list of machines that receive notifications, using the command: snmp add trap-host panther5 Later a fan module fails on the enclosure. The alarm "fanModuleFailedAlarm" is sent to panther5. The user gets this alarm, and looks it up in the MIB, in the Notifications table. The entry looks like somewhat like this:
Table 4: Part of the fanModuleFailedAlarm Field of the Notifications Table in the MIB
fanIndex
Meaning: a Fan Module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed.
What to do: replace the fan!
The user looks up the index in the MIB environmentals table 'fanProperties', and finds that fan #1 has failed. Back in the Notifications table, the user sees that What to do is: replace the fan. The user replaces the fan, removing the error condition. More on Notifications is given in Figure 14 on page 156 and Table 5 on page 156.
In the Notifications table, Notifications are indexed into other tables by various indexes, given in the Indexes column. The table names can be found under Description.
Table 5: Notifications
OID                     Name                        Indexes  Description
.1.3.6.1.4.1.19746.2    dataDomainMibNotifications
.1.3.6.1.4.1.19746.2.1  powerSupplyFailedAlarm
Meaning: The temperature reading of one of the thermometers in the chassis has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the thermometer index, which may be looked up in the environmentals table 'temperatures' for more information about the thermometer that is reading the high value.
What to do: Check the fan status, the temperature of the environment in which the DDR is located, and other factors that may increase the temperature.

Meaning: The temperature reading of one of the thermometers in the chassis is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the thermometer index, which may be looked up in the environmentals table 'temperatures' for more information about the thermometer that is reading the high value.
What to do: Check the fan status, the temperature of the environment in which the DDR is located, and other factors that may increase the system temperature.

Meaning: The temperature reading of one of the thermometers in the chassis has reached or exceeded the 'shutdown' temperature level. The DDR will be shut down to prevent damage to the system. The index value of the alarm indicates the thermometer index, which may be looked up in the environmentals table 'temperatures' for more information about the thermometer that is reading the high value.
What to do: Once the system has been brought back up, after checking for high environment temperatures or other factors that may increase the system temperature, check other environmental values, such as fan status and disk temperatures.

Meaning: A fan module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed.
What to do: Replace the fan.

Meaning: The system has detected that the NVRAM is potentially failing. There has been an excessive number of PCI or memory errors. The nvram tables 'nvramProperties' and 'nvramStats' may provide information on why the NVRAM is failing.
What to do: Check the status of the NVRAM after reboot, and replace it if the errors continue.
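The three temperature traps above amount to a classification of a reading against two configured levels: 'warning', the halfway point between 'warning' and 'shutdown', and 'shutdown'. A minimal sketch of that logic, with hypothetical names (this is not DD OS code):

```python
def temperature_alarm(reading, warning, shutdown):
    """Classify a chassis thermometer reading against the alarm
    levels described above. All names here are illustrative."""
    halfway = warning + (shutdown - warning) / 2.0
    if reading >= shutdown:
        return "shutdown"    # DDR shuts down to prevent damage
    if reading > halfway:    # more than halfway between warning and shutdown
        return "critical"
    if reading > warning:    # exceeded the 'warning' level
        return "warning"
    return "ok"
```

The actual threshold values are configured per model; consult the 'temperatures' table for the live readings.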
.1.3.6.1.4.1.19746.2.5
.1.3.6.1.4.1.19746.2.6
fanIndex
.1.3.6.1.4.1.19746.2.7
.1.3.6.1.4.1.19746.2.8
filesystemFailedAlarm
fileSpaceMaintenanceAlarm
filesystemResourceIndex
Meaning: The file system process on the DDR has had a serious problem and has had to restart.
What to do: Check the system logs for conditions that may be triggering the failure. Other alarms may also indicate why the file system is having problems.

Meaning: DDVAR file system resource space is running low for system maintenance activities. The system may not have enough space for routine system activities to run without error.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be run before the space is recovered.

Meaning: A file system resource space is 90% utilized. The index value of the alarm indicates the file system index, which may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be run before the space is recovered.

Meaning: A file system resource space is 95% utilized. The index value of the alarm indicates the file system index, which may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be run before the space is recovered.
158
Meaning: A file system resource space is 100% utilized. The index value of the alarm indicates the file system index, which may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, filesys clean will have to be run before the space is recovered.

Meaning: A problem has been detected on the indicated disk. The index value of the alarm indicates the disk index, which may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is failing.
What to do: Monitor the status of the disk, and consider replacing it if the problem continues.

Meaning: A problem has been detected on the indicated disk. The index value of the alarm indicates the disk index, which may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that has failed.
What to do: Replace the disk.

Meaning: The temperature reading of the indicated disk has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the DDR. The index value of the alarm indicates the disk index, which may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is reading the high value.
What to do: Check the disk status, the temperature of the environment in which the DDR is located, and other factors that may increase the temperature.
diskPropIndex
.1.3.6.1.4.1.19746.2.13
.1.3.6.1.4.1.19746.2.14
diskPropIndex
diskErrIndex
diskErrIndex
Meaning: The temperature reading of the indicated disk is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it will trigger a shutdown of the DDR. The index value of the alarm indicates the disk index, which may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is reading the high value.
What to do: Check the disk status, the temperature of the environment in which the DDR is located, and other factors that may increase the temperature. If the temperature stays at this level or continues to rise, and no other disks are reporting this trouble, consider 'failing' the disk and get a replacement.

Meaning: The temperature reading of the indicated disk has surpassed the 'shutdown' temperature level. The DDR will be shut down. The index value of the alarm indicates the disk index, which may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is reading the high value.
What to do: Boot the DDR and monitor the status and temperatures. If the same disk has continued problems, consider 'failing' it and get a replacement disk.

Meaning: RAID group reconstruction is currently active and has not completed after 71 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: While it is still possible that the reconstruction could succeed, the disk should be replaced to ensure data safety.

Meaning: RAID group reconstruction is currently active and has not completed after 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: The disk should be replaced to ensure data safety.

Meaning: RAID group reconstruction is currently active and has not completed after more than 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen due to a disk failing at run-time or boot-up.
What to do: The disk must be replaced.
diskErrIndex
.1.3.6.1.4.1.19746.2.17
.1.3.6.1.4.1.19746.2.18
.1.3.6.1.4.1.19746.2.19
Description

A table containing entries of FilesystemSpaceEntry.

filesystemSpaceTable Row Description:
File system resource index
File system resource name
Size of the file system resource in gigabytes
Amount of used space within the file system resource in gigabytes
Amount of available space within the file system resource in gigabytes
Percentage of used space within the file system resource
.1.3.6.1.4.1.19746.1.3.2.1.1.6
filesystemPercentUsed
Replication (.1.3.6.1.4.1.19746.1.8)
Various values related to Replication are contained in the Replication table in the MIB. See Figure 16 on page 162 and Table 7 on page 162. (More on Replication can be found in the Replication chapter of the User Guide, for example under the heading Replication - CLI on page 249.)
Figure 16: Replication
Description
A table containing entries of ReplicationInfoEntry.

replicationInfoTable Row Description:
state of replication source/dest pair
status of replication source/dest pair
connection status of filesystem
.1.3.6.1.4.1.19746.1.8.1.1.1.5
replConnTime
time of connection established between source and dest, or time since disconnect if status is 'disconnected'
network path to replication source directory
network path to replication destination directory
time lag between source and destination
pre-compression bytes sent
post-compression bytes sent
pre-compression bytes remaining
post-compression bytes received
replication throttle in bps
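A specific cell of this table is addressed by appending the column number and a row index to the table-entry OID. The sketch below assumes a single-integer row index; consult the MIB's INDEX clause for the actual indexing, and treat the helper name as hypothetical:

```python
# Replication table entry prefix, from the OIDs listed above
REPL_TABLE_ENTRY = ".1.3.6.1.4.1.19746.1.8.1.1.1"

def column_oid(column, row_index):
    """Build the OID for one cell of the replication table.
    Single-integer row indexing is an assumption here."""
    return "%s.%d.%d" % (REPL_TABLE_ENTRY, column, row_index)

# replConnTime is column 5, so row 1 could be queried with a
# standard SNMP tool against the OID column_oid(5, 1)
```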
12
The log command allows you to view Data Domain System log file entries and to save and clear the log file contents. Messages from the alerts feature, the autosupport reports, and general system messages go to the log directory and into the file messages. A log entry appears for each Data Domain System command given on the system. The log directory is /ddvar/log. Every Sunday at 3 a.m., the Data Domain System automatically opens new log files and renames the previous files with an appended number of 1 (one) through 9, such as messages.1. Each numbered file is rolled to the next number each week. For example, at the second week, the file messages.1 is rolled to messages.2. If a file messages.2 already existed, it would roll to messages.3. An existing messages.9 is deleted when messages.8 is rolled to messages.9. See Procedure: Archive Log Files on page 170 for instructions on saving log files.
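The weekly rotation described above can be simulated with a short function; this is an illustrative sketch of the rollover rule, not DD OS code, and the names are made up:

```python
def rotate_logs(files, base="messages", depth=9):
    """Simulate the Sunday 3 a.m. rotation: messages.9 is deleted,
    messages.8 rolls to messages.9, ..., messages rolls to
    messages.1, and a new messages file is opened.
    `files` is a set of file names; returns the rotated set."""
    rotated = set(files)
    rotated.discard("%s.%d" % (base, depth))   # messages.9 is deleted
    for n in range(depth - 1, 0, -1):          # 8 down to 1
        name = "%s.%d" % (base, n)
        if name in rotated:
            rotated.remove(name)
            rotated.add("%s.%d" % (base, n + 1))
    if base in rotated:
        rotated.remove(base)
        rotated.add("%s.1" % base)
    rotated.add(base)                          # new, empty log file
    return rotated
```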
*.notice   Sends all messages at the notice priority and higher.
*.alert    Sends all messages at the alert priority and higher (alerts are included in *.notice).
kern.*     Sends all kernel messages (kern.info log files).
local7.*   Sends all messages from system startups (boot.log files).
The log host commands manage the process of sending log messages to another system:
Add a Host
To add a system to the list that receives Data Domain System log messages, use the log host add command. log host add host-name For example, the following command adds the system log-server to the hosts that receive log messages: # log host add log-server
Remove a Host
To remove a system from the list that receives Data Domain System log messages, use the log host del command. log host del host-name For example, the following command removes the system log-server from the hosts that receive log messages: # log host del log-server
Reset to Default
To reset the log sending feature to the defaults of an empty list and disabled, use the log host reset command. log host reset
Display

To list all of the files in the log directory, use the log list operation or click Log Files in the left panel of the Data Domain Enterprise Manager.

log list

The list is similar to the following:

# log list
Last modified              Size      File
------------------------   -------   ----------
Tue May 24 12:15:01 2005   3 KiB     boot.log
Wed May 25 00:28:27 2005   933 KiB   ddfs.info
Wed May 25 08:43:03 2005   42 KiB    messages
Sun May 22 03:00:01 2005   70 KiB    messages.1
Sun May 15 03:00:00 2005   111 KiB   messages.2
5. Based on the message, the user could run the "replication throttle add" command to set the throttle.
SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath.
Disk Management
13
The Data Domain System disk command manages disks and displays disk locations, logical (RAID) layout, usage, and reliability statistics. Command output examples in this chapter show systems with 15 disk drives. Each Data Domain System model reports on the number of disks actually in the system. With a DD560 that has one or more Data Domain external disk shelves, commands also include entries for all enclosures, disks, and RAID groups. See the Data Domain publication ES20 Expansion Shelf User Guide for details about disks in external shelves.

A Data Domain System has either 8 or 15 disks, depending on the model. Each disk in a Data Domain system has two LEDs at the bottom of the disk carrier. The right LED on each disk flashes (green or blue depending on the Data Domain system model) whenever the system accesses the disk. The left LED glows red when the disk has failed. In a DD460 or DD560, both LEDs are dark on the disk that is available as a spare. DD460 and DD560 systems maintain data integrity with a maximum of two failed disks. The DD410 and DD430 models have no spare and maintain data integrity with a maximum of one failed disk. DD530 and DD510 models have one spare and maintain data integrity with a maximum of two failed disks.

Each disk in an external shelf has two LEDs at the right edge of the disk carrier. The top LED is green and flashes when the disk is accessed or when the disk is the target of a beacon operation. The bottom LED is amber and glows steadily when the disk has failed.

The disk-identifying variable used in disk commands (except gateway-specific commands) is in the format enclosure-id.disk-id. An enclosure is a Data Domain system or an external disk shelf. A Data Domain system is always enclosure 1 (one). For example, disk 12 in a Data Domain system is 1.12. Disk 12 in the first external shelf is 2.12.
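The enclosure-id.disk-id convention splits cleanly into its two parts; the helper below is a hypothetical illustration, not a DD OS utility:

```python
def parse_disk_id(disk_id):
    """Split an enclosure-id.disk-id string as used by disk commands.
    Enclosure 1 is always the Data Domain system itself; enclosure 2
    is the first external shelf."""
    enclosure, disk = disk_id.split(".")
    return int(enclosure), int(disk)

# Disk 12 in the head unit vs. disk 12 in the first shelf:
# parse_disk_id("1.12") -> (1, 12)
# parse_disk_id("2.12") -> (2, 12)
```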
On gateway Data Domain Systems (that use 3rd-party physical storage disk arrays other than Data Domain external disk shelves), the following command options are not valid:

disk beacon
disk expand
disk fail
disk unfail
disk show failure-history
disk show reliability-data
With gateway storage, output from all other disk commands returns information about the LUNs and volumes accessed by the Data Domain System.
Add a LUN
For gateway systems only. Add a new LUN to the current volume. To get the dev-ID, use the disk rescan command and then use the disk show raid-info command. The dev-ID format is the word dev and the number as seen in output from the disk show raid-info command. See Procedure: Adding a LUN on page 57 for details. disk add dev<dev-id> For example, to add a LUN with a dev-id of 2 as shown by the disk show raid-info command: # disk add dev2
Fail a Disk
To set a disk to the failed state, use the disk fail enclosure-id.disk-id operation. The command asks for a confirmation before carrying out the operation. Available to administrative users only. disk fail enclosure-id.disk-id
Replace a Failed Disk
A failed disk is automatically removed from a RAID disk group and is replaced by a spare disk (when a spare is available). The disk use changes from spare to in use and the status becomes reconstructing. See Display RAID Status for Disks on page 180 to list the available spares. Note A Data Domain system can run with a maximum of two failed disks. Always replace a failed disk as soon as possible. Spare disks are supplied in a carrier for a Data Domain system or a carrier for an expansion shelf. DO NOT move a disk from one carrier to another.
Unfail a Disk
To change a disk status from failed to available, use the disk unfail enclosure-id.disk-id command. Use the command when replacing a failed disk. The new disk in the failed slot is seen as failed until the disk is unfailed. disk unfail enclosure-id.disk-id
Output Format
The general format of the disk status command output is as follows:

1. <summary> - <description>

This line shows a summary of the disks in the system. The summary can be "Error", "Normal", or "Warning". If it says "Normal", you need look no further: all the disks in the system are in good condition. If it says "Warning", the system is operational, but there are problems that need to be corrected, so see the further information given. If it says "Error", the system is not operational, so look at the further information given to fix the problems. The description provides more detail for the summary. See the output examples below.

2. <additional information>

This section shows lists of disks in different states relevant to the above summary line.
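A monitoring script could split that first line into its two parts; this is a sketch under the assumption that the separator is the first hyphen, with made-up function names:

```python
def parse_disk_status_summary(line):
    """Parse the '<summary> - <description>' first line of disk
    status output and say whether the system is operational."""
    summary, _, description = (part.strip() for part in line.partition("-"))
    operational = summary in ("Normal", "Warning")
    return summary, description, operational
```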
Error: A brand-new "head unit" will be in this state when foreign storage is present. For a system that has been configured with some storage, "Error" indicates that some or all of its own storage is missing.

Normal: A brand-new "head unit" is normal if there is no configured storage attached, it has never used 'disk add' or 'disk add enclosure' before, and all disks outside of the "head unit" are not in any of the following states: "in use", "foreign", or "known". For a system that has been configured with data storage, "Normal" indicates that the entire data storage set is present.

Warning: A special case of a system that would have been "Normal" if it had none of the following conditions that require user action:

RAID system degraded
Foreign storage present
Some of the disks are failed or absent
Output Examples
A) Brand-new "head unit".

Error - data storage unconfigured and foreign storage attached
Error - data storage unconfigured, a complete set of foreign storage attached
Error - data storage unconfigured, multiple sets of foreign storage attached

B) Configured "head unit" without its own "data storage".

Error - system non-operational, storage missing
Error - system non-operational, incomplete set of foreign storage attached
Error - system non-operational, a complete set of foreign storage attached
Error - system non-operational, multiple sets of foreign storage attached

C) Configured "head unit" with part of its "data storage".

Error - "system non-operational, partial storage attached"

If there is any foreign storage in the system that belongs to any of the above cases (A), (B), and (C), a list of foreign storage as seen in the following example will be shown:
---------------   ---------------   -----------
7DD6841003        14                incomplete
---------------   ---------------   -----------
In case (C), the number of total (expected) and present RAID groups is also shown.

D) Normal - system operational

E) Warning - unprotected - no redundant protection; system operational
Warning - degraded - single redundant protection; system operational
Warning - foreign disk attached; system operational
Warning - disk fails; system operational
Warning - disk absent; system operational
Warning - disk has invalid status; system operational

Note that in the above case (E) the descriptions are shown in the order of severity, from least severe to most severe. For example, a system may contain a failed disk and have no redundant protection at the same time. In this case, the "no redundant protection" message will be shown because it is more severe.
Disk (Enc.Slot) is the enclosure and disk numbers.

Manufacturer/Model shows the manufacturer's model designation.

Firmware is the firmware revision on each disk.

Serial No. is the manufacturer's serial number for the disk.

Capacity is the data storage capacity of the disk when used in a Data Domain System. The Data Domain convention for computing disk space defines one gigabyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
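The difference between the two conventions is easy to check with a quick calculation; the helper name is hypothetical, and the 400 GB figure is a nominal decimal rating rather than any specific drive's exact byte count:

```python
def bytes_to_gib(raw_bytes):
    """Data Domain convention: one gigabyte is 2**30 bytes (a GiB),
    so reported capacity is lower than the decimal rating."""
    return raw_bytes / float(2 ** 30)

# A nominal 400 GB (decimal) drive:
# round(bytes_to_gib(400 * 10**9), 2) -> 372.53
```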
The display for a gateway Data Domain System has the columns:
178 Data Domain Operating System User Guide
Disk displays each LUN accessed by the Data Domain System as a disk.

LUN is the LUN number given to a LUN on the 3rd-party physical disk storage system.

Port WWN is the world-wide name of the port on the storage array through which data is sent to the Data Domain System.

Manufacturer/Model includes a label that identifies the manufacturer. The display may include a model ID, RAID type, or other information depending on the vendor string sent by the storage array.

Firmware is the firmware level used by the 3rd-party physical disk storage controller.

Serial No. is the serial number from the 3rd-party physical disk storage system for a volume that is sent to the Data Domain System.

Capacity is the amount of data in a volume sent to the Data Domain System.
Display

Use the disk show hardware operation or click Disks in the left panel of the Data Domain Enterprise Manager to display disk information.

disk show hardware

The display for disks in a Data Domain System is similar to the following:

# disk show hardware
Disk         Manufacturer/Model   Firmware   Serial No.       Capacity
(Enc.Slot)
----------   ------------------   --------   --------------   ----------
1.1          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VYGC   372.61 GiB
1.2          HDS724040KLSA80      KFAOA32A   KRFS06RAG9TYYC   372.61 GiB
1.3          HDS724040KLSA80      KFAOA32A   KRFS06RAG99EVC   372.61 GiB
1.4          HDS724040KLSA80      KFAOA32A   KRFS06RAGA002C   372.61 GiB
1.5          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SGMC   372.61 GiB
1.6          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VX7C   372.61 GiB
1.7          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SEKC   372.61 GiB
1.8          HDS724040KLSA80      KFAOA32A   KRFS06RAG9U27C   372.61 GiB
1.9          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHXC   372.61 GiB
1.10         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJWC   372.61 GiB
1.11         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHRC   372.61 GiB
1.12         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SK2C   372.61 GiB
1.13         HDS724040KLSA80      KFAOA32A   KRFS06RAG9WYVC   372.61 GiB
1.14         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJDC   372.61 GiB
1.15         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SKBC   372.61 GiB
----------   ------------------   --------   --------------   ----------
15 drives present.

Display RAID Status for Disks
# disk show raid-info Disk State Status (Enc.Slot) -----------------------------------------------1.1 in use (dg0) 1.2 in use (dg0) 1.3 in use (dg0) 1.4 in use (dg0) 1.5 in use (dg0) 1.6 in use (dg0) 1.7 in use (dg0) 1.8 spare 1.9 in use (dg0) 1.10 in use (dg0) 1.11 in use (dg0) 1.12 in use (dg0) 1.13 in use (dg0) 1.14 in use (dg0) 1.15 in use (dg0) -------------------------------------------14 drives are in use 0 drives have "failed" 1 drive is spare(s) 0 drives are undergoing reconstruction 0 drives are not in use 0 drives are missing/absent
Additional
disk show detailed-raid-info

The Slot column in the Disk Group section shows the logical slot for each disk in a RAID subgroup. In the example below, the RAID group name is ext3 with subgroups of ext3_1 through ext3_4 (only subgroups ext3_1 and ext3_2 are shown). The number of gigabytes allocated for the RAID group and for each subgroup is shown just after the group or subgroup name. The Raid Group section shows the logical slot and actual disks for the whole group. On a gateway system, the display does not include information about individual disks.

# disk show detailed-raid-info
Disk Group (dg0) - Status: normal
Raid Group (ext3):(raid-0)(61.6 GiB) - Status: normal
Raid Group (ext3_1):(raid-6)(15.26 GiB) - Status: normal
Slot   Disk   State          Additional Status
----------------------------------------------
0      1.10   in use (dg0)
1      1.11   in use (dg0)
2      1.12   in use (dg0)
----------------------------------------------
Raid Group (ext3_2):(raid-6)(15.26 GiB) - Status: normal
Slot   Disk   State          Additional Status
----------------------------------------------
0      1.13   in use (dg0)
1      1.14   in use (dg0)
2      1.15   in use (dg0)
----------------------------------------------
Raid Group (ppart):(raid-6)(2.47 TiB) - Status: normal
Slot   Disk   State          Additional Status
----------------------------------------------
0      1.16   in use (dg0)
1      1.11   in use (dg0)
2      1.12   in use (dg0)
3      1.13   in use (dg0)
4      1.14   in use (dg0)
5      1.15   in use (dg0)
6      1.6    in use (dg0)
7      1.9    in use (dg0)
8      1.10   in use (dg0)
9      1.1    in use (dg0)
10     1.2    in use (dg0)
11     1.3    in use (dg0)
12     1.4    in use (dg0)
13     1.5    in use (dg0)
14     1.7    in use (dg0)
----------------------------------------------
Spare Disks
Disk         State
(Enc.Slot)
----------   -----
1.8          spare
----------   -----
Unused Disks
None
Note MiB = Mebibytes, the base 2 equivalent of Megabytes. TiB = Tebibytes, the base 2 equivalent of Terabytes.
Display

Use the disk show performance operation or click Disks in the left panel of the Data Domain Enterprise Manager to see disk performance statistics.

disk show performance
The display is similar to the following:

# disk show performance
Disk         Read      Cumul.      Busy   Write
(Enc.Slot)   sects/s   MiBytes/s          sects/s
----------   -------   ---------   ----   -------
1.1          378       0.392       11 %   426
1.2          0         0.000       0 %    0
1.3          346       0.379       10 %   432
1.4          0         0.000       0 %    0
1.5          410       0.414       11 %   439
1.6          397       0.402       11 %   427
1.7          360       0.389       11 %   439
1.8          (spare)   (spare)            (spare)
1.9          358       0.384       10 %   430
1.10         390       0.399       11 %   429
1.11         412       0.411       11 %   430
1.12         379       0.394       11 %   429
1.13         392       0.399       11 %   426
1.14         373       0.390       12 %
1.15         424       0.417       12 %
----------   -------   ---------   ----   -------
Cumulative 5.583 MiB/s, 11 % busy
Display Disk Reliability Details

1.7    0   0   33 C   91 F
1.8    0   0   33 C   91 F
1.9    0   0   34 C   93 F
1.10   0   0   34 C   93 F
1.11   0   0   35 C   95 F
1.12   0   0   33 C   91 F
1.13   0   0   34 C   93 F
1.14   0   0   34 C   93 F
1.15   0   0   56 C   133 F
----   ---  ---  ------   ------
14 drives operating normally. 1 drive reporting excessive temperatures.
14
This chapter:

Gives general guidelines for predicting how much disk space your site may use over time.
Explains how to deal with Data Domain System components that run out of disk space.
Gives background information on how to reclaim Data Domain System disk space.
Note Data Domain offers guidance on setting up backup software and backup servers for use with a Data Domain System. Because such information tends to change often, it is available on the Data Domain Support web site (http://support.datadomain.com/). See the Technical Notes section on the web site. Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.
Space Management
A Data Domain System is designed as a very reliable online cache for backups. As new backups are added to the system, old backups are removed. Such removals are normally done under the control of backup software (on the backup server) based on the configured retention period. The process with a Data Domain System is very similar to tape policies where older backups are retired and the tapes are reused for new backups. When backup software removes an old backup from a Data Domain System, the space on the Data Domain System becomes available only after the Data Domain System internal clean function reclaims disk space. A good way to manage space on a Data Domain System is to retain as many online backups as possible with some empty space (about 20% of total space available) to allow for data growth over time. Data growth on a Data Domain System is primarily affected by:
The size and compressibility of the primary storage that you are backing up.
The retention period that you specify with the backup software.
If you back up volumes whose total size is near the space available for data storage on a Data Domain System (for example, 4 TiB--the base 2 equivalent of TB--on a model DD460, which has 3.9 TiB of space available; see the table Data Domain system capacities in the Introduction chapter of the System Hardware Guide), or if the retention time for volumes that do not compress well is greater than four months, backups may fill space on a Data Domain System more quickly than expected.
On the Data Domain System, the filesys show space command (or its alias df) shows both physical and virtual space. See Manage File system Use of Disk Space on page 189. Directly from clients that mount a Data Domain System, use your usual tools for displaying a file system's physical use of space.
The Data Domain System generates log messages as the file system approaches its maximum size. The following information about data compression gives guidelines for disk use over time. The amount of disk space used over time by a Data Domain System depends on:
The size of the initial full backup.
The number of additional backups (incremental and full) over time.
The rate of growth for data in the backups.
For data sets with average rates of change and growth, data compression generally matches the following guidelines:
For the first full backup to a Data Domain System, the compression factor is about 3:1. Disk space used on the Data Domain System is about one-third the size of the data before the backup.

Each incremental backup to the initial full backup has a compression factor of about 6:1.

The next full backup has a compression factor of about 60:1. All data that was new or changed in the incremental backups is already in storage.

Over time, with a schedule of weekly full and daily incremental backups, the aggregate compression factor for all the data is about 20:1. The compression factor is lower for incremental-only data or for backups without much duplicate data. Compression is higher with only full backups.
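As a rough model of these guidelines, the aggregate ratio can be computed week by week. All sizes and the ratio constants below are the illustrative figures from the text, not measurements, and the function is a sketch rather than how the system computes compression:

```python
def aggregate_compression(weeks, full=100.0, incr=10.0):
    """Model weekly fulls plus six daily incrementals using the
    guideline ratios above: first full ~3:1, incrementals ~6:1,
    subsequent fulls ~60:1. Sizes are arbitrary example numbers."""
    logical = stored = 0.0
    for week in range(weeks):
        logical += full + 6 * incr          # one full + six incrementals
        stored += full / (3.0 if week == 0 else 60.0)
        stored += (6 * incr) / 6.0
    return logical / stored
```

The aggregate ratio starts near the first-full figure and climbs week over week toward a steady-state value; the exact curve depends entirely on the real change rate of the data.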
Size GiB Used GiB Avail GiB 19.7 0.4 3.2 3.0 151.9 15.7
* Estimated based on last cleaning of 2008/02/12 06:14:02. The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain System. Virtual data is the amount of data sent to the Data Domain System from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead.
The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain System accepts no more data from backup servers. The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the
operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain System.
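The 90/95/100 percent alerting behavior described above can be mirrored in a monitoring script; the function and its message strings are illustrative, not DD OS output:

```python
def space_alert(use_percent):
    """Mirror the documented thresholds: warnings at 90% and 95%;
    at 100% the system stops accepting data from backup servers."""
    if use_percent >= 100:
        return "full: no more data accepted"
    if use_percent >= 95:
        return "alert: 95% threshold"
    if use_percent >= 90:
        return "alert: 90% threshold"
    return "ok"
```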
The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.
During the clean operation, the Data Domain System file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain System. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space.
A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends that you run the clean operation at least once a week. If you want to increase file system availability and the Data Domain System is not short on disk space, consider changing the schedule to clean less often. See Clean Operations on page 220 for details on changing the schedule. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned. A Data Domain system that has become full may need multiple clean operations to clean 100% of the file system, especially if there is an external shelf. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ...), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100.
Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.
Inode Reporting
An NFS or CIFS client request causes a Data Domain System to report a capacity of about 2 billion inodes (files and directories). A Data Domain System can safely exceed that number, but the reporting on the client may be incorrect.
Level 1: When no more new data can be written to the file system, an informative out-of-space message is returned. Run the filesys clean command.

Level 2: Deleting files and expiring snapshots increases the amount of space used for each file that is involved as the new state is recorded. After deleting a large number of files or expiring a large number of snapshots or both, the space available does not allow any more file deletions. At that time, a misleading permission denied error message appears. A full system that generates permission denied messages is most likely at this level. Run the filesys clean command.

Level 3: After the permission denied message, you can still expire snapshots until no more disk space is available. Attempts to expire snapshots, delete files, or write new data all fail at this level. Run the filesys clean command.
Multipath
Multipath allows external storage I/O paths to be used for failover and load balancing across paths. Multipath is available in all releases from 4.5 onward, on all Data Domain systems that support dual-port HBAs. (Multipath may also be supported on a system with two single-ported HBAs, depending on the upgrade history and configuration.) Note: 4.4.x releases have multipath functionality on Gateway systems only. Failover means that on any system with more than one path, if the path in use fails, the system begins using the other path with no interruption of service. On any Data Domain system that has more than one path configured and enabled, failover happens automatically.
...are only available on Gateway systems. They display useful gateway-oriented information and control multipathing for gateway systems. They are described below.
Case 1: auto-failback is enabled. The system fails back to the optimal path automatically. Case 2: auto-failback is disabled. The system continues using the second path until you manually command it to fail back to the optimal path with the command disk multipath failback.
To enable auto-failback (that is, to configure the system to go back to using the optimal path when it comes back up), use the command: disk multipath option set auto-failback enabled
Output for ES20 expansion shelves (the example is a DD690 with 6 shelves):
# disk port show summary
Port   Connection   Link      Connected       Status
       Type         Speed     Enclosure IDs
----   ----------   -------   -------------   ------
3a     SAS          12 Gbps   2, 3, 4         online
3b     SAS          12 Gbps   5, 6, 7         online
4a     SAS          12 Gbps   5, 6, 7         online
4b     SAS          12 Gbps   2, 3, 4         online
----   ----------   -------   -------------   ------
Port: See the "Data Domain System Hardware User Guide" to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. Depending on the model, a gateway Data Domain system with a dual-port Fibre Channel HBA shows #a as the top port and #b as the bottom port, where # is the HBA slot number. Connection Type is FC (Fibre Channel) for a gateway system. Link Speed is the HBA port link speed. Port ID is the identification number of the port. Connected Number of LUNs is the number of LUNs seen through the port. Connected Enclosure IDs lists the ID numbers of the shelves connected to the port. Status is online or offline; offline means that no LUNs are seen by the port.
On a gateway system, the output is similar to the following:
Port   Target WWPN               LUN   Disk   Status
----   -----------------------   ---   ----   -------
3a     50:06:01:61:1f:20:95:ad   0     dev1   Active
                                       dev2   Active
       50:06:01:61:1f:20:95:af   0     dev1   Standby
                                       dev2   Standby
----   -----------------------   ---   ----   -------
On a system with shelves, the Disk column shows ranges such as 2.1 - 2.16 and 3.1 - 3.16.
Port is the port number on the HBA. Looking at the back of a Gateway system, the slots are numbered from right to left, and the ports (on a dual-port Fibre Channel HBA) are given letter "a" for the upper port and "b" for the lower. Thus: - The rightmost slot has port 1a (the upper port) and 1b (the lower port). - The slot to the left of it has port 2a (upper) and 2b (lower). And so on. Hops is the number of cable jumps to reach the destination.
Target WWNN is the World Wide Node Name for the target array. Target WWPN is the World Wide Port Name for the target port. LUN displays the Logical Unit Numbers visible to the specified system disks (or drives). Disk is the disk ID. Status is the running status of the path; possible values are Active, Standby, Failed, and Disabled.
Time                Port   Target (Enc.Disk)
-----------------   ----   -----------------
03/08/07 12:30:04   3a     2.1
-----------------   ----   -----------------
Time is the time when an event occurred. Port is the initiator of a path identified by PCI slot and HBA port number. Target WWPN is the target of a path identified by WWPN. Target (Enc. Disk) is the target of a path identified by Enclosure and Disk. LUN is the Logical Unit Number.
Target Serial No. is the serial number of the shelf controller. Disk Serial No. is the serial number of the disk. Event is the type of event: Active, Standby, Failed, or Disabled.
Example output shows one row per path for ports 3a and 3b; the columns are described below.
enc is the enclosure ID. Port is the port number, identified by PCI slot ID and port number on the HBA. Target WWPN is the port WWN of the target. LUN is the Logical Unit Number. Disk is the disk ID. Status is the running status of the path; possible values are Active, Standby, Failed, and Disabled. Read Requests is the number of read requests issued since the last reset (a 64-bit number). Read Failures is the number of read request failures since the last reset (a 64-bit number). Write Requests is the number of write requests issued since the last reset (a 64-bit number). Write Failures is the number of write request failures since the last reset (a 64-bit number).
Introduction
Data Domain sells a number of platforms that provide an ideal disk-based environment for efficiently storing backups and archived data. These appliances are easy to install and configure, and they set the standard for storage efficiency through a combination of deduplication and compression technologies. Still, questions arise as to how best to organize the data stored on them to benefit maximally from their use. It is common for a user to wonder how well the data is being compressed, and several tools are provided to answer this question. But when questions arise as to how effective the compression is on specific data sets or types, some simple organization at the outset can simplify this troubleshooting down the line. This paper outlines some of these recommendations. Following them when the appliance is first configured makes determining the compression characteristics of data sets much easier. It also simplifies backup and recovery processes by clearly separating the various data types so they can be quickly identified and accessed.
Issue
The primary reason customers are interested in Data Domain systems is to make the most effective use of their storage footprint. It is important to be able to measure and understand these compression effects and to know for certain what is compressing well and what isn't. By using the directory structure on the Data Domain system, it is easier to observe and troubleshoot these issues.
Background
The Data Domain system is an appliance that presents three types of interfaces to the data center environment: NFS via IP and Ethernet, CIFS (Microsoft file sharing) via IP and Ethernet, and Virtual Tape Library emulation via Fibre Channel. These are well-understood, industry-standard access
mechanisms that are simple to set up and use. The appliance also has a small set of configuration and monitoring tools, accessible via either the command line or a web-based GUI. This paper focuses on the commands used to report on the deduplication and compression effects that characterize the system.
Reporting on compression
Directory organization is an important consideration on a Data Domain system because a single administrative command reports how well the compression capabilities of a DDR are being utilized: filesys show compression <directory>. The documentation for this command reads: filesys show compression [path] [last {n hours | n days}]
In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global, then local) plus the overhead space needed for metadata. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to match the amount displayed in the Pre-compression line of the filesys show space command, which does not include system overhead. Original Bytes gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may differ between a replication destination and a replication source for the same files or file system; on the destination, internal handling of replicated metadata and unwritten regions in files leads to the difference. The value for Meta-data includes an estimate for data in the Data Domain System internal index and is not updated when the amount of data on the Data Domain System decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the Meta-data line of the filesys show space command.
The display is similar to the following:
# filesys show compression /backup/usr
Total files: 6,018; bytes/storage_used: 10.7
Original Bytes:      6,599,567,913,746
Globally Compressed:   992,690,774,605
Locally Compressed:    608,225,239,283
Meta-data:               7,329,091,080
It is recommended that the optional parameter "last 24 hours" be used, since it reports on the data most recently backed up and gives the most accurate measure of how compression is behaving now. Without this parameter, the compression reported is the overall compression experienced during the lifetime of the filesystem. When the system is first placed into service, most data is seen as new, so early compression is generally lower than it will be later. Over time compression improves and should reach a near-steady state, which the "last 24 hours" option allows you to monitor.
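The ratio in the Total files line can be checked against the other fields. A small sketch, assuming (from the sample numbers, not an official definition) that bytes/storage_used is original bytes divided by locally compressed bytes plus metadata:

```python
# Derivation of the bytes/storage_used ratio from the sample display above.
# The formula is an assumption inferred from the numbers, not a documented one.

original_bytes     = 6_599_567_913_746  # Original Bytes line
locally_compressed =   608_225_239_283  # Locally Compressed line (data on disk)
meta_data          =     7_329_091_080  # Meta-data line (index and overhead)

storage_used = locally_compressed + meta_data
ratio = original_bytes / storage_used

print(f"bytes/storage_used: {ratio:.1f}")  # matches the 10.7 reported above
```

With these figures the computed ratio rounds to 10.7, consistent with the display.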
204 Data Domain Operating System User Guide
Use filesys show compression last 24 hours to get the compression for the last day's backup. Use filesys show compression last 7 days to get a rough idea of the compression for the last week; this command is more useful for finding the backup dataset size for a week. Use df to get the real compression numbers for the DDR.
By separating the data stored on the Data Domain system into separate subdirectories, the overall compression effects can be observed and measured using the command:
# filesys show compression
All compressed data on a Data Domain system is stored under the /backup filesystem. Therefore, all recommended organization takes place below this level.
Considerations
Several approaches exist for organizing the data:
1. Client source of data
2. Category of data (NFS vs. CIFS vs. VTL)
3. Application type
It is not critical which of these are used or combined, as long as enough organization is provided to determine the compression characteristics of specific areas of storage. At the same time, it is important to avoid so much organization that it gets in the way of effectively using the Data Domain system. If too many directories are created, setting up backup and recovery policies becomes more complicated, which leads to more management overhead and more opportunities for error. So a careful balance needs to be maintained. An example directory structure is given in the figure Directory Structure Example on page 206.
Further explanation and discussion of the above table: The first level of organization separates the data by the style of access used to read and write the data on the Data Domain system. The next level separates the major sources of backup data sent to the Data Domain system. In some circumstances, breaking this backup data into one additional level of organization can help you understand how the data from major applications is handled and compressed. Be aware that when using the command filesys show compression <directory name>, specifying a <directory name> that has sub-directories shows a compression summary for all the sub-directories as well. To get the most granular information, specify the lowest relevant <directory name> in the tree whenever possible.
NFS issues
The Network File System (NFS) was originally developed by Sun Microsystems and is the de facto standard today for sharing filesystem information across the various flavors of UNIX. All major UNIX derivatives, including Solaris, AIX, HP-UX, Linux, and FreeBSD, support this method of access over Ethernet.
Filesystem organizations
The example shown in Table 1 shows a separation of backup data into two types: home directories and Oracle data. It is not uncommon for two separate backup policies to exist in this situation: an enterprise backup application that backs up all user home directories, and Oracle's RMAN utility to back up Oracle database information. Further separating the Oracle archivelog files from the rest of the database also provides the ability to monitor how the two portions compress independently. Keeping these directories separate allows administrators to know how space is being used and to adjust retention policies accordingly. A general-purpose best practice is to isolate database logfiles from the database data and control files wherever possible. Logfiles generally do not compress well, since they frequently contain data patterns never seen before, so keeping them separate allows their possibly negative effect on overall compression to be measured. For large environments with significantly different databases, an additional level of decomposition can be added either above or below the database / logfile separation.
Mount options
Since each of these subdirectories is also available as an NFS export, it is reasonable to take advantage of this fact and make only those directories available to the specific servers performing that type of backup. This improves the security of the overall environment. Example of a UNIX /etc/vfstab or /etc/fstab file:
dd460a:/backup/NFS/HomeDirs /backup/target rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/data /backup/Oracle-data rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/archivelogs /backup/Oracle-logs rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
On the Data Domain system, the "nfs add client" command can be used to restrict export access to the mount points. Using "nfs show clients" we can see this in action:
path                    options
--------------------    --------------------------------------
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/vm              rw,no_root_squash,no_all_squash,secure
/backup/misc_backups    rw,no_root_squash,no_all_squash,secure
/backup/sample_data     ro,no_root_squash,no_all_squash,secure
/backup/app_os_images   rw,no_root_squash,no_all_squash,secure
CIFS issues
The Common Internet File System (CIFS) is used by Microsoft Windows products to share filesystem information across a LAN. The approach described above for NFS applies equally to CIFS with an appropriate substitution of terms: NFS mounts become CIFS shares, Oracle becomes SQL Server or Exchange Server, and so on.
VTL issues
The tape image files for all VTL library definitions are stored under the /backup/vtc directory. By default, all tape images defined and created are stored in the Default directory (/backup/vtc/Default) unless other VTL "pools" are utilized. When creating tape definitions (part of VTL commissioning), the administrator can optionally assign tapes to pools and give each pool a name. Pools are implemented as subdirectories under /backup/vtc, which keeps the various tapes grouped and separated so they can be managed and, most notably, replicated as separate entities. It is therefore a good idea to use the pool mechanism to keep collections of tapes used for different purposes separated and organized. Since the pools are separate subdirectories, the compression effects of each pool can be determined using the command:
# filesys show compression /backup/vtc/<pool name>
You can also use the command:
# vtl tape show pool <poolname> summary
Archive implications
OST issues
The best practice recommendation is to create one LSU on the DD system for optimal interaction with NetBackup's capacity management and intelligent resource selection algorithms. Use the ost lsu show command to display all logical storage units. If an LSU name is given, the command displays all the images in that logical storage unit. If compression is specified, the original, globally compressed, and locally compressed sizes of the logical storage unit or images are also displayed.
ost lsu show [compression] [lsu-name]
Without an LSU specified, the command shows summary information for all LSUs:
# ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1:
Total files: 4; bytes/storage_used: 206.6
Original Bytes:      437,850,584
Globally Compressed:   2,149,216
Locally Compressed:    2,113,589
Meta-data:                 6,124
When an LSU is specified, the command shows information for the given LSU:
# ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 9.1
Original Bytes:      8,872
Globally Compressed: 8,872
Locally Compressed:    738
Meta-data:             236
Archive implications
Archived data tends to remain stored on the Data Domain system for much longer periods than backup data. It is also not uncommon for the data to be written only a single time to the appliance, which gives the deduplication technology less opportunity to deliver the benefit seen with traditional backups. Keeping the archive data separate allows its effects on overall compression to be observed and accounted for.
IMPORTANT NOTE!
Keep in mind that deduplication operates only within a single Data Domain system; data spread across several systems is not deduplicated. If you have a large environment consisting of multiple Data Domain systems, it is important that the same data be sent to the same appliance every time. If a failure prevents this and a single backup has to be sent to an alternate appliance, the effect on compression can be significant. Depending on how far compression is degraded, it may be necessary to manually move that backup to its original destination after the failure is corrected.
Summary
By applying some early organization to the directory structure configured on the Data Domain system, future storage management and troubleshooting issues can be simplified and often avoided. This paper outlines some of the reasons for doing this and recommendations that can be followed. Each site is unique, so these recommendations should be understood in spirit and the detailed deployment adapted to the specific circumstances in which the Data Domain system will be used.
Let's look at an example. We write a 2 MB file to the DDR and observe that it gets 5X compression. We immediately write the same file again to a different location. The second copy is naturally highly deduplicated; say it gets 200X compression. filesys show compression <File1> will report 5X and filesys show compression <File2> will report 200X. We then delete <File1>. filesys show compression <File2> still shows 200X, even though the storage now attributable to <File2> corresponds to the 5X value the first copy received. Herein lies the potential for confusion. Other, less significant factors can also affect the numbers and offer more opportunities for the exact figures to be off. Therefore, the exact numbers reported by filesys show compression are less interesting than the comparative numbers displayed when separate directories are reported, or the trends observed over time. As the example shows, any large-scale deletion can have an effect, sometimes significant, on the reported numbers, which may need to be accounted for. Only the system administrator will know about such deletions, which may be explicitly executed or done in the background through the expiration process built into all enterprise backup software.
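The attribution effect described above can be modeled with a toy sketch (this is an illustration of the accounting pitfall, not Data Domain's actual implementation; the class, the 5.0 local factor, and the per-reference cost are all made-up values chosen so the ratios come out to 5X and 200X):

```python
# Toy model of per-file compression attribution: each file's ratio is
# recorded at write time, based on how many of its segments were new,
# and is never revised when other files are deleted.

class ToyDedupeStore:
    def __init__(self):
        self.segments = {}    # fingerprint -> reference count
        self.file_ratio = {}  # file name -> ratio recorded at write time

    def write(self, name, chunks, local_factor=5.0, ref_cost=0.005):
        new = sum(1 for c in chunks if c not in self.segments)
        dup = len(chunks) - new
        for c in chunks:
            self.segments[c] = self.segments.get(c, 0) + 1
        # new chunks are charged their locally compressed size;
        # duplicates are charged only a tiny reference cost
        stored = new / local_factor + dup * ref_cost
        self.file_ratio[name] = len(chunks) / stored

    def delete(self, name, chunks):
        for c in chunks:
            self.segments[c] -= 1
            if self.segments[c] == 0:
                del self.segments[c]
        del self.file_ratio[name]  # other files' ratios are NOT revised

store = ToyDedupeStore()
data = [f"chunk{i}" for i in range(1000)]  # stand-in for the 2 MB file
store.write("File1", data)                 # all chunks new -> 5X
store.write("File2", data)                 # all chunks duplicate -> 200X
store.delete("File1", data)
# File2 now holds the only reference to the data, yet it still reports
# its write-time ratio rather than the ~5X the data actually achieves:
print(store.file_ratio["File2"])
```

The point of the sketch is the last line: after the deletion, the per-file figure is stale, which is exactly why comparative and trend numbers are more useful than exact ones.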
Fastcopy
To copy a file or directory tree from a Data Domain System source directory to another destination on the Data Domain System, use the filesys fastcopy operation. See Snapshots on page 231 for snapshot details.
filesys fastcopy [force] source src-path destination dest-path
src-path: The location of the directory or file that you want to copy. The first part of the path must be /backup. Snapshots always reside in /backup/.snapshot. Use the snapshot list command to list existing snapshots.
dest-path: The destination for the directory or file being copied. The destination cannot already exist.
force: Allows the fastcopy to proceed without warning if the destination exists. The force option is useful for scripting because it is not interactive. filesys fastcopy force makes the destination an exact copy of the source even if the two directories had nothing in common before. Use case: users may want fastcopy force when scripting fastcopy operations to simulate cascaded replication, the major use case for the option. It is not needed for interactive use, because a regular fastcopy warns if the destination exists and then re-executes with the force option if allowed to proceed.
Note: If the destination has retention-locked files, fastcopy and fastcopy force fail, aborting the moment they encounter a retention-locked file.
For example, to copy the directory /user/bsmith from the snapshot scheduled-2007-04-27 and put the bsmith directory into the user directory under /backup:
# filesys fastcopy source /backup/.snapshot/scheduled-2007-04-27/user/bsmith destination /backup/user/bsmith
Like a standard UNIX copy, filesys fastcopy makes the destination equal to the source, but not as of a particular point in time. If you change either directory while the copy is in progress, there is no guarantee that the two are or were ever equal.
The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain System. Virtual data is the amount of data sent to the Data Domain System from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead. The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain System accepts no more data from backup servers. The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain System.
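The alerting behavior described above can be sketched as follows (the function and its name are illustrative, not a DD OS API; only the 90%, 95%, and 100% thresholds come from the text):

```python
# Illustrative sketch of the Use% warning thresholds described above.

def space_alerts(used_gib, avail_gib):
    """Return the warning thresholds (percent) crossed by a resource."""
    use_pct = 100.0 * used_gib / (used_gib + avail_gib)
    return [t for t in (90, 95, 100) if use_pct >= t]

# The /backup: post-comp example row (7170.5 GiB used, 2341.0 GiB available)
# is at about 75%, so no warnings fire:
print(space_alerts(7170.5, 2341.0))
```

At 100% the list includes all three thresholds, corresponding to the point where the system accepts no more data from backup servers.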
The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.
Display To display the space available to and used by file system components, use the filesys show space operation or click File system in the left panel of the Data Domain Enterprise Manager. Values are in gigabytes to one decimal place. filesys show space The display is similar to the following:
# filesys show space
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -   117007.4           -      -                -
/backup: post-comp     9511.5     7170.5      2341.0    75%            257.8
/ddvar                   98.4       37.3        56.1    40%                -
------------------   --------   --------   ---------   ----   --------------
In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global, then local) plus the overhead space needed for metadata. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to match the amount displayed in the Pre-compression line of the filesys show space command, which does not include system overhead.
Original Bytes gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may differ between a replication destination and a replication source for the same files or file system; on the destination, internal handling of replicated metadata and unwritten regions in files leads to the difference. The value for Meta-data includes an estimate for data in the Data Domain System internal index and is not updated when the amount of data on the Data Domain System decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the Meta-data line of the filesys show space command.
The display is similar to the following: # filesys show compression /backup/naveen/ last 2 d Total files: 4; bytes/storage_used: 4.2 Original Bytes: 4,486,393,430 Globally Compressed (g_comp): 2,965,916,936 Locally Compressed (l_comp): 1,054,560,528 Meta-data: 9,697,288
A fuller display, including per-period summaries, is similar to the following:

                  Pre-Comp    Post-Comp   Global-Comp   Local-Comp   Compression
                  (GiB)       (GiB)       Factor        Factor       Factor (%)
---------------   ---------   ---------   -----------   ----------   -------------
Currently Used:           -      7348.8          4.9x         3.2x   15.6x (93.6%)
Written:*
  Last 7 days        5583.4       562.2          6.6x         1.5x    9.9x (89.9%)
  Last 24 hrs         269.6        16.6          8.4x         1.9x   16.3x (93.8%)
---------------   ---------   ---------   -----------   ----------   -------------
* Does not include the effects of pre-comp file deletes/truncates since the last cleaning on 2007/11/09 14:48:26.

Key:
  Pre-Comp           = Data written before compression
  Post-Comp          = Storage used after compression
  Compression Factor = pre-comp / post-comp
  Compression %      = ((pre-comp - post-comp) / pre-comp) * 100
  Global-Comp Factor = pre-comp / (size after de-dupe)
  Local-Comp Factor  = (size after de-dupe) / post-comp
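The Key formulas can be applied to the "Last 7 days" row of the sample display (pre-comp 5583.4 GiB, post-comp 562.2 GiB, global-comp factor 6.6x); since the displayed GiB values are rounded, the recomputed factors match only approximately:

```python
# Worked application of the Key formulas above, using the "Last 7 days"
# figures from the sample display.

pre_comp  = 5583.4   # GiB written before compression
post_comp = 562.2    # GiB of storage actually used

compression_factor = pre_comp / post_comp                  # total factor
compression_pct = (pre_comp - post_comp) / pre_comp * 100  # reduction %

size_after_dedupe = pre_comp / 6.6   # implied by the global-comp factor
local_comp_factor = size_after_dedupe / post_comp

print(f"{compression_factor:.1f}x ({compression_pct:.1f}%), "
      f"local {local_comp_factor:.1f}x")
```

To one decimal place these reproduce the displayed 9.9x (89.9%) and 1.5x figures.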
Clean Operations
The filesys clean operation reclaims physical storage occupied by deleted objects in the Data Domain file system. When application software expires backup or archive images and the images are not present in a snapshot, the images are no longer accessible or available for recovery from the application or from a snapshot. However, the images still occupy physical storage. Only a filesys clean operation reclaims the physical storage used by files that are deleted and not present in a snapshot.
During the clean operation, the Data Domain System file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic.
Data Domain recommends running a clean operation after the first full backup to a Data Domain System. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.
A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or run the operation manually with the filesys clean commands. Data Domain recommends running the clean operation at least once a week. If you want to increase file system availability and the Data Domain System is not short on disk space, consider changing the schedule to clean less often. A Data Domain system that is full may need multiple clean operations to clean 100% of the file system, especially when one or more external shelves are attached. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ...), the file system may never report 100% cleaned; the total space cleaned may always be a few percentage points less than 100. With collection replication, the clean operation does not run on the destination. With directory replication, the clean operation does not run on directories that are replicated to the Data Domain System (where the Data Domain System is a destination), but does run on other data on the system. Note: Any operation that shuts down the Data Domain System file system, such as the filesys disable command, or that shuts down the Data Domain System itself, such as a power-off or reboot, stops the clean operation. The clean does not restart when the system and file system restart; either manually restart the clean or wait until the next scheduled clean operation. Note: Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.
Start Cleaning
To manually start the clean process, use the filesys clean start operation. The operation uses the current setting for the scheduled automatic clean operation and cleans up to 34% of the total space available for data on a DD560 or DD460 system. If the system is less than 34% full, the operation cleans all data. Administrative users only. filesys clean start
For example, the following command runs the clean operation and reminds you of the monitoring command. When the operation finishes, a message goes to the system log giving the amount of free space available. # filesys clean start Cleaning started. Use filesys clean watch to monitor progress.
Stop Cleaning
To stop the clean process, use the filesys clean stop operation. Stopping the process means that all work done so far is lost; starting the process again means starting over from the beginning. If the clean process is slowing down the rest of the system, consider using the filesys clean set throttle operation to reset the amount of system resources used by the clean process. The change in the use of system resources takes place immediately. Administrative users only. filesys clean stop
Daily runs the operation every day at the given time. Monthly starts on a given day or days (from 1 to 31) at the given time. Never turns off the clean process and does not take a qualifier. With the day-name qualifier, the operation runs on the given day(s) at the given time. A day-name is three letters (such as mon for Monday). Use a dash (-) between days for a range of days. For example: tue-fri. Time is 24-hour military time. 2400 is not a valid time. mon 0000 is midnight between Sunday night and Monday morning. The most recent invocation of the scheduling operation cancels the previous setting.
The command syntax is:
filesys clean set schedule daily time
filesys clean set schedule monthly day-numeric-1[,day-numeric-2,...] time
filesys clean set schedule never
filesys clean set schedule day-name-1[,day-name-2,...] time
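The day-range and time rules above can be sketched as a small validator. The helper functions below are illustrative only, not the DD OS parser:

```python
DAY_NAMES = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def expand_days(spec):
    """Expand a day-name spec such as 'tue-fri' or 'mon,thu' into the
    individual days on which the clean operation would run."""
    days = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            i, j = DAY_NAMES.index(start), DAY_NAMES.index(end)
            days.extend(DAY_NAMES[i:j + 1])
        else:
            days.append(part)
    return days

def valid_time(t):
    """24-hour military time as HHMM; 2400 is not valid (use 0000)."""
    return (len(t) == 4 and t.isdigit()
            and int(t[:2]) <= 23 and int(t[2:]) <= 59)

print(expand_days("tue-fri"))              # ['tue', 'wed', 'thu', 'fri']
print(valid_time("0600"), valid_time("2400"))  # True False
```

For example, expand_days("tue-fri") yields the four weekdays named in the range, and valid_time rejects 2400 just as the command does.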
222 Data Domain Operating System User Guide
For example, the following command runs the operation automatically every Tuesday at 4 p.m.: # filesys clean set schedule tue 1600 To run the operation more than once in a month, set multiple days in one command. For example, to run the operation on the first and fifteenth of the month at 4 p.m.: # filesys clean set schedule monthly 1,15 1600
Update Statistics
To update the "if 100% cleaned" numbers that show in the output from filesys show space, use the filesys clean update-stats operation. With a full file system, the update operation can take up to 12 hours. Administrative users only. filesys clean update-stats
The display is similar to the following:
# filesys clean show config
50 Percent Throttle
Filesystem cleaning is scheduled to run "Tue" at "0600".
Compression Options
A Data Domain system compresses data at two levels: global and local. Global compression compares received data to data already stored on disks. Data that is new is then locally compressed before being written to disk. Command options allow changes at both compression levels.
Local Compression
A Data Domain System uses a local compression algorithm developed specifically to maximize throughput as data is written to disk. The default algorithm allows shorter backup windows for backup jobs, but uses more space. Local compression options allow you to choose slower performance that uses less space, or you can set the system for no local compression.
Changing the algorithm affects only new data and data that is accessed as part of the filesys clean process. Current data remains as is until a clean operation checks the data. To enable the new setting, use the filesys disable and filesys enable commands.
lz The default algorithm, which gives the best throughput. Data Domain recommends the lz option.
gzfast A zip-style compression that uses less space for compressed data, but more CPU cycles. Gzfast is the recommended alternative for sites that want more compression at the cost of lower performance.
gz A zip-style compression that uses the least amount of space for data storage (10% to 20% less than lz), but also uses the most CPU cycles (up to twice as many as lz).
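The lz, gzfast, and gz algorithms themselves are proprietary, but the space/CPU tradeoff they represent can be illustrated with zlib compression levels, which behave analogously (a lower level is faster but produces larger output):

```python
import zlib

# Repetitive sample data, standing in for a typical backup stream.
data = b"backup record 0123456789 " * 4000

fast = zlib.compress(data, 1)  # throughput-first, analogous to lz
best = zlib.compress(data, 9)  # space-first, analogous to gz

# Higher levels spend more CPU cycles to save space on disk.
assert len(best) <= len(fast) < len(data)
print(len(data), len(fast), len(best))
```

This is only an analogy for the tradeoff described above; the DD OS algorithms and their exact ratios differ.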
Global Compression
DD OS 4.0 and later releases use a global compression algorithm called type 9 as the default. Earlier releases use an algorithm called type 1 (one) as the default.
A Data Domain system using type 1 global compression continues to use type 1 when upgraded to a new release. A Data Domain system using type 9 global compression continues to use type 9 when upgraded to a new release. A DD OS 4.0.3.0 or later Data Domain system can be changed from one type to another if the file system is less than 40% full. Directory replication pairs must use the same global compression type.
Before changing the reported setting, use the filesys disable command. After changing the setting, use the filesys enable command. When using CIFS on the Data Domain System, use the cifs disable command before changing the reported state and use the cifs enable command after changing the reported state.
Report as Read/Write
Use the filesys option enable report-replica-as-writable command on the destination Data Domain System to report the file system as writable. Some backup applications must see the replica as writable to do a restore or vault operation from the replica. filesys option enable report-replica-as-writable
Report as Read-Only
Use the filesys option disable report-replica-as-writable command on the destination Data Domain System to report the file system as read-only. filesys option disable report-replica-as-writable
The setting is system-wide and applies to all data received by a Data Domain system. If a Data Domain system is set for a marker type and data is received that has no markers, compression and system performance are not affected.
If a Data Domain system is set for a marker type and data is received with markers of a different type, compression is degraded for the data with different markers. filesys option set marker-type {cv1 | eti1 | hpdp1 | nw1 | tsm1 | tsm2 | none}
cv1 for CommVault Galaxy with VTL and file system backups.
eti1 for HP NonStop systems using ETI-NET EZX/BackBox.
hpdp1 for HP DP versions 5.1, 5.5, and 6.0 with VTL and file system backups.
nw1 for Legato NetWorker with VTL.
tsm1 for IBM Tivoli Storage Manager on media servers with little endian processor architecture, such as x86 Intel or AMD.
tsm2 for IBM Tivoli Storage Manager on media servers with big endian processor architecture, such as SPARC or IBM mainframe. PowerPC can be configured as either big or little endian. Check with your system administrator if you are not sure about the media server architecture configuration.
none for data with no markers (none is also the default setting).
After changing the setting, enter the following two commands to enable the new setting:
# filesys disable
# filesys enable
Snapshots
18
The snapshot command manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the top directory: /backup. Snapshots are useful for avoiding version skew when backing up volatile data sets, such as tables in a busy database, and for retrieving earlier versions of a directory or file that was deleted. If the Data Domain system is a source for collection replication, snapshots are replicated. If the Data Domain system is a source for directory replication, snapshots are not replicated and must be created separately on the directory replication destination. Snapshots are created in the system directory /backup/.snapshot. Each directory under /backup also has a .snapshot directory with the name of each snapshot that includes the directory. The filesys fastcopy command can use snapshots to copy a file or directory tree from a snapshot to the active file system.
Create a Snapshot
To create a snapshot, use the snapshot create operation.
snapshot create name [retention {date | period}]
Choose a descriptive name. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example, 2009.05.22. A retention period is a number of days, weeks (or wks), or months (or mos), with no space between the number and the unit. For example, 6wks. The months or mos period is always 30 days. With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as the creation. For example, when a snapshot is created at 8:48 a.m. on April 27, 2007:
# snapshot create test22 retention 6wks
Snapshot "test22" created and will be retained until Jun 8 2007 08:48.
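The retention arithmetic can be sketched as follows. The helper name is hypothetical; a month is treated as 30 days, as the text states:

```python
import re
from datetime import datetime, timedelta

# N days / weeks (wks) / months (mos); a month is always 30 days.
UNITS = {"day": 1, "days": 1, "week": 7, "weeks": 7, "wks": 7,
         "month": 30, "months": 30, "mos": 30}

def retain_until(created, period):
    """Return the retention deadline for a period string such as '6wks'."""
    m = re.fullmatch(r"(\d+)([a-z]+)", period)
    n, unit = int(m.group(1)), m.group(2)
    return created + timedelta(days=n * UNITS[unit])

created = datetime(2007, 4, 27, 8, 48)   # 8:48 a.m. on April 27, 2007
print(retain_until(created, "6wks"))     # 2007-06-08 08:48:00
```

Six weeks (42 days) from the creation time in the example lands on June 8, 2007 at 08:48, matching the command output above.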
List Snapshots
Note The maximum number of snapshots allowed to be stored on a system is 100. If the number reaches 100, the system generates an alert. If your system becomes filled with snapshots, you can resolve this by expiring snapshots and then running filesys clean.
To list existing snapshots, use the snapshot list option. The display gives the snapshot name, pre-compression amount of data in the snapshot, the creation date, the retention date, and the status. Status is either blank or Expired. An expired snapshot remains available until the next file system clean operation. Use the snapshot expire command to set a future expiration date for an expired, but still available, snapshot. snapshot list For example:
# snapshot list
Name                  Pre-Comp (GB)  Create Date        Retain Until       Status
--------------------  -------------  -----------------  -----------------  -------
SS_FULL_1                     948.1  Feb  1 2007 22:16  Aug  2 2007 07:33  expired
SS_INCR_1                     944.4  Feb  1 2007 23:09  Aug  2 2007 11:16  expired
SS_INCR_2                     938.7  Feb  2 2007 00:31  Aug  2 2007 13:09  expired
SS_FULL_2                     939.9  Mar  2 2007 00:48  Aug  2 2007 09:52  expired
DAILY_1                       942.8  Mar 12 2007 01:03  Aug  2 2007 07:33  expired
DAILY_2                       940.7  Mar 13 2007 02:24  Aug  2 2007 07:33  expired
WEEKLY_1                      937.8  Apr 12 2007 02:51                     expired
DAILY_3                       937.3  Apr 13 2007 03:40
scheduled-2007-05-05          944.6  May  5 2007 13:08  Aug  1 2007 13:08
scheduled-2007-07-07          944.5  Jul  7 2007 13:09  Aug  7 2007 13:09
scheduled-2007-08-02          943.9  Aug  2 2007 13:11  Sep  1 2007 13:11
--------------------  -------------  -----------------  -----------------  -------
Expire a Snapshot
To change the retention of an existing snapshot, use the snapshot expire operation with a retention date or period:
snapshot expire name [retention {date | period | forever}]
A retention period is a number of days, weeks (or wks), or months (or mos), with no space between the number and the unit. For example, 6wks. The months or mos period is always 30 days. The value forever means that the snapshot does not expire. With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as when the snapshot expire command was entered. For example:
# snapshot expire tester23 retention 5wks
Snapshot "tester23" will be retained until Jun 1 2007 09:26.
To immediately expire a snapshot, use the snapshot expire operation with no options. An expired snapshot remains available until the next file system clean operation. snapshot expire name (See also filesys clean.)
Rename a Snapshot
To change the name of a snapshot, use the snapshot rename operation. snapshot rename name new-name For example, to change the name from snap12-20 to snap12-21: # snapshot rename snap12-20 snap12-21 Snapshot snap12-20 renamed to snap12-21.
Snapshot Scheduling
The commands above create a single snapshot at the point in time when the command is executed. The commands below arrange for a series of snapshots to be taken at regular times in the future. Such a series is called a snapshot schedule, or schedule for short; a schedule is added to the set of all snapshot schedules. Note Data Domain strongly recommends that snapshot schedules always explicitly specify a retention time. The default retention time is 14 days; if no retention time is specified, every snapshot is retained for 14 days, consuming valuable resources.
Note There can be multiple snapshot schedules active at the same time. Note If multiple snapshots are scheduled to occur at the same time, only one will be retained. However, which one is retained is indeterminate, thus only one snapshot should be scheduled for a given time.
Syntax
There are several possible syntaxes:
snapshot add schedule <name> [days <days>] time <time>[,<time> ...] [retention <period>]
The default for days is daily, and the user can specify a list of times.
snapshot add schedule <name> [days <days>] time <time> [every <mins>] [retention <period>]
The default for days is daily. The user can also specify the interval in minutes.
snapshot add schedule <name> [days <days>] time <time>[-<time>] [every <hrs | mins>] [retention <period>]
The default for days is daily. When every is omitted, it defaults to every 1hr.
Where time can be of the form:
10:10
1010
10:00-23:00
NOTE: Time is expressed in 24-hour format (not a.m./p.m.) and the ":" is optional.
days can be of the form:
mon,tue : Monday and Tuesday every week
mon-fri : Monday through Friday every week
daily : every day of the week
1,2 : days 1 and 2 of the month
1-3 : days 1, 2, and 3 of the month
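As an illustration of how a time <time>-<time> every <mins> specification expands into individual snapshot times, here is a hypothetical sketch (not the DD OS parser):

```python
def expand_times(start, end, every_mins):
    """Expand a range such as 'time 00:15-23:15 every 120 mins' into the
    individual HH:MM stops at which snapshots would be taken."""
    def to_minutes(t):
        hh, mm = t.split(":")
        return int(hh) * 60 + int(mm)

    stops, m = [], to_minutes(start)
    while m <= to_minutes(end):
        stops.append(f"{m // 60:02d}:{m % 60:02d}")
        m += every_mins
    return stops

# "time 08:00-17:00 every 2 hrs", as in the weekday example below:
print(expand_times("08:00", "17:00", 120))
# ['08:00', '10:00', '12:00', '14:00', '16:00']
```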
The naming convention for scheduled snapshots is the word scheduled followed by a four-digit year, a two-digit month, a two-digit day, a two-digit hour, and a two-digit minute. All elements of the name are separated by a dash ( - ). For example: scheduled-2007-04-27-13-41. The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule might have the names scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.
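The naming convention can be reproduced with a one-line formatter. This is a sketch using Python's strftime, not DD OS code:

```python
from datetime import datetime

def scheduled_snapshot_name(when):
    """Scheduled snapshots are named scheduled-YYYY-MM-DD-HH-MM."""
    return when.strftime("scheduled-%Y-%m-%d-%H-%M")

print(scheduled_snapshot_name(datetime(2008, 3, 24, 20, 0)))
# scheduled-2008-03-24-20-00
```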
Additional notes:
The default retention time for a scheduled snapshot is 14 days. Snapshots reside in the directory /backup/.snapshot/
The days-of-week are one or more three-letter day abbreviations, such as tue for Tuesday. Use a dash ( - ) between days to denote a range. For example, mon-fri creates a snapshot every day Monday through Friday. The time uses a 24-hour clock that runs from 00:00 to 23:59. The format in the command is a three- or four-digit number with an optional colon ( : ) between hours and minutes. For example, 4:00, 04:00, or 0400 sets the time to 4:00 a.m., and 14:00 or 1400 sets the time to 2:00 p.m. The retention period is a number plus days, weeks (or wks), or months (or mos), with no space between the number and the unit. For example, 6wks. The months or mos period is always 30 days. For example, to schedule a snapshot every Monday and Thursday at 2:00 a.m. with a retention of two months:
# snapshot add schedule mon_thu_2am days mon,thu time 02:00 retention 2mos
Snapshots are scheduled to run "Mon, Thu" at "0200". Snapshots are retained for "60" days.
Further Examples:
1. Every day at 8:00 p.m.
add schedule every_day_8_pm days daily time 20:00
OR
add schedule every_day_8_pm days mon-sun time 20:00
Note The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule will have names like scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.
a. Every midnight
add schedule every_midnight days daily time 00:00 retention 3 days
OR
add schedule every_midnight days mon-sun time 00:00 retention 3 days
2. Every weekday at 6:00 a.m.
add schedule wkdys_6_am days mon-fri time 06:00 retention 4 days
OR
add schedule wkdys_6_am days mon,tue,wed,thu,fri time 06:00 retention 4 days
3. Every Sunday at 10:00 a.m.
add schedule every_sunday_10_am days sun time 10:00 retention 2 mos
a. Every Sunday at midnight
add schedule every_sunday_midnight days sun time 00:00 retention 2 mos
4. Every 2 hours
add schedule every_2_hours days daily every 2hrs retention 3 days
a. Every hour
add schedule every_hour days daily every 1hrs retention 3 days
b. Every 2 hours, 15 minutes past the hour
add schedule every-2h-15-past days daily time 00:15-23:15 every 2 hrs retention 3 days
c. Every 2 hours between 8:00 a.m. and 5:00 p.m. on weekdays
add schedule wkdys-every-2-hrs-8a_to_5p days mon-fri time 08:00-17:00 every 2 hrs retention 3 days
5. A specific day of the week at a specific time (for example, every week on Mondays and Tuesdays at 8:00 a.m.)
add schedule ev-wk-mon-and-tu-8-am days mon,tue time 08:00 retention 3 mos
6. A specific day of the month at a specific time (for example, the 2nd day of every month at 10:15 a.m.)
add schedule ev_mo_2nd_day_1015a days 2 time 10:15 retention 3 mos
7. The last day of every month at 11:00 p.m.
add schedule ev_mo_last_day_11pm days last time 23:00 retention 2 yrs
a. The beginning of every month
add schedule ev_mo_1st_day_1st_hr days 1 time 00:00 retention 2 yrs
8. Every 15 minutes
add schedule ev_15_mins days daily time 00:00-23:00 every 15mins retention 5 days
9. Every weekday at 10:30 a.m. and 3:30 p.m.
add schedule ev_weekday_1030_and_1530 days mon-fri time 10:30,15:30 retention 2 mos
Note that there are two ways to delete all snapshot schedules:
snapshot del schedule all
or
snapshot reset schedule
Retention Lock
19
Note A file must be explicitly committed as a retention-locked file through client-side file commands before the file is protected from modification and premature deletion. These commands may be issued directly by the user or automatically by applications that support the retention lock feature. Applications that do not issue these commands do not trigger the retention lock feature. Note The "retention period" referred to in this section (The Retention Lock Feature) differs from the retention period for snapshots. The retention-lock retention period specifies the minimum period of time a retention-locked file is retained, whereas the snapshot retention period specifies the maximum length of time snapshot data is retained.
The period should not be more than 70 years; any period larger than 70 years results in an error. The limit of 70 years may be raised in a subsequent release. By default, the min-retention-period is 12 hours and the max-retention-period is 5 years. These default values may be revised in a subsequent release. For example, to set the min-retention-period to 24 months:
DDOS# filesys retention-lock option set min-retention-period 24 mo
5. Set the minimum retention period for the Data Domain system:
DDOS# filesys retention-lock option set min-retention-period 96 hr
6. Set the maximum retention period for the Data Domain system:
DDOS# filesys retention-lock option set max-retention-period 30 year
7. Reset both minimum and maximum retention periods to their default values:
DDOS# filesys retention-lock option reset
The min and max retention periods are now reset to their defaults: 12 hours and 5 years, respectively.
8. Show the maximum and minimum retention periods:
DDOS# filesys retention-lock option show
Now using client operating system commands on the client system. Suppose the current date and time is December 18, 2007 at 1 p.m., that is, 200712181300. Adding the minimum retention period of 12 hours gives 200712190100. Thus, if the atime of a file is set to a value greater than 200712190100, the file becomes retention-locked.
9. Put a retention lock on the existing file SavedData.dat by setting its atime to a value greater than the current time plus the minimum retention period:
ClientOS# touch -a -t 200912312230 SavedData.dat
10. Extend the retention date of the file:
ClientOS# touch -a -t 202012121230 SavedData.dat
11. Identify retention-locked files and list the retention date:
ClientOS# ls -l --time=atime SavedData.dat
12. Delete an expired retention-locked file. Assuming the retention date of the retention-locked file has expired, as determined in the previous step:
ClientOS# rm SavedData.dat
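The client-side atime convention can also be driven programmatically. This sketch uses Python's os.utime in place of touch -a -t, against a local temporary file and an assumed 12-hour minimum retention period; it only illustrates the timestamp arithmetic, not the server-side lock:

```python
import os
import tempfile
import time

MIN_RETENTION_SECS = 12 * 3600  # assumed default min-retention-period

# Create a sample file standing in for SavedData.dat on the share.
path = os.path.join(tempfile.mkdtemp(), "SavedData.dat")
with open(path, "w") as f:
    f.write("archive payload")

# Commit the file by pushing atime past now + min-retention-period,
# the programmatic equivalent of `touch -a -t <future> SavedData.dat`.
lock_until = time.time() + MIN_RETENTION_SECS + 3600
os.utime(path, (lock_until, os.stat(path).st_mtime))  # (atime, mtime)

# The equivalent of `ls -l --time=atime`: read back the retention date.
print(os.stat(path).st_atime > time.time() + MIN_RETENTION_SECS)  # True
```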
Now using Data Domain Operating system commands: 13. Disable the retention lock feature DDOS# filesys retention-lock disable Until retention lock has been re-enabled, it is now not possible to place a retention lock on files. However, any files that were previously retention-locked remain so.
Collection replication replicates min and max retention periods to the destination system. Directory replication does not replicate min and max retention periods to the destination system.
Replication resync will fail if the destination is not empty and retention lock is currently or was previously enabled on either the source or destination system.
Replication - CLI
20
The replication command sets up and manages the Data Domain Replicator for replicating data between Data Domain Systems. The Replicator is a licensed product. Contact Data Domain for license keys. Use the license add command to add one key to each Data Domain System in the Replicator configuration.
Collection Replication
Collection replication replicates the complete /backup directory from one Data Domain System (a source that receives data from backup systems) to another Data Domain System (a destination). Each Data Domain System is dedicated as a source or a destination and each can be in only one replication pair. The destination is a read-only system except for receiving data from the source. With collection replication:
A destination Data Domain system can be mounted as read-only for access from other systems.
A destination Data Domain system removed from a collection pair (with the replication break command) cannot be brought back into the pair or be used as a destination for another source until the file system is emptied with the filesys destroy command. Note that the filesys destroy command erases all Replicator configuration settings.
A destination Data Domain system removed from a collection pair becomes a stand-alone Data Domain system that can be used as a source for replication.
With collection replication, all user accounts and passwords are replicated from the source to the destination. Any changes made manually on the destination are overwritten after the next change is made on the source. Data Domain recommends making changes only on the source.
Directory Replication
Directory replication provides replication at the level of individual directories. Each Data Domain System can be the source or the destination for multiple directories and can also be a source for some directories and a destination for others. During directory replication, each Data Domain System can also perform normal backup and restore operations. Replication command options with
directory replication may target a single replication pair (source and destination directories) or may target all pairs that have a source or destination on the Data Domain System. Each replication pair configured on a Data Domain system is called a context. With directory replication:
The maximum number of contexts allowed on a DD1xx, DD4xx, or DD5xx system is twenty. The maximum on a DD690 system is sixty.
Be sure that the destination Data Domain system has enough network bandwidth and disk space to handle all traffic from the originators. A destination Data Domain system must have available storage capacity that is at least the size of the expected maximum size of the source directory. The destination must have adequate space.
When directory replication is initialized, or when using the replication resync operation, the total number of replicated source files for all contexts can be no more than one million with DD4xx, DD530, and DD510 Data Domain systems and no more than two million with DD560 and larger Data Domain systems.
A single destination Data Domain system can receive backups from both CIFS clients and NFS clients as long as separate directories are used for CIFS and NFS. Do not mix CIFS and NFS data under the same directory.
Source or destination directories may not overlap.
A destination directory that does not already exist is created automatically when replication is initialized. After replication is initialized, ownership and permissions of the destination directory are always identical to those of the source directory.
In the replication command options, a specific replication pair is always identified by the destination.
Replication throttle settings:
Apply to all replication pairs and all network interfaces on a system. Each throttle setting affects all replication pairs and network interfaces equally.
Affect only outbound network traffic.
Calculate the proper TCP buffer size for replication usage, using the bandwidth and delay settings together.
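The last point can be sketched numerically: the TCP buffer needed to keep a link full is the bandwidth-delay product. The function name and link figures below are illustrative, not taken from DD OS:

```python
def tcp_buffer_bytes(bandwidth_bps, delay_ms):
    """Bandwidth-delay product: the number of bytes 'in flight' on the
    link, and therefore how large the TCP buffer must be to keep the
    pipe full."""
    return int(bandwidth_bps / 8 * delay_ms / 1000)

# Illustrative figures: a T3 link (~45 Mbit/s) with 100 ms delay.
print(tcp_buffer_bytes(45_000_000, 100))  # 562500
```

A longer delay or faster link raises the required buffer proportionally, which is why bandwidth and delay must be set together.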
Using Context
Except for the replication add operation, all replication commands that can use a destination variable can take either the complete destination specification or a context number. Context numbers appear in the output from a number of commands, such as replication status.
Look for the number in the command output's first column, which has the heading CTX. To use the context number, preface the number with rctx://. For example, to display statistics for the destination labeled as context 2, use the following command: # replication show stats rctx://2
Configure Replicator
When configuring replication, note the following:
Note 1: When entering the path, do not use the mount point you see on your media servers. For example, if the media server shows the path as /ddata1/dir1, the path is actually /backup/dir1 on the appliance. The /ddata1 is your NFS mount point, and on the appliance all the directories created under the mount point are under the /backup directory.
Note 2: Before setting up replication, ensure that the hostname configured on each appliance is on the network and that each appliance can reach the other across the network. If all appliances are connected to their network switches, this is not a problem, but if you have direct connections from media server to Data Domain appliance, be careful about what each hostname resolves to.
Example: Suppose you do not connect all the LAN interfaces on your appliances to a switch, but instead cross-connect them directly to the media servers, with only one interface on the network (the GUI manager). In that case, you need to change the hostname to that IP address on both systems.
To configure a Replicator pair, use the replication add operation on both the source and destination Data Domain systems. Administrative users only.
replication add source source destination destination
The source and destination host names must be exactly the same as the names returned by the hostname command on the source and destination Data Domain systems. When a Data Domain system is at or near full capacity, the command may take 15 to 20 seconds to finish.
For collection replication:
The destination directory must be empty. Enter the filesys disable command on both the source and destination. On the destination only, enter the filesys destroy command.
Start the source and destination variables with col://. For example, enter a command similar to the following on the source and destination Data Domain Systems: replication add source col://hostA destination col://hostB
Enter the filesys enable command on both the source and destination. The Data Domain System file system must be enabled. The source directory must exist. The destination directory should be empty. Start the source and destination variables with dir:// and include the directory that is the replication target. For example, enter a command similar to the following on the source and destination Data Domain Systems: replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/hostA/dir2
When the host name for a source or destination does not correspond to the network name through which the Data Domain systems will communicate, use the replication modify connection-host command on the other system to direct communications to the correct network name. A sub-directory that is under a source directory in a replication context cannot be used in another replication context. Any directory can be in only one context at a time.
All these types of directory replication are the same (except for the destination name limitation below) when configuring replication and when using the replication command set. Examples in this chapter that use dir:// are also valid for pool://. (To avoid exposing the full directory names to the VTL cartridges, we created the UNI pool as a shorthand [UNI stands for User to Network Interface].) Replicating vtl pools and tape cartridges does not require the VTL license on the destination Data Domain system. Destination name limitation: The pool name must be unique on the destination, and the destination cannot include levels of directories between the destination hostname and the pool name. For example, a destination of pool://hostB/hostA/pool2 is not allowed.
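The destination-name limitation can be expressed as a small check. The helper is hypothetical; DD OS performs this validation internally:

```python
def valid_pool_destination(url):
    """A pool:// destination must be exactly hostname/poolname, with no
    intermediate directory levels between the two."""
    prefix = "pool://"
    if not url.startswith(prefix):
        return False
    parts = url[len(prefix):].split("/")
    return len(parts) == 2 and all(parts)

print(valid_pool_destination("pool://hostB/pool2"))        # True
print(valid_pool_destination("pool://hostB/hostA/pool2"))  # False
```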
Start the source and destination variables with pool:// and include the pool that is the replication target. For example, enter a command similar to the following on both Data Domain systems:
Version of the command using pool:
replication add source pool://hostA/pool2 destination pool://hostB/pool2
Version of the command using dir:
replication add source dir://hostA/backup/vtc/pool2 destination dir://hostB/backup/vtc/pool2
Start Replication
To start replication between a source and destination, use the replication initialize operation on the source. The command checks that the configuration and connections are correct and returns error messages if any problems appear. If the source holds a lot of data, the initialize operation can take many hours. Consider putting both Data Domain Systems in the Replicator pair in the same location with a direct link to cut down on initialization time. A destination variable is required. Administrative users only. replication initialize destination For a successful initialization with directory replication:
The source directory must exist. The destination directory must be empty.
1. Run the filesys destroy command on the destination.
2. Configure replication on the source and on the destination.
3. Run the filesys enable command on the destination.
4. Run the replication initialize command on the source.
Test environments at Data Domain give the following guidelines for estimating the time needed for replication initialization. Note that the following are guidelines only and may not be accurate in specific production environments. Directory Replication Initialization
Over a T3, 100 ms WAN, performance is about 40 MiB/sec. of pre-compressed data, which gives data transfer of: 40 MiB/sec. = about 25.6 seconds/GiB = about 3.3 TiB/day
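These rate figures convert as follows when all units are kept in base 2 (the function and constant names are illustrative):

```python
MIB_PER_GIB = 1024
MIB_PER_TIB = 1024 * 1024

def init_estimate(rate_mib_per_sec):
    """Convert a replication rate into seconds per GiB and TiB per day."""
    secs_per_gib = MIB_PER_GIB / rate_mib_per_sec
    tib_per_day = rate_mib_per_sec * 86_400 / MIB_PER_TIB
    return secs_per_gib, tib_per_day

secs, tib = init_estimate(40)   # the ~40 MiB/sec WAN figure above
print(round(secs, 1), round(tib, 2))  # 25.6 3.3
```

Doubling the rate to the ~80 MiB/sec LAN figure doubles the daily transfer, consistent with the note below that LAN throughput is about twice the T3 WAN rate.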
Note MiB = mebibytes, the base 2 equivalent of megabytes. GiB = gibibytes, the base 2 equivalent of gigabytes. TiB = tebibytes, the base 2 equivalent of terabytes.
Over a gibibit (the base 2 equivalent of gigabit) LAN, performance is about 80 MiB/sec. of pre-compressed data, which gives data transfer of about double the rate for a T3 WAN.
Over a WAN, performance depends on the line speed. Over a gibibit LAN, performance is about 70 MiB/sec. of compressed data.
Suspend Replication
To temporarily halt the replication of data between source and destination, use the replication disable operation on either the source or the destination. On the source, the operation stops the sending of data to the destination. On the destination, the operation stops serving the active connection from the source. If the file system is disabled on either Data Domain System when replication is disabled, replication remains disabled even after the file system is restarted. Administrative users only. The replication disable command is for short-term situations only. A filesys clean operation may proceed very slowly on a replication context when that context is disabled, and cannot reclaim space for files that are deleted but not yet replicated. Use the replication break command to permanently stop replication and to avoid slowing filesys clean operations. replication disable {destination | all} Note Using the command "replication break" on a collection replication replica or recovering originator will require a "filesys destroy" on that machine before the file system can be enabled on it again.
Resume Replication
To restart replication that is temporarily halted, use the replication enable operation on the Data Domain System that was temporarily halted. On the source, the operation resumes the sending of data to the destination. On the destination, the operation resumes serving the active connection from the source. If the file system is disabled on either Data Domain System when replication is enabled, replication is enabled when the file system is restarted. Administrative users only. replication enable {destination | all}
Note If the source Data Domain system received large amounts of new or changed data during the halt, resuming replication may significantly slow down filesys clean operations.
Remove Replication
To remove either the source or destination Data Domain System from a Replicator pair or to remove all Replicator configurations from a Data Domain system, use the replication break operation. A destination variable or all is required.
Always run the filesys disable command before the break operation and the filesys enable command after.

With collection replication, the destination is left as a stand-alone read/write Data Domain System that can then be used as a source. The destination cannot be brought back into the replication pair or used as a destination for another source until the file system is emptied with the filesys destroy command.

With directory replication, a destination directory must be empty to be used again (whether with the original source or with a different source); alternatively, use replication resync.

replication break {destination | all}
Note Using the command "replication break" on a collection replication replica or recovering originator will require a "filesys destroy" on that machine before the file system can be enabled on it again.
Replication - CLI
To recover data from a replica back to a replacement source, use the replication recover operation on the new source. With collection replication, first use the filesys disable and filesys destroy operations on the new source. With directory replication, the target directory on the source must be empty. See Procedure: Set Up and Start Many-to-One Replication on page 275. Do not use the operation on a destination. If the replication break command was run earlier, the destination cannot be used to recover a source. A destination variable is required. Also see Procedure: Replace a Directory Source - New Name on page 275 for an example of using the recover option when replacing a source Data Domain System.

replication recover destination
Use the replication watch command to display the progress of the recovery process.
Note If you try to replicate to a Data Domain system that has retention-lock enabled, and the destination isn't empty, replication resync won't work.
Abort a Resync
To stop an ongoing resync operation, use the replication abort resync command on both the source and destination directory replication Data Domain systems.

replication abort resync destination
Throttling
Add a Scheduled Throttle Event
To change the rate of network bandwidth used by replication, use the throttle add operation. The default network bandwidth use is unlimited.

replication throttle add sched-spec rate

The sched-spec must include:
- One or more three-letter days of the week (such as mon, tue, or wed) or the word daily (to set the schedule for every day of the week).
- A time of day in 24-hour military time.

The rate is a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification; for example, 2000KiB. The default rate unit is bits per second. In the rate variable:
- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change.

For example, the following command limits replication to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m.:

# replication throttle add mon thu 0600 20KiB
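The unit rules above can be sketched in code. The helper below is hypothetical (not part of the Data Domain CLI); it converts a numeric rate spec such as 2000KiB into raw bits per second using the unit definitions from this section, and checks the enforced minimum:

```python
# Hypothetical helper: convert a throttle rate spec like "2000KiB" into raw
# bits per second, using the unit tags defined in this section. Assumes a
# numeric spec (the words unlimited/disable are handled by the CLI itself).
UNIT_BITS = {
    "bps": 1, "b": 1,                        # raw bits per second
    "Kibps": 1024, "Kib": 1024, "K": 1024,   # kibibits per second
    "Bps": 8, "B": 8,                        # bytes per second
    "KiBps": 8192, "KiB": 8192,              # kibibytes per second (1024 * 8 bits)
}
MIN_RATE_BITS = 98304                        # enforced minimum: 12 KiBps

def rate_to_bits(spec: str) -> int:
    """Split the numeric part from the unit tag and scale to bits/s."""
    for unit in sorted(UNIT_BITS, key=len, reverse=True):  # longest tag first
        if spec.endswith(unit):
            return int(spec[: -len(unit)]) * UNIT_BITS[unit]
    return int(spec)                         # bare number defaults to bits per second

print(rate_to_bits("20KiB"))                 # 20 * 8192 = 163840 bits/s
print(rate_to_bits("20KiB") >= MIN_RATE_BITS)
```

Note how 12 KiBps works out to exactly the 98,304 bits per second minimum: 12 * 1024 bytes * 8 bits.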
Replication runs at the given rate until the next scheduled change or until a new throttle command forces a change. The default, with no scheduled changes, is to run as fast as possible at all times.

The add operation may change the current rate. For example, if on Monday at noon the current rate is 20 KiB, and the schedule that set the current rate started at mon 0600, a new schedule entry for Monday at 1100 with a rate of 30 KiB (mon 1100 30KiB) takes effect immediately.

Note The system enforces a minimum rate of 98,304 bits per second (12 KiBps).
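The lookup behavior described above (the effective rate is the most recent scheduled entry at or before the current weekly time) can be sketched as follows. This is an illustrative model of the semantics implied by the text, not the appliance's implementation:

```python
# Sketch of the assumed schedule semantics: the effective throttle is the
# most recent entry at or before "now" within the weekly cycle, wrapping
# around to the last entry of the previous week if nothing earlier exists.
DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def weekly_minute(day: str, hhmm: str) -> int:
    """Minutes since Monday 00:00 for a (day, military-time) pair."""
    return DAYS.index(day) * 1440 + int(hhmm[:2]) * 60 + int(hhmm[2:])

def effective_rate(schedule, day, hhmm, default="unlimited"):
    """schedule: list of (day, hhmm, rate) tuples."""
    if not schedule:
        return default
    now = weekly_minute(day, hhmm)
    past = [e for e in schedule if weekly_minute(e[0], e[1]) <= now]
    pick = (max(past, key=lambda e: weekly_minute(e[0], e[1])) if past
            else max(schedule, key=lambda e: weekly_minute(e[0], e[1])))  # wrap
    return pick[2]

sched = [("mon", "0600", "20KiB"), ("mon", "1100", "30KiB")]
print(effective_rate(sched, "mon", "1200"))   # 30KiB: the mon 1100 entry applies
```

This also models the example in the text: adding mon 1100 30KiB while the mon 0600 20KiB entry is active changes the rate for any time from 1100 onward.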
Set a Current Throttle Rate

To change the rate of network bandwidth currently used by replication, use the throttle set current operation. The rate remains in effect until the next scheduled change or until another throttle command changes it.

replication throttle set current rate

In the rate variable:
- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change.

As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set current 2000KiB

Note The system enforces a minimum rate of 98,304 bits per second (12 KiBps).
Delete a Scheduled Throttle Event

To remove a scheduled throttle entry, use the throttle del operation.

replication throttle del sched-spec

The sched-spec must include:
- One or more three-letter days of the week (such as mon, tue, or wed) or the word daily to delete all entries for the given time.
- A time of day in 24-hour military time.

For example, the following command removes an entry for Mondays at 1100:

# replication throttle del mon 1100

The command may change the current rate. For example, assume that on Monday at noon the current rate is 30 KiB (Kibibytes, the base 2 equivalent of KB or Kilobytes), and the schedule that set the current rate started at mon 1100. If you now delete the scheduled change for Monday at 1100 (mon 1100), the replication rate immediately falls back to the previous scheduled change, such as mon 0600 20KiB.
Set a Throttle Override

To set a rate that overrides scheduled throttle changes, use the throttle set override operation. The rate remains in effect until removed with the replication throttle reset override command.

replication throttle set override rate

In the rate variable:
- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.

The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change.

As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set override 2000KiB

Note The system enforces a minimum rate of 98,304 bits per second (12 KiBps).
Reset Throttle Settings

To remove throttle settings, use the throttle reset operation.

replication throttle reset {current | override | schedule | all}

- A reset of current removes the rate set by the replication throttle set current command. The rate returns to a scheduled rate or to the default if no rate is scheduled.
- A reset of override removes the rate set by the replication throttle set override command. The rate returns to a scheduled rate or to the default if no rate is scheduled. The default network bandwidth use is unlimited.
- A reset of schedule removes all scheduled change entries. The rate remains at a current or override setting, if either is active, or returns to the default of unlimited.
- A reset of all removes any current or override settings and removes all scheduled change entries, returning the system to the default, which is unlimited.
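The reset descriptions imply a precedence among the throttle settings: an override or current rate beats the schedule, which beats the default of unlimited. A minimal sketch of that fallback order, assuming an override outranks a current setting (the text does not state their relative order explicitly):

```python
# Sketch of the precedence implied by the reset descriptions. Assumption:
# override > current > schedule > default. Field names are illustrative,
# not the CLI's.
def active_rate(override=None, current=None, scheduled=None):
    for rate in (override, current, scheduled):
        if rate is not None:
            return rate
    return "unlimited"          # the documented default

print(active_rate(override="2000KiB", current="500KiB"))  # override wins
print(active_rate(scheduled="20KiB"))                     # falls back to schedule
print(active_rate())                                      # default: unlimited
```

Resetting a setting corresponds to clearing one of the arguments, so the rate falls through to the next level, exactly as each reset bullet above describes.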
If you set either bandwidth or delay, you must set both. Bandwidth and delay must be set on both sides of the connection. For a destination with multiple sources, use the values from the source with the maximum bandwidth-delay product.
1. Prepare: Find the actual bandwidth for each server. Find the actual network delay values for each server (for example, by using the ping command).
2. Disable replication on all servers:
   replication disable all
3. For each server, wait until replication status reports disconnected:
   replication status
4. For each server, set the bandwidth to its actual value, in bytes per second:
   replication option set bandwidth value
   Note The replication option set of bandwidth and network delay only needs to be executed once on any Data Domain system, even with multiple replication server contexts. The setting is global to the box.
5. For each server, set the network delay to its actual value, in milliseconds:
   replication option set delay value
6. Re-enable replication on all servers:
   replication enable all
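The "maximum product" rule from the previous section can be illustrated numerically. The bandwidth-delay product is the amount of data in flight on a link; for a destination with several sources, the pair of values with the largest product sizes the buffers. The hosts and figures below are made-up examples:

```python
# Illustration of the bandwidth-delay guidance: for a destination with
# multiple sources, use the source whose bandwidth * delay product is
# largest. Values are hypothetical (bytes/s and milliseconds).
sources = {
    "ddr01": (12_500_000, 80),    # ~100 Mbit/s link, 80 ms delay
    "ddr02": (5_000_000, 250),    # slower link, but much higher delay
}

def bdp(bandwidth_bytes, delay_ms):
    """Bandwidth-delay product in bytes: data in flight on the link."""
    return bandwidth_bytes * delay_ms / 1000

best = max(sources, key=lambda s: bdp(*sources[s]))
print(best, bdp(*sources[best]))   # ddr02: the slower, higher-delay link wins
```

Note that the slower link can still have the larger product when its delay is high enough, which is why the rule is stated in terms of the product rather than raw bandwidth.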
connection-host command. The destination host name may not resolve to the correct IP address for the connection when connecting to an alternate interface on the destination or when a connection passes through a firewall.

Enabled The replication process is yes (enabled and available to replicate data) or no (disabled and not available to replicate data).

Display

To display the configuration parameters, use the show config operation.

replication show config [destination | all]

The display with no destination variable or with the all option is similar to the following:

# replication show config all
CTX  Source                               Destination                          Connection Host and Port  Enabled
---  -----------------------------------  -----------------------------------  ------------------------  -------
1    dir://host2.company.com/backup/dir2  dir://host3.company.com/backup/dir3  host3.company.com         Yes
2    dir://host3.company.com/backup/dir3  dir://host2.company.com/backup/dir2  host3.company.com         Yes
---  -----------------------------------  -----------------------------------  ------------------------  -------

On the replica, the per-context display is modified to include an asterisk; if at least one context is marked with an asterisk, the footnote "Used for recovery only" is also displayed.
The display with a destination variable is similar to the following. The all option returns a similar display for each context.

# replication show config dir://host3.company.com/backup/dir2
CTX:          2
Source:       dir://host2.company.com/backup/host2
Destination:  dir://host3.company.com/backup/host2
Display Performance
To display current replication activity, use the replication show performance command. The default interval is two seconds. Network (KB/s) is the amount of compressed data per second transferred over the network.

replication show performance {destination | all} [interval sec] [count count]

For example:

# replication show performance rctx://2
05/02 09:00:38  rctx://2
 Pre-comp   Network
   (KB/s)    (KB/s)
---------  --------
   163469       752
   163469       777
   170054       756
   176351       824
Display Status
To display Replicator configuration information and the status of replication operations, use the replication status operation.

replication status [destination | all]

With no option, the display is similar to the following:

# replication status
CTX  Destination                          Enabled  Connected         Lag
---  -----------------------------------  -------  ----------------  ------
1    dir://host2.company.com/backup/dir2  yes      Thu Jan 12 17:06  00:00
2    dir://host3.company.com/backup/dir3  yes      disconnected      698:32
---  -----------------------------------  -------  ----------------  ------
Enabled The enabled state (yes or no) of replication for each replication pair.

Connected The most recent connection date and time or connection state for a replication pair.

Lag Backup data on a replication source is given a time stamp when the data is received from the originating client. The difference between that time and the time the same data is received by the replication destination is the lag. Lag is not the time needed to complete replication. Lag is a record of how long the most recently replicated data was on the source before being sent to the destination. Lag can immediately drop from a high to a low number if the last record processed was on the source for a long time before being replicated. If data was on the source for less than five minutes before being replicated, or if the source is not sending new data, a generic message of Less than 5 minutes appears. Output from the replication status command shows whether or not any data remains to be sent from the source.

With a destination variable, the display is similar to the following. The all option returns a similar display for each context. The display includes the information above plus:

# replication status dir://host2.company.com/backup/dir2
Mode:                    source
Destination:             dir://ccm34.datadomain.com/backup/dir2
Enabled:                 yes
Local filesystem status: enabled
Connection:              connected since Thu Jan 12 17:06:41
State:                   normal
Error:                   no error
Lag:                     less than 5 minutes
Current throttle:        unlimited

Mode The role of the local system: source or destination.

Local Filesystem Status The enabled/disabled status of the local file system.

Connected Includes both the state and the date and time of the last change in the connection state.

State The state of the replication process.

Error A listing of any errors in the replication process.

Current Throttle The current throttle setting.
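The lag definition above can be made concrete with a small sketch: lag is the difference between the time data arrived on the source and the time the same data arrived on the destination, displayed generically when under five minutes. This is an illustrative model, not the appliance's code:

```python
# Sketch of the documented lag display: under five minutes a generic
# message appears; otherwise lag is shown as hours:minutes (e.g. 698:32).
from datetime import datetime, timedelta

def format_lag(received_on_source: datetime, received_on_dest: datetime) -> str:
    lag = received_on_dest - received_on_source
    if lag < timedelta(minutes=5):
        return "Less than 5 minutes"
    total_minutes = int(lag.total_seconds() // 60)
    return f"{total_minutes // 60:02d}:{total_minutes % 60:02d}"

src = datetime(2008, 1, 12, 17, 0)
print(format_lag(src, src + timedelta(minutes=3)))              # Less than 5 minutes
print(format_lag(src, src + timedelta(hours=698, minutes=32)))  # 698:32
```

The second call reproduces the 698:32 figure from the status display above: a context that has been disconnected for weeks accumulates lag in the hundreds of hours.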
Display Statistics
Replication statistics give the following information:

- CTX: The context number for directory replication, or 0 (zero) for collection replication.
- Destination: The replication destination.
- Network bytes sent: The count of bytes sent over the network. Does not include TCP/IP headers. Does include internal replication control information and metadata, as well as file system data.
- Post-compressed bytes sent: The same as network bytes sent.
- Pre-compressed bytes sent: The sum of the sizes of the files replicated on this context. Note: this includes logical bytes associated with the file currently being replicated.
- Post-compressed bytes received: Network bytes (as defined above) received.
- Sync'ed-as-of time: The timestamp of the replication log record most recently executed on the replica. The timestamp indicates when the log record was generated on the originator.
- Pre-compressed bytes remaining (directory replication only): The sum of the sizes of the files remaining to be replicated for this context. Note: this includes the entire logical size of the file currently being replicated, so if a very large file is being replicated, this number may not change for a noticeable period of time; it only changes after the current file finishes.
- Compression ratio: The ratio of pre-compressed bytes transferred to network bytes transferred.
- Compressed data remaining (collection replication only): The amount of compressed file system data remaining to be sent.

Display

To display Replicator statistics for all replication pairs or for a specific destination pair, use the replication show stats operation.

replication show stats [destination | all]

The display is similar to the following:

# replication show stats
To display statistics for the destination labeled as context 1, use the following command:

# replication show stats rctx://1

The display is similar to the following:
CTX:                             1
Destination:                     dir://33.company.com/backup/rig14_8
Network bytes sent:              3,904
Pre-compressed bytes sent:       612
Compression ratio:               0.0
Sync'ed-as-of time:              Tue Dec 11 18:30
Pre-compressed bytes remaining:  0
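The compression ratio field is defined above as pre-compressed bytes transferred divided by network bytes transferred. A hypothetical computation (the figures below are illustrative, not taken from the display):

```python
# Hypothetical computation of the "Compression ratio" statistic:
# pre-compressed (logical) bytes divided by bytes actually sent on the wire.
def compression_ratio(pre_comp_bytes: int, network_bytes: int) -> float:
    if network_bytes == 0:
        return 0.0          # nothing sent yet; avoid dividing by zero
    return round(pre_comp_bytes / network_bytes, 1)

print(compression_ratio(163_469_000, 752_000))   # high ratio: mostly duplicate data
print(compression_ratio(0, 3_904))               # no file data replicated yet
```

A ratio far above 1 means most of the logical data was deduplicated or compressed away; a ratio near 0, as in the display above, means the context has moved control traffic but essentially no file data yet.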
Hostname Shorthand
With all Replicator commands that use a hostname to identify the source or destination, the hostname can be left out if it refers to the local system. Use the same three slashes ( /// ) that would bracket the hostname if the hostname were included. For example, the replication add command, when given on the source Data Domain system, could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
replication add source dir:///backup/dir2 destination dir://hostB/backup/dir2

The same command given on the destination Data Domain system could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

replication add source dir://hostA/backup/dir2 destination dir:///backup/dir2

Use the same format with collection replication. Add a third slash, even though a third slash is not otherwise used with collection replication. For example, the replication add command for collection replication entered on the source could be entered in either of the following ways:

replication add source col://hostA destination col://hostB

replication add source col:/// destination col://hostB
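The shorthand rule amounts to inserting the local hostname between the second and third slashes. A small hypothetical expander (not part of the CLI) makes the rule explicit for both URL schemes:

```python
# Hypothetical expansion of the three-slash shorthand: dir:///path and
# col:/// stand for the local system, so the local hostname is inserted
# between the second and third slashes. Hostnames here are examples.
def expand_shorthand(url: str, local_host: str) -> str:
    for scheme in ("dir://", "col://"):
        rest = url[len(scheme):]
        if url.startswith(scheme) and rest.startswith("/"):
            expanded = scheme + local_host + rest
            # col:// URLs carry no path, so drop the leftover slash
            return expanded.rstrip("/") if scheme == "col://" else expanded
    return url  # already has an explicit hostname

print(expand_shorthand("dir:///backup/dir2", "hostA"))  # dir://hostA/backup/dir2
print(expand_shorthand("col:///", "hostA"))             # col://hostA
```

URLs that already name a host pass through unchanged, matching the examples above where either form is accepted.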
Run the following command on both the source and destination Data Domain Systems: replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
Run the following command on the source. The command checks that both Data Domain Systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain Systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain Systems can communicate. replication initialize
Run the following command on both the source and destination Data Domain Systems: filesys disable
Run the following command on both the source and destination Data Domain Systems. See Configure Replicator on page 251 for the details of using the command: replication add source col://hostA destination col://hostB
Run the following command on both the source and destination Data Domain Systems: filesys enable
Run the following command on the source. The command checks that both Data Domain Systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain Systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain Systems can communicate. replication initialize
Run both of the following commands on hostA and hostB:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
replication add source dir://hostB/backup/dir1 destination dir://hostA/backup/dir1
Run the following command on hostA and hostC: replication add source dir://hostA/backup/dir2 destination dir://hostC/backup/dir2
Run the following command on hostB and hostC: replication add source dir://hostB/backup/dir1 destination dir://hostC/backup/dir1
If the new source has any data in the target directories, delete all data from the directories.

Run the following commands on the destination:

filesys disable
replication modify dir://hostB/backup/dir2 source-host hostC
replication reauth dir://hostB/backup/dir2
filesys enable

Run the following commands on the new source:

replication add source dir://hostC/backup/dir2 destination dir://hostB/backup/dir2
replication recover dir://hostB/backup/dir2
Use the following command to see when the recovery is complete. Note the State entry in the output: State is normal when recovery is done and recovering while recovery is in progress. Also, a messages log file entry, replication recovery completed, is sent when the process is complete. The byte count may be equal on both sides, but the recovery is not complete until data integrity is verified. The recovering directory is read-only until recovery finishes.

# replication status dir://hostC/backup/dir2
CTX:                     2
Mode:                    source
Destination:             dir://hostC/backup/dir2
Enabled:                 yes
Local filesystem status: enabled
Connection:              connected since Sat Apr 8 23:38:11
State:                   recovering
Error:                   no error
Destination lag:         less than 5 minutes
Current throttle:        unlimited
If the new source was using the VTL feature, use the following command on the source:

vtl disable
Run the following command on the destination and the new source: filesys disable
Run the following command only on the new source to clear all data from the file system: filesys destroy
Run the following commands on the new source:

replication add source col://hostA destination col://hostB
replication recover

See the last bullet in the previous procedure for checking the progress of the recovery.
On the source and destination Data Domain systems, run commands similar to the following:

filesys disable
replication break dir://hostB/backup/dir2
filesys destroy
filesys enable

On the destination, run a file system cleaning operation:

filesys clean
On both the source and destination, add back the original context: replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
On the source, run a replication resynchronization operation for the target context: replication resync dir://hostB/backup/dir2
Over a T3, 100 ms WAN, performance is about 100 MiB/sec., which gives data transfer of:

100 MiB/sec. = 10 seconds/GiB = 8.6 TiB/day

Note MiB=Mebibytes, the base 2 equivalent of Megabytes. GiB=Gibibytes, the base 2 equivalent of Gigabytes. TiB=Tebibytes, the base 2 equivalent of Terabytes.

Over a gibibit (the base 2 equivalent of gigabit) LAN, performance is about 120 MiB/sec., which gives data transfer of:

120 MiB/sec. = 8.3 seconds/GiB = 10.3 TiB/day
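The arithmetic behind these figures can be checked directly. Strict binary units give slightly lower per-day numbers than the rounded values above, which appear to treat a million MiB as one TiB; the sketch below computes both metrics from a sustained rate:

```python
# Checking the transfer-rate arithmetic. 1 GiB = 1024 MiB; one day = 86400 s.
# Strict binary conversion gives slightly lower TiB/day figures than the
# guide's rounded values (which appear to divide MiB/day by 10^6).
def seconds_per_gib(mib_per_sec: float) -> float:
    return 1024 / mib_per_sec

def tib_per_day(mib_per_sec: float) -> float:
    return mib_per_sec * 86400 / 1024 / 1024

print(round(seconds_per_gib(100), 2))  # 10.24 s/GiB (guide rounds to 10)
print(round(tib_per_day(100), 2))      # 8.24 TiB/day in strict binary units
print(round(seconds_per_gib(120), 2))  # 8.53 s/GiB
print(round(tib_per_day(120), 2))      # 9.89 TiB/day in strict binary units
```

For sizing purposes the rounded figures in the text are close enough; the point is that daily throughput scales linearly with the sustained rate.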
Procedure: Convert Collection Replication to Directory Replication
Use the following procedure to convert a collection replication pair (source is hostA, destination is hostB) to directory replication.
Run commands similar to the following on both of the collection replication systems:

filesys disable
replication break col://hostB
filesys destroy
filesys enable

Run a command similar to the following on both systems:

replication add source dir://hostA/backup destination dir://hostB/backup/hostA
Use the replication watch command to display the progress of the conversion process.
Procedure: Seeding
A Data Domain System that already holds data in its file system can be used as a source Data Domain System for replication. Part of setting up replication with such a Data Domain System is to transfer the current data on the source Data Domain System to the destination Data Domain System. The procedure for the transfer is called seeding. As seeding over a WAN may need large amounts of bandwidth and time, Data Domain provides alternate seeding procedures for the following replication configurations:
- One-to-one: One source Data Domain System replicates data to one destination Data Domain System. Replication can be collection or directory type.
- Bidirectional: A source Data Domain System, such as ddr01, replicates data to the destination ddr02. At the same time, ddr02 is a source for replication to ddr01. Each Data Domain System is a source for its own data and a destination for the other Data Domain System's data. Bidirectional replication can be directory replication only.
- Many-to-one: More than one source Data Domain System replicates data to a single destination Data Domain System. Many-to-one replication can be directory replication only.
One-to-One
For collection replication, the destination Data Domain System file system must be empty. In the following example, ddr01 is the source Data Domain System and ddr02 is the destination.

1. Ship the destination Data Domain System (ddr02) to the source Data Domain System (ddr01) site.
2. Follow the standard Data Domain installation process to install the destination Data Domain System.
3. Connect the Data Domain Systems with a direct link to cut down on initialization time.
4. Boot up the destination Data Domain System. (The source Data Domain System should already be in service.)
5. Enter the following command on both Data Domain Systems:
   # filesys disable
6. Enter a command similar to the following on both Data Domain Systems:
   # replication add source col://ddr01.company.com destination col://ddr02.company.com
7. Enter the following command on both Data Domain Systems:
   # filesys enable
8. On the source, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
   # replication initialize col://ddr02.company.com
9. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
10. On the destination, enter the following command:
    # system poweroff
11. Move the destination Data Domain System to its permanent location, company2.com in this example.
12. Boot up the destination Data Domain System.
13. On the destination Data Domain System, run the config setup command and make any needed changes. For example, the system hostname is a fully-qualified domain name that may be different in the new location.
14. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
    # filesys disable
    # replication modify col://ddr02.company.com destination-host ddr02.company2.com
    # filesys enable
15. On ddr01, enter commands similar to the following to change the destination hostname:
    # filesys disable
    # replication modify col://ddr02.company.com destination-host ddr02.company2.com
    # filesys enable

For directory replication, the source directory must exist and the destination directory must be empty. In the following example, ddr01 is the source Data Domain System and ddr02 is the destination.

1. Ship the destination Data Domain System (ddr02) to the source Data Domain System (ddr01) site, company.com in this example.
2. Follow the standard Data Domain installation process to physically install ddr02.
3. Connect the Data Domain Systems with a direct link to cut down on initialization time.
4. Boot up ddr02. (The source Data Domain System should already be in service.)
5. Configure ddr02 using the standard Data Domain process.
6. Enter a command similar to the following on both Data Domain Systems:
   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company.com/backup/data01
7. On ddr01, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
   # replication initialize dir://ddr02.company.com/backup/data01
8. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
9. On ddr02, enter the following command:
   # system poweroff
10. Move ddr02 to its permanent location, company2.com in this example.
11. Boot up the destination Data Domain System.
12. On ddr02, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.
13. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
    # filesys disable
    # replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
    # filesys enable
14. On ddr01, enter commands similar to the following to change the destination host to the new hostname:
    # filesys disable
    # replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
    # filesys enable
Bidirectional
With bidirectional replication, the seeding process uses three Data Domain Systems: one permanent Data Domain System at each customer site and one temporary Data Domain System that is physically moved from one site to another. Bidirectional replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first permanent Data Domain System that is replicated, ddr02 for the second permanent Data Domain System that is replicated, and ddr-temp for the Data Domain System that is moved from one site to another. Bidirectional replication is done in eight phases:
1. Copy source data from the first permanent Data Domain System (ddr01) to the temporary Data Domain System (ddr-temp).
2. Move ddr-temp to the site of the second permanent Data Domain System (ddr02).
3. Transfer the ddr01 source data from ddr-temp to ddr02.
4. Set up and start replication between ddr01 and ddr02 for ddr01 source data.
5. Copy the ddr02 source data to ddr-temp.
6. Move ddr-temp back to the ddr01 site.
7. Transfer the ddr02 source data to ddr01.
8. Set up and start replication between ddr02 and ddr01 for ddr02 source data.
Copy source data from the first Data Domain System (ddr01):

1. Ship the temporary Data Domain System (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain Systems. Note the use of an added temp directory for the destination.
   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following.
   # replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
   # filesys disable
   # replication break dir://ddr-temp.company.com/backup/temp/data01
   # filesys destroy
   # filesys enable
10. On ddr-temp, enter the following command:
    # system poweroff

Move the temporary Data Domain System:

1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.
Transfer the ddr01 source data from ddr-temp to ddr02:

1. Set up replication with ddr-temp as the source and ddr02 as the destination. Enter a command similar to the following on both ddr-temp and ddr02. Note that the added temp directory is used for both source and destination.
   # replication add source dir://ddr-temp.company2.com/backup/temp/data01 destination dir://ddr02.company2.com/backup/temp/data01
2. On ddr-temp, enter a command similar to the following to transfer data to ddr02:
   # replication initialize dir://ddr02.company2.com/backup/temp/data01
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-temp and ddr02, enter commands similar to the following to break replication:
   # filesys disable
   # replication break dir://ddr02.company2.com/backup/temp/data01
   # filesys destroy
   # filesys enable

Set up and start replication between ddr01 and ddr02 for data from ddr01. Note that the temp directory is NOT used for either the source or the destination.

1. Enter a command similar to the following on both ddr01 and ddr02 to set up replication:
   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company2.com/backup/data01
2. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr02, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr02 remains on ddr02 and is not replicated again.
   # replication initialize dir://ddr02.company2.com/backup/data01
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. If ddr-temp has space for the current ddr01 data and space for the ddr02 data, leave ddr-temp as is. Take into account that any common data between the two data sets gets compressed on ddr-temp, using less space. If ddr-temp does not have enough space for both sets of data, mount or map the ddr-temp directory /backup from another system and delete /temp.

Copy the ddr02 source data to ddr-temp. ddr-temp should still be installed at the ddr02 site and communicating with ddr02.

1. Enter a command similar to the following on both Data Domain Systems. Note the use of the added temp directory for both the source and the destination.
   # replication add source dir://ddr02.company2.com/backup/temp/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following.
   # replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of source data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
   # filesys disable
   # replication break dir://ddr-temp.company2.com/backup/temp/data02
   # filesys destroy
   # filesys enable
5. On ddr-temp, enter the following command:
   # system poweroff

Move the temporary Data Domain System:

1. Move ddr-temp back to the ddr01 site.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
Replication - CLI
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, a fully-qualified domain name that may be different in the current location.
Transfer the ddr02 source data from ddr-temp to ddr01.
1. Set up replication with ddr-temp as the source and ddr01 as the destination. Enter a command similar to the following on both ddr-temp and ddr01. Note that the added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company.com/backup/temp/data02 destination dir://ddr01.company.com/backup/temp/data02
2. On ddr-temp, enter a command similar to the following to transfer the ddr02 source data to ddr01:
# replication initialize dir://ddr01.company.com/backup/temp/data02
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr01.company.com/backup/temp/data02
# filesys destroy
# filesys enable

Set up and start replication between ddr02 and ddr01 for data from ddr02. Note that the temp directory is NOT used for either the source or the destination.
1. Enter a command similar to the following on both ddr02 and ddr01 to set up replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr01.company.com/backup/data02
2. On ddr02, enter a command similar to the following to initialize replication. Initialization should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr01, in this example /backup/data02. Backup application data that was transferred from ddr-temp to ddr01 remains on ddr01 and is not replicated again.
# replication initialize dir://ddr01.company.com/backup/data02
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. On ddr02, mount or map the directory /backup from another system and delete /temp.
5. On ddr01, mount or map the directory /backup from another system and delete /temp.
Many-to-One
With many-to-one replication, the seeding process uses a temporary Data Domain System to receive data from each source Data Domain System site. The temporary Data Domain System is physically moved from one source site to another and then moved to the destination Data Domain System site. Many-to-one replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first Data Domain System that is replicated, ddr02 for the second Data Domain System that is replicated, ddr-dest for the single destination Data Domain System, and ddr-temp for the Data Domain System that is moved from site to site. Many-to-one replication is done in six phases for the example in this section:
1. Copy source data from the first source Data Domain System (ddr01) to the temporary Data Domain System (ddr-temp).
2. Move ddr-temp to the second source Data Domain System (ddr02) site.
3. Copy source data from ddr02 to ddr-temp.
4. Move ddr-temp to the site of the destination Data Domain System (ddr-dest).
5. Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.
6. Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest.
Copy source data from the first Data Domain System (ddr01):
1. Ship the temporary Data Domain System (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain Systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following.
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Use the replication initialize command to see initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys destroy
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain System to the second (ddr02) source site.
1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
Data Domain Operating System User Guide
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, a fully-qualified domain name that may be different in the new location.
Copy source data from the second source Data Domain System (ddr02):
1. Enter a command similar to the following on ddr-temp and ddr02. Note the use of an added temp directory for the destination.
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following.
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02
# filesys destroy
# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff
Move the temporary Data Domain System to the destination (ddr-dest) site.
1. Move ddr-temp to the ddr-dest site, company3.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr-dest and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr-dest should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, a fully-qualified domain name that may be different in the new location.
Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.
1. Set up a replication context with ddr-temp as the source and ddr-dest as the destination. Enter a command similar to the following on both ddr-temp and ddr-dest. Note that the added temp directory is used for both sources and destinations.
# replication add source dir://ddr-temp.company3.com/backup/temp destination dir://ddr-dest.company3.com/backup/temp
2. On ddr-temp, enter a command similar to the following to transfer the ddr01 and ddr02 source data to ddr-dest:
# replication initialize dir://ddr-dest.company3.com/backup/temp
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-dest and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-dest.company3.com/backup/temp
# filesys destroy
# filesys enable

Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest. Note that the temp directory is NOT used for either the sources or the destinations.
1. Enter a command similar to the following on both ddr01 and ddr-dest to set up ddr01 replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-dest.company3.com/backup/data01
2. Enter a command similar to the following on both ddr02 and ddr-dest to set up ddr02 replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-dest.company3.com/backup/data02
3. On ddr01, enter a command similar to the following to initialize replication. Initialization should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr-dest, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data01
4. On ddr02, enter a command similar to the following to initialize replication. As in the previous step, initialization should take a short time. The metadata goes to /backup/data02 on ddr-dest; backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data02
5. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
6. On ddr-dest, mount or map the directory /backup from another system and delete the temporary directory.
Migration
The migration command copies all data from one Data Domain system to another and may also copy replication contexts (configurations). Use the command when upgrading to a larger capacity Data Domain system. Migration is usually done in a LAN environment. See the procedures at the end of this section for using migration with a Data Domain system that is part of a replication pair.
All data under /backup is always migrated and exists on both systems after migration. After migrating replication contexts, the migrated contexts still exist on the migration source. After migrating a context, break replication for that context on the migration source. Do not run backup operations to a migration source during a migration operation. A migration destination does not need a replication license unless the system will use replication.
The migration destination must have a capacity that is the same size as or larger than the migration source. The migration destination must have an empty file system. Any setting of the system's replication throttle feature also applies to migration. If the migration source has throttle settings, use the replication throttle set override command to set the throttle to the maximum (unlimited) before starting migration.
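For instance, a migration source with a scheduled throttle might be cleared before migration as follows (a sketch; "unlimited" is the maximum described above, but verify the exact argument form accepted by your DD OS release):

```
# On the migration source, temporarily lift any scheduled throttle
# so migration runs at full speed:
replication throttle set override unlimited
```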
Enter the migration receive command:
- Only on the migration destination.
- Before entering the migration send command on the migration source.
- After running the filesys disable and filesys destroy operations on the destination.
The command syntax is:
migration receive source-host src-hostname
For example, to prepare a destination for migration from a migration source named hostA:
# filesys disable
# filesys destroy
# migration receive source-host hostA
Note: When preparing the destination, DO NOT run the filesys enable command.
Enter the migration send command:
- Only on the migration source.
- Only when no backup data is being sent to the migration source.
- After entering the migration receive command on the migration destination.
The command syntax is:
migration send obj-spec-list destination-host dest-hostname
The obj-spec-list is /backup for systems that do not have a replication license. With replication, the obj-spec-list is one or more contexts from the migration source. After migrating a context, all data from the context is still on the source system, but the context configuration is only on the migration destination. A context in the obj-spec-list can be:
- The destination string as defined when setting up replication. Examples are:
dir://hostB/backup/dir2
col://hostB
pool://hostB/pool2
- The context number as shown in output from the replication status command. For example:
rctx://2
- The keyword all, which migrates all contexts from the migration source to the destination.
Backup jobs to the Data Domain system should be stopped during the first migration phase, as write access is blocked during that phase; jobs can be resumed during the second phase. The first phase takes a maximum of 30 minutes for a Data Domain system with a full /backup file system. Use the migration watch command to track the first migration phase.

New data written to the source is marked for migration until you enter the migration commit command. New data written to the source after a migration commit command is not migrated. Note that write access to the source is blocked from the time a migration commit command is given until the migration process finishes. The migration send command stays open until a migration commit command is entered.

In the following examples, remember that all data on the migration source is always migrated, even when a single directory replication context is specified in the command.
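Putting the phases together, the whole exchange can be sketched as a single transcript (hostA as the migration source and hostC as the destination, following the examples in this section):

```
# On hostC, the migration destination: empty the file system and
# wait for the sender. Do NOT run filesys enable here.
filesys disable
filesys destroy
migration receive source-host hostA

# On hostA, the migration source: start the transfer, watch the
# first phase, then commit once backup jobs are quiesced. The
# commit is run first on the source and then on the destination.
migration send /backup destination-host hostC
migration watch
migration commit
```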
To start migration of data only (no replication contexts, even if replication contexts are configured) to a migration destination named hostC, use a command similar to the following:
# migration send /backup destination-host hostC
To start a migration that includes a collection replication context (replication destination string) of col://hostB:
# migration send col://hostB destination-host hostC
To start migration with a directory replication context of dir://hostB/backup/dir2:
# migration send dir://hostB/backup/dir2 destination-host hostC
To start migration with two replication contexts using context numbers 2 and 3:
# migration send rctx://2 rctx://3 destination-host hostC
# migration status
CTX:                        0
Mode:                       migration source
Destination:                hostB
Enabled:                    yes
Local file system status:   enabled
Connection State:           connected since Tue Jul 17 15:20:09, migrating 3/3 60%
Error:                      no error
Destination lag:            0
Current throttle:           unlimited
Contexts under migration:   dir://hostA/backup/dir2
1. On hostC (the migration destination), run the following commands.
# filesys disable
# filesys destroy
# migration receive source-host hostA
2. On hostA (the migration and replication source), run the following command. Note that the command also disables the file system.
# migration send dir://hostB/backup/dir2 destination-host hostC
3. On the source migration host, run the following command to display migration progress:
# migration watch
4. First on hostA and then on hostC, run the following command. Note that the command also disables the file system.
# migration commit
5. On hostB (the replication destination), run commands similar to the following to change the replication source to hostC:
# filesys disable
# replication modify dir://hostB/backup/dir2 source-host hostC
# filesys enable
NFS Management
The nfs command manages NFS clients and displays NFS statistics and status.
A Data Domain System exports two directories, /ddvar and /backup. /ddvar contains Data Domain System log files and core files; add the clients from which you will administer the Data Domain System to /ddvar. /backup is the target for data from your backup servers; the data is compressed before being stored. Add backup servers as clients to /backup. If you add a client to both /backup and /ddvar, consider adding the client as read-only to /backup to guard against accidental deletion of data.
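As a sketch of that division of roles (host names are placeholders, and the ro option for read-only access is assumed by analogy with the rw option shown later in this chapter):

```
# Administrative workstation gets the administrative export:
nfs add /ddvar admin-ws.company.com
# Backup server gets the data export, read/write:
nfs add /backup backupsvr.company.com (rw,secure)
# Optional: read-only access to /backup for the admin workstation:
nfs add /backup admin-ws.company.com (ro,secure)
```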
Shorthand steps:
In this example:
bee = initial client UNIX system
kay = second client UNIX system, which requires secure access to the Data Domain system
ddsys = Data Domain system
All three systems are defined appropriately so that their IP addresses resolve correctly.
1) Ensure '/backup' can be seen as an export:
bee# showmount -e ddsys
Export list for ddsys:
/backup *
2) Create a directory on 'bee' to mount '/backup' from 'ddsys' onto:
bee# mkdir /mnt-ddsys
3) Mount the directory:
bee# mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 ddsys:/backup /mnt-ddsys
NOTE: On Sun Solaris, use "llock" instead of "nolock". The other parameters are explained in the man page for your particular UNIX platform.
4) Create the desired subdirectory:
bee# mkdir /mnt-ddsys/NBU-mediasvr1
5) If desired, set the correct ownership and mode on the directory:
bee# chown bkup-operator /mnt-ddsys/NBU-mediasvr1
bee# chmod 700 /mnt-ddsys/NBU-mediasvr1
6) Done; now dismount:
bee# umount /mnt-ddsys
bee# rmdir /mnt-ddsys
This example creates a new sub-directory that allows full access only by the 'bkup-operator' userid. If this is not required and access should be available to any user on 'kay', set the mode to 777 instead of 700.
Now go to the Data Domain system and create an export entry so that only the system "kay" can access the sub-directory just created on the Data Domain system.
1) Access the Data Domain system command line, usually using "ssh", and log in as an administrator (usually "sysadmin").
2) Create the desired export:
sysadmin@ddsys# nfs add /backup/NBU-mediasvr1 kay
For security purposes, the '/backup' directory should be reachable only by the specific clients required to create sub-directories following the methods above. If '/backup' is left exported to everyone, any workstation can mount that directory and have full view of all sub-directories below it. Therefore, it is a good idea to restrict this access:
sysadmin@ddsys# nfs del /backup *
sysadmin@ddsys# nfs add /backup <list of admin hosts>
If "Permission denied" is returned by any of these commands, check:
a) mount command: client and "secure" export settings on the Data Domain system:
sysadmin@ddsys# nfs show clients
b) creating the sub-directory: "squash" settings on the Data Domain system:
sysadmin@ddsys# nfs show clients
Example output of "nfs show clients":
path            client          options
--------------  --------------  ----------------------------------------
/backup         192.168.28.30   (rw,no_root_squash,no_all_squash,secure)
/ddvar          b2-rh-nb2       (rw,no_root_squash,no_all_squash,secure)
/backup/oracle  192.168.28.50   (rw,no_root_squash,no_all_squash,secure)
anonuid=id
Set an explicit user ID for the anonymous account. The id is an integer in the range -65535 to 65535.
anongid=id
Set an explicit group ID for the anonymous account. The id is an integer in the range -65535 to 65535.
For example, to add an NFS client with an IP address of 192.168.1.02 and read/write access to /backup with the secure option:
# nfs add /backup 192.168.1.02 (rw,secure)
Netmasks, as in the following examples, are supported:
# nfs add /backup 192.168.1.02/24 (rw,secure)
# nfs add /backup 192.168.1.02/255.255.255.0 (rw,secure)
Remove Clients
To remove NFS clients that can access the Data Domain System, use the nfs del operation. A client can be removed from access to /ddvar and still have access to /backup. The client-list can contain IP addresses, hostnames, and an asterisk (*) and can be comma-separated, space-separated, or both.
nfs del {/ddvar | /backup[/subdir]} client-list
For example, to remove an NFS client with an IP address of 192.168.1.02 from /ddvar access:
# nfs del /ddvar 192.168.1.02
Enable Clients
To allow access for NFS clients to a Data Domain System, use the nfs enable operation.
nfs enable
Disable Clients
To disable all NFS clients from accessing the Data Domain System, use the nfs disable operation.
nfs disable
Display
To display all NFS clients, use the nfs show clients operation or click NFS in the left panel of the Data Domain Enterprise Manager.
nfs show clients
The display is similar to the following:
# nfs show clients
NFS Client List
path      client   options
--------------------------------------------------------
/ddvar    jsmith   (rw,root_squash,no_all_squash,secure)
/backup   djones   (rw,no_root_squash,no_all_squash,secure)
--------------------------------------------------------
Display Statistics
To display NFS statistics for a Data Domain System, use the nfs show stats operation.
nfs show stats
The following example shows relevant entries, but not all possible entries:
# nfs show stats
NFS statistics:
NFSPROC3_NULL      : [0]
NFSPROC3_GETATTR   : [0]
NFSPROC3_SETATTR   : [0]
NFSPROC3_LOOKUP    : [24]
NFSPROC3_ACCESS    : [0]
NFSPROC3_READLINK  : [0]
NFSPROC3_READ      : [0]
NFSPROC3_WRITE     : [0]
NFSPROC3_CREATE    : [0]
NFSPROC3_MKDIR     : [0]
NFSPROC3_SYMLINK   : [0]
NFSPROC3_MKNOD     : [0]
NFSPROC3_REMOVE    : [0]
NFSPROC3_RMDIR     : [0]
NFSPROC3_RENAME    : [1]
FH statistics:
There are currently (2) exported filesystems.
Stats for export point [/backup]:
File system Type = SFS
Number of cached entries = 28
Number of file handle lookups = 6083544 (cache miss = 28)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 1
Stats for export point [/ddvar]:
File system Type = UNIX
Number of cached entries = 0
Number of file handle lookups = 0 (cache miss = 0)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 0
Display Status
To display NFS status for a Data Domain System, use the nfs status operation.
nfs status
The display looks similar to the following:
# nfs status The NFS system is currently active and running Total number of NFS requests handled = 6160900
CIFS Management
The cifs command manages CIFS (Common Internet File System) backups and restores from and to Windows clients, and displays CIFS statistics and status. CIFS system messages on the Data Domain System go to the CIFS log directory /ddvar/log/windows.
Note: When configuring a destination Data Domain System as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as on the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data on the destination).
CIFS Access
A CIFS client can map to two shares on a Data Domain System. Use the cifs add command (see Add a Client on page 311) to make a share available to a client. A client is typically a Windows workstation, not a user.
/ddvar is the share for administrative tasks, such as looking at a log file.
/backup is the share used by a Windows backup account for data storage and retrieval.
Any user that logs in to a Data Domain System is put into one of two groups. The user group is limited to commands that display statistics and status. The admin group can make configuration changes and use the display commands.
If the Data Domain System and a user account are in the same domain (or in a related trusted domain), the user can log in to the Data Domain System through a client that is known to the Data Domain System. If the user has no matching local account on the Data Domain System, the user is part of the user group. If the user has a matching local account on the Data Domain System and the local account is part of the admin group, the user is logged in as part of the admin group.
If the Data Domain System is in a workgroup, a user can log in to the Data Domain System through a client that is known to the Data Domain System. The user must have a matching account (name and password) added to the Data Domain System as a local user account (see Add a User below). The user is logged in as part of the group specified for the local account, user or admin.
For access to the Data Domain System command-line interface, use the SSH (or Telnet, if enabled) utility to log in to the Data Domain System, or use a web browser to connect to the Data Domain Enterprise Manager graphical user interface.
Note: Permissions changes made to /backup or /ddvar from a CIFS administrative account may cause unexpected limitations in access to the Data Domain System and may not be reversible from the CIFS account. By default, folders are created with permission bits of 755 and files with permission bits of 744.
Add a User
To add a user, use the command user add user-name. The command asks for a password and confirmation, or you can include the password as part of the command. Users added to the Data Domain System can have a privilege level of admin or user. The default is user.
user add user-name [password password] [priv {admin | user}]
All user accounts on a Data Domain System act as CIFS local (built-in) accounts, which means that the user name can access data in /backup on the Data Domain System, and the user name can log in to the Data Domain System and use the Data Domain System command set for managing the system. See the Data Domain System command adminaccess for the available access protocols.
For example, to add a user with a name of backup22, a password of usr256, and user privilege:
# user add backup22 password usr256
For a Windows client that needs file access to a Data Domain System, enter a command similar to the following from a command prompt on the Windows client (usually a Windows media server). The example below maps /backup from Data Domain System rstr02 to drive H on the Windows system and gives user backup22 access to /backup:
> net use H: \\rstr02\backup /USER:rstr02\backup22
For administrative access from Windows users in the same domain as the Data Domain system, see Allow Access from Windows on page 110.
Add a Client
Each Windows backup server that will do backup and restore operations with a Data Domain System must be added as a backup client; to add a backup client that hosts a backup user account, use the cifs add /backup command. Each Windows machine that will host an administrative user for a Data Domain System must be added as an administrative client. Administrative clients use the /ddvar directory on a Data Domain System; to add such a machine as a client, use the cifs add /ddvar command.
List entries can be comma-separated, space-separated, or both. To give access to all clients, the client-list can be an asterisk (*).
cifs add /backup client-list
cifs add /ddvar client-list
The client-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com.
For example, to add a client named srvr24 that will do backups and restores with the Data Domain System:
# cifs add /backup srvr24
Netmasks, as in the following examples, are supported:
# cifs add /backup 192.168.1.02/24
# cifs add /backup 192.168.1.02/255.255.255.0
2. Copy the CA certificate to the location /ddvar/releases/cacerts on the Data Domain System and give the certificate file the name ca.cer.
3. If you earlier set authentication to the workgroup mode, use the cifs reset authentication command on the Data Domain System to return to the default of no mode.
4. On the Data Domain System, run the following command:
# cifs option set start-tls enabled
With the CA certificate on the Data Domain System, use the cifs set authentication command to join the Data Domain System to an active-directory domain only. See Set the Authentication Mode on page 316.
CIFS Commands
The cifs command enables and disables access, sets the authentication mode, and displays status and statistics. All cifs operations are available only to administrative users.
cifs share create share-name path path {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names | comment comment}
share-name
Use a descriptive name for the share.
path
The path to the target directory.
max-connections
The maximum number of connections to the share that are allowed at one time.
client-list
A comma-separated list of the clients that are allowed to access the share. Other than the comma delimiter, there should not be any whitespace (blank or tab) characters. The list must be enclosed in double quotes. Some valid client lists are:
"host1,host2"
"host1,10.24.160.116"
Some invalid client lists are:
"host1 "
"host1 ,host2"
"host1, 10.24.160.116"
"host1 10.24.160.116"
browsing
The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.
writeable
Make the share writeable (enabled, the default) or not writeable (disabled).
user-names
A comma-separated list of user names. Other than the comma delimiter, any whitespace (blank or tab) characters are treated as part of the user name, because a Windows user name can have a space character anywhere in the name. The list must be enclosed in double quotes. All users from the client-list can access the share unless you give one or more user names; with one or more names, only the listed names can access the share. Group names can occur in the list and must have an at (@) symbol before them, as in the examples below. Group names and user names should be separated only by commas, not spaces; there can be spaces inside the name of a group, but not between groups. Some valid user names lists are:
"user1,user2"
"user1,@group1"
" user-with-one-leading-space,user2"
"user1,user-with-two-trailing-spaces  "
"user1,@CHAOS\Domain Admins"
comment
A descriptive comment about the share.
For example:
# cifs share create dir2 path /backup/dir2 clients * users dsmith,jdoe
Note: As of the DD OS 4.5.0.0 release, DD OS supports the following MMC (Microsoft Management Console) features:
- Share management, except for browsing when adding a share and changing the default Offline setting of manual.
- Session management.
- Open file management, except for deleting files.
- Local users and groups can be displayed, but not added, changed, or removed.
Delete a Share
To delete a share, use the cifs share destroy operation.
cifs share destroy share-name
Enable a Share
To enable a share, use the cifs share enable operation.
cifs share enable share-name
Disable a Share
To disable a share, use the cifs share disable operation.
cifs share disable share-name
Modify a Share
To modify a share, use the cifs share modify operation.
cifs share modify share-name {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names}
share-name
Use a descriptive name for the share.
path
The path to the target directory.
max-connections
The maximum number of connections to the share that are allowed at one time.
client-list
A list of clients that can access the share. All existing clients for the share are overwritten with the new client-list. The list can be client names or IP addresses. With more than one entry in the list, use double quotes (" ") around the list and commas (not spaces) between entries. For example:
# cifs share modify backup clients "a,b,c,d"
browsing
The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.
writeable
Make the share writeable (enabled, the default) or not writeable (disabled).
user-names
All users from the client-list can access the share unless you give one or more user names. With one or more names, only the listed names can access the share. The list must be enclosed in double quotes.
The domain mode puts the Data Domain System into an NT4 domain. Include a domain name and, optionally, a primary domain controller, or backup and primary domain controllers, or all ( * ).
cifs set authentication domain domain [[pdc [bdc]] | *]
The workgroup mode means that the Data Domain System verifies user passwords.
cifs set authentication workgroup wg-name
When you enter the command "cifs set authentication active-directory", the Data Domain system automatically adds a host entry to the DNS server, so it is not necessary to pre-create the DNS host entry for the Data Domain system. If you set nb-hostname (using "cifs set nb-hostname"), the entry is created for nb-hostname instead of the system hostname. See also the command "cifs option set organizational-unit", which is used in conjunction with "cifs set authentication active-directory".
The default Data Domain System group dd admin group1 is mapped to the Windows group Domain Admins. The default Data Domain System group dd admin group2 is mapped to a Windows group named Data Domain that you create on a Windows domain controller. Access is through SSH, Telnet, and FTP. CIFS administrative access must be enabled with the adminaccess command.
Display
Display CIFS Statistics
To display CIFS statistics for total operations, reads, and writes, use the cifs show stats operation.
cifs show stats
For example:
# cifs show stats
SMB total ops : 31360
SMB reads     : 165
SMB writes    : 62
Locked files:
Pid  DenyMode    Access   R/W     Oplock  Name
--------------------------------------------------------------------------------
566  DENY_WRITE  0x20089  RDONLY  NONE    /loopback/setup.iso                  Tue Jan 13 12:11:53 2004
566  DENY_ALL    0x30196  WRONLY  NONE    /loopback/RH8/psyche-i386-disc1.iso  Tue Jan 13 12:12:23 2004
Use the cifs show config operation or click CIFS in the left panel of the Data Domain Enterprise Manager to display CIFS configuration details.
cifs show config
For example:
# cifs show config
Mode       Workgroup  WINS Server  NB Hostname
---------  ---------  -----------  -----------
Workgroup  WORKGROUP  192.168.1.7  server26
---------  ---------  -----------  -----------
Display Shares
To display all shares or an individual share on a Data Domain system, use the cifs share show command. cifs share show [share-name]
NTP must be configured on the domain controller. To configure NTP, see the documentation for the Windows software version and service pack that is running on your domain controller. The following example is for Windows 2003 SP1 (use your ntp-server-name):
C:\>w32tm /config /syncfromflags:manual /manualpeerlist:ntp-server-name
C:\>w32tm /config /update
C:\>w32tm /resync
After NTP is configured on the domain controller, run the following commands on the Data Domain System using your domain-controller-name: # ntp add timeserver domain-controller-name # ntp enable
On the Data Domain System, add the list of clients that can access the share. For example: # cifs add /backup srvr24 srvr25
On a CIFS client, browse to \\ddr\backup and create the share directory, such as dir2. On the CIFS client, set share directory permissions or security options. On the Data Domain System, create the share and add users that will come from the clients added earlier. For example: DDOS# cifs share create dir2 path /backup/dir2 clients * users domain\user5,domain\user6
1. Log in as administrator
Figure 19: Log in as administrator.
4. Select 'Connect to another computer...'.
5. Specify the name or IP address of a Data Domain System.
c. Select "Administrators have full access; other users have read-only access".
d. Click "Finish".
e. The newshare folder now appears in the Computer Management screen. 8. Shared sessions and shared open files can be managed similarly, through the folders Sessions and Open Files in the left panel of the Computer Management screen.
The DDFS also supports storage and retrieval of audit ACLs (SACLs - Security ACLs). However, neither enforcing the audit ACL (SACL) nor generating audit events is implemented.
Figure 29: Windows Explorer GUI (Properties -> Security -> Advanced -> Permissions)
The DACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.
Figure 30: Windows Explorer GUI (Properties -> Security -> Advanced -> Auditing)
The SACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.
Owner SID
The owner SID can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI (Properties -> Security -> Advanced -> Owner). This is shown in Figure 31.
Figure 31: Windows Explorer GUI (Properties -> Security -> Advanced -> Owner)
Windows-based backup/restore tools such as ntbackup can be used on DACL- and SACL-protected files to back up those files to the Data Domain System and restore them from it. For more information on ACLs and their use, see the Windows Operating System documentation.
Both options can only be set when CIFS is not enabled. If CIFS is running, CIFS services should be disabled first to set these options. Whenever the idmap type is changed, file system metadata conversion may need to be performed for correct file access. Without any conversion, the user may not be able to access the data. There is a tool available to perform the metadata conversion. The tool is obtained by using the following command on the Data Domain system: dd-aclutil -m <root directory where userid/groupid are to be changed> Note When CIFS ACLs are disabled via 'cifs option set ntfs-acls disabled', the Data Domain System will generate an ACL that approximates the UNIX permissions, regardless of the presence of a previously set CIFS ACL.
If this is an existing installation, with pre-existing CIFS data residing on the system:
1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs enable (Allow CIFS clients to connect.)
4. Create ACLs on existing files, as explained under the section "How to set ACL Permissions/Security" above.
23  Open STorage (OST)
The ost command allows a DDR (Data Domain system) to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage. That is, Data Domain's ost command set provides a user interface to Symantec's OpenStorage, which is itself an API between NetBackup and disk storage. NetBackup docs are available on the web at http://entsupport.symantec.com.

The ost command allows the creation and deletion of logical storage units on the storage server, and the display of space utilization for the same. OpenStorage is a Data Domain licensed feature. There is one license for the "basic" OpenStorage feature of backing up and restoring image data. A replication license is also required for optimized duplication, for both the source and destination Data Domain systems.

Definitions:
LSU (Logical Storage Unit): The logical storage unit (LSU) represents an abstraction of physical storage. For Data Domain, an LSU is a ddfs directory.
Storage Server: OpenStorage defines a storage server as an entity that writes data to and reads data from disk storage. For Data Domain, a storage server is a Data Domain system.
Image: An OpenStorage image is an entire backup data set, a single fragment from a single backup data set, or multiple fragments from multiple backup data sets. The OpenStorage application writes an image to a single LSU on a single storage server. For Data Domain's purposes, OpenStorage image data is stored in a ddfs file.

The OpenStorage API does not have the capability to create and delete LSUs. This functionality is available only via the Data Domain system, so the user interface includes CLIs to manage the LSUs. LSUs are created under the /backup/ost directory. The ost directory is a flat namespace: all LSUs are created under this directory. The enable command creates the ost directory and exports this directory for the OpenStorage plugin.
For performance and status monitoring, the Data Domain system also manages active OpenStorage or plugin connections. An OpenStorage connection between a plugin and a DDR requires authentication. When enabling OpenStorage on the DDR, a user name must be supplied. The user name is created using the user add command. All OST LSUs and images are created using this user's credentials (i.e., uid and gid). For performance reasons, the Data Domain system limits the number of active connections to 32.

When OpenStorage is disabled on the Data Domain system, existing OpenStorage LSUs and their images remain. Image data can be accessed once OpenStorage is re-enabled. If OpenStorage is disabled, an error is returned to subsequent OpenStorage operations. Any active operation already in the pipeline continues until completion. In certain circumstances a customer may want to remove all LSUs and images, for which purpose the ost destroy command exists. This command asks the user for the sysadmin password; without it, the command is not carried out.
Reset the ost user back to the default (no user set)
To reset the ost user back to the default (no user set), use the ost reset user-name command. (This command can be executed while ost is enabled.) ost reset user-name
OST enabled. If the user changes, it takes effect at the next 'ost enable'. If uid and gid change, all images and LSUs are changed at the next 'ost enable'.
Delete an LSU
346 Data Domain Operating System User Guide
The ost lsu delete lsu-name operation deletes all images in the logical storage unit with the given lsu-name. Corresponding NetBackup Catalog entries must be manually removed (expired). A prompt asks for the sysadmin's password, which must be entered in order to proceed. Administrative users only.
ost lsu delete lsu-name
For example, to empty the lsu lsu66 of all its contents:
# ost lsu delete lsu66
LSU_NBU_ARCHIVE
LSU_TM1
TEST
# ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1::
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]
SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1: Total files: 4; bytes/storage_used: 206.6
        Original Bytes:       437,850,584
        Globally Compressed:  2,149,216
        Locally Compressed:   2,113,589
        Meta-data:            6,124
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
        Original Bytes:       69,198,492,217
        Globally Compressed:  507,018,955
        Locally Compressed:   409,057,135
        Meta-data:            1,411,828
[ rest not shown ... ]
SE@jp1## ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
        Total files: 1; bytes/storage_used: 9.1
        Original Bytes:       8,872
        Globally Compressed:  8,872
        Locally Compressed:   738
        Meta-data:            236
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1:::
        Total files: 1; bytes/storage_used: 1.0
        Original Bytes:       114,842,092
        Globally Compressed:  114,842,092
        Locally Compressed:   112,106,468
        Meta-data:            382,576
[ rest not shown ... ]
Note Use ctrl-c to interrupt this command; its output can be very long.
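As a cross-check on the listings above, the bytes/storage_used figure appears to equal Original Bytes divided by the sum of the Locally Compressed and Meta-data bytes. This is an inference from the sample numbers, not an official formula; a minimal sketch:

```python
def bytes_per_storage_used(original, locally_compressed, metadata):
    # Inferred relationship (ours): storage used = locally-compressed
    # bytes plus meta-data bytes, matching the sample listings.
    return original / (locally_compressed + metadata)

# LSU_NBU1 and LSU_NBU2 figures from the sample output above:
print(round(bytes_per_storage_used(437_850_584, 2_113_589, 6_124), 1))           # 206.6
print(round(bytes_per_storage_used(69_198_492_217, 409_057_135, 1_411_828), 1))  # 168.6
```

The same relationship holds for the per-image figures (e.g., 8,872 / (738 + 236) is approximately 9.1).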
For each statistic displayed, the number of errors encountered for that operation is displayed next to it in brackets. Example: # ost show stats
07/23 12:01:05
OST statistics:
OSTGETATTR      : 4    [0]
OSTLOOKUP       : 13   [9]
OSTACCESS       : 0    [0]
OSTREAD         : 0    [0]
OSTWRITE        : 329  [0]
OSTCREATE       : 2    [0]
OSTREMOVE       : 0    [0]
OSTREADDIR      : 0    [0]
OSTFSSTAT       : 20   [0]
FILECOPY_START  : 0    [0]
FILECOPY_ABORT  : 0    [0]
FILECOPY_STATUS : 0    [0]
OSTQUERY        : 11   [0]
OSTGETPROPERTY  : 14   [0]

                     Count       Errors
-------------------  ----------  ------
Image creates        2           0
Image deletes        0           0
Total bytes written  10,756,096  0
Total bytes read     0           0
Other                0           0
-------------------  ----------  ------
Show ost statistics for the Data Domain system over an interval
The ost show stats interval seconds operation shows various ost statistics for the Data Domain system.
ost show stats interval seconds
This command displays OST statistics, namely, the number of Kibibytes read and written per the given interval of time.
Note This command is different from the ost show stats command, which shows a different set of ost stats.
For example:
# ost show stats interval 1
07/23 12:03:35
Write KB/s  Read KB/s
----------  ---------
87,925      0
69,474      0
84,080      0
76,410      0
4,339       0
2,380       0
17,281      0
21,854      0
27,018      0
26,682      0
21,899      0
11,667      0
25,236      0
21,898      0
25,700      0
12,972      0
07/23 12:03:54
Write KB/s  Read KB/s
----------  ---------
15,796      0
27,414      0
27,893      0
18,388      0
3,245       0
27,194      0
Name of the file. Total number of logical bytes to transfer. Number of logical bytes already transferred. Number of real bytes transferred.
Sample workflow sequence:

Inbound image name
  zion.datadomain.com_1184802025_C2_F1:1184802025:jp1_policy1:4:1::
  Logical bytes received             1,800,000
  Real bytes received                900,000
Outbound image name
  zion.datadomain.com_1184802025_C1_F1:1184802025:jp1_policy1:4:1::
  Logical bytes to transfer          4,000,000
  Logical bytes already transferred  2,000,000
  Real bytes transferred             1,000,000
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
SE@jp1## ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1::
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]
SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
        Original Bytes:       69,198,492,217
        Globally Compressed:  507,018,955
        Locally Compressed:   409,057,135
        Meta-data:            1,411,828
LSU_NBU1: Total files: 54; bytes/storage_used: 49.5
        Original Bytes:       24,647,055,768
        Globally Compressed:  1,441,351,596
        Locally Compressed:   493,870,761
        Meta-data:            4,536,592
[ rest not shown ... ]
SE@jp1## ost lsu show compression LSU_NBU2
List of images in LSU_NBU2 and their compression info:
zion.datadomain.com13542_1182889273_C1_HDR:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 11.5
        Original Bytes:       17,064
        Globally Compressed:  17,064
        Locally Compressed:   1,218
        Meta-data:            264
zion.datadomain.com13542_1182889273_C1_F1:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 993.8
        Original Bytes:       4,227,773,676
        Globally Compressed:  12,917,108
        Locally Compressed:   4,219,441
        Meta-data:            34,508
SE@jp1## ost lsu delete LSU_NBU2
Please enter sysadmin password to confirm this command:
The 'ost lsu delete' command will delete all images in the lsu. Are you sure? (yes|no|?) [no]: y
ok, proceeding.
LSU LSU_NBU2 destroyed.
SE@jp1## ost lsu delete LSU_NBU_ARCHIVE
Please enter sysadmin password to confirm this command:
LSU LSU_NBU_ARCHIVE destroyed.
24  Virtual Tape Library (VTL) - CLI
The Data Domain VTL features are divided into two chapters. This chapter covers the CLI (Command Line Interface). For information on the GUI (Graphical User Interface), see the other chapter, entitled Virtual Tape Library (VTL) - GUI. The Data Domain VTL feature allows backup applications to connect to and manage a Data Domain System as though the Data Domain System were a stand-alone tape library. All of the functionality supported with tape is available with a Data Domain System. Also, as with a physical stand-alone tape library, the movement of data from a system using VTL to a physical tape must be managed by backup software, not by the Data Domain system. Virtual tape drives are accessible to backup software in the same fashion as physical tape devices. Devices appear to backup software as SCSI tape drives. A virtual tape library appears to software as a SCSI robotic device accessed through standard driver interfaces. The VTL feature:
- Communicates between a backup server and a Data Domain System through a Fibre Channel interface. The Data Domain System must have a Fibre Channel interface card in the PCI card array.
- Is compatible with all Data Domain DD400 and above (DD500, DD600, etc.) series Data Domain Systems.
- Supports the tape drive model IBM LTO-1.
- Supports the tape library personalities StorageTek L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use.
Note Use tape and library drivers that are supplied by your backup software vendor and that support the IBM LTO-1 drive and StorageTek L180 library. The RESTORER-L180 works with the same drivers as the StorageTek L180.
The number of recommended concurrent virtual tape drive instances is platform dependent and is the same as the number of recommended streams between a Data Domain system and a backup server. Note that the number is system-wide and includes all streams from all sources, such as VTL, NFS, and CIFS. See Data Streams Sent to a Data Domain system on page 5 for platform limits.
- Supports 16 libraries (16 concurrently active virtual tape library instances). Access to VTLs and tape drives can be managed with the Access Grouping feature. See Access Groups (for VTL Only) on page 377.
- Supports up to 64 tape drives (64 concurrently active virtual tape drive instances).
- Supports up to 100,000 tapes (cartridges) of up to 800 GiB for an individual tape (Gibibytes, the base 2 equivalent of Gigabytes).
- Includes a pool feature for replication of tapes by defined pools. See Pools on page 387 and the VTL command output examples in this chapter. See Replicating VTL Tape Cartridges and Pools on page 252 for replication details.
- Includes internal Data Domain system data structures for each virtual data cartridge. The structures have a fixed amount of space that is optimized for records of 16 KiB (Kibibytes, the base 2 equivalent of Kilobytes) or larger. Smaller records use the space at the same rate per record as larger records, leading to a virtual cartridge marked as full when the amount of data is less than the defined size of the cartridge.
Note Data Domain strongly recommends that backup software be set up to use a minimum record (block) size of 64 KiB or larger. Larger sizes usually give faster performance and better data compression. If you change the size after initial configuration, data written with the original size becomes unreadable.
- Supports replication between Data Domain Systems. A source Data Domain System exports received virtual tapes (each tape is seen as a file) into a virtual vault and leaves the tapes in the vault. On the destination, each tape (file) is always in a virtual vault.
- Does not protect virtual tapes from a Data Domain System filesys destroy command. The command deletes all virtual tapes.
- Handles data received by a Data Domain System during a power loss so that backup software sees the data in the same way as with tape drives in a power loss situation. The strategy your backup software uses to protect data during a loss of power to tape drives gives the same results with a loss of power to a Data Domain System.
- Responds to the mtx status command from a 3rd-party physical storage system in the same way as would a tape library. If the Data Domain System virtual library has registered any change since the last contact from the 3rd-party physical storage system, the first use of the mtx status command returns incorrect results. Use the command a second time for valid results.
- Supports simultaneous use of tape library and file system (NFS/CIFS/OST) interfaces.
- Is a licensed feature for a Data Domain System. Contact your Data Domain representative for licensing details.
Compatibility Matrix
For specific backup software and hardware configurations tested and supported by Data Domain, see the VTL matrices at the Data Domain Support web site: https://support.datadomain.com/compat_matrix.php
Enable VTLs
To start the VTL process and enable all libraries and drives, use the vtl enable option. Administrative users only. vtl enable
Create a VTL
To create a virtual tape library, use the vtl add operation. The VTL process must be enabled (use the vtl enable command) to allow the creation of a library. Administrative users only. vtl add vtl_name [model model] [slots num_slots] [caps num_caps] If incorrect values are entered for any of the command variables, a list of valid values is displayed.
vtl_name   A name of your choice.
model      A tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update.
num_slots  The number of slots in the library. The number of slots must be equal to or greater than the number of drives. The maximum number of slots for all VTLs on a Data Domain System is 10000. The default is 20 slots.
num_caps   The number of cartridge access ports. The default is 0 (zero) and the maximum is 10 (ten).
For example, to create a VTL with 25 slots and two cartridge access ports:
# vtl add VTL1 model L180 slots 25 caps 2
Virtual Tape Library (VTL) - CLI 359
After adding a VTL, client systems may not see the VTL. To make an unseen VTL visible, try the following:
- Do a rescan operation on the client. This is the least disruptive action.
- Use the vtl reset hba command on the Data Domain System. Active backup sessions may be disrupted and fail.
- Use the vtl disable and vtl enable commands on the Data Domain System. Disabling and enabling take longer than the vtl reset hba command, so active backup sessions are very likely to fail.
- Reboot the Data Domain System or the client or both. Active backup sessions fail.
Delete a VTL
To remove a previously created virtual tape library, use the vtl del option. If the library name is not valid, a list of valid library names is displayed. Administrative users only. vtl del vtl_name
Disable VTLs
To disable all VTL libraries and shutdown the VTL process, use the vtl disable option. Administrative users only. vtl disable
To add drives to a VTL, use the vtl drive add operation. Administrative users only.
vtl drive add vtl_name [count num_drives] [model model]
num_drives  The number of tape drives in the library. The maximum number of drives for all VTLs on a Data Domain System is 64, no matter how many VTLs it has.
model       A tape library model name. The currently supported model names are L180 and RESTORER-L180.
Note The maximum number of libraries possible is 16.
Remove Drives
Use the vtl drive del option to remove drives from a VTL. Administrative users only.
vtl drive del vtl_name drive drive_number [count num_to_del]
drive_number  The first drive to delete.
num_to_del    Allows you to delete more than one drive at a time, starting with drive_number.
Use a Changer
Each VTL library has exactly 1 media changer, although it can have several tape drives. The word device refers to changers and tape drives. A changer has a model name (for example, L180). Each changer can have a maximum of 1 LUN (Logical Unit Number). The following CLI commands use changers or display information about them:
# vtl group create
# vtl group del
# vtl group modify
# vtl group use
# vtl group show
To display a summary of all tapes on a Data Domain system, use the vtl tape show all summary option.
vtl tape show all summary
To display a summary of information on a particular device, use the vtl tape show <device> summary option.
vtl tape show pool pool-name summary
vtl tape show vault vtl-name summary
barcode The 8-character barcode must start with six numeric or upper-case alphabetic characters (i.e., from the set {0-9, A-Z}) and end in a two-character tag of L1, LA, LB, or LC for the supported LTO-1 tape type, where:
- L1 represents a tape of 100 GiB capacity,
- LA represents a tape of 50 GiB capacity,
- LB represents a tape of 30 GiB capacity, and
- LC represents a tape of 10 GiB capacity.
(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If capacity is included, it is used and overrides the two-character tag.) The numeric characters immediately to the left of the tag set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100. A few representative sample barcodes:
- 000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
- AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
- AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
- AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name and not increment.
- AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
- 000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name and not increment.
- 5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name and not increment.
Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.
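The tag-to-capacity mapping and the number of tapes a barcode can be incremented through can be sketched as follows. The helper names are ours for illustration, not DD OS commands:

```python
# Capacity implied by the two-character barcode tag (from the rules above).
TAG_CAPACITY_GIB = {"L1": 100, "LA": 50, "LB": 30, "LC": 10}

def tape_capacity_gib(barcode, capacity=None):
    """Capacity of a tape from its tag; an explicit capacity option
    overrides the tag."""
    return capacity if capacity is not None else TAG_CAPACITY_GIB[barcode[6:]]

def max_count(barcode):
    """How many tapes can be created from this starting barcode: from the
    value of the trailing digits of the 6-character prefix up to all 9s.
    A prefix ending in a letter cannot be incremented, so it yields 1."""
    prefix = barcode[:6]
    digits = ""
    for ch in reversed(prefix):
        if not ch.isdigit():
            break
        digits = ch + digits
    if not digits:
        return 1
    return 10 ** len(digits) - int(digits)
```

For example, max_count("AAA350L1") gives 650, matching the sample list above.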
To make use of automatic incrementing of the barcode when creating more than one tape, the following rules apply:
1. Start at the 6th character position, just before the tag. If it is a digit, increment it.
2. If an overflow occurs (9 to 0), move one position to the left. If that character is a digit, increment it; if it is alphabetic, stop.
Data Domain recommends creating only tapes with unique bar codes. Duplicate bar codes in the same tape pool create an error. Although no error is created for duplicate bar codes in different pools, duplicate bar codes may cause unpredictable behavior in backup applications and can lead to operator confusion.
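The increment rules above can be sketched as follows; this is an illustrative model, not the DD OS implementation:

```python
def next_barcode(barcode):
    """Increment a barcode's 6-character prefix per the stated rules.
    Returns None when incrementing is impossible (an alphabetic character
    is reached), i.e., only one tape can be created with that name."""
    prefix = list(barcode[:6])
    tag = barcode[6:]
    i = 5  # 6th character position, just before the tag
    while i >= 0:
        c = prefix[i]
        if not c.isdigit():
            return None          # alphabetic: stop
        if c != "9":
            prefix[i] = str(int(c) + 1)
            return "".join(prefix) + tag
        prefix[i] = "0"          # overflow 9 -> 0, carry one position left
        i -= 1
    return None
```

For example, next_barcode("ABC100L1") yields "ABC101L1", and next_barcode("AAA999L1") yields None, consistent with AAA350L1 allowing a count of at most 650.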
capacity  The number of gigabytes of size for each tape (overrides the barcode capacity designation). The upper limit is 800. For the efficient reuse of Data Domain System disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less.
count     The number of tapes to create. The default is 1 (one).
pool      Put the tapes into a pool. The pool is Default if none is given. A pool must already exist to use this option. Use the vtl pool add command to create a pool.
# vtl tape add TST010L1 count 5
Import Tapes
To move existing tapes from the vault to a slot, drive, or cartridge access port (CAP), use the vtl import option. Administrative users only. Rules for number of tapes imported: The number of tapes that you can import at one time is limited by:
The number of empty slots. (In no case can you import more tapes than--at a maximum--the number of currently empty slots.) The number of slots that are empty and that are not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved. A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)
To sum up: The number of tapes you can import equals: (the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin).
  # of empty slots
- # of tapes that came from slots (we reserve the slot of each)
- # of tapes of unknown origin (we reserve a slot for each)
-----------------------------------------------------------
= # of tapes you can import
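The arithmetic above reduces to a one-line calculation; the function name is ours:

```python
def importable_tapes(empty_slots, tapes_from_slots, tapes_unknown_origin):
    """Tapes importable in one operation: empty slots minus slots reserved
    for tapes that came from slots or whose origin is unknown."""
    return max(0, empty_slots - tapes_from_slots - tapes_unknown_origin)
```

For example, with 10 empty slots, 3 tapes in drives that came from slots, and 2 tapes of unknown origin, at most 5 tapes can be imported.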
If a tape is in a pool, you must use the pool option to identify the tape. Use the vtl tape show vtl-name command to display currently available slots. The same command can be used to display the slots that are currently used. Use the vtl tape show vault command to display barcodes for all tapes in the vault. Use backup software commands from the backup server to move VTL tapes to and from drives. vtl import vtl_name barcode barcode [count count] [pool pool] [element {slot | drive | cap}] [address addr] For example, to import 5 tapes starting with a barcode of TST010L1 into the library VTL1:
# vtl import VTL1 barcode TST010L1 count 5
Default values: The default value of element=slot. The default value of address=1. Therefore the above command is equivalent to:
# vtl import VTL1 barcode TST010L1 count 5 element slot address 1
Examples of importing:
Import 3 tapes to a CAP:
# vtl import vtl2 barcode HHH000L1 count 3 element cap address 1
... imported 3 tape(s)...
Processing tapes....
Barcode   Pool     Comp  Used (%)
--------  -------  ----  ---------------
HHH000L1  Default  0x    0.0 GiB (0.00%)
HHH001L1  Default  0x    0.0 GiB (0.00%)
HHH002L1  Default  0x    0.0 GiB (0.00%)
--------  -------  ----  ---------------
Import from vault to slots 31 and 32, then display only those two barcodes:

# vtl import vtl2 barcode HHH000L1 count 2 element slot address 31
imported 2 tape(s)...
# vtl tape show vtl2 barcode HHH00*L1 count 2
Processing tapes....
Barcode   Pool     Location      Type   Size     Used (%)         Comp  ModTime
--------  -------  ------------  -----  -------  ---------------  ----  -------------------
HHH000L1  Default  vtl2 slot 31  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1  Default  vtl2 slot 32  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
--------  -------  ------------  -----  -------  ---------------  ----  -------------------

VTL Tape Summary
----------------
Total number of tapes:
Total pools:
Total size of tapes:
Total space used by tapes:
Export Tapes
Remove tapes from a slot, drive, or cartridge access port. Use the vtl tape show vtl-name command to match slots and barcodes. The removed tapes revert to the vault. Address is the number of the slot, drive, or cartridge access port. To export tapes, use the command: vtl export vtl_name {slot | drive | cap} address [count count] For example, to export 5 tapes starting from slot 1 from the library VTL1: # vtl export VTL1 slot 1 count 5
Remove Tapes
To remove one or more tapes from the vault and delete all of the data in the tapes, use the vtl tape del option. The tapes must be in the vault, not in a VTL. Use the vtl tape show vault command to display barcodes. If count is used, remove that number of tapes in sequence starting at barcode.
If a tape is in a pool, you must use the pool option to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.
Move Tape
Note On a destination Data Domain System, manually removing a tape is not permitted.
vtl tape del barcode [count count] [pool pool]
For example, to remove 5 tapes starting with a barcode of TST010L1:
# vtl tape del TST010L1 count 5
Move Tape
Only one tape can be moved at a time, from one slot/drive/cap to another. To move a tape, use the vtl tape move command: vtl tape move vtl-name source {slot|drive|cap} src-address destination {slot|drive|cap} dest-address
Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window. This is reached from anywhere the Search Tapes button appears, for example Virtual Tape Libraries...VTL Service...Libraries...click Search Tapes button. A window appears, allowing the user to search for tapes by Location, Pool, and/or Barcode. Count gives the number of tapes from a given starting tape the user wishes to view, and only makes sense when the Barcode field is filled in.
Enable Auto-Eject
Use the vtl option enable auto-eject command to cause any tape that is put into a cartridge access port (CAP) to move automatically to the virtual vault, unless the tape came from the vault, in which case the tape stays in the CAP.

vtl option enable auto-eject

Note: With auto-eject enabled, a tape moved from any element to a CAP is ejected to the vault unless a SCSI PREVENT ALLOW MEDIUM REMOVAL command was issued to the library to prevent removal of the medium from the CAP to the outside world.
Enable Auto-Offline
Backup software and some diagnostic tools sometimes do not take a tape offline before trying to move it out of a drive, and the backup or diagnostic operation can then hang. If your site experiences such behavior, use the vtl option enable auto-offline command to automatically take a tape offline when a move operation is issued.

vtl option enable auto-offline
Disable Auto-Eject
Use the vtl option disable auto-eject command to allow a tape in a cartridge access port to remain in place.

vtl option disable auto-eject
Disable Auto-Offline
Use the vtl option disable auto-offline command to stop automatically taking a tape offline when a move operation is issued.

vtl option disable auto-offline
368 Data Domain Operating System User Guide
The display is similar to the following:

# vtl show config
Library Name   Library Model   Drive Model   Slots/Caps
------------   -------------   -----------   ----------
VTL1           10001           1             120
Size        Used(%)
---------   ----------------
100.0 GiB   35.9 GiB (35.9%)
100.0 GiB   35.8 GiB (35.8%)
100.0 GiB   0.0 GiB (0.0%)
100.0 GiB   42.0 GiB (42.0%)
100.0 GiB   0.0 GiB (0.0%)
The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.
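To make the Used(%) figure concrete: it is the pre-compression data sent to the tape divided by the configured tape size. A quick check with standard shell tools, using the 35.9 GiB row from the sample output above:

```shell
# Values taken from the sample row above
awk 'BEGIN {
  size = 100.0   # configured tape capacity, GiB
  sent = 35.9    # pre-compression data sent to the tape, GiB
  printf "Used: %.1f GiB (%.1f%%)\n", sent, sent / size * 100
}'
# A 100 GiB tape expressed in bytes (1 GiB = 2^30 bytes):
echo $((100 * 1024 * 1024 * 1024))
```

Run in any POSIX shell; awk is used only for the floating-point formatting.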
Size        Used(%)
---------   ----------------
100.0 GiB   35.9 GiB (35.9%)
100.0 GiB   35.8 GiB (35.8%)
100.0 GiB   0.0 GiB (0.0%)
100.0 GiB   42.0 GiB (42.0%)
100.0 GiB   0.0 GiB (0.0%)
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.
Size        Used(%)
---------   ----------------
100.0 GiB   35.9 GiB (35.9%)
100.0 GiB   35.8 GiB (35.8%)
100.0 GiB   0.0 GiB (0.0%)
100.0 GiB   42.0 GiB (42.0%)
100.0 GiB   0.0 GiB (0.0%)
Comp   ModTime
----   -------------------
0x     2007/04/16 13:15:43
18x    2007/04/16 13:15:43
0x     2007/04/16 13:15:43

The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool. The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault. The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.
Drive   Port   ops/s   Read KiB/s   Write KiB/s   Soft Errors   Hard Errors
-----   ----   -----   ----------   -----------   -----------   -----------
1       1a     250     112972       75493         2             0
        1b     0       0            0             0             0
2       1b     76      9150         76490         0             1

Note KiB = Kibibyte, the base 2 equivalent of KB, Kilobyte.

The Drive column gives a list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives. The Port column gives a list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a. The ops/s column gives the number of operations per second currently or recently being achieved by the port. The Read KiB/s column gives the number of KibiBytes per second read by the port. The Write KiB/s column gives the number of KibiBytes per second written by the port. The Soft Errors column gives the number of errors that the system recovered from; no preventative measures or maintenance actions are necessary. If there are thousands of soft errors in a short period, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from. The Hard Errors column gives the number of errors that the system was unable to recover from. Hard errors should not normally occur. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, go to the Data Domain Enterprise Manager GUI for the system, click the Log Files link in the left menu bar, and click the file vtl.info to open and view it. It may also be helpful to view the files kern.info and kern.error through the CLI (see the chapter Log File Management).
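Because the throughput columns are in KiB/s, converting a reading to bytes or MiB is a matter of powers of 1024. For example, the 112972 KiB/s read rate from the sample output, in plain shell arithmetic:

```shell
# Convert the sample read rate from KiB/s to bytes/s and MiB/s (1 KiB = 1024 bytes)
rate_kib=112972
echo "$(( rate_kib * 1024 )) bytes/s"
awk -v k="$rate_kib" 'BEGIN { printf "%.1f MiB/s\n", k / 1024 }'
```

awk is used only for the floating-point division; the integer step runs in any POSIX shell.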
VTL Tape Summary
----------------
Total number of tapes:
Total pools:
Total size of tapes:
Total space used by tapes:
Average Compression:
# vtl export libr01 drive 1
... exported 1 tapes...

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.
Virtual Tape Library (VTL) - CLI 375
Backup application behavior for handling replicated tapes varies. To minimize unexpected behavior or error conditions, virtual tapes should remain imported in the destination libraries only for as long as needed. After importing a replicated tape at the destination, follow your backup application's procedures to utilize the replicated tape, and then export the tape from the destination library. The objective is to ensure that at any time, only one instance of a replicated tape is visible to the backup application. The following generic procedure allows you to configure a VTL for replication and retrieve data from a virtual tape that was replicated to a destination Data Domain System. See Replicating VTL Tape Cartridges and Pools on page 252 for further replication detail, and consult your backup application documentation for specific backup procedures.

1. On the source Data Domain system, create the VTL and tapes. Use the vtl add command.
2. Perform and verify one or more backups to the source Data Domain system.
3. Configure replication for the pool to be replicated (for example, /backup/vtc/Default or /backup/vtc/pool-name) using the replication add command.
4. Verify that any tapes targeted for replication from the destination reside in the vault and not in a library. Use the vtl tape show command.
5. Initialize replication for the targeted pool using the replication initialize command. Wait for initialization to complete.
6. As required, perform additional backups to the source. Wait for outstanding backups to complete.
7. Identify the tapes that you need to retrieve from the destination system and have the list available at the destination location.
8. On the source, enter the replication sync command for the target pool to ensure that the source tape and destination tape are consistent. Wait for the command to complete.
9. If the replicated tapes to be retrieved at the destination are still accessible at the source, export the tapes from the source system and, using the backup application, inventory the source VTL.
10. On the destination, create a VTL if one does not already exist. Use the vtl add command. The destination VTL configuration does not have to match the library on the source Data Domain System.
11. Import the tape or tapes to the library using the vtl import command. The replicated tapes should now reside in the destination VTL. From the backup application, inventory the destination VTL. For some configurations or backup application versions, you may need to import the catalog (the backup application database) to use replicated tapes.
12. Read the tapes from the destination system's VTL in the same way that you would read tapes from a library on the source, and perform required backup application operations such as cloning to physical tape.
13. After using the replicated tapes, export the tapes from the destination using the vtl export command.
14. If necessary, import the replicated tapes on the source system using the vtl import command. The replicated tapes should now reside in the source system's VTL.
15. From the backup application, inventory the source VTL.
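Condensed, the source-side commands in steps 1 through 8 might look like the following sketch. The library name, pool path, and hostnames are placeholders, the vtl add options are omitted, and the exact replication add argument form is covered in the replication chapter; consult it before copying these lines.

```
# On the source Data Domain system -- names and paths are examples only
vtl add VTL1                 # step 1: create the library (options omitted; see Create a VTL)
vtl tape show vault          # step 4: confirm the target tapes are in the vault
replication add source dir://src01/backup/vtc/pool22 destination dir://dst01/backup/vtc/pool22
replication initialize dir://dst01/backup/vtc/pool22   # step 5
replication sync dir://dst01/backup/vtc/pool22         # step 8
```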
A GROUP is a container consisting of initiators and devices (drives or media changer). An initiator can be a member of only one GROUP. A GROUP can contain multiple initiators, up to a maximum of 92. A device can be a member of as many GROUPs as desired, but a device cannot be a member of the same GROUP more than once. GROUP names are case-insensitive, can be up to 256 characters in length, and consist of characters from the range A-Za-z0-9_-. The names Default, TapeServer, all, summary, and vtl are reserved and cannot be created or deleted, or have initiators or devices assigned to them.
Devices:
A device can be a member of as many GROUPs as needed, but it occurs only once in a given GROUP. Membership in a GROUP is determined by the device name (or ID), not the assigned LUN; a device may have a different LUN assigned in each GROUP it is a member of. When adding a device to a group, you can also specify the FC ports on which the device should be visible. Port names are two characters: a digit representing the physical slot the HBA resides in and a character representing the port on the HBA, so 3a is port a on the HBA in slot 3. Acceptable port values are none, all, or a comma-separated list of port names (3a,4b for example).
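The two-character port-name convention splits mechanically into a slot digit and a port letter; a trivial shell illustration of the convention described above:

```shell
# Split a port name such as 3a into its slot digit and port letter
port="3a"
slot=${port%?}      # strip the last character  -> slot number
hba_port=${port#?}  # strip the first character -> port letter
echo "slot=$slot port=$hba_port"
```

This prints slot=3 port=a for the example port name.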
1. Create a VTL on the Data Domain system. See Create a VTL on page 359.
2. Enable the VTL with the vtl enable command.
3. Add a group with the vtl group add command (see below).
4. Add an initiator with the vtl initiator set alias command (see below).
5. Map a client as an Access Grouping initiator (see below).
6. Create an Access Group. See the commands in this section and Procedure: Create an Access Group on page 384.
Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations.
A vtl Access Group (also called a group, vtl group, or Access Group) is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. This set of commands deals with the group container; populating the container with initiators and devices is done with the VTL Initiator and VTL Group commands. When setting up Access Groups on a Data Domain system:
A given device may appear in more than one group when using features such as the Shared Storage Option (SSO) and similar features.
The primary-port option specifies the set of ports on which the device is visible, called the primary ports. If the option is omitted or all is given, the device is visible on all ports; if none is given, the device is visible on no ports.
The secondary-port option specifies a second set of ports on which the device is visible when the vtl group use secondary command is executed; the vtl group use primary command falls back to the primary port list. (See the VTL group use section later in this chapter.) If secondary-port is not specified, it defaults to the primary port list.
The port-list is a comma-separated list of physical port numbers. A port number is a string of the form <slot-number><port-letter>, where the slot number denotes the PCI slot and the letter denotes the port on the PCI card; examples are 1a, 1b, 2a, and 2b. It is illegal to provide a port number that does not currently exist on the system. Because the command accepts a list of virtual devices, it may fail before completing in its entirety; in that case, the changes already made to processed devices are undone. All other rules remain the same (the group must first be created by a vtl group add, no duplicate LUNs can be assigned to a group, and so on). The new Access Groups are saved in the registry. For example, the following two commands add groups for the group group22 for drive 3 and drive 4 (note the space in each drive name), with a LUN of 22 for drive 4.

# vtl group add vtl01 drive drive 3 group group22
# vtl group add vtl01 drive drive 4 group group22 lun 22
The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are assigned to the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The initiator is a Data Domain system client that you have mapped as an initiator on the Data Domain system. Use the vtl initiator show command to list known initiators. The changeable fields are the LUN assignment, primary ports, and secondary ports. If any field is omitted, the current value remains unchanged. A LUN assignment change applies only to a single drive; providing all or a list of drives is illegal. Some changes can result in the current Access Group being removed from the system, causing the loss of any current sessions, and a new Access Group being created. The registry is updated with the changed Access Groups.
The output of vtl group show all is more detailed:

# vtl group show all
Group: curly
Initiators: None
Devices: None

Group: group2
Initiators:
Initiator Alias   Initiator WWPN
---------------   -----------------------
moe               00:00:00:00:00:00:00:04
---------------   -----------------------
Devices:
Device Name    LUN   Primary Ports   Secondary Ports   In-use Ports
------------   ---   -------------   ---------------   ------------
VTL1 changer   0     all             all               all
VTL1 drive 1   1     all             all               all
------------   ---   -------------   ---------------   ------------

UPGRADE NOTE: If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it is assumed the system has been recently upgraded. In this case a group is created with the same name as each initiator, and that initiator is added to the newly created group. In release 4.4.x and later, the LUN masking feature from 4.3 and earlier is replaced by the Access Groups feature. If LUN masking was configured, the upgrade process from 4.3 to 4.4 converts the LUN masking configuration to an access group that is applied to all VTL Fibre Channel ports and that has the initiator's WWNN as a member. In the same way, the default LUN mask in 4.3 is no longer available in 4.4 and later. For devices in the default mask in 4.3, you must create an access group in 4.4 and move the devices into the group for initiators to see the targets.
vtl group use <group-name> vtl vtl-name {all | changer | drive drive-list} {primary | secondary}
vtl group use group group-name {primary | secondary}

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are assigned to the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The port list that the virtual device is visible on is the in-use port list, whether it is the primary or the secondary port list. The lists are saved persistently in the registry so that this configuration can be restored after a DDR reboot or a VTL crash/restart. A group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access.
4. Broadcast VTL changes so they are visible to clients. (Warning: this may cause active backup sessions to fail, so it is best done when there are no active backup sessions.)
# vtl reset hba
5. Create an empty group group2 as a container.
# vtl group create group2
6. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe.
# vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04
7. Put the initiator moe into the group group2.
# vtl group add group2 initiator moe
8. List the Data Domain system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client.
# vtl initiator show
Initiator                 Group    WWNN                      Port   Status
-----------------------   ------   -----------------------   ----   -------
moe                       group2   00:00:00:00:00:00:00:04   1a     Online
01:01:01:01:01:01:01:01   group2   00:00:00:00:00:00:00:05   1b     Online
                          n/a      21:00:00:e0:8c:11:33:04   1a     Online
                                   00:00:00:00:00:00:7a:bf   1b     Offline
-----------------------   ------   -----------------------   ----   -------

Initiator                 Vendor / Product ID / Revision
-----------------------   ------------------------------------
moe                       Emulex LP10000 FV1.91A5 DV8.0.16.27
01:01:01:01:01:01:01:01   Emulex LP10000 FV1.91A5 DV8.0.16.27
-----------------------   ------------------------------------
9. Create an Access Group. This Access Group puts VTL1 drive 1 in group2, and so allows any initiator in group2 to see VTL1 drive 1.
# vtl group add VTL1 drive 1 group group2
10. Use the vtl group show command to display VTLs and device numbers.
# vtl group show vtl ccm2a
Device          Group   LUN   Primary Ports   Secondary Ports   In-use Ports
-------------   -----   ---   -------------   ---------------   ------------
ccm2a drive 1   Moe     6     1a,1b           1a,1b             1a,1b
-------------   -----   ---   -------------   ---------------   ------------
After mapping a client as an initiator and before adding an Access Group for the client, the client cannot access any data on the Data Domain system.
After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.
The initiator-name is an alias that you create for Access Grouping. The name can have up to 256 characters. Data Domain suggests using a simple, meaningful name. The wwpn is the world-wide port name of the Fibre Channel port on the client system. Use the vtl initiator show command on the Data Domain system to list the Data Domain system's known clients and WWPNs. The wwpn must use colon ( : ) separators.
The following example uses the client name and port number as the alias to avoid confusion with multiple initiators that may have multiple ports: # vtl initiator set alias client22_2a wwpn 21:00:00:e0:8c:11:33:04
Display Initiators
Use the vtl initiator show command to list one or all named initiators and their WWPNs.

vtl initiator show [initiator initiator-name | port port_number]

For example:
# vtl initiator show
Initiator                 Group   Status    WWNN
-----------------------   -----   -------   -----------------------
21:00:00:e0:8b:9d:3a:a5           Offline   20:00:00:e0:8b:9d:3a:a5
-----------------------   -----   -------   -----------------------

Initiator                 Symbolic Port Name
-----------------------   ------------------
21:00:00:e0:8b:9d:3a:a5
-----------------------   ------------------
Pools
The Data Domain pool feature for VTL allows replication by groups of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see Replicating VTL Tape Cartridges and Pools on page 252.
A pool name can be a maximum of 32 characters. The restricted names all, vault, and summary cannot be created or deleted as pools. A pool can be replicated no matter where individual tapes are located: tapes can be in the vault, a library, or a drive. You cannot move a tape from one pool to another. Two tapes in different pools on one Data Domain system can have the same name, but a pool sent to a replication destination must have a pool name that is unique on the destination. Data Domain system pools are not accessible by backup software.
No VTL configuration or license is needed on a replication destination when replicating pools. Data Domain recommends creating tapes only with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
Add a Pool
Use the vtl pool add command to create a pool. The pool-name cannot be all, vault, or summary, and has a maximum of 32 characters.

vtl pool add pool-name
Delete a Pool
Use the vtl pool del command to delete a pool. The pool must be empty before the deletion; use the vtl tape del command to empty the pool.

vtl pool del pool-name
Display Pools
Use the vtl pool show command to display pools.

vtl pool show {all | pool-name}

For example, to display the tapes in pl22:

# vtl pool show pl22
... processing tapes...
Barcode    Pool   Location
--------   ----   --------
A00000L1   pl22   VTL1
A00004L1   pl22   VTL1
A00001L1   pl22   VTL1
A00003L1   pl22   VTL1
Port -- the physical port number.
Connection Type
Link Speed
Port ID
Enabled -- the port operational state.
Status -- shows whether the port is up and capable of handling traffic.
Note GiBps = Gibibytes per second, the base 2 equivalent of GBps, Gigabytes per second.
The output is similar to:

Port   Model     Firmware   WWNN                      WWPN
----   -------   --------   -----------------------   -----------------------
1a     QLE2462   3.03.19    21:00:00:e0:8b:1b:dc:10   20:00:00:e0:8b:1b:dc:10
1b     QLE2462   3.03.19    21:01:00:e0:8b:3b:dc:10   20:01:00:e0:8b:3b:dc:10
----   -------   --------   -----------------------   -----------------------

# vtl port show stats [port {port-list | all}] [interval secs] [count count]

This command shows a summary of the statistics of all the drives in all the VTLs on all the ports where the drives are visible. If the optional port list is absent, the command output is the total traffic stats of all the devices on all the VTL ports. If the port list is specified, the command output is the detailed stats information of the devices that are accessible on the specified VTL ports.
# vtl port show stats port all

This command shows detailed stats information for all the drives in all the VTLs on all the ports where the drives are visible.

# vtl port show detailed-stats

Shows the following information.
Control Commands -- non-read/write commands
Write Commands -- number of WRITE commands
Read Commands -- number of READ commands
In -- number of megabytes written
Out -- number of megabytes read
Link Failures -- count of link failures
LIP count -- number of LIPs
Sync Losses -- number of times sync loss was detected
Signal Losses -- number of times loss of signal was detected
Prim Seq Proto Errors -- count of errors in primitive sequence protocol
Invalid Tx Words -- number of invalid tx words
Invalid CRCs -- number of frames received with bad CRC
The output is similar to:

Port   Control    Write      Read       In (KiB)   Out (KiB)
       Commands   Commands   Commands
----   --------   --------   --------   --------   ---------
1a     32         10         5          1024       1024
1b     42         10         5          1024       1024
----   --------   --------   --------   --------   ---------

Link       LIP     Sync     Signal   Prim Seq Proto   Invalid    Invalid
Failures   Count   Losses   Losses   Errors           Tx Words   CRCs
--------   -----   ------   ------   --------------   --------   -------
0          2       0        0        0                0          0
0          0       0        0        0                0          0
--------   -----   ------   ------   --------------   --------   -------

Note KiB = KibiByte, the base 2 equivalent of KB, KiloByte.
NDMP
The NDMP (Network Data Management Protocol) feature allows direct backup and restore operations between an NDMP Version 2 data server (such as a Network Appliance filer with the ndmpd daemon turned on), and a Data Domain System. NDMP software on the Data Domain System acts, through the command line interface, to provide Data Management Application (DMA) and NDMP server functionality for the filer. The ndmp command on the Data Domain System manages NDMP operations.
Add a Filer
To add to the list of filers available to the Data Domain System, use the ndmp add operation. The user name is a user on the filer and is used by the Data Domain System when contacting the filer. The password is for the user name on the filer. With no password, the command prompts for the password. Note that an add operation for a filer name that already exists replaces the complete entry for that filer name. A password can include any printable character. Administrative users only.

ndmp add filer_name user username [password password]

For example, to add a filer named toaster5 using a user name of back2 with a password of pw1212:
# ndmp add toaster5 user back2 password pw1212
Remove a Filer
To remove a filer from the list of servers available to the Data Domain System, use the ndmp delete operation. Administrative users only.

ndmp delete filer_name

For example, to delete a filer named toaster5:
# ndmp delete toaster5
Restore to a Filer
To restore data from a Data Domain System to a filer, use one of the ndmp put operations. Note that a filer may report a successful restore even when one or more files failed restoration; for details, always review the LOG messages sent by the filer. Administrative users only.

ndmp put src_file filer_name:dst_path
ndmp put partial src_file subdir filer_name:dst_path

partial -- Restore a particular directory or file from within a backup file on the Data Domain System. Give the path to the file or subdirectory.
src_file -- The file on the Data Domain System from which to do a restore to a filer. The src_file argument must always begin with /backup.
filer_name -- The NDMP server to which to send the restored data.
dst_path -- The destination for the restored data on the NDMP server. Some filers require that subdir be relative to the path used during the ndmp get that created the backup. For example, if the get operation was for everything under the directory /a/b/c in a tree of /a/b/c/d/e, then the put partial subdirectory argument should start with /d. On some filers, dst_path must end with subdir.

The following command restores data from the Data Domain System file /backup/toaster5/week0 to /vol/vol0 on the filer toaster5.
# ndmp put /backup/toaster5/week0 toaster5:/vol/vol0

The following command restores the file .../jsmith/foo from the week0 backup.
# ndmp put partial jsmith/foo /backup/toaster5/week0 toaster5:/vol/vol0/jsmith/foo
# ndmp status

PID   MiB Copied
---   ----------
715   4219
Enterprise Manager
Graphical User Interface
Through the browser-based Data Domain Enterprise Manager graphical user interface, you can do the initial system configuration, make a limited set of configuration changes, and display system status, statistics, and settings. The supported browsers for web-based access are Netscape 7 and above, Microsoft Internet Explorer 6.0 and above, Firefox 0.9.1 and above, Mozilla 1.6 and above, and Safari 1.2.4. The console first asks for a login and then displays the Data Domain System Summary page (see Figure 32 on page 400). Some of the individual displays on various pages have a Help link to the right of the display title; click the link to bring up detailed online help about the display. To bring up the interface:

1. Open a web browser.
2. Enter a path such as http://rstr01/ for Data Domain System rstr01 on a local network.
3. Enter a login name and password.
The bar at the top displays the Data Domain System host name. The grey bar immediately below the host name displays the file system status, the number of current alerts, and the system uptime. The Current Status and Space Graph tabs toggle the display. Figure 32 shows Current Status. See Display the Space Graph on page 402 for the Space Graph display and explanation. The left panel lists the pages available in the interface. Click on a link to display a page. Below the list, find the current login, a logout button, and a link to Data Domain Support. The main panel shows current alerts and the space used by Data Domain System file system components. A line at the bottom of the page displays the Data Domain System software release and the current date.
The page links in the left panel display the output from Data Domain System commands that are detailed throughout this manual.

Configuration Wizard -- gives the same system configuration choices as the config setup command. See Login and Configuration on page 14.
System Stats -- opens a new window and displays continuously updated graphs showing system usage of various resources. See Display Detailed system Statistics on page 73.
Group Manager -- opens a window that allows basic system monitoring for multiple Data Domain Systems. See Monitor Multiple Data Domain Systems on page 405.
Autosupport -- shows current alerts, the email lists for alerts and autosupport messages, and a history of alerts. See Display Current Alerts on page 131, Display the Email List on page 132, Display the Autosupport Email List on page 137, and Display the Alerts History on page 132.
Admin Access -- lists every access service available on a Data Domain System, whether or not the service is enabled, and lists every hostname allowed access through each service that uses a list. See Display Hosts and Status on page 113.
CIFS -- displays CIFS configuration choices and the CIFS client list.
Disks -- shows statistics for disk reliability and performance and lists disk hardware information. See Display Disk Reliability Details on page 185, Display Disk Performance Details on page 183, and Display Disk Type and Capacity Information on page 178.
File System -- displays the amount of space used by Data Domain System file system components. See Display File system Space Utilization on page 215.
Licenses -- shows the current licenses active on the Data Domain System. See Display Licenses on page 125.
Log Files -- displays information about each system log file.
Network -- displays settings for the Data Domain System Ethernet ports. See Display Interface Settings on page 101 and Display Ethernet Hardware Information on page 102.
NFS -- lists client machines that can access the Data Domain System. See Display Allowed Clients on page 305.
SNMP -- displays the status of the local SNMP client and SNMP configuration information.
Support -- allows you to create a support bundle of log files and lists existing bundles. See Collect and Send Log Files on page 139.
System -- shows system hardware information and status.
Replication -- lists configured replication pairs and replication statistics.
Users -- lists the users currently logged in and all users that are allowed access to the system. See Display Current Users on page 117 and Display All Users on page 118.
Data Collection -- The total amount of disk storage in use on the Data Domain System. Look at the left vertical axis of the graph.
Data Collection Limit -- The total amount of disk storage available for data on the Data Domain System. Look at the left vertical axis of the graph.
Pre-compression -- The total amount of data sent to the Data Domain System by backup servers. Pre-compressed data on a Data Domain System is what a backup server sees as the total uncompressed data held by the Data Domain System as a storage unit. Look at the left vertical axis of the graph.
Compression factor -- The amount of compression the Data Domain System has done with all of the data received. Look at the right vertical axis of the graph for the compression ratio.
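These quantities are related by a single division: the compression factor is the Pre-compression total divided by the Data Collection (post-compression) total. A quick sketch with standard shell tools, using made-up totals:

```shell
# Hypothetical totals: 10 TiB sent by backup servers, 0.5 TiB stored on disk
awk 'BEGIN {
  precomp = 10.0     # Pre-compression, TiB (assumed value)
  stored  = 0.5      # Data Collection, TiB (assumed value)
  printf "Compression factor: %.0fx\n", precomp / stored
}'
```

With these values the compression factor works out to 20x; awk is used only for the floating-point arithmetic.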
Two activity boxes below the graph allow you to change the data displayed on the graph. The vertical axis and horizontal axis change as you change the data set.
The activity box on the left below the graph allows you to choose which data shows on the graph. Click the check boxes for Data Collection, Data Collection Limit, Pre-compression, or Compression factor to remove or add data. The activity box on the right below the graph allows you to change the number of days of data shown on the graph.
Display When first logging in to the Data Domain Enterprise Manager or when you click on the Home link in the left panel of the Data Domain Enterprise Manager, the Space Graph tab is on the far right of the right panel. Click the words Space Graph to display the graph. Figure 33 shows an example of the display with all four types of data included. In the example, the Data Collection and Data Collection Limit values show as constants because of the relatively large scale needed for Pre-compression on the left axis.
Removing one or more types of data can give useful information as the axis scales change. For example, Figure 34 shows the graph for the same Data Domain System and the same data collection as in Figure 33 on page 403. The difference is that the Pre-compression check box in the left-side activity box at the bottom of the display was clicked to remove pre-compression data from the graph. (The scale of Compression Factor at right remains unchanged.)
The left axis scale in Figure 34 on page 404 is such that the Data Collection and Data Collection Limit give useful information. Also, comparing each of the three lines with the other two lines gives information. Data Collection (the amount of disk space used) at one point goes nearly to the Data Collection Limit, which means that the system was running out of disk space. A file system cleaning operation on about May 30 (see the scale along the bottom of the graph) cleared enough disk space for operations to continue.
The Data Collection line rises with new data written to the Data Domain System and falls steeply with every file system clean operation. Note that the Compression factor line falls with new data and rises with clean operations. The graph also displays a vertical grey bar for each time the system runs a file system cleaning process. The minimum width of the bar on the X axis is six hours. If the cleaning process runs for more than six hours, the width increases to show the total time used by the process.
The Group Manager display gives information about multiple Data Domain Systems. Figure 36 on page 406 is an example. See Figure 37 on page 407 for adding systems to the display.
Manage Hosts -- Click to bring up a screen that allows adding Data Domain Systems to or deleting Data Domain Systems from the display. See Figure 37 on page 407 for details. The Total Pre-compression and Total Data amounts are the combined amounts of data for all displayed systems (five Data Domain Systems in the example).
Update Now -- Click to update the main table of information and the status for each Data Domain System displayed.
Status -- Displays OK in green or the number of alerts in red for each Data Domain System.
Restorer -- Displays the name of each Data Domain System monitored. Click on a name to see more information about a Data Domain System. See Figure 38 on page 408 for an example.
Pre-compression GiB -- The amount of data sent to the Data Domain System by backup software.
Data GiB -- The amount of disk space used on the Data Domain System.
% Used -- A bar graph of the amount of disk space used for compressed data.
Compression -- The amount of compression achieved for all data on the Data Domain System.
Figure 37 shows the Manage Hosts window for adding and deleting systems from the main display. Enter either hostnames or IP addresses for the Data Domain Systems that you want to monitor.
Click the Save button to save changes. Click the Cancel button to return to the main display with no changes.
Figure 38 shows the display after clicking on a name in the Data Domain System column. Connect to GUI brings up the login screen for the monitored system if the GUI is enabled on the monitored system. Whichever protocol the current GUI (the one hosting the display) is using, HTTP or HTTPS, is also used to connect to the GUI on the monitored system.
For general information on VTL or VTL CLI, see the chapter Virtual Tape Library (VTL) - CLI.
From the main DDR GUI page, click on the VTL link at lower left in the sidebar to bring up the VTL GUI. The VTL GUI main page is shown in Figure 39.
The VTL GUI gives the user the advantage of approaching tape storage from four different points of view:
These are the Stack Menu choices at left in the Side Panel, and they are visible at all times. (The Stack Menu is so called because it is like a stack of individual menus, any one of which can be brought to the top and made visible by clicking on it.)

The panel at right is called the Main Panel or Information Panel. This panel displays information about whichever menu item is selected in the tree menu in the Side Panel at left. The Action Buttons are actions that can be performed on the objects selected either in the Main Panel or the Side Panel.

The Refresh button in the top bar (the icon is two arrows) can be used if changes were made (for example, through the CLI) that are not showing up in the GUI. The button is always visible. The Help button in the top bar (the icon is a question mark) can be clicked from any screen to give context-sensitive online help about that screen. The Logout button in the top bar (the icon is a padlock) can be clicked to log out from the Data Domain system.

Note For a step-by-step example of how to create and use a VTL Library, see the section near the middle of this chapter entitled Procedure: Use a VTL Library / Use an Access Group.

Note Context-sensitive online help can be reached by clicking the question mark (?) icons. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide.
Enable VTLs
To start the VTL process and enable all libraries and drives, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Virtual Tape Library Service pulldown...choose Enable. Enabling VTL Service may take a few minutes. When service is enabled, the pulldown says Enabled. (Clicking it allows you to choose Disable.)
Disable VTLs
To disable all VTL libraries and shut down the VTL process, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Virtual Tape Library Service pulldown...choose Disable. Disabling VTL Service may take a few minutes. When service is disabled, the pulldown says Disabled. (Clicking it allows you to choose Enable.) Administrative users only.
Create a VTL
To create a virtual tape library, do as follows:
Menu...Virtual Tape Libraries...VTL Service...Libraries...Create Library button. Enter the following:
Library Name -- A name of your choice, between 1 and 32 characters long. (This field is required.)
Number of Drives -- Valid values are between 0 and 64. (This field is optional.)
Number of Slots -- The number of slots in the library. The number of slots must be equal to or greater than the number of drives, and must be at least 1. The maximum number of slots for all VTLs on a Data Domain System is 10000. The default is 20 slots. (This field is optional.)
Number of CAPs -- The number of cartridge access ports. The default is 0 (zero) and the maximum is 10 (ten). (This field is optional.)
Changer Model Name -- Choose from the drop-down menu. This is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update. (This field is optional.)
The VTL process must be enabled (see Enable VTLs just above) to allow the creation of a library. Administrative users only.
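The Create Library constraints above (name length, drive and slot counts, the system-wide slot limit) can be sketched as a small validator. The limits come from the text; the function and parameter names are invented for illustration and are not part of the product:

```python
# Hypothetical validator for the Create Library fields described above.
# Limits from the manual: name 1-32 chars, 0-64 drives, slots >= drives
# and at least 1, and at most 10000 slots across all VTLs on the system.
def validate_library(name, drives, slots, existing_slots_total=0):
    errors = []
    if not 1 <= len(name) <= 32:
        errors.append("Library Name must be between 1 and 32 characters")
    if not 0 <= drives <= 64:
        errors.append("Number of Drives must be between 0 and 64")
    if slots < max(drives, 1):
        errors.append("Number of Slots must be >= drives and at least 1")
    if existing_slots_total + slots > 10000:
        errors.append("At most 10000 slots for all VTLs on the system")
    return errors

# A 4-drive, 20-slot library passes; 4 drives with only 2 slots does not.
assert validate_library("VTL1", drives=4, slots=20) == []
assert validate_library("VTL1", drives=4, slots=2) != []
```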
Delete a VTL
To remove a previously created virtual tape library, navigate as follows:
Menu...Virtual Tape Libraries...VTL Service...Libraries...Delete Library button. On the popup box, choose which library or libraries to delete by checking the boxes.
Select Library (This field is required.)
Click OK. A popup will ask you to confirm. Click OK on the popup.
VTL Drives
The VTL Drives page has columns of information on Drive, Vendor, Product, Revision, Serial #, and Status.
Drive -- This column gives a list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives.
Vendor -- The manufacturer or vendor of the drive, for example IBM.
Product -- The product name of the drive, for example ULTRIUM-TD1.
Revision -- The revision number of the drive product, for example 4561.
Serial # -- The serial number of the drive product, for example 6666660001.
Status -- If there is a tape loaded, this column shows the barcode of the loaded tape. If there is no tape loaded in this drive, the Status is shown as empty.
When you click on an individual drive, additional Drive Statistics are provided on each Port of that drive, namely: ops/s, Read KiB/s, Write KiB/s, Soft Errors, and Hard Errors.
Port -- This column gives a list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a.
ops/s -- The number of operations per second currently or recently being achieved by the port.
Read KiB/s -- The number of KibiBytes per second read by the port.
Write KiB/s -- The number of KibiBytes per second written by the port.
Soft Errors -- The number of errors that the system recovered from. Nothing needs to be done about these; no preventative measures or maintenance actions are necessary. If there are thousands of soft errors in a short period of time, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from.
Hard Errors -- The number of errors that the system was unable to recover from. Hard errors should not normally occur. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, go to the Data Domain Enterprise Manager GUI for the system and click the Log Files link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.
In addition, a count (Port Count) of the total number of ports on that drive is given.
Remove Drives
Administrative users only. To remove drives, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Drives...Delete Drive button...check which drives to delete. You can use the links to select All or None. Click OK. Click OK again to confirm.
Select Drives -- Check the boxes for the drives to delete. (This field is required.)
Select - All - None -- "All" checks the boxes for all drives; "None" unchecks all the boxes.
Use a Changer
Each VTL Library has exactly 1 media changer, although it can have several tape drives. The word device refers to changers and tape drives. A Changer has a Model Name (for example, L180). Each changer can have a maximum of 1 LUN (Logical Unit Number). Changers can be navigated to in the VTL GUI as follows: Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Changer.
Information at different levels is found by clicking different levels of the menu hierarchy: VTL Service, Libraries, Changer, Drives, Tapes, Vault, Pools, etc.
Barcode
Barcode influences the number of tapes and tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the Barcode), as follows.
barcode -- The 8-character barcode must start with six numeric or upper-case alphabetic characters (i.e., from the set {0-9, A-Z}) and end in a two-character tag of L1, LA, LB, or LC for the supported LTO-1 tape type, where:
L1 represents a tape of 100 GiB capacity,
LA represents a tape of 50 GiB capacity,
LB represents a tape of 30 GiB capacity, and
LC represents a tape of 10 GiB capacity.
(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If a capacity is included, it is used and overrides the two-character tag.) The numeric characters immediately to the left of L set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100.

A few representative sample barcodes:
000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name and cannot increment.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name and cannot increment.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name and cannot increment.

Note GiB = Gibibyte, the base-2 equivalent of GB, Gigabyte.

To make use of automatic incrementing of the barcode when creating more than one tape, the system does the following: start at the 6th character position, just before L. If that character is a digit, increment it. If an overflow occurs (9 to 0), move one position to the left; if that character is a digit, increment it in turn. If it is alphabetic, stop.

Data Domain recommends only creating tapes with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
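The capacity tags and the auto-increment rule above can be sketched in a few lines. This is an illustrative model of the rules as stated, not Data Domain code:

```python
# Illustrative sketch of the barcode rules described above.
# Default capacities implied by the two-character tag (LTO-1 tape type).
CAPACITY_GIB = {"L1": 100, "LA": 50, "LB": 30, "LC": 10}

def tape_capacity_gib(barcode):
    """Default capacity for an 8-character barcode such as 000000L1."""
    return CAPACITY_GIB[barcode[6:]]

def next_barcode(barcode):
    """Auto-increment: bump the digit just before the tag, carrying left
    on 9 -> 0 overflow; return None when an alphabetic character (or the
    left edge) stops the increment, i.e. the count is exhausted."""
    digits = list(barcode[:6])
    i = 5
    while i >= 0 and digits[i].isdigit():
        if digits[i] != "9":
            digits[i] = str(int(digits[i]) + 1)
            return "".join(digits) + barcode[6:]
        digits[i] = "0"   # overflow: 9 becomes 0, carry one position left
        i -= 1
    return None

# AAA350L1 increments to AAA351L1; AAAAAALC cannot increment at all.
```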
Import Tapes
Move existing tapes from the vault into a slot, drive, or cartridge access port. If a tape is in a pool, you must use the pool option to identify the tape. Administrative users only. Note The number of tapes you can import is limited--see Rules for number of tapes imported, immediately below this Note.
To sum up, the number of tapes you can import equals the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin:

# of empty slots
- # of tapes that came from slots (a slot is reserved for each)
- # of tapes of unknown origin (a slot is reserved for each)
= # of tapes you can import

The pool option is required if the tapes are in a pool. Use the vtl tape show <vtl-name> command to display the total number of slots for a VTL, and the same command to display the slots that are currently used. Use backup software commands from the backup server to move VTL tapes to and from drives.

Note: element=slot and address=1 are defaults; therefore:
vtl import VTL1 barcode TST010L1 count 5
is equivalent to:
vtl import VTL1 barcode TST010L1 count 5 element slot address 1

To move existing tapes from the vault to a slot, drive, or cartridge access port, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Tapes...Import Tape button. At this point, a list of available tapes appears. (If no tapes appear, you may need to Create Tapes, or search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.)
Check the checkboxes for the tapes to be imported. Click the OK Button. Click the OK Button again to confirm.
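The slot-accounting rule above amounts to a one-line helper, shown here with invented names for illustration only:

```python
# Sketch of the import limit described above (hypothetical helper, not
# product code): slots are reserved for tapes that came from slots and
# for tapes of unknown origin, so only the remainder is importable.
def max_importable(empty_slots, tapes_from_slots, tapes_unknown_origin):
    return max(0, empty_slots - tapes_from_slots - tapes_unknown_origin)

# 20 empty slots, 3 tapes that came from slots, 2 of unknown origin:
# at most 15 tapes can be imported.
assert max_importable(20, 3, 2) == 15
```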
The fields are:
Pool -- Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode -- Barcode is for searching. (This field is optional.)
Count -- The number of tapes returned by the Search. (This field is optional.)
Select tapes -- Using checkboxes. (This field is required.)
Select - All - None -- "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Device -- Slot, Drive, or CAP. (This field is required.)
Tapes Per Page -- The number of results on the search page.
Start Address (This field is optional.)
Export Tapes
To export tapes, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Tapes...Export Tape button. The dialog box for Export Tapes is similar to that for Import Tapes, but without the Select Destination fields at the bottom of the screen. At this point, a list of available tapes appears. (If no tapes appear, you may need to search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be exported. Click OK. Click OK again to confirm.
The fields are:
Pool -- Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode -- For searching. (This field is optional.)
Data Domain Operating System User Guide
Count -- The number of tapes returned by the Search. (This field is optional.)
Select tapes -- Using checkboxes. (This field is required.)
Select - All - None -- "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Device -- Slot, Drive, or CAP. (This field is required.)
Tapes Per Page -- The number of results on the search page.
Start Address (This field is optional.)
Remove Tapes
To remove one or more tapes from the vault and delete all of the data in the tapes, navigate as follows: Menu...Virtual Tape Libraries...VTL Service...Vault...Delete Tapes button...check the boxes of the tapes you want to delete...click OK...click OK again to confirm. (The screen for Delete Tapes is effectively the same as that for Export Tapes.)
Count is used only for the number of tapes returned by a search. In order to delete the tapes, their boxes must be checked. The tapes must be in the vault, not in a VTL. If a tape is in a pool, you may have to use the pool to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.
Note In the case of replication, on a destination Data Domain System, manually removing a tape is not permitted.
The fields are:
Pool -- Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode -- For searching. (This field is optional.)
Count -- The number of tapes returned by the Search. (This field is optional.)
Select tapes -- Using checkboxes. (This field is required.)
Select - All - None -- "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Tapes Per Page -- The number of results on the search page.
Move Tape
Only one tape can be moved at a time, from one slot, drive, or CAP to another. (The screen for Move Tape is effectively the same as that for Import Tapes.) To move a tape: Menu...Virtual Tape Libraries...VTL Service...Libraries...choose a library...click Move Tape button...select which tape to move using the check boxes...choose a destination Drive, Slot, or CAP...enter a destination Start Address...click OK.
Start Address is the number of the Drive, Slot, or CAP. Valid values are numbers. (This field is required.)
The fields are:
Pool -- Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode -- For searching. (This field is optional.)
Count -- The number of tapes returned by the Search. (This field is optional.)
Select one tape -- Using a checkbox. (This field is required.)
Device -- Slot, Drive, or CAP. (This field is required.)
Tapes Per Page -- The number of results on the search page.
Start Address (This field is optional.)
Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window. This is reached from anywhere the Search Tapes button appears, for example: Virtual Tape Libraries...VTL Service...Libraries...click Search Tapes button. The Search Tapes dialog box appears, allowing the user to search for tapes by Location, Pool, and/or Barcode. The fields are:
Location -- Choose from the drop-down menu. The pulldown allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool -- Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode -- For searching. (This field is optional.)
Count -- The number of tapes returned by the Search. (This field is optional.)
Tapes Per Page -- The number of results on the search page. (This field is optional.)
The asterisk wild-card character can be used in Barcode at the beginning or end of a string to search for a range of tapes.
Set/Enable Auto-Eject
Enable Auto-Eject to cause any tape that is put into a cartridge access port to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the cartridge access port (CAP). VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...change auto-eject to enabled...click Set Options.
Note With auto-eject enabled, a tape moved from any element to a CAP will be ejected to the vault unless an ALLOW_MEDIUM_REMOVAL was issued to the library to prevent the removal of the medium from the CAP to the outside world.
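The auto-eject behavior described above amounts to a simple decision rule, sketched here with invented names (illustrative only, not product code):

```python
# Sketch of the auto-eject rule: a tape arriving in a CAP moves to the
# vault unless it came from the vault, auto-eject is off, or medium
# removal has been blocked via the library's medium-removal command.
def cap_destination(came_from_vault, auto_eject, removal_prevented=False):
    if auto_eject and not came_from_vault and not removal_prevented:
        return "vault"
    return "cap"

# A tape moved from a slot to a CAP with auto-eject on goes to the vault;
# a tape that came from the vault stays in the CAP.
assert cap_destination(came_from_vault=False, auto_eject=True) == "vault"
assert cap_destination(came_from_vault=True, auto_eject=True) == "cap"
```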
Reset/Disable Auto-Eject
Disable Auto-Eject to allow a tape in a cartridge access port to remain in place, as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...change auto-eject to disabled...click Set Options. Alternatively, you can reset Auto-Eject to its default value of disabled, as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Reset Option...check the auto-eject box...click Reset Options.
Tape Distribution: The Device column labels the row information as referring to Drives, Slots, and CAPs.
The # of Loaded column shows the number of Drives, Slots, and CAPs that are loaded. The # of Empty column shows the number of Drives, Slots, and CAPs that are empty. The Total column shows the number of Drives, Slots, and CAPs that there are in total.
1. Create a VTL on the system. See Create a VTL on page 411.
2. Enable the VTL.
3. Add a group (see below).
4. Add an initiator (see below).
5. Map a client as an Access Grouping initiator (see below).
6. Create an Access Group. See Create an Access Group and Procedure: Use an Access Group below.
Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations. This set of actions deals with the group container. Populating the container with initiators and devices is done with VTL Initiator and VTL group. When setting up Access Groups on a Data Domain system:
Usually each Data Domain System device (media changer or drive) can have a maximum of 1 Access Group; however, with multi-initiator, devices may appear in more than one group when using features such as Shared Storage Option (SSO).
TapeServer, all and summary are reserved and cannot be used as group names. (TapeServer is reserved for the functionality in a future release and is currently unused.)
To remove/delete a group, you must first empty it. See Delete From an Access Group below.
Allows renaming a group without going through the laborious process of first deleting and then re-adding all initiators and devices. The New Group Name must not already exist and must conform to the name restrictions under VTL Group Add. A rename will not interrupt any active sessions.
VTL Stack Menu...Access Groups...Groups...click on a group...click Add Initiators or Add LUNs...check the boxes for the things you want to add...click OK...click OK again to confirm.
Add Initiators:
Group -- Choose from the drop-down menu. (This field is optional.)
Select Initiator (This field is required.)
Add LUNs:
Group -- Choose from the drop-down menu. (This field is optional.)
Library Name -- Choose from the drop-down menu. (This field is optional.)
Starting LUN -- A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)
Devices (This field is required.)
Primary Ports -- The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports -- The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Usually primary and secondary ports are different. For example, typical usage might be to make 5a and 6a primary ports, and 5b and 6b secondary ports.
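The LUN rules above (maximum LUN 255, each LUN used only once within a group, contiguous numbering from the Starting LUN) can be sketched as an illustrative helper. The names here are invented, not product code:

```python
# Hypothetical check of the Add LUNs rules described above: devices added
# together occupy contiguous LUNs beginning at the starting LUN, no LUN
# exceeds 255, and no LUN repeats within the same group.
def assign_luns(existing_luns, starting_lun, device_count):
    new_luns = list(range(starting_lun, starting_lun + device_count))
    if new_luns and new_luns[-1] > 255:
        raise ValueError("LUN numbers cannot exceed 255")
    if set(new_luns) & set(existing_luns):
        raise ValueError("a LUN can be used only once within a group")
    return new_luns

# Five devices starting at LUN 0 occupy LUNs 0 through 4.
assert assign_luns([], 0, 5) == [0, 1, 2, 3, 4]
```

The same LUN may still appear in another group; the check applies per group only.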
Delete LUNs:
Group -- Choose from the drop-down menu. (This field is optional.)
Library Name -- Choose from the drop-down menu. (This field is optional.)
Device (This field is required.)
Select - All - None -- "All" checks the boxes for all devices; "None" unchecks all the boxes.
In-Use Ports. This shows which ports are currently in use and which are secondary for the Access Group. Primary Ports. The primary ports on which the devices are visible to initiators within the group. Secondary Ports. The secondary ports which the devices within the group may be visible on after using the Set In-Use Ports button. Secondary ports provide a quick means for administrators to apply Access Group access to secondary ports in the event of primary port(s) failure; this may be done without permanently modifying the Access Group.
A LUN count of the total number of LUNs is also shown. Initiators - for each initiator, the following is shown: The initiator-name is an alias that you create for Access Grouping. The WWPN is the World-Wide Port Name of the Fibre Channel port in the media server(s).
UPGRADE NOTE:
If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it is assumed the system has been recently upgraded. In this case a group will be created with the same name as each initiator, and that initiator added to the newly created group. After upgrading to 4.4.x from 4.3.x or earlier, the LUN masking configuration will no longer work. As a result, the initiator will not see any LUNs from the Restorer. In release 4.4.x or later, the LUN MASKING feature is replaced by the ACCESS GROUPS feature. If LUN masking was configured, the upgrade process will create an access group that has the initiator's WWNN as a member, without any LUNs. Thus, the solution is to add all LUNs to this access group so that the initiator and LUNs can see each other. This can be done via either the GUI with any browser or the command line. [In the same way, the Default LUN mask in 4.3.x is no longer available in 4.4.x. If devices are in the Default mask, once an upgrade to 4.4.x happens the Default LUN mask disappears, and a new access group must be created in order for the initiators to see the targets.]
Notice that the port listed in the In-Use Ports column has changed to the Secondary Port (or Primary, if that was the one selected). (The error "At least one value must be selected" refers to devices: choose a device by checking its box.)
Group -- Choose from the drop-down menu. (This field is optional.)
Library Name -- Choose from the drop-down menu. (This field is optional.)
Devices (This field is required.)
Select - All - None -- "All" checks the boxes for all devices; "None" unchecks all the boxes.
Primary Ports or Secondary Ports (This field is required.)
After the above choices are made, click the OK button.
3. Create a new virtual drive for the tape library VTL1. Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to the left of it...Drives...Create Drive button. Enter the following information:
After the above choices are made, click the OK button.
4. Create an empty group group2 as a container. VTL Stack Menu...Access Groups...Groups...Create Group. Enter the following: Group Name - group2.
Click the OK button.
5. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe. VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click the Set Initiator Alias button at top right. Enter the following: WWPN - 00:00:00:00:00:00:00:04. Alias - moe.
Click the OK button. 6. Put the initiator moe into the group group2. VTL Stack Menu...Access Groups...Groups...click on a group...click Add Initiators. Enter the following: Group - choose group2 from the pulldown menu. Alias - Check the box for moe. Click the OK button.
7. View the initiator moe, in order to view the system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client. VTL Stack Menu...Physical Resources...Physical Resources...Initiators...moe.
8. Add LUNs to the Access Group group2. Put VTL1 drive 1 through drive 4 and the changer in group2. This allows any initiator in group2 to see VTL1 drive 1 through drive 4, and the changer. VTL Stack Menu...Access Groups...Groups...click on group group2...click Add LUNs. Enter the following: Group - choose group2 from the pulldown menu. Library Name - choose vtl1 from the pulldown menu.
Select Devices - Check the boxes for drive 1, drive 2, drive 3, drive 4, and the changer. Click the OK button. Click OK again to confirm.
9. View the changes to group2. VTL Stack Menu...Access Groups...Groups...click on group group2.
Physical Resources
Initiators
Note The terms initiator name and initiator alias mean exactly the same thing and are used interchangeably. An initiator is any Data Domain system client's HBA world-wide port name (WWPN). The name of the initiator is an alias that maps to a client's world-wide port name (WWPN). For convenience, optionally add an initiator alias before adding a VTL Access Group that ties together the VTL devices and client.
Until you add an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.
Alias - an alias that you create for Access Grouping. The name can have up to 32 characters. Data Domain suggests using a simple, meaningful name.
This removes the alias. The initiator can now be referred to only by its WWPN. That is, this resets (deletes) the alias initiator_name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. Note All Access Groups for the initiator must be deleted before deleting the initiator.
Display Initiators
To list one or all named initiators and their WWPNs, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources. Information is shown on Initiators and Ports.
Initiators:
initiator-name -- the alias that you create for Access Grouping.
wwpn -- the world-wide port name of the Fibre Channel port on the client system.
Ports:
Port -- the physical port number.
Port ID.
Online -- each port is shown as Online or Offline.
Enabled -- the port operational state, that is, whether Enabled or Disabled. Status -- whether Online or Offline, that is, whether or not the port is up and capable of handling traffic.
HBA Ports
VTL HBA Ports allow the user to enable or disable all the Fibre Channel ports in a port list, or to show various VTL information in a per-port format.
You may see no ports that can be enabled, which may mean that all your ports are enabled already. To check a list of the ports that are Enabled, click Disable Ports. You can then Cancel out of Disable Ports.
VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...Disable Ports button. Check the boxes for the ports you want to disable. Click OK. Click OK again. Ports to Disable. (This field is required.)
You may see no ports that can be disabled, which may mean that all your ports are disabled already. To check a list of the ports that are Disabled, click Enable Ports. You can then Cancel out of Enable Ports.
Under Ports, the following information is shown:
Port -- the physical port number
Connection Type
Link Speed
Port ID
Enabled -- the port operational state
Status -- shows whether the port is up and capable of handling traffic
Under Port, the following information is shown:
Port -- the physical port number
Connection Type
Link Speed
State -- Enabled or Disabled -- the port operational state
Status -- Online or Offline -- shows whether the port is up and capable of handling traffic
Port number
# of Control Commands -- non read/write commands
# of Read Commands -- number of READ commands
# of Write Commands -- number of WRITE commands
Data Domain Operating System User Guide
In (MiB) -- number of MebiBytes written
Out (MiB) -- number of MebiBytes read
# of Error PrimSeqProtocol -- count of errors in Primitive Sequence Protocol
# of Link Fail -- count of link failures
# of Invalid CRC -- number of frames received with bad CRC
# of Invalid TxWord -- number of invalid tx words
# of LIP -- number of LIPs
# of Loss Signal -- number of times loss of signal was detected
# of Loss Sync -- number of times sync loss was detected
Note MiB = MebiByte, the base 2 equivalent of MB, MegaByte. Note KiB = KibiByte, the base 2 equivalent of KB, KiloByte.
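Since the counters above report base 2 units, the arithmetic matters when comparing them with base 10 tool output. A quick illustrative sketch (the helper name is ours, for illustration only):

```python
# Binary (base 2) units, as used by the In/Out counters, vs. decimal units.
KIB, MIB = 2**10, 2**20   # KibiByte, MebiByte, in bytes
KB, MB = 10**3, 10**6     # KiloByte, MegaByte, in bytes

def bytes_to_mib(n):
    """Express a raw byte count in MiB, as the In (MiB)/Out (MiB) columns do."""
    return n / MIB

# 100 decimal megabytes is only about 95.37 MiB.
print(bytes_to_mib(100 * MB))
```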
Pools
The Data Domain pools feature for VTL allows replication by pools of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see the chapter on replication and its section Replicating VTL Tape Cartridges and Pools on page 252.
A pool name can be a maximum of 32 characters. A pool name with the restricted names all, vault, or summary cannot be created or deleted. A pool can be replicated no matter where individual tapes are located. Tapes can be in the vault, a library, or a drive. You cannot move a tape from one pool to another. Two tapes in different pools on one Data Domain system can have the same name. A pool sent to a replication destination must have a pool name that is unique on the destination. Data Domain system pools are not accessible by backup software. No VTL configuration or license is needed on a replication destination when replicating pools. Data Domain recommends only creating tapes with unique bar codes. Having duplicate bar codes in the same tape pool creates an error. Although no error is created for duplicate bar codes in different pools, duplicate bar codes may cause unpredictable behavior in backup applications and can lead to operator confusion.
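As a sketch only, the naming rules above could be checked like this (the function and message strings are ours, not part of the product):

```python
# Rules from the text: max 32 characters; the restricted names "all",
# "vault", and "summary" cannot be created or deleted.
RESERVED_POOL_NAMES = {"all", "vault", "summary"}
MAX_POOL_NAME_LEN = 32

def validate_pool_name(name):
    """Return a list of rule violations (empty means the name is acceptable)."""
    problems = []
    if len(name) > MAX_POOL_NAME_LEN:
        problems.append("pool name exceeds 32 characters")
    if name in RESERVED_POOL_NAMES:
        problems.append("pool name is reserved")
    return problems

print(validate_pool_name("weekly_backups"))  # []
print(validate_pool_name("vault"))           # ['pool name is reserved']
```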
Add a Pool
To create a pool, navigate as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...Create Pool...enter a Pool Name...click OK. Pool-name cannot be all, vault, or summary. Max of 32 characters. (This field is required.)
You can also create a pool under Pools, as follows: VTL stack menu...Pools...Pools...Create Pool...enter a Pool Name...click OK.
Delete a Pool
To delete a pool, do the following: The pool must be empty before the deletion. To empty the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...click on the pool you want to empty...click Delete Tapes. Click Select: All or Select all items found. Click OK. Click OK again. Now, to delete the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...click on the pool you want to delete...click Delete Pool. Click OK. Click OK again. Select a Pool. (This field is required.)
Display Pools
To display pools: VTL stack menu...Pools. Or, as an alternative: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault. The Location column gives the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool. The # of Tapes column gives the number of tapes in each pool. The Total Size column gives the total configured data capacity of the tapes in that pool in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Total Space Used column displays the amount of space used on the virtual tapes in that pool.
The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.
Replication - GUI
28
For general information on Replication or Replication CLI commands, see the chapter Replication - CLI. The figure below shows the Replication GUI Main Page.
Key to figure (Replication GUI Main Page), callouts 1-15: Performance Panel; Overview Bar; Open/Close; Refresh; Toggle Configuration Panel; Bar Title; Overview Box; Sort Pairs; Replication Pairs Bars; Opened Status Panel; Help Button; Collection Replication Icon; Directory Replication Icon; Statuses are color-coded.
From the main DDR GUI page, click on the Replication link at lower left in the sidebar to bring up the Replication GUI. The Replication GUI main page is shown in Figure 40. Note Context-sensitive online help can be reached by clicking the question mark (?) icons that appear in various places, for instance on the Status and Configuration boxes. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide. In unexpanded form, the boxes appear as bars. To expand them into boxes, click on the plus sign at the left end of the bar. To go from expanded back to unexpanded, click on the minus sign at the left end of the bar. The Overview box has four sections: Title Bar, Topology Panel (a graphic with an arrow for each replication pair), Performance Panel, and Configuration Panel.
The Title Bar appears at the top of the box. The left end of the Title Bar is a Control Bar, with three buttons. The leftmost button (+ or -) is an Expand/Unexpand button. Clicking plus (+) causes the bar to expand into a box. Clicking minus (-) causes the box to return to its unexpanded form, a bar. The middle button (two arrows circling each other) is a Refresh button. Note that while refreshing is in progress, a spinning daisy-shaped wheel appears on the topology panel near the arrow of the replication pair that has a refresh in progress. The third button on the Control Bar (the icon looks like a gear) is the Configuration Button. Clicking it causes the Configuration panel to toggle between open and closed.
The right end of the Title Bar is a Status Bar, indicating how many replication pairs are in normal, warning or error state. Note the colors (green for normal, yellow for warning, red for error, light gray for zero value).
The Topology Panel at left is a graphic showing the topology or configuration of the overall network related to the selected Data Domain system. It shows the various nodes involved in replication, with arrows between them. A link (or arrow) represents one or more replication pairs: either one actual pair, or one folder that contains multiple directory replication pairs. Depending on its status, it is displayed as normal (green), warning (yellow), or error (red). Users can access the pair either by double-clicking the arrow, or by right-clicking it and selecting from the dropdown menu. The Performance Panel displays three historical charts: pre-compressed written, post-compressed replicated, and post-compressed remaining. Unlike the performance graphs of a single replication pair, these charts present statistics for the selected Data Domain system as a whole, aggregated across all replication pairs related to it. The duration (x-axis) is 8 days by default. The y-axis is in GibiBytes or MebiBytes (the binary equivalents of GigaBytes and MegaBytes). The Configuration Panel: less frequently used information, such as configuration, can be accessed by clicking the Configuration Button (the icon looks like a gear) on the Title Bar. The Configuration Panel contains throttle, bandwidth, and network delay settings. The Throttle, Bandwidth, and Configuration settings apply only to the replication pairs whose source is the selected Data Domain system. The Configuration Button appears only for actual collection or directory replication pairs.
The Replication Pairs displayed in the Topology Panel are all represented below it as bars. The Replication Pairs Boxes have almost the same sections as the Overview Box (Title Bar, Performance Panel, and Configuration Panel), except that the effect of the Expand (+) button differs: a Replication Bar shows either sub-bars or a Status Panel.
Effect of the Expand (+) Button: Parent Bar (with children under it): expands to show its child bars. Leaf Bar (has no children under it): expands to show the Status Panel.
That is, a Replication Bar shows either sub-bars or a Status Panel, reached by expanding it with the plus (+) button. Note The icon for collection replication looks like a light gray cylindrical stack of disks. Note The icon for directory replication looks like a yellow folder. The Configuration, Status, and General Configuration screens are explained more fully below in the sections Configuration on page 446, Status on page 447, and General Configuration on page 449.
To understand the values referred to in the Performance panels in the figure Overview Versus Replication Pair on page 444, compare it with the figure Data Domain system Versus Replication Pair on page 445. The Overview Performance Panel in the screenshot describes the system dlh6, and refers to the cross-hatched items on the diagram: dlh6, DataIn, ReplIn, and ReplOut. The Replication Pair Panel in the screenshot describes the replication pair ccm31-dlh6, and refers to the solid dark gray items on the diagram: the pair ccm31-dlh6, DataIn, Replicated, and Remaining.
Configuration
This screen monitors and shows the configuration of the system rather than controlling it. It is reached by clicking the Configuration button (symbol: a gear) on the Overview bar.
Throttle Settings
Throttle Settings throttle back, or restrict, the bandwidth at which data goes over the network, to prevent replication from using up all of the system's resources. The default network bandwidth used by replication is unlimited. Temporary Override: if an override has been set, it shows here. Permanent Schedule: the rate is a number or the word unlimited. The number can include a tag for bits or bytes per second; the default is bits per second. In the rate variable:
bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second
Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes. The rate can also be 0 (the zero character) or disabled; in each case, replication is stopped until the next rate change. As an example, replication could be limited to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m. Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default, with no scheduled changes, is to run as fast as possible at all times. Note The system enforces a minimum rate of 98,304 bits per second (12 KiBps). For more information on Throttle Settings, see the Replication - CLI chapter, under Add a Scheduled Throttle Event on page 259.
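To make the unit suffixes and the enforced minimum concrete, here is an illustrative parser sketch (ours, not the product's; it ignores the special values 0, disabled, and unlimited, and matching is case-sensitive because b means bits while B means bytes):

```python
# Suffix multipliers in bits per second, per the unit list above.
UNIT_BITS_PER_SEC = {
    "Kibps": 1024, "KiBps": 8 * 1024,   # kibibits/s, kibibytes/s
    "bps": 1, "Bps": 8,                 # bits/s, bytes/s
    "Kib": 1024, "KiB": 8 * 1024,
    "K": 1024, "b": 1, "B": 8,
}

MIN_RATE_BPS = 98_304  # enforced minimum: 98,304 bits/s (12 KiBps)

def rate_to_bps(rate):
    """Convert a throttle rate string such as '20KiB' to bits per second.
    A bare number defaults to bits per second, as the text notes."""
    for suffix in sorted(UNIT_BITS_PER_SEC, key=len, reverse=True):
        if rate.endswith(suffix):
            return int(rate[: -len(suffix)]) * UNIT_BITS_PER_SEC[suffix]
    return int(rate)  # no suffix: raw bits per second

# The example above: 20 kibibytes per second.
bps = rate_to_bps("20KiB")
print(bps, bps >= MIN_RATE_BPS)  # 163840 True
```

On this arithmetic, the enforced minimum of 98,304 bits per second is exactly `rate_to_bps("12KiBps")`.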
Bandwidth
The value is the actual bandwidth of the underlying network used for replication. It is used to set the internal TCP buffer size for the replication socket. Coupled with the delay value (see option set delay), the TCP buffer size is calculated and set as bandwidth * delay / 1000 * 1.25.
The rate is an integer, in bytes per second. For more information on Bandwidth, see the Replication - CLI chapter, under Procedure: Set Replication Bandwidth and Network Delay on page 263.
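The quoted formula is a bandwidth-delay product with 25% headroom; a worked sketch (the function name is ours):

```python
def replication_tcp_buffer_bytes(bandwidth, delay_ms):
    """TCP buffer size per the quoted formula: bandwidth * delay / 1000 * 1.25.
    bandwidth is in bytes per second; delay_ms is the network delay in ms."""
    return bandwidth * delay_ms / 1000 * 1.25

# Example: 1,250,000 bytes/s with an 80 ms delay means 100,000 bytes in
# flight on the wire, plus 25% headroom.
print(replication_tcp_buffer_bytes(1_250_000, 80))  # 125000.0
```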
Network Delay
This is the actual network delay value for the system. It is useful when a wide-area network has long delays in the round-trip time between the replication source and destination. The value is an integer in milliseconds. For more information on Network Delay, see the Replication - CLI chapter, under Procedure: Set Replication Bandwidth and Network Delay on page 263.
Listen Port
The default listen-port for a destination Data Domain System is 2051. This is the port to which the source sends data. A destination can have only one listen port. If multiple sources use one destination, each source must send to the same port. For more information on the listen-port, see the chapter Replication - CLI, under the heading Change a Destination Port on page 258.
Status
The Status Panel only shows for leaf nodes (which have no sub-pairs underneath them). It is reached by expanding a leaf-node Replication Bar using the Expand (+) button.
Current State
Four states/statuses need to be distinguished from one another: Current State, Status, Local Filesystem Status, and Replication Status. Current State is the Replication Pair State. Possible Current States are: Initializing, Replicating, Recovering, Resynching, Migrating, Uninitialized, and Disconnected. Status is as follows: for the first five Current States, the Status is Normal (or Warning in the case of unusual delay). For Uninitialized, the Status is Warning. For Disconnected, the Status is Error.
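The Current State to Status mapping above can be summarized as follows (a sketch; the function name is ours, and the unusual-delay Warning case is not modeled):

```python
# Current States whose Status is Normal (or Warning under unusual delay).
NORMAL_STATES = {"Initializing", "Replicating", "Recovering",
                 "Resynching", "Migrating"}

def status_for_state(current_state):
    """Return the Status shown for a replication pair's Current State."""
    if current_state in NORMAL_STATES:
        return "Normal"
    if current_state == "Uninitialized":
        return "Warning"
    if current_state == "Disconnected":
        return "Error"
    raise ValueError(f"unknown Current State: {current_state}")

print(status_for_state("Replicating"))   # Normal
print(status_for_state("Disconnected"))  # Error
```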
The table below Current State shows Local Filesystem Status and Replication Status.
Local Filesystem Status is the filesystem status for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled. Replication Status is the status for that Replication Context, for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled.
Synchronized as of
Sync-as-of Time: the source automatically runs a replication sync operation every hour and displays the time, local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the time stamp in the Time column. A value of unknown appears during replication initialization. For more information on Synchronized as of, see the chapter Replication - CLI, under the heading Display Replication History on page 266.
Day Dropdown Box: Today, Yesterday, 2 days ago, ..., 7 days ago. Hour Dropdown Box: 01, ..., 12. am/pm Dropdown Box: am, pm.
The modified value is saved after the track button is clicked. This backup completion time is automatically used for replication status the next time a user logs in or the Refresh button is clicked. Note UI behavior: when an invalid time is specified in Backup Completion Time, the value of Replication Completion Time is "Not available" (for example, Today 06 am is specified for the backup time when the current time is 3 am).
General Configuration
Less frequently used information such as configuration can be found for any Replication Bar that is a leaf node (has no child bars), by clicking on the Configuration Button (gear symbol) on the Control Bar and expanding the box. This Configuration - General Panel displays source Data Domain system and directory (for directory replication), target Data Domain system and directory (for directory replication), and connection host and port.
Africa/Abidjan Africa/Bamako Africa/Brazzaville Africa/Dakar Africa/Gaborone Africa/Kigali Africa/Luanda Africa/Maseru Africa/Ndjamena Africa/Sao_Tome
Africa/Algiers Africa/Bissau Africa/Casablanca Africa/Douala Africa/Kampala Africa/Libreville Africa/Malabo Africa/Monrovia Africa/Ouagadougou Africa/Tunis
Africa/Asmera Africa/Blantyre Africa/Conakry Africa/Freetown Africa/Khartoum Africa/Lome Africa/Maputo Africa/Nairobi Africa/Porto-Novo Africa/Windhoek
Africa/Dar_es_Salaam Africa/Djibouti Africa/Harare Africa/Kinshasa Africa/Lubumbashi Africa/Mbabane Africa/Niamey Africa/Timbuktu Africa/Johannesburg Africa/Lagos Africa/Lusaka Africa/Mogadishu Africa/Nouakchott Africa/Tripoli
America
America/Dawson_Creek America/Denver
America/Dominica America/Fortaleza America/Grenada America/Halifax America/Iqaluit America/La_Paz America/Managua America/Menominee America/Montserrat America/Noronha
America/Edmonton America/Glace_Bay America/Guadeloupe America/Havana America/Jamaica America/Lima America/Manaus America/Mexico_City America/Nassau America/Panama
America/El_Salvador America/Godthab America/Guatemala America/Indiana America/Jujuy America/Los_Angeles America/Martinique America/Miquelon America/New_York America/Pangnirtung
America/Ensenada America/Goose_Bay America/Guayaquil America/Indianapolis America/Juneau America/Louisville America/Mazatlan America/Montevideo America/Nipigon America/Paramaribo America/Puerto_Rico America/Santiago America/St_Johns
America/Fort_Wayne America/Grand_Turk America/Guyana America/Inuvik America/Knox_IN America/Maceio America/Mendoza America/Montreal America/Nome America/Phoenix America/Rainy_River America/Santo_Domingo America/St_Kitts
America/Port_of_Spain America/Port-au-Prince America/Porto_Acre America/Rankin_Inlet America/Sao_Paulo America/St_Lucia America/Thule America/Virgin America/Regina America/Scoresbysund America/St_Thomas America/Thunder_Bay America/Whitehorse America/Rosario America/Shiprock America/St_Vincent America/Tijuana America/Winnipeg
Antarctica
Antarctica/Casey Antarctica/Palmer
Antarctica/McMurdo
Asia
Asia/Aden
Asia/Alma-Ata
Asia/Amman
Asia/Anadyr
Asia/Aqtau
Asia/Aqtobe Asia/Bangkok Asia/Chungking Asia/Dushanbe Asia/Ishigaki Asia/Kabul Asia/Krasnoyarsk Asia/Magadan Asia/Omsk Asia/Riyadh Asia/Taipei Asia/Thimbu Asia/Vientiane
Asia/Ashkhabad Asia/Beirut Asia/Colombo Asia/Gaza Asia/Istanbul Asia/Kamchatka Asia/Kuala_Lumpur Asia/Manila Asia/Phnom_Penh Asia/Saigon Asia/Tashkent Asia/Tokyo Asia/Vladivostok
Asia/Baghdad Asia/Bishkek Asia/Dacca Asia/Harbin Asia/Jakarta Asia/Karachi Asia/Kuching Asia/Muscat Asia/Pyongyang Asia/Seoul Asia/Tbilisi Asia/Ujung_Pandang Asia/Yakutsk
Asia/Bahrain Asia/Brunei Asia/Damascus Asia/Hong_Kong Asia/Jayapura Asia/Kashgar Asia/Kuwait Asia/Nicosia Asia/Qatar Asia/Shanghai Asia/Tehran Asia/Ulan_Bator Asia/Yekaterinburg
Asia/Baku Asia/Calcutta Asia/Dubai Asia/Irkutsk Asia/Jerusalem Asia/Katmandu Asia/Macao Asia/Novosibirsk Asia/Rangoon Asia/Singapore Asia/Tel_Aviv Asia/Urumqi Asia/Yerevan
Atlantic
Atlantic/Bermuda Atlantic/Madeira
Atlantic/Canary Atlantic/Reykjavik
Atlantic/Cape_Verde
Atlantic/Faeroe
Atlantic/South_Georgia Atlantic/St_Helena
Australia
Australia/South Australia/Yancowinna
Australia/Sydney
Australia/Tasmania
Australia/Victoria
Australia/West
Brazil
Brazil/Acre
Brazil/DeNoronha
Brazil/East
Brazil/West
Canada
Canada/Central Canada/Newfoundland
Canada/East-Saskatchewan Canada/Pacific
Canada/Eastern Canada/Saskatchewan
Chile
Chile/Continental
Chile/EasterIsland
Etc
Europe
Europe/Amsterdam Europe/Berlin Europe/Chisinau Europe/Istanbul Europe/London Europe/Monaco Europe/Riga Europe/Skopje Europe/Vaduz Europe/Zagreb
Europe/Andorra Europe/Bratislava Europe/Copenhagen Europe/Kiev Europe/Luxembourg Europe/Moscow Europe/Rome Europe/Sofia Europe/Vatican Europe/Zurich
GMT
Indian/Antananarivo Indian/Kerguelen
Indian/Chagos Indian/Mahe
Indian/Christmas Indian/Maldives
Indian/Cocos Indian/Mauritius
Indian/Comoro Indian/Mayotte
Indian/Reunion
Mexico
Mexico/BajaNorte
Mexico/BajaSur
Mexico/General
Miscellaneous
Pacific
Pacific/Rarotonga Pacific/Tongatapu
Pacific/Saipan Pacific/Truk
Pacific/Samoa Pacific/Wake
Pacific/Tahiti Pacific/Wallis
Pacific/Tarawa Pacific/Yap
system V
systemV/CST6CDT systemV/MST7MDT
systemV/EST5 systemV/PST8
US (United States)
US/Central US/Michigan
US/East-Indiana US/Mountain
Aliases:
GMT = Greenwich, UCT, UTC, Universal, Zulu
CET = MET (Middle European Time)
US/Eastern = Jamaica
US/Mountain = Navajo
Index
"permission denied" error message 192
Symbols
add
a new shelf to a volume 45 adminaccess command 109 administrative email, display address 123 administrative host, display host name 123 AIX 26 alerts add an email address 130 command 130 display current 131 display current and history 133 display the email list 132 display the history 132 remove an address from the email list 130 set the email list to the default 131 test the list 130 alias add 82 command 81 defaults 82 display 82 remove 82 authentication mode for CIFS 316 autonegotiate, set 100 autosupport command 134 display all parameters 137 display history file 138 display list 137 display schedule 138 remove an email address 135 run the report 135 send a report 135
B C
send command output 136 set all parameters to default 137 set list to the default 135 set the schedule 136 set the schedule to the default 137 test report 134
CIFS add a client 311 add a user 310 Add IP address/hostname mappings 317 allow access 110 allow group administrative access 320 allow trusted domain users 319 anonymous user connections 321 certificate authority security 321 configuration set up 22 disable client connections 312 display access settings 114 display active clients 322 display CIFS groups 325 display CIFS users 324 display clients 323 display configuration 323 display group details 326 Display IP address/hostname mappings 324 display statistics 322 display status 325 display user details 325 display valid CIFS options that can be set 322 enable client connections 312 hostname change effects 98 identify a WINS server 318 Increase memory for more user accounts 320 remove a client 312 remove all clients 313 remove all IP address/hostname mappings 317 remove an administrative client 313 Remove one IP address/hostname mapping 317 remove the NetBIOS hostname 313 remove the WINS server 318 reset CIFS options 321
resolve NetBIOS name 318 restrict administrative access 110 secured LDAP with TLS 311 set a NetBIOS hostname 313 set the authentication mode 316 set the logging level 320 set the maximum transmission size 321 shares, add 313 shares, delete 315 shares, display 325 shares, enable/disable 315 shares, modify 315 SMBD memory 321 user access 309 clean change schedule 222 display amount parameters 223 display schedule 224 display status 224 display throttle 224 monitor operations 225 set schedule to the default 223 set throttle 223 set throttle to the default 223 start 221 stop 222 command output, remote with SSH 114 send output using autosupport command 136 commands listed 10 compression algorithms 225 set for none 225 config command 119 command details 119 configuration basic additions 27 change settings 119 defaults 9 first time 14 context 250 CPU display load 72, 73
D
data compression 6 integrity checks 5 migration 291 Data Domain Enterprise Manager at system installation 13 introduction 8 system administration with 28 system configuration 14, 120 date display 78 set 66, 78 DDR Manager monitor multiple systems 405 opening and use 399 default gateway change 106 display 107 reset 106 DHCP disable 96 enable 96 disk add disks and LUNs 36, 175 add enclosure command 36 command 173 command format 35 display performance statistics 183 display RAID status 180 display type and capacity 178 estimate use of space 188 failures and spares 32 flash the running light 175 manage use of space 189 reclaim space 190 reliability statistics 185 rescan 36, 175 set statistics to zero 176 set to failed 174 show status 37, 176 spare when add an expansion shelf 45 unfail a disk 175 DNS
add server 97 display servers 103 domain name display 98 duplex, set line use 99
enclosure beacon 38 display hardware status 41 fans, display status 39 port connections, display 40, 193, 194, 195 power supply status 41 temperature, display 39 enclosures, list 37 Enterprise Manager 405 Ethernet, display interface settings 101 expansion shelf add 32 disk add enclosure command 175 look for new 36
fans
display status 76 fans, display status 39 fastcopy 215 file system compression algorithms 225 delete all data 214 disable 214 display compression 217 display status 217 display uptime 217 display utilization 215 enable 213 full 192 maximum number of files 191 restart 214 filesys command 213 FTP add a host 109 disable 111 display user list 113 enable 111 remove a host 110 set user list to empty 111
G
gateway section 1, 61, 127, 171, gateway system add a LUN 57, 174 command differences 51 installation 54 points of interest 51 GB defined 10 GUI, see DDR Manager
H
halt See poweroff hard address, private loop 367, 421 hardware display status 41 host name add 99 delete 100 display 100 hourly status message 138 HTTPS, generate a new certificate 113
I/O, display load 72, 73 inode reporting 191 installation DD460g 54 default directories under /ddvar 9 login and configuration 14 interface autonegotiate 100 change IP address 98 change transfer unit size 97 disable 95 display Ethernet configuration 101 display settings 101 enable 95 overview 7 set line speed 99 IP address, change for an interface 98
K L
124
configuration setup 18 display 125 remove 126 remove feature licenses 126 location display 124 set 122 log archive the log 170 command 165 create file bundles 139 list file names 168 remote logging 165 scroll new entries 165 set the CIFS logging level 320 support upload command 139 view all current entries 167 login, first time 14 LUN groups 377, 425 LUN masking add a client 378, 385 add a LUN mask 388 procedure 384, 430 vtl initiator command 378, 385
change server 122 display server 103 display server name 124 maximum transfer unit size, change 97 MB defined 10 migration set up 291 with replication 296 monitor multiple systems 405 MTU, change size 97
name change 98 display 103 ndmp add a filer 393 backup operation 394 display known filers 396
display process status 396 remove a filer 393 remove passwords 395 restore operation 394 stop a process 395 stop all processes 395 test for a filer 396 net failover display 90 failover, add physical interfaces 90 failover, delete virtual interface 91 failover, remove physical interface 90 net command 95 net, display Ethernet hardware settings 102 netmask, change 96 network configuration set up 19 display statistics 104 network parameters, reset 99 NFS add client, read/write 303 clear statistics 305 command 301 configuration set up 24 detailed statistics 307 disable client 304 display active clients 305 display allowed clients 305 display statistics 306 display status 307 enable client 304 remove client 304 set client list to default 305 ntp add a time server 84 delete a time server 84 disable service 83 display settings 85 display status 84 enable service 83 reset to defaults 84 synchronize a Windows domain controller 326 NTP, display server 103 NVRAM, display status 79
password, change 116 path name length 191 ping a host 97 pools add 388 and replication 253 delete 388 display 388, 438, 439 using 387, 437 port connections display 40, 193, 194, 195 ports display 70 power supply display status 41, 77 poweroff 63 private loop, hard address 367, 421 privilege level, change 116
RAID and a failed disk 175 create a new group 45 display detailed information 181 display status 37, 176, 180 groups 32 type in a restorer 6 with gateway restorers 50 reboot hardware 64 remote command output 114 replication abort a recovery 256 abort a resync 257 change a destination port 259 change a source port 258 change originator name 257 configure 251 context 250 convert to directory from collection 277 directory size 192 display configuration 264 display status 268 display when complete 268 introduced 249 move data to originator 256
pools 253 remove configuration 255 replace collection source 276 replace directory source 275 reset authorization 255 reset bandwidth 262 reset delay 262 resume 254 resynchronize source and destination 256 seeding 278 bidirectional 282 many-to-one 287 one-to-one 279 set bandwidth 263 setup and start bidirectional 274 setup and start collection 274 setup and start directory 273 setup and start many-to-one 275 start 253 statistics 270 suspend 254 throttle override 261 throttle rate 260 throttle reset 262 throttle, add an event 259 throttle, delete an event 260 throttle, display settings 267 use a network name 258 route add a rule 105 change default gateway 106 command 105 display a route 106 display default gateway 107 display Kernel IP routing table 107 display static routes 106 remove a rule 105 reset default gateway 106 serial number, display 71 shutdown See poweroff snapshot command 231 SNMP
add community strings 144 add trap hosts 143 delete a community string 144 delete a trap host 143 delete all community strings 144 delete all trap hosts 143 disable 142 display all 145 display community strings 146 display status 145 display the system contact 146 display the system location 146 display trap hosts 145 enable 142 reset all SNMP values 144 reset location 142 reset system location 143 system contact 142 system location 142 software display version 81 site requirements 14 space management 187 space.log, format 168 SSH add a public key 112 display the key file 113 display user list 113 remove a key file entry 112 remove the key file 112 set user list to empty 111 statistics clear NFS 305 disk performance 183 disk reliability 185 display for the network 104 display NFS 306 graphic display 75 NFS detailed 307 set disk to zero 176 status, hourly message 138 support log file bundles 139 upload command 139 system
change name 98 command 63 display status 76 display uptime 71 display version 81 location 122 location display 124 ports 70 serial number 71
TB defined 10 TELNET add a host 109 disable 111 display user list 113 enable 111 remove a host 110 set user list to empty 111 temperature, display 39, 77 time display 78 display zone 124 set 66, 78 set zone 123 Tivoli Storage Manager 26 traceroute 106 upgrade software 64 uptime, display 71 users add 115 change a password 116 change a privilege level 116 display all 117, 118 regular 115 remove 115 set list to default 116 sysadmin 115 verify process explanation 6 see when the process is running 73 Virtual Tape Library See VTL
volume expansion 45 VTL auto-eject feature 368, 422 broadcast changes 360 create a new drive 361, 413 create a VTL 359, 384, 411, 430 create tapes 363, 415 delete a VTL 360, 411 disable 360, 411 display a tape summary 362, 372, 414, 424 display all tapes 370, 423 display configurations 369 display statistics 373 display status 369, 422 display tapes in the vault 372, 424 enable 359, 410 export tapes 366 features and limitations 357 import tape 364, 417 LUN groups 377, 425 private loop hard address 367, 421 remove a drive 361, 413 remove tapes 366, 419 retrieve a tape from a destination 376 tape information by VTL 371, 372, 423 WINS server for CIFS 318 WINS server for CIFS, remove 318