100-002030
COURSE DEVELOPERS
Gail Adey
Jade Arrington
Harry Richards

LEAD SUBJECT MATTER EXPERTS
Bob Lucas
Dave Rogers
Stephen Williams

TECHNICAL CONTRIBUTORS AND REVIEWERS
Chris Amidei
Barbara Ceran
Connie Economou
Danny Foreman
Bill Havey
Gene Henriksen
Harold Holderman
Michael Hsiung
Gerald Jackson
Danqing Jin
Scott Kaiser
Stefan Kwiatkowski
Jack Lamirande
Chris Maino
Monu Pradhan-Advani
Christian Rabanus
Vance Ray
Sue Rich
Saumyendra “Sam” Sengupta
Brian Staub
Andrew Tipton
Jiju Vithayathil
Jerry Vochteloo
Brad Willer

Disclaimer
The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this guide, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Copyright
Copyright © 2004 VERITAS Software Corporation. All rights reserved. No part of the contents of this training material may be reproduced in any form or by any means or be used for the purposes of training or education without the written permission of VERITAS Software Corporation.

Trademark Notice
VERITAS, the VERITAS logo, and VERITAS FirstWatch, VERITAS Cluster Server, VERITAS File System, VERITAS Volume Manager, VERITAS NetBackup, and VERITAS HSM are registered trademarks of VERITAS Software Corporation. Other product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

VERITAS Volume Manager 4.0 for UNIX: Operations
Participant Guide

VERITAS Software Corporation
350 Ellis Street
Mountain View, CA 94043
Phone 650–527–8000
www.veritas.com
Table of Contents
Course Introduction
What Is Storage Virtualization? ................................................................... Intro-2
Storage Management Issues ........................................................................ Intro-2
Defining Storage Virtualization................................................................... Intro-3
How Is Storage Virtualization Used in Your Environment? ....................... Intro-4
Storage-Based Storage Virtualization ......................................................... Intro-5
Host-Based Storage Virtualization .............................................................. Intro-5
Network-Based Storage Virtualization........................................................ Intro-5
Introducing VERITAS Storage Foundation.................................................. Intro-6
What Is VERITAS Volume Manager? ........................................................ Intro-7
What Is VERITAS File System? ................................................................. Intro-7
Benefits of VERITAS Storage Foundation ................................................. Intro-9
VERITAS Storage Foundation Curriculum ................................................ Intro-11
VERITAS Volume Manager for UNIX: Operations Overview .................. Intro-12
Objectives .................................................................................................. Intro-12
Additional Course Resources .................................................................... Intro-13
Glossary
Index
• Problem: Customer order database cannot access unutilized storage.
• Other issues:
  - Multiple-vendor hardware
  - Explosive data growth
  - Different application needs
  - Multiple operating systems
  - Rapid change
  - Budgetary constraints
• Common solution: Add more storage.
VM40_Solaris_R1.0_20040115
(Figure: Network-based storage virtualization: a switch connecting hosts to multiple storage devices.)
(Figure: VERITAS File System (VxFS) and VERITAS Volume Manager (VxVM) present virtual storage resources (volumes) built from heterogeneous hardware, such as a JBOD and Brand “A” and Brand “B” arrays.)
Manageability
• Management of storage and the file system is performed online in real time,
eliminating the need for planned downtime.
• Online volume and file system management can be performed through an
intuitive, easy-to-use graphical user interface that is integrated with the
VERITAS Volume Manager (VxVM) product.
• VxVM provides consistent management across Solaris, HP-UX, AIX, Linux,
and Windows platforms.
• VxFS command operations are consistent across Solaris, HP-UX, AIX, and
Linux platforms.
Availability
• Through RAID techniques, storage remains available in the event of hardware
failure.
• Hot relocation guarantees the rebuilding of redundancy in the case of a disk
failure.
• Recovery time is minimized with logging and background mirror resynchronization.
Performance
• I/O throughput can be maximized by measuring and modifying volume layouts
while storage remains online.
• Performance bottlenecks can be located and eliminated using VxVM analysis
tools.
• Extent-based allocation of space for files minimizes file level access time.
• Read-ahead buffering dynamically tunes itself to the volume layout.
• Aggressive caching of writes greatly reduces the number of disk accesses.
• Direct I/O performs file I/O directly into and out of user buffers.
Scalability
• VxVM runs on both 32-bit and 64-bit operating systems.
• Storage can be deported to larger enterprise-class platforms.
• Storage devices can be spanned.
• VxVM is fully integrated with VERITAS File System (VxFS).
• With VxFS, several add-on products are available for maximizing performance
in a database environment.
The VERITAS Storage Foundation curriculum includes:
• VERITAS Volume Manager for UNIX: Operations (~2 days)
• VERITAS Volume Manager for UNIX: Maintenance (~1 day)
• VERITAS Enterprise Storage Solutions (~2 days)
Course roadmap:
• Virtual Objects
• Installation and Interfaces
• Managing Disks and Disk Groups
• Creating Volumes
• Configuring Volumes
• Reconfiguring Volumes Online
• Encapsulation and Rootability
• Recovery Essentials
Objectives
After completing this training, you will be able to:
• Identify VxVM virtual storage objects and volume layouts.
• Install and configure VxVM and VxFS.
• Configure and manage disks and disk groups.
• Create concatenated, striped, mirrored, RAID-5, and layered volumes.
• Configure volumes by adding mirrors, logs, storage attributes, and file
systems.
• Reconfigure volumes online, resize volumes and file systems, and use the
Storage Expert utility to analyze volume configurations.
• Place the root disk under VxVM control and mirror the root disk.
• Perform basic VxVM recovery operations.
Glossary
For your reference, this course includes a glossary of terms related to VERITAS
Storage Foundation.
Introduction
Overview
This lesson describes the virtual storage objects that VERITAS Volume Manager
(VxVM) uses to manage physical disk storage. This lesson introduces common
virtual storage layouts, illustrates how virtual storage objects relate to physical
storage objects, and describes the benefits of virtual data storage.
Importance
Before you install and set up VERITAS Volume Manager, you should be familiar
with the virtual objects that VxVM uses to manage physical disk storage. A
conceptual understanding of virtual objects helps you to interpret and manage the
virtual objects represented in VxVM interfaces, tools, and reports.
Outline of Topics
• Physical Data Storage
• Virtual Data Storage
• Volume Manager Storage Objects
• Volume Manager RAID Levels
• The whole disk approach enables you to use a disk in one of five ways: the whole disk is
used by a single file system; the whole disk is used as swap area; the whole disk is
used as a raw partition; a portion of the disk contains a file system, and the rest is used
as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a
swap area.
• An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA);
Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation
Area (BBRA).
AIX
A native AIX disk does not have a partition table of the kind familiar on many other
operating systems such as Solaris, Linux, and Windows. An application could use the
entire unstructured raw physical device, but the first 512-byte sector normally contains
information including a physical volume identifier (pvid) to support recognition of the
disk by AIX. An AIX disk is managed by IBM’s Logical Volume Manager (LVM) by
default. A disk managed by LVM is called a physical volume (PV). A physical volume
consists of:
• PV reserved area: A physical volume begins with a reserved area of 128 sectors
containing PV metadata, including the pvid.
• Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow.
The VGDA contains information describing a volume group (VG), which consists of
one or more physical volumes. Included in the metadata in the VGDA is the definition
of the physical partition (PP) size, normally 4 MB.
• Physical partitions: The remainder of the disk is divided into a number of physical
partitions. All of the PVs in a volume group have PPs of the same size, as defined in
the VGDA. In a normal VG, there can be up to 32 PPs in a PV. In a big VG, there can
The term partition is used differently in different operating systems. In many kinds of
UNIX, Linux, and Windows, a partition is a variable sized portion of contiguous disk
space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical
partition (LP), and one or more LPs from any location throughout the VG can be
combined to define a logical volume (LV). A logical volume is the entity that can be
formatted to contain a file system (by default either JFS or JFS2). So a physical partition
compares in concept more closely to a disk allocation cluster in some other operating
systems, and a logical volume plays the role that a partition does in some other operating
systems.
Linux
On Linux, a nonboot disk can be divided into one to four primary partitions. One of these
primary partitions can be used to contain logical partitions, and is called the extended
partition. The extended partition can have up to 12 logical partitions on a SCSI disk and up
to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux
disk.
Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are done to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume is made up of space from one or more physical disks on which the data is
physically stored.
CDS Disk (Default)
• OS-reserved areas at offset 128K contain platform blocks, VxVM ID blocks, and AIX and HP-UX co-existence labels.
• Private region: The private region contains metadata. The default size of the private region is 2048 sectors on Solaris, AIX, and Linux, and 1024 sectors on HP-UX.
• Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
(Figure: Subdisks such as acctdg01-01, acctdg02-01, and acctdg03-01 on physical disks are organized into the plexes expvol-01, payvol-01, and payvol-02.)
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk’s
public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and a length, and each piece
represents a reservation of contiguous space on the physical disk. However, while
some operating systems limit the number of partitions on a disk, there is no
theoretical limit to the number of subdisks that can be attached to a single plex;
the default limit is 4096. If required, you can change this default by using the
vol_subdisk_num tunable parameter. For more information on tunable parameters,
see the VERITAS Volume Manager System Administrator’s Guide.
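The offset-and-length model of subdisks described above can be sketched in a few lines of Python. This is a conceptual illustration only, not VERITAS code; the function names and the sector counts are invented.

```python
# Conceptual model of subdisks on a VxVM disk: each subdisk is an
# (offset, length) extent in the public region, extents may not overlap,
# and space covered by no subdisk is free space.

def free_space(disk_len, subdisks):
    """Return the (offset, length) gaps not covered by any subdisk."""
    gaps, pos = [], 0
    for off, length in sorted(subdisks):
        if off > pos:
            gaps.append((pos, off - pos))
        pos = max(pos, off + length)
    if pos < disk_len:
        gaps.append((pos, disk_len - pos))
    return gaps

def overlaps(subdisks):
    """True if any two subdisks share sectors (not allowed by VxVM)."""
    s = sorted(subdisks)
    return any(a_off + a_len > b_off
               for (a_off, a_len), (b_off, b_len) in zip(s, s[1:]))

# A 1000-sector public region with two subdisks, e.g. datadg01-01 and
# datadg01-02, leaves two regions of free space:
sds = [(0, 300), (500, 200)]
print(free_space(1000, sds))   # [(300, 200), (700, 300)]
print(overlaps(sds))           # False
```

New subdisks can then be carved only from the gaps that `free_space` reports, which mirrors the rule that free space is any disk space not reserved and not already part of a subdisk.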
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or
written on the last subdisk in the plex. Plex length may not equal volume length to
the exact sector, because the plex is aligned to a cylinder boundary.
Default plex name: volumename-##
Plex types:
• Complete plex: A complete plex holds a complete copy of a volume and maps the entire address space of the volume.
Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total available, unreserved free physical disk space. A volume
consists of one or more plexes.
A volume can span across multiple disks. The data in a volume is stored on
subdisks of the spanned disks. A volume must be configured from VxVM disks
and subdisks within the same disk group.
Default volume name: vol##
You should assign meaningful volume names that reflect the nature or use of the
data in the volumes. For example, two volumes in acctdg can be expvol, a
volume that contains expense data, and payvol, a volume that contains payroll
data.
(Figure: Data redundancy layouts: mirrored, RAID-5, and striped and mirrored.)
Volume Layouts
RAID levels correspond to volume layouts. A volume’s layout refers to the
organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run-time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:
• Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
In a concatenated volume, subdisks are arranged both sequentially and
contiguously within a plex. Concatenation allows a volume to be created from
multiple regions of one or more disks if there is not enough space for an entire
volume on a single region of a disk.
• Striping: Striping is the mapping of data in equal-sized chunks alternating
across multiple disks. Striping is also called interleaving.
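The two forms of disk spanning can be illustrated with a small address-mapping sketch. This is hypothetical Python with invented disk sizes and stripe unit; real VxVM mapping goes through plexes and subdisks rather than whole disks.

```python
# Sketch of disk spanning: mapping a volume offset to a
# (disk index, offset on that disk) location.

def concat_map(offset, disk_sizes):
    """Concatenation: fill disk 0 first, then disk 1, and so on."""
    for disk, size in enumerate(disk_sizes):
        if offset < size:
            return disk, offset
        offset -= size
    raise ValueError("offset beyond end of volume")

def stripe_map(offset, ndisks, stripe_unit):
    """Striping: equal-sized chunks alternate round-robin across disks."""
    chunk = offset // stripe_unit
    disk = chunk % ndisks
    disk_offset = (chunk // ndisks) * stripe_unit + offset % stripe_unit
    return disk, disk_offset

# Sector 250 of a volume concatenated over disks of 100, 200, and 300
# sectors lands 150 sectors into the second disk:
print(concat_map(250, [100, 200, 300]))   # (1, 150)
# With a 64-sector stripe unit over 3 disks, sector 130 is in chunk 2,
# so it lands on the third disk:
print(stripe_map(130, 3, 64))             # (2, 2)
```

The contrast is visible in the mapping: concatenation keeps large runs of the volume on one disk, while striping scatters adjacent chunks across all columns, which is why striping spreads I/O load.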
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
• Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
• Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to striped volumes, the write throughput of RAID-5 volumes is
lower, because parity information must be updated each time data is written.
However, in comparison to mirroring, the use of parity reduces the amount of
space required.
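The XOR parity calculation described above can be demonstrated directly. This is an illustrative sketch; a real RAID-5 volume also rotates the parity stripe unit across the disks in each stripe.

```python
# Parity is the XOR of the data stripe units, so any single lost unit
# can be rebuilt by XOR-ing the surviving units with the parity.

def parity(units):
    """XOR a list of equal-length byte strings together."""
    p = bytes(len(units[0]))
    for u in units:
        p = bytes(a ^ b for a, b in zip(p, u))
    return p

data = [b"\x01\x02", b"\x0f\x00", b"\xf0\xff"]
p = parity(data)

# "Lose" data[1] and rebuild it from the surviving units plus parity:
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

Because XOR is its own inverse, the same routine both computes parity and reconstructs a missing unit, which is exactly why each write must also update the parity unit.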
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or
more other volumes. Resilient volumes enable the mirroring of data at a more
granular level. For example, a resilient volume can be concatenated or striped at
the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.
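The layering idea, striping at the top level over mirrored sub-volumes at the bottom, can be sketched as follows. This is purely illustrative; the disk names, stripe unit, and helper function are invented.

```python
# Sketch of a layered (stripe-mirror) volume: the top level stripes
# across sub-volumes, and each sub-volume is itself a mirror, so a
# write lands on both disks of exactly one mirrored pair.

STRIPE_UNIT = 64  # sectors per stripe unit (invented for illustration)

def striped_mirror_targets(offset, mirrors):
    """Return the disks a write at `offset` lands on: one mirrored pair."""
    chunk = offset // STRIPE_UNIT
    sub = chunk % len(mirrors)    # top level: pick a sub-volume by striping
    return mirrors[sub]           # bottom level: both plexes receive the write

mirrors = [("disk01", "disk02"), ("disk03", "disk04")]
print(striped_mirror_targets(0, mirrors))    # ('disk01', 'disk02')
print(striped_mirror_targets(64, mirrors))   # ('disk03', 'disk04')
```

Mirroring at this finer granularity is what improves fault tolerance: losing disk01 degrades only the first sub-volume, and the other stripe columns keep their full redundancy.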
Summary
This lesson described the virtual storage objects that VERITAS Volume Manager
uses to manage physical disk storage. This lesson introduced common virtual
storage layouts, illustrated how virtual storage objects relate to physical storage
objects, and described the benefits of virtual data storage.
Next Steps
You are now familiar with Volume Manager objects and how virtual objects relate
to physical disks when a disk is controlled by Volume Manager. In the next lesson,
you will install and set up Volume Manager. In addition, you will install VEA and
explore the other Volume Manager interfaces.
Additional Resources
VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts involving
volume management and system administration using VERITAS Volume
Manager.
Introduction
Overview
This lesson describes guidelines for a first-time installation of VERITAS Volume
Manager (VxVM). Installation prerequisites and procedures for adding license
keys and adding software packages are covered. This lesson also provides an
introduction to the interfaces used to manage VERITAS Volume Manager.
Importance
Before you install VxVM, you need to be aware of the contents of your physical
disks and decide how you want VxVM to handle those disks. By following these
installation guidelines, you can ensure that you set up VxVM in a way that meets
the needs of your environment. You can use the three interfaces to VxVM
interchangeably to perform administrative functions, which provides flexibility in
how you access and manage VxVM objects.
Outline of Topics
• Installation Prerequisites
• Adding License Keys
• VERITAS Software Packages
• Installing VxVM
• VxVM User Interfaces
• Installing and Starting VEA
• Managing the VEA Server
Installation Prerequisites
OS Version Compatibility
Before installing VxVM, you should ensure that the version of VxVM that you are
installing is compatible with the version of the operating system that you are
running. You may need to upgrade your operating system before you install VxVM
4.0. If you are planning to install other VERITAS products, such as VERITAS File
System (VxFS), check OS compatibility for those products as well:
(Figure: The VERITAS Support Web site provides patches, alerts, a search for technotes, e-mail services, and support services.)
Notes on vLicense
• vLicense currently supports production license keys only. Temporary,
evaluation, or demonstration keys must be obtained through your VERITAS
sales representative.
• Host ID changes cannot be processed through the vLicense system. Contact
VERITAS customer support for more details.
Paths of installed license keys:
• /etc/vx/licenses/lic (example: key_string.vxlic)
• /etc/vx/elm (example: feature_no.lic)
Product naming:
• VERITAS Foundation Suite is now VERITAS Storage Foundation.
• VERITAS Foundation Suite HA is now VERITAS Storage Foundation HA.
• VERITAS Database Edition for Oracle, Sybase, or DB2 is now VERITAS Storage Foundation for Oracle, Sybase, or DB2.
Installing VxVM
Methods for Adding VxVM Packages
A first-time installation of VxVM involves adding the software packages and
configuring VxVM for first-time use. You can add VERITAS product packages by
using one of three methods:
File Description
Installation log file Contains all commands executed during installation, their output,
and any errors generated by the commands. Used for debugging
installation problems and for analysis by VERITAS Support.
Response file Contains configuration information entered during the procedure.
Can be used for future installation procedures when using a
product installation script with the -responsefile option.
Summary file Contains the output of VERITAS product installation scripts.
Shows products that were installed, locations of log and response
files, and installation messages displayed.
(Figure: The VEA main window includes a toolbar, an object tree, a grid, a console/task history pane, and a status area.)
Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu
The Task History window contains a list of tasks performed in the current session. To view underlying command lines, right-click a task and select Properties.
Summary
This lesson described guidelines for a first-time installation of VERITAS Volume
Manager (VxVM). Procedures for adding license keys, adding the VxVM software
packages, and running the VxVM installation program were covered, as well as an
introduction to the three interfaces used to manage VERITAS Volume Manager.
Next Steps
In the next lesson, you begin using Volume Manager by learning how to manage
disks.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on volume management and system
administration using VERITAS Volume Manager.
• VERITAS Volume Manager Installation Guide
This guide provides information on installing and initializing VxVM and the
VERITAS Enterprise Administrator graphical user interface.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS
Volume Manager.
Introduction
Overview
In this lesson, you learn how to perform tasks associated with the management of
disks and disk groups. This lesson describes device-naming schemes, how to add a
disk to a disk group, how to view disk and disk group information, and how to add,
remove, rename, and move a disk. This lesson also describes procedures for
creating, deporting, importing, destroying, and upgrading a disk group.
Importance
Before you can create virtual volumes, you must learn how to configure your
physical disks so that VERITAS Volume Manager (VxVM) can manage the disks.
By bringing physical disks under Volume Manager control and adding those disks
to a disk group, you enable VxVM to use the disk space to create volumes. A disk
group is an organizational structure that enables VxVM to perform disk
management tasks. Managing disk groups is important in effectively managing
your virtual storage environment.
Outline of Topics
• Naming Disk Devices
• Disk Configuration Stages
• Creating a Disk Group
• Viewing Disk and Disk Group Information
• Managing Disks
• Managing Disk Groups
(Figure: Hosts connect through Fibre Channel switches to disk enclosures named englab0, englab1, and englab2.)
Enclosure-Based Naming
With VxVM version 3.2 and later, VxVM provides a new device naming scheme,
called enclosure-based naming. With enclosure-based naming, the name of a disk
is based on the logical name of the enclosure, or disk array, in which the disk
resides. The default logical name of an enclosure is typically based on the vendor
ID. For example:
You can customize logical enclosure names to provide meaningful names, such as
based on the location of an enclosure in a building or lab. For example, you can
rename three T3 disk arrays in an engineering lab as follows:
(Figure: Disk configuration stages: initialized disks in the free disk pool are added to a disk group, such as datadg, and disk space is then assigned to volumes. Stage 3: Assign disk space to volumes.)
Adding Disks
Adding a disk to a disk group makes the disk space available for use in creating
VxVM volumes.
• You can add a single disk or multiple disks to a disk group.
• You cannot add a disk to more than one disk group.
To add a disk to a disk group, you select an uninitialized disk or a free disk. If the
disk is uninitialized, you must initialize the disk before you can add it to a disk
group.
Disk Naming
When you add a disk to a disk group, the disk is assigned a disk media name. The
disk media name is a logical name used for VxVM administrative purposes. The
disk media name must be unique within the disk group. You can assign a
meaningful name or use the default name assigned by VxVM.
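The default naming pattern suggested by examples elsewhere in this guide, such as datadg01 and datadg03, can be sketched as follows. This is a hypothetical illustration of the convention, not the actual VxVM naming algorithm.

```python
# Sketch of default disk media naming: the disk group name plus a
# two-digit counter, kept unique within the disk group.

def next_media_name(diskgroup, existing):
    """Return the first unused name of the form <diskgroup>NN."""
    n = 1
    while f"{diskgroup}{n:02d}" in existing:
        n += 1
    return f"{diskgroup}{n:02d}"

used = {"datadg01", "datadg02"}
print(next_media_name("datadg", used))  # datadg03
```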
To create a disk group in VEA:
• Specify a name for the new disk group.
• Add at least one disk.
• Specify disk media names for disks that you add.
To add another disk, select Actions—>Add Disk to Disk Group.
When the disk is placed under VxVM control, the Type property changes to
Dynamic, and the Status property changes to Imported.
Note: You cannot add a disk to the free disk pool with VEA.
Disk status values:
• Not Setup/Not Initialized: Not under VxVM control
• Free: Initialized, but not in a disk group
• Imported: Initialized and added to a disk group
The Disk Properties window is displayed. Select a unit to display capacity and unallocated space in other units.
To view disk group properties, right-click a disk group, and select Properties. The version field refers to disk group versioning; the shared field refers to cluster environments.
Note: In a shared access environment, when displaying disks, you should frequently run vxdctl enable to rescan for disk changes.
Descriptions of Flags
Flag Description
online ready The specified disk is “online” and is “ready” to use.
private The disk has a private region where the configuration
database and kernel log are defined and enabled/disabled.
autoconfig The specified disk is part of a disk group that is
autoconfigured.
autoimport The specified disk is part of a disk group that can be
imported at boot time.
imported The specified disk is part of a disk group that is currently
imported. When the disk group is deported, this field is
empty.
shared The specified disk is part of a cluster “shareable” disk group.
Managing Disks
Creating a Non-CDS Disk and Disk Group
At times, you may be working with sliced disks and non-CDS disk groups, for
example, if you have not upgraded all of your systems to the latest VxVM version
or are working with a boot disk group.
To create a sliced disk, you add the format=sliced attribute to the
vxdisksetup command.
To create a non-CDS disk group, you add the cds=off attribute to the vxdg
init command.
Removing Disks
If a disk is no longer needed in a disk group, you can remove the disk. After you
remove a disk from a disk group, the disk cannot be accessed. When removing a
disk from a disk group, you have two options:
• Move the disk to the free disk pool. With this option, the disk remains under
Volume Manager control.
• Send the disk back to an uninitialized state. With this option, the disk is no
longer under Volume Manager control.
Note: The remove operation fails if there are any subdisks on the disk. However,
the destroy disk group operation does not fail if there are any volumes in the disk
group.
Evacuating a Disk
Evacuating a disk moves the contents of the volumes on a disk to another disk. The
contents of a disk can be evacuated only to disks in the same disk group that have
sufficient free space.
Select: The disk that contains the objects and data to be moved to another
disk
Navigation path: Actions—>Evacuate Disk
Input: Auto Assign destination disks: VxVM selects the destination
disks to contain the content of the disk to be evacuated.
Manually assign destination disks: To manually select a
destination disk, highlight the disk in the left field and click Add
to move the disk to the right field.
Note: If you select all disks for removal from the disk group, the disk group is
automatically destroyed.
vxedit rename:
vxedit -g diskgroup rename old_name new_name
Example:
# vxedit -g datadg rename datadg01 datadg03
Notes:
• The new disk name must be unique within the disk group.
• Renaming a disk does not automatically rename subdisks on
that disk.
When you select a disk group to be deported, options enable you to specify a new name and a new host for the disk group.
Disks that were in the disk group now have a state of Deported. If the disk group
was deported to another host, the disk state is Locked.
Forcing an Import
A disk group import fails if the VxVM configuration daemon cannot find all of the
disks in the disk group. If the import fails because a disk has failed, you can force
the disk group to be imported using options in the VxVM interfaces. Forcing an
import should always be performed with caution.
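From the command line, these options correspond to vxdg flags; a cautious sketch using a hypothetical disk group name:

```shell
# Clear the host ID locks while importing:
# vxdg -C import datadg

# Force the import despite a missing disk (use with caution):
# vxdg -f import datadg

# Start all volumes in the newly imported disk group:
# vxvol -g datadg startall
```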
Options include:
• Clearing host IDs at import
• Forcing an import
• Starting all volumes
By default, when you import a disk group by using VEA, all volumes in the disk
group are started automatically.
Note: VEA does not support temporary import of a disk group.
In the figure, the boot disks of two hosts hold separate disk groups, acctdg and engdg, each containing a volume named vol01.
In the figure, the acctdg disk group resides on additional (non-boot) disks; it is deported from one host and imported on the other.
In the figure, a disk group is renamed from oldnamedg to newnamedg by deporting it under one name and importing it under the other.
You can upgrade the disk group version using VEA or from the command line. The
vxdiskadm utility does not have an option to upgrade a disk group.
Note: You cannot upgrade to a specific disk group version by using VEA. You can
only upgrade to the current version. To upgrade to a specific version, use the
command line.
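From the command line, the upgrade looks like the following sketch; the disk group name and the version number shown are illustrative, so check your release notes for the versions your release supports:

```shell
# Upgrade the disk group to the highest version this release supports:
# vxdg upgrade datadg

# Upgrade to a specific version instead:
# vxdg -T 110 upgrade datadg

# Verify the disk group version:
# vxdg list datadg
```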
Summary
In this lesson, you learned how to perform tasks associated with the management
of disks and disk groups. This lesson described device-naming schemes, how to
add a disk to a disk group, how to view disk and disk group information, and how
to add, remove, rename, and move a disk. This lesson also described procedures
for creating, deporting, importing, destroying, and upgrading a disk group.
Next Steps
In the next lesson, you learn how to create a volume.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager Installation Guide
This guide provides detailed procedures for installing and initializing
VERITAS Volume Manager and VERITAS Enterprise Administrator.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
Recovery Essentials
Encapsulation and Rootability
Reconfiguring Volumes Online
Configuring Volumes
Creating Volumes
Managing Disks and Disk Groups
Installation and Interfaces
Virtual Objects
Introduction
Overview
This lesson describes how to create a volume in VxVM. This lesson covers how to
create a volume using different volume layouts, how to display volume layout
information, and how to remove a volume.
Importance
By creating volumes, you begin to take advantage of the VxVM concept of virtual
storage. Volumes enable you to span data across multiple disks using a variety of
storage layouts and to achieve data redundancy and resilience.
Outline of Topics
• Selecting a Volume Layout
• Creating a Volume
• Displaying Volume Layout Information
• Creating a Layered Volume
• Removing a Volume
In the figure, the plex datavol-01 provides 14 GB of volume address space and is built from two concatenated subdisks, datadg01-01 and datadg02-03.
Concatenated Layout
A concatenated volume layout maps data in a linear manner onto one or more
subdisks in a plex. Subdisks do not have to be physically contiguous and can
belong to more than one VM disk. Storage is allocated completely from one
subdisk before using the next subdisk in the span. Data is accessed in the
remaining subdisks sequentially until the end of the last subdisk.
For example, if you have 14 GB of data, then a concatenated volume can logically
map the volume address space across subdisks on different disks. The addresses
0 GB to 8 GB of volume address space map to the first 8-gigabyte subdisk, and
addresses 8 GB to 14 GB map to the second 6-gigabyte subdisk. An address offset
of 12 GB, therefore, maps to an address offset of 4 GB in the second subdisk.
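The address mapping reduces to simple arithmetic; this sketch reproduces the example's 8-GB plus 6-GB concatenation:

```shell
# Map a volume offset (in GB) onto an 8 GB subdisk followed
# by a 6 GB subdisk.
sd1_gb=8        # size of the first subdisk
off_gb=12       # volume offset to map
if [ "$off_gb" -lt "$sd1_gb" ]; then
  sd=1
  sd_off=$off_gb
else
  sd=2
  sd_off=$(( off_gb - sd1_gb ))
fi
echo "volume offset ${off_gb} GB -> subdisk ${sd}, offset ${sd_off} GB"
```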
In the figure, stripe units SU4 through SU12 are interleaved across three columns of subdisks.
Striped Layout
A striped volume layout maps data so that the data is interleaved, or allocated in
stripes, among two or more subdisks on two or more physical disks. Data is
allocated alternately and evenly to the subdisks of a striped plex.
The subdisks are grouped into “columns.” Each column contains one or more
subdisks and can be derived from one or more physical disks. To obtain the
maximum performance benefits of striping, you should not use a single disk to
provide space for more than one column.
All columns must be the same size. The minimum size of a column should equal
the size of the volume divided by the number of columns. The default number of
columns in a striped volume is one-half the number of disks in the disk group.
Data is allocated in equal-sized units, called stripe units, that are interleaved
between the columns. Each stripe unit is a set of contiguous blocks on a disk. The
stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The
default stripe unit size is 64K, which provides adequate performance for most
general purpose volumes. Performance of an individual volume may be improved
by matching the stripe unit size to the I/O characteristics of the application using
the volume.
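The interleaving can be sketched arithmetically; here a volume offset is located within a hypothetical three-column stripe that uses the default 64K stripe unit:

```shell
# Locate a volume offset within a striped layout.
ncols=3          # number of columns
su_kb=64         # default stripe unit size (64 KB)
off_kb=200       # volume offset to locate
su_index=$(( off_kb / su_kb ))   # which stripe unit the offset falls in
col=$(( su_index % ncols ))      # stripe units rotate across the columns
stripe=$(( su_index / ncols ))   # row of stripe units within each column
echo "offset ${off_kb} KB -> column ${col}, stripe ${stripe}"
```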
Mirrored Layout
By adding a mirror to a concatenated or striped volume, you create a mirrored
layout. A mirrored volume layout consists of more than one plex that duplicate the
information contained in a volume. Each plex in a mirrored layout contains an
identical copy of the volume data. In the event of a physical disk failure and when
the plex on the failed disk becomes unavailable, the system can continue to operate
using the unaffected mirrors.
Although a volume can have a single plex, at least two plexes are required to
provide redundancy of data. Each of these plexes must contain disk space from
different disks to achieve redundancy.
Volume Manager uses true mirrors, which means that all copies of the data are the
same at all times. When a write occurs to a volume, all plexes must receive the
write before the write is considered complete.
You should distribute mirrors across controllers to eliminate the controller as a
single point of failure.
Parity (P) is a calculated value used to reconstruct data after disk failure. In the figure, the parity blocks are distributed across the columns along with the stripe units (SU4 through SU12).
RAID-5
A RAID-5 volume layout has the same attributes as a striped plex, but includes one
additional column of data that is used for parity. Parity provides redundancy.
Parity is a calculated value used to reconstruct data after a failure. While data is
being written to a RAID-5 volume, parity is calculated by doing an exclusive OR
(XOR) procedure on the data. The resulting parity is then written to the volume. If
a portion of a RAID-5 volume fails, the data that was on that portion of the failed
volume can be re-created from the remaining data and parity information.
RAID-5 volumes keep a copy of the data and calculated parity in a plex that is
striped across multiple disks. Parity is spread equally across columns. Given a
five-column RAID-5 where each column is 1 GB in size, the RAID-5 volume size
is 4 GB. One column of space is devoted to parity, and the remaining four 1-GB
columns are used for data.
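The capacity figure in the example follows directly: one column's worth of space holds parity, so usable space is the column size multiplied by one less than the number of columns:

```shell
# RAID-5 usable capacity for the five-column, 1 GB-per-column example.
ncols=5
col_gb=1
data_gb=$(( (ncols - 1) * col_gb ))
echo "usable data space: ${data_gb} GB"
```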
The default stripe unit size for a RAID-5 volume is 16K. Each column must be the
same length but may be made from multiple subdisks of variable length. Subdisks
used in different columns must not be located on the same physical disk.
RAID-5 requires a minimum of three disks for data and parity. When implemented
as recommended, an additional disk is required for the log.
RAID-5 cannot be mirrored.
Advantages
• Concatenation: Better utilization of free space; simplified administration
• Striping: Improved performance (if properly configured); load balancing
• Mirroring: Improved availability; improved read performance; fast recovery through logging
• RAID-5: Requires less space than mirroring; improved read performance; fast recovery through logging
Disadvantages
• Concatenation: No protection against disk failure
• Striping: No data redundancy
• Mirroring: Requires more disk space; slower write performance
• RAID-5: Slower write performance
Concatenation: Advantages
• Removes size restrictions: Concatenation removes the restriction on size of
storage devices imposed by physical disk size.
• Better utilization of free space: Concatenation enables better utilization of
free space on disks by providing for the ordering of available discrete disk
space on multiple disks into a single addressable volume.
• Simplified administration: Concatenation enables large file systems to be
created and reduces overall system administration complexity.
Concatenation: Disadvantages
No protection against disk failure: Concatenation does not protect against disk
failure. A single disk failure results in the failure of the entire volume.
Striping: Advantages
• Improved performance through parallel data transfer: Improved
performance is obtained by increasing the effective bandwidth of the I/O path
to the data. This may be achieved by a single volume I/O operation spanning
across a number of disks or by multiple concurrent volume I/O operations to
more than one disk at the same time.
• Load balancing: Striping is also helpful in balancing the I/O load from
multiuser applications across multiple disks.
Mirroring: Advantages
• Improved reliability and availability: With concatenation or striping, failure
of any one disk makes the entire plex unusable. With mirroring, data is
protected against the failure of any one disk. Mirroring improves the reliability
and availability of a striped or concatenated volume.
• Improved read performance: Reads benefit from having multiple places
from which to read the data.
Mirroring: Disadvantages
• Requires more disk space: Mirroring requires twice as much disk space,
which can be costly for large configurations. Each mirrored plex requires
enough space for a complete copy of the volume’s data.
• Slightly slower write performance: Writing to volumes is slightly slower,
because multiple copies have to be written in parallel. The overall time the
write operation takes is determined by the time needed to write to the slowest
disk involved in the operation.
The slower write performance of a mirrored volume is not generally significant
enough to decide against its use. The benefit of the resilience that mirrored
volumes provide outweighs the performance reduction.
RAID-5: Advantages
• Redundancy through parity: With a RAID-5 volume layout, data can be
re-created from remaining data and parity in case of the failure of one disk.
• Requires less space than mirroring: RAID-5 stores parity information, rather
than a complete copy of the data.
• Improved read performance: RAID-5 provides similar improvements in read
performance as in a normal striped layout.
• Fast recovery through logging: RAID-5 logging minimizes recovery time in
case of disk failure.
RAID-5: Disadvantages
Slow write performance: The performance overhead for writes can be
substantial, because a write can involve much more than simply writing to a data
block. A write can involve reading the old data and parity, computing the new
parity, and writing the new data and parity.
Creating a Volume
When you create a volume using VEA or CLI commands, you indicate the desired
volume characteristics, and VxVM automatically creates the underlying plexes
and subdisks. The VxVM interfaces require minimal input if you use default
settings. For experienced users, the interfaces also enable you to enter more
detailed specifications regarding all aspects of volume creation.
Note: Most volume tasks cannot be performed with the vxdiskadm menu
interface—a management tool used for disk objects.
When you create a volume, two device node files are created that can be used to
access the volume:
• /dev/vx/dsk/diskgroup/volume_name
• /dev/vx/rdsk/diskgroup/volume_name
Disks can be included for or excluded from volume use.
New Volume Wizard Step 1: Assigning Disks to Use for a New Volume
By default, VxVM locates available space on all disks in the disk group and
assigns the space to a volume automatically based on the layout you choose.
Alternatively, you can choose specific disks, mirror or stripe across controllers,
trays, targets, or enclosures, or implement ordered allocation. Ordered allocation is
a method of allocating disk space to volumes based on a specific set of VxVM
rules.
The wizard also presents create and mount options, including an option to mount the file system at boot.
Examples:
# vxassist -g acctdg make payvol 2g
layout=stripe ncol=3 !acctdg04
Notes:
• For a RAID-5 volume, the default stripe unit size is 32 sectors (16K).
• When a RAID-5 volume is created, a RAID-5 log is created by default. This
means that you must have at least one additional disk available for the log.
• If you do not want the default log, then add the nolog option in the syntax,
layout=raid5,nolog.
• If you specify too few disks when creating a volume, you receive the error
message “Cannot allocate space for a x block volume”, even if there is enough
space in the disk group.
Examples:
• To create a concatenated and mirrored volume:
# vxassist -g datadg make datavol 5g layout=mirror
• To specify three mirrors:
# vxassist -g datadg make datavol 5g layout=stripe,mirror nmirror=3
• To run the process in the background:
# vxassist -g datadg -b make datavol 5g layout=stripe,mirror nmirror=3
Highlight a volume, and click the tabs to display details.
Click a dot to highlight an intersecting row and column.
Select View—>Horizontal or View—>Vertical to change the orientation of the diagram.
Right-click a volume and select Properties. Fields in the Properties window indicate whether the volume is used for FastResync, whether it is managed under Intelligent Storage Provisioning (ISP), and whether it belongs to a volume set.
Option Description
-vpsd Select only volumes (v), plexes (p), subdisks (s), or disks (d).
Options can be used individually or in combination.
-h List hierarchies below selected records.
-r Display related records of a volume containing subvolumes.
Grouping is done under the highest-level volume.
-t Print single-line output records that depend upon the
configuration record type. For disk groups, the output consists of
the record type, the disk group name, and the disk group ID.
-l Display all information from each selected record. Most records
that have a default value are not displayed. This information is in
a free format that is not intended for use by scripts.
-a Display all information about each selected record—one record
per line, with a one-space character between each field; the list of
associated records is displayed.
-A Select from all active disk groups.
-e pattern Show records that match an editor pattern.
Additional Options
Option Description
-F[type:]format_spec Enable the user to define which fields to display.
-D - Read a configuration from the standard input. The
standard input is expected to be in standard
vxmake input format.
-m Display all information about each selected record
in a format that is useful as input to the vxmake
utility.
-f Display information about each record as one-line
output records.
-n Display only the names of selected records.
-G Display only disk group records.
-Q Suppress the disk group header that separates each
disk group. A single blank line separates each disk
group.
-q Suppress headers that would otherwise be printed
for the default and the -t and -f output formats.
Regular Mirroring
The example illustrates a regular mirrored volume layout called a mirror-stripe
layout. Data is striped across two disks, disk01 and disk03, to create one plex,
and that plex is mirrored and striped across two other disks, disk02 and disk04.
If two drives fail, the volume survives 2 out of 6 (1/3) times. As more subdisks are
added to each plex, the odds of a traditional volume surviving a two-disk failure
approach (but never equal) 50 percent.
If a disk fails in a mirror-stripe layout, the entire plex is detached, and redundancy
is lost on the entire volume. When the disk is replaced, the entire plex must be
brought up-to-date, or resynchronized.
The table shows which two-disk failures a layered (stripe-mirror) volume survives:

Failed Disks            Volume Status
disk01, disk02          Down
disk01, disk03          Up
disk01, disk04          Up
disk02, disk03          Up
disk02, disk04          Up
disk03, disk04          Down

When two disks fail, the volume survives 4/6, or 2/3, of the time.
Layered Volumes
The example illustrates a layered volume layout called a stripe-mirror layout. In
this layout, VxVM creates underlying volumes that mirror each subdisk. These
underlying volumes are used as subvolumes to create a top-level volume that
contains a striped plex of the data.
If two drives fail, the volume survives 4 out of 6 (2/3) times. In other words, the
use of layered volumes reduces the risk of failure by 50 percent without the
need for additional hardware. As more subvolumes are added, the odds of a
volume surviving a two-disk failure approach 100 percent. For volume failure to
occur, both subdisks that make up a subvolume must fail. If a disk fails, only the
failing subdisk must be detached, and only that portion of the volume loses
redundancy. When the disk is replaced, only a portion of the volume needs to be
recovered, which takes less time.
In the figure, the top-level volume contains more than one plex (mirror), and the plexes are concatenated. Each 1.5-GB concat plex is built from underlying subdisks: a single 1.5-GB subdisk in the first plex, and a 1-GB subdisk plus a 500-MB subdisk in the second.
mirror-concat
This layout mirrors data across concatenated plexes. The concatenated plexes can
be made up of subdisks of different sizes. In the example, the plexes are mirrors
of each other; each plex is a concatenation of one or more subdisks, and the plexes
are of equal size.
When you create a simple mirrored volume that is less than 1 GB in size, a
nonlayered mirrored volume is created by default.
In the figure, each of two striped plexes is built from two 750-MB subdisks on the underlying disks.
mirror-stripe
This layout mirrors data across striped plexes. The striped plexes can be made up
of different numbers of subdisks. In the example, plexes are mirrors of each other;
each plex is striped across the same number of subdisks. Each striped plex can
have different numbers of columns and different stripe unit sizes. One plex could
also be concatenated.
When you create a striped mirrored volume that is less than one gigabyte in size, a
nonlayered mirrored volume is created by default.
In the figure, a 1.5-GB volume and a 2-GB volume each serve as subvolumes; each subvolume is mirrored by a pair of equal-sized concat plexes (1.5 GB or 2 GB), and each plex is built from a single subdisk of matching size.
concat-mirror
This volume layout contains a single plex made up of one or more concatenated
subvolumes. Each subvolume comprises two concatenated plexes (mirrors) made
up of one or more subdisks. If you have two subdisks in the top-level plex, then a
second subvolume is created, which is used as the second concatenated subdisk of
the plex. Additional subvolumes can be added and concatenated in the same
manner.
In the VEA interface, the GUI term used for a layered, concatenated layout is
Concatenated Mirrored. Concatenated Mirrored volumes are mirrored by default
and therefore require more disks than unmirrored concatenated volumes.
Concatenated Mirrored volumes require at least two disks.
In the figure, two 750-MB mirrored volumes serve as subvolumes of the striped top-level volume.
stripe-mirror
This volume layout stripes data across mirrored volumes. The difference between
stripe-mirror and concat-mirror is that the top-level plex is striped rather than
concatenated.
In the VEA interface, the GUI term used for a layered, striped layout is Striped
Mirrored. Striped Mirrored volumes are mirrored by default and therefore require
more disks than unmirrored striped volumes. Striped Mirrored volumes require at
least four disks.
In the vxprint -rth output for a layered volume, the v and pl records are the top-level volume and plex; the sv records are the subvolumes; and the v2, p2, and s2 records are the second-level volumes, plexes, and subdisks:

# vxprint -rth vol01
...
v  vol01        -          ENABLED   ACTIVE...
pl vol01-03     vol01      ENABLED   ACTIVE...
sv vol01-S01    vol01-03   vol01-L01  1...
v2 vol01-L01    -          ENABLED   ACTIVE...
p2 vol01-P01    vol01-L01  ENABLED   ACTIVE...
s2 datadg05-02  vol01-P01  datadg05   0...
p2 vol01-P02    vol01-L01  ENABLED   ACTIVE...
s2 datadg03-02  vol01-P02  datadg03   0...
sv vol01-S02    vol01-03   vol01-L02  1...
Removing a Volume
You should only remove a volume if you are sure that you do not need the data in
the volume, or if the data is backed up elsewhere. A volume must be closed before
it can be removed. For example, if the volume contains a file system, the file
system must be unmounted. You must manually edit the OS-specific file system
table file in order to remove the entry for the file system and avoid errors at boot. If
the volume is used as a raw device, the application, such as a database, must close
the device.
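From the command line, the sequence might look like the following; the mount point, disk group, and volume names are illustrative:

```shell
# Unmount the file system on the volume first:
# umount /data

# Remove the corresponding entry from the file system table file
# (/etc/vfstab on Solaris) manually, then remove the volume:
# vxassist -g datadg remove volume datavol
```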
Select: A volume
Navigation path: Actions—>Delete Volume
Input: Verify the volume to be removed and confirm its removal.
Summary
This lesson described how to create a volume in VxVM. This lesson covered how
to create a volume using different volume layouts, how to display volume layout
information, and how to remove a volume.
Next Steps
In the next lesson, you learn how to configure additional volume attributes.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS
Volume Manager and VERITAS Enterprise Administrator.
Recovery Essentials
Encapsulation and Rootability
Reconfiguring Volumes Online
Configuring Volumes
Creating Volumes
Managing Disks and Disk Groups
Installation and Interfaces
Virtual Objects
Introduction
Overview
This lesson describes how to configure volumes in VxVM. This lesson covers how
to add and remove a mirror, add a log, change the volume read policy, and allocate
storage to volumes. This lesson also describes how to add a file system to a volume
and administer VERITAS File System.
Importance
By configuring volume attributes, you can create volumes that meet the needs of
your business environment.
Outline of Topics
• Administering Mirrors
• Adding a Log to a Volume
• Changing the Volume Read Policy
• Allocating Storage for Volumes
• Administering File Systems
Administering Mirrors
Adding a Mirror
If a volume was not originally created as a mirrored volume, or if you want to add
additional mirrors, you can add a mirror to an existing volume.
Only concatenated or striped volumes can be mirrored. You cannot mirror a
RAID-5 volume. By default, a mirror is created with the same plex layout as the
plex already in the volume. For example, assume that a volume is composed of a
single striped plex. If you add a mirror to the volume, VxVM makes the new plex
striped, as well. You can specify a different layout using VEA or from the
command line.
A mirrored volume requires at least two disks. You cannot add a mirror to a disk
that is already being used by the volume. A volume can have multiple mirrors, as
long as each mirror resides on separate disks.
Only disks in the same disk group as the volume can be used to create the new
mirror. Unless you specify the disks to be used for the mirror, VxVM
automatically locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31.
One plex should be reserved for use by VxVM for background repair operations.
Note: Adding a mirror requires resynchronization of the additional plex, so this
operation may take some time.
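From the command line, vxassist adds the mirror; the disk group, volume, and disk names here are illustrative:

```shell
# Add a mirror, letting VxVM choose the disk space:
# vxassist -g datadg mirror datavol

# Add a mirror on a specific disk:
# vxassist -g datadg mirror datavol datadg03
```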
To verify that a new mirror was added, view the total number of copies of the
volume as displayed in the main window. The total number of copies is increased
by the number of mirrors added.
Removing a Mirror
When a mirror (plex) is no longer needed, you can remove it. When a mirror is
removed, the space occupied by that mirror can be used elsewhere. Removing a
mirror can be used:
• To provide free disk space
• To reduce the number of mirrors in a volume in order to reduce I/O to the
volume
• To remove a temporary mirror that was created to back up a volume and is no
longer needed
Space from the subdisks of a removed plex is returned to the disk group’s free
space pool.
Caution: Removing a mirror results in loss of data redundancy. If a volume only
has two plexes, removing one of them leaves the volume unmirrored.
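A command-line sketch of mirror removal; the volume and plex names are illustrative:

```shell
# Remove one mirror, letting VxVM choose which plex to remove:
# vxassist -g datadg remove mirror datavol

# Or dissociate and remove a specific plex:
# vxplex -g datadg -o rm dis datavol-02
```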
RAID-5 Logging
When you create a RAID-5 volume, a RAID-5 log is added by default. RAID-5
logs speed up the resynchronization time for RAID-5 volumes after a system
failure. A RAID-5 log maintains a copy of the data and parity being written to the
volume at any given time. If a system failure occurs, VxVM can replay the
RAID-5 log to resynchronize the volume. This copies the data and parity that was
being written at the time of failure from the log to the appropriate areas of the
RAID-5 volume.
You can create multiple RAID-5 logs on different disks to mirror the log
information. Ideally, each RAID-5 volume should have at least two logs to protect
against the loss of logging information due to the failure of a single disk. A
RAID-5 log should be stored on a separate disk from the volume data and parity
disks. Therefore, at least four disks are required to implement RAID-5 with
logging. Although a RAID-5 volume cannot be mirrored, RAID-5 logs can be
mirrored.
To support concurrent access to the RAID-5 array, the log should be several times
the stripe size of the RAID-5 plex. As a guideline, make the log six times the size
of a full-stripe write to the RAID-5 volume.
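The sizing guideline works out as follows for a hypothetical five-column RAID-5 volume with the default 16K stripe unit; a full-stripe write covers the data columns only:

```shell
# RAID-5 log sizing: roughly six full-stripe writes.
ncols=5                                     # total columns
su_kb=16                                    # default RAID-5 stripe unit (16 KB)
full_stripe_kb=$(( (ncols - 1) * su_kb ))   # data written per full stripe
log_kb=$(( 6 * full_stripe_kb ))
echo "suggested log size: ${log_kb} KB"
```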
vxassist addlog:
vxassist -g diskgroup addlog volume [logtype=drl]
[nlog=n] [attributes]
Examples:
• To add a dirty region log to an existing mirrored volume:
# vxassist -g datadg addlog datavol logtype=drl
• To add a RAID-5 log to a RAID-5 volume, no log type is needed:
# vxassist -g acctdg addlog payvol
• To remove a log from a volume:
vxassist -g diskgroup remove log [nlog=n] volume
You can add a log to a volume when you create the volume or at any time after
volume creation. The type of log that is created is based on the type of volume
layout.
Removing a Log: VEA
Note: When you remove the only log from a volume, logging is no longer in
effect, and recovery time increases in the event of a system crash.
Changing the Volume Read Policy
The read policy determines which plex satisfies a read request:
• Preferred: Read I/O is directed to a selected plex.
• Round robin: Reads alternate across the plexes.
• Select: The default method; VxVM reads from a striped plex if the volume has one.
Examples:
• To set the read policy to round robin:
# vxvol -g datadg rdpol round datavol
• To set the read policy to read from a preferred plex:
# vxvol -g datadg rdpol prefer datavol datavol-02
• To set the read policy to select a plex based on layouts:
# vxvol -g datadg rdpol select datavol
Select: A volume
Navigation path: Actions—>Set Volume Usage
Input: Volume read policy: Select Based on layouts (default; the
selected plex method), Round robin, or Preferred. If you select
Preferred, then you can also select the preferred plex from the list
of available plexes.
CLI:
Add the -o ordered option:
vxassist [-g diskgroup][-o ordered] make
volume length [layout=layout]...
When Ordered is selected, VxVM uses the specified storage to first concatenate
disks, then to form columns, and finally to form mirrors.
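An ordered-allocation sketch; the disk names, volume name, and size are illustrative:

```shell
# With -o ordered, the disks listed are used in the order given:
# first to concatenate, then to form columns, then to form mirrors.
# vxassist -g datadg -o ordered make ordvol 2g \
#     layout=mirror-stripe ncol=2 \
#     datadg01 datadg02 datadg03 datadg04
```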
In the figure, eight disks (01 through 08) are specified: four 3-GB disks and four 2-GB disks, used in the order listed.
The New File System dialog prompts for new file system details and mount file system details.
Select: A volume
Navigation path: Actions—>File System—>New File System
Input: File system type: Select vxfs or other supported platform-specific
file system type.
Create options: Set mkfs options.
Mount options: Specify a mount point and other mount options.
Notes:
• In VEA, when you create a file system, if you select the “Add to file system
table” and “Mount at boot” check boxes, the entry is made automatically in the
file system table file. If the volume is later removed through VEA, its
corresponding file system table file entry is also removed automatically.
• In AIX, you can use the following commands when working with the file
system table file, /etc/filesystems:
– To view entries: # lsfs mount_point
– To change details of an entry, use chfs. For example, to turn off mount at
boot: # chfs -A no mount_point
– To remove an entry: # rmfs mount_point
-o N
• Provides information only
• Does not create the file system

-o bsize=n
• Sets logical block size
• Default: 1024 bytes (1K) for most file systems
• Cannot be changed after creation
• In most cases, the default is best.
• Resizing the file system does not change the block size.

-o largefiles|nolargefiles
• Supports files > 2 gigabytes (or > 8 million files)
• Default: largefiles

-o logsize=n
• Sets size of logging area
• Default depends on file system size.
• Default is sufficient for most workloads.
• Log size can be changed after creation using fsadm.

-o version=n
• Specifies layout version
• Valid values are 4, 5, and 6.
• Default: Version 6
File: Description

Object Location Table File: Contains the object location table (OLT), which is used to locate the other structural files.

Label File: Encapsulates the superblock and superblock replicas. The superblock contains fundamental information about the file system, such as file system type, size, layout, and available resources. The location of the primary superblock is known; the label file can locate superblock copies if there is structural damage to the file system.

Device File: Records device information, such as volume length and volume label, and contains pointers to other structural files.

Fileset Header File: Holds information on a per-fileset basis, which may include the inode of the fileset’s inode list file, the maximum number of inodes allowed, an indication of whether the file system supports large files, and the inode number of the quotas file if the fileset supports quotas.

Inode List File: Contains the fileset’s inode lists. Increasing the number of inodes involves increasing the size of this file after expanding the inode allocation unit file.

Inode Allocation Unit File: Holds the free inode map, extended operations map, and a summary of inode resources.

Log File: Maps the blocks used by the file system intent log. (The intent log is a record of current activity used to guarantee file system integrity in the event of a system failure.)

Extent Allocation Unit State File: Indicates the allocation state of each AU by defining whether each AU is free, allocated as a whole (no bitmaps allocated), or expanded.

Extent Allocation Unit Summary File: Contains the AU summary for each allocation unit, which holds the number of free extents of each size. (The summary for an extent is created only when an allocation unit is expanded for use.)

Free Extent Map File: Contains the free extent maps for each of the allocation units.

Quotas Files: If the file system supports quotas, a quotas file is used to track the resources allocated to each user.
[Diagram: a file system moves from its initial allocation to a fragmented state, then back to a defragmented state.]
Fragmentation types:
• Directory fragmentation
• Extent fragmentation
Types of Fragmentation
VxFS addresses two types of fragmentation:
• Directory fragmentation
As files are created and removed, gaps are left in directory inodes. This is
known as directory fragmentation. Directory fragmentation causes directory
lookups to become slower.
Examples:
fsadm -e -E -s /mnt1    (extent fragmentation report and reorganization)
fsadm -d -D /mnt1       (directory fragmentation report and reorganization)
VxFS Defragmentation
You can use the online administration utility fsadm to defragment, or reorganize,
file system directories and extents. The fsadm utility defragments a file system
mounted for read/write access by:
• Removing unused space from directories
• Making all small files contiguous
• Consolidating free blocks for file system use
Only a privileged user can reorganize a file system.
Defragmenting Extents
Defragmenting extents, called extent reorganization, can improve performance:
fsadm -e mount_point
During extent reorganization:
• Small files (less than 64K) are made into one contiguous extent.
• Large files are built from large extents.
• Small and recently used (less than 14 days) files are moved near the inode area.
• Large or old files (more than 14 days since last access) are moved to the end of
the allocation unit.
• Free space is clustered in the center of the data area.
Extent reorganization is performed on all inodes in the file system. Each pass
through the inodes moves the file system closer to optimal organization.
Duration of Defragmentation
The time it takes to complete extent reorganization varies, depending on the degree
of fragmentation, disk speed, and the number of inodes in the file system. In
general, extent reorganization takes approximately one minute for every 100
megabytes of disk space.
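That rule of thumb is easy to turn into a quick estimate. A minimal sketch in shell (the function name is illustrative, and the one-minute-per-100-MB constant comes from the guideline above; actual durations vary with fragmentation, disk speed, and inode count):

```shell
# Rough extent-reorganization time estimate: ~1 minute per 100 MB,
# rounded up to a whole minute. Illustrative only; real durations vary.
estimate_defrag_minutes() {
    fs_mb=$1                        # file system size in megabytes
    echo $(( (fs_mb + 99) / 100 ))
}

estimate_defrag_minutes 2048        # 2 GB file system: prints 21
```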
Scheduling Defragmentation
The best way to ensure that fragmentation does not become a problem is to
defragment the file system on a regular basis. The frequency of defragmentation
depends on file system usage, activity patterns, and the importance of file system
performance. In general, follow these guidelines:
• Schedule defragmentation during a time when the file system is relatively idle.
• For frequently used file systems, you should schedule defragmentation daily or
weekly.
• For infrequently used file systems, you should schedule defragmentation at
least monthly.
• Full file systems tend to fragment and are difficult to defragment. You should
consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select
what you think is an appropriate interval for running extent reorganization and run
the fragmentation reports both before and after the reorganization. If the degree of
fragmentation is approaching the bad fragmentation figures, then the interval
between fsadm runs should be reduced. If the degree of fragmentation is low, then
the interval between fsadm runs can be increased.
You should schedule directory reorganization for file systems when the extent
reorganization is scheduled. The fsadm utility can run on demand and can be
scheduled regularly as a cron job.
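A cron entry for regular defragmentation might look like the following sketch. The mount point /mnt1, the weekly schedule, the log destination, and the /opt/VRTS/bin path to fsadm are illustrative assumptions, not defaults:

```shell
# Hypothetical root crontab entry: every Sunday at 2:00 a.m., report and
# reorganize both directories and extents on /mnt1.
0 2 * * 0 /opt/VRTS/bin/fsadm -d -D -e -E -s /mnt1 > /var/adm/fsadm.mnt1 2>&1
```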
The defragmentation process can take some time. You receive an alert when the
process is complete.
[Diagram: intent log recovery. After the intent log is written, other file system updates are made. If the system crashes, the intent log is replayed by VxFS fsck.]
Option Description
-m Checks, but does not repair, a file system before mounting
-n|N Assumes a response of no to all prompts by fsck (This option
does not replay the intent log and performs a full fsck.)
-V Echoes the expanded command line but does not execute the
command
-y|Y Assumes a response of yes to all prompts by fsck (If the file
system requires a full fsck after the log replay, then a full fsck is
performed.)
Option Description
-o full Perform a log replay and a full file system check. (By default,
VxFS performs an intent log replay only.)
-o nolog Do not perform log replay. You can use this option if the log area
becomes physically damaged.
-o p Note: This option is supported in Solaris 8, update 2 and later.
Allow parallel log replay for several VxFS file systems. Each
message from fsck is prefixed with the device name to identify
the device. This suboption does not perform a full file system
check in parallel; that is still done sequentially on each device,
even when multiple devices are specified.
Integrity: -o blkclear
• All storage initialized; provides increased security; slower than a standard file system

Performance: -o delaylog
• Default; some logging delayed; improves performance
-o log
This option guarantees that all structural changes to the file system have been
logged on disk when the system call returns. If a system failure occurs, fsck
replays recent changes so that they are not lost.
-o delaylog
This is the default option that does not need to be specified. When you use this
option, some system calls return before the intent log is written, and the logging
delay improves the performance of the system. With this option, VxFS
synchronously maintains structural changes to the file system, and operations such
as file create, file delete, and extending file sizes are guaranteed to go into the log.
Other operations such as synchronous I/Os (for example, a database transaction
log) are also guaranteed to be stored on disk. For some workloads, such as file
servers, where the application does not request synchronous semantics, VxFS
caches data when permitted in order to improve performance. In environments
where the application sets the caching policy (for example, databases or NFS),
VxFS strictly adheres to that policy.
However, when the application allows VxFS to choose the caching policy, VxFS
will attempt to do the best job from a performance and memory management
perspective.
-o tmplog
With the tmplog option, intent logging is almost always delayed. This option
greatly improves performance, but recent changes may disappear if the system
crashes. This mode is only recommended for temporary file systems.
On most UNIX systems, temporary file system directories (such as /tmp and
/usr/tmp) often hold files that do not need to be retained when the system
reboots. The underlying file system does not need to maintain a high degree of
structural integrity for these temporary directories.
Summary
This lesson described how to configure volumes in VxVM. This lesson covered
how to add and remove a mirror, how to add a log, and how to add a file system to
a volume. Methods for allocating storage for volumes and changing the volume
read policy were also covered.
Next Steps
In the next lesson, you learn how to reconfigure volumes while online.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS
Volume Manager and VERITAS Enterprise Administrator.
Recovery Essentials
Encapsulation and Rootability
Reconfiguring Volumes Online
Configuring Volumes
Creating Volumes
Managing Disks and Disk Groups
Installation and Interfaces
Virtual Objects
Introduction
Overview
This lesson describes how to perform and monitor volume maintenance tasks
using VERITAS Volume Manager (VxVM). This lesson describes how to perform
online administration tasks, such as resizing a volume and changing the layout of a
volume, and how to analyze volume configurations with the Storage Expert utility.
Importance
With VxVM, you can perform volume maintenance, such as changing the size and
layout of a volume, without disrupting applications or file systems that are using
the volume. A volume layout can be resized, reconfigured, monitored, and
controlled while the volume is online and accessible to users. The Storage Expert
utility enables you to analyze volume configurations based on VxVM best
practices.
Outline of Topics
• Resizing a Volume
• Changing the Volume Layout
• Managing Volume Tasks
• Analyzing Volume Configurations with Storage Expert
Resizing a Volume
Resizing a Volume
If users require more space on a volume, you can increase the size of the volume.
If a volume contains unused space that you need to use elsewhere, you can shrink
the volume. To resize a volume, you can specify either:
• The desired new size of the volume, or
• The amount of space to add to or subtract from the current volume size
When the volume size is reduced, the resulting extra space is returned to the free
space pool.
When the volume size is increased, sufficient disk space must be available in the
disk group. When increasing the size of a volume, VxVM assigns the necessary
new space from available disks. By default, VxVM uses space from any disk in the
disk group, unless you define specific disks.
Specify the amount of space to add or subtract, or specify a new volume size.
If desired, specify disks to be used for the additional space.
Notes: When you resize a volume, if a VERITAS file system (VxFS) is mounted
on the volume, the file system is also resized. The file system is not resized if it is
unmounted.
[Diagram: myvol resized in four steps: 10 MB, 50 MB, 60 MB, 40 MB, 30 MB]
Example: The size of the volume myvol is 10 MB. To extend myvol to 50 MB:
# vxresize -g mydg myvol 50m
To extend myvol by an additional 10 MB:
# vxresize -g mydg myvol +10m
To shrink myvol back to a length of 40 MB:
# vxresize -g mydg myvol 40m
To shrink myvol by an additional 10 MB:
# vxresize -g mydg myvol -10m
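The size argument in these examples is either an absolute target or a signed delta. A small sketch of that arithmetic (the function name is illustrative; vxresize performs this calculation itself):

```shell
# New volume size (MB) from a vxresize-style argument: a bare number is
# an absolute target, +n grows the volume by n, -n shrinks it by n.
new_size_mb() {
    current=$1
    arg=$2
    case $arg in
        +*) echo $(( current + ${arg#+} )) ;;
        -*) echo $(( current - ${arg#-} )) ;;
         *) echo "$arg" ;;
    esac
}

new_size_mb 10 50      # prints 50 (extend to 50 MB)
new_size_mb 50 +10     # prints 60 (extend by 10 MB)
```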
Examples:
• Relayout concatenated to mirror-concat to achieve redundancy.
• Relayout RAID-5 to mirrored for better write performance.
• Relayout mirrored to RAID-5 to save space.
• Change stripe unit size or add columns to achieve desired
performance.
Supported Transformations
By using online relayout, you can change the layout of an entire volume or a
specific plex. VxVM currently supports the transformations listed in the slide.
Note: Online relayout should be used only with volumes created with the
vxassist command or through the VEA interface.
[Diagram: during relayout, data moves from the source subvolume to a temporary (scratch pad) subvolume, and is then returned from the temporary area to the new layout area.]
By default:
• If volume size is less than 50 MB, the temp area = volume size.
• If volume size is 50 MB to 1 GB, the temp area = 50 MB.
• If volume size is 1 GB or greater, the temp area = 1 GB.
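Those defaults reduce to a simple rule. A sketch in shell (the function name is illustrative; VxVM applies this sizing internally):

```shell
# Default relayout temporary-area size, in MB, per the rules above.
temp_area_mb() {
    vol_mb=$1
    if [ "$vol_mb" -lt 50 ]; then
        echo "$vol_mb"          # under 50 MB: temp area equals volume size
    elif [ "$vol_mb" -lt 1024 ]; then
        echo 50                 # 50 MB up to 1 GB: 50 MB
    else
        echo 1024               # 1 GB or greater: 1 GB
    fi
}

temp_area_mb 30        # prints 30
temp_area_mb 500       # prints 50
```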
The larger the temporary space, the faster the relayout, because
larger pieces can be copied at one time.
Select a new volume layout.
Set relayout options.
[Screenshot: the Relayout Status Monitor window, showing status information and relayout controls]
When you launch a relayout operation, the Relayout Status Monitor window is
displayed. This window provides information and options regarding the progress
of the relayout operation.
• Volume Name: The name of the volume that is undergoing relayout
• Initial Layout: The original layout of the volume
• Desired Layout: The new layout for the volume
• Status: The status of the relayout task
• % Complete: The progress of the relayout task
The Relayout Status Monitor window also contains options that enable you to
control the relayout process:
• Pause: To temporarily stop the relayout operation, click Pause.
• Abort: To cancel the relayout operation, click Abort.
• Continue: To resume a paused or aborted operation, click Continue.
• Reverse: To undo the layout changes and return the volume to its original
layout, click Reverse.
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT myvol

In this output:
• TASKID: The VxVM-assigned task ID
• PTID: The parent ID of the task
• TYPE/STATE: A description of the task and its state: Running (R), Paused (P), or Aborting (A)
• PCT: The percentage of the task that is complete
• PROGRESS: The starting, ending, and current offsets, followed by the affected VxVM object
# vxrelayout -g datadg status datavol
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.

The first line of the output shows the source layout followed by the destination layout; the second line shows the task status and the percentage of the task completed.
Set the throttling value in the Throttle Task dialog box.
[Diagram: Storage Expert workflow]
The administrator asks questions such as:
• Are all of my logs mirrored?
• Are all of my volumes redundant?
• Should my mirror-stripe be a stripe-mirror?
Storage Expert evaluates the configuration against VxVM “best practices” rules and reports INFO, PASS, or VIOLATION for each object examined.
Rule Output
When you run a rule, output is generated that indicates the status of objects that are
examined against the rule. In the output:
• INFO: Indicates information about an object
• PASS: Indicates that the object met the conditions of the rule
• VIOLATION: Indicates that the object did not meet the conditions of the rule
Notes:
• By default, output is displayed on the screen, but you can redirect the output to
a file using standard UNIX redirection.
• You can also set Storage Expert to run as a cron job to notify administrators
and automatically archive reports.
Summary
This lesson described how to perform and monitor volume maintenance tasks
using VERITAS Volume Manager (VxVM). This lesson described how to perform
online administration tasks, such as resizing a volume and changing the layout of a
volume, and how to analyze volume configurations with the Storage Expert utility.
Next Steps
The next lesson describes root disk encapsulation and upgrading.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS
Volume Manager and VERITAS Enterprise Administrator.
Recovery Essentials
Encapsulation and Rootability
Reconfiguring Volumes Online
Configuring Volumes
Creating Volumes
Managing Disks and Disk Groups
Installation and Interfaces
Virtual Objects
Introduction
Overview
This lesson describes the process of placing the boot disk under VxVM control.
Methods for creating an alternate boot disk, removing the boot disk from VxVM
control, and upgrading VxVM are covered.
Importance
Disk encapsulation enables you to preserve data on a disk when you place the disk
under VxVM control. By encapsulating and mirroring your boot disk, you can
ensure that if your boot disk is lost, the system continues to operate on the mirror.
A thorough understanding of the encapsulation process is important for
performing upgrades of VxVM software.
Outline of Topics
• Placing the Boot Disk Under VxVM Control
• Creating an Alternate Boot Disk
• Removing the Boot Disk from VxVM Control
• Upgrading to a New VxVM Version
[Diagram: an encapsulated data disk. The existing home, eng, acct, and dist partitions become the volumes homevol, engvol, acctvol, and distvol, and a private region is added to the disk.]
[Diagram: an encapsulated boot disk. The /, /usr, /var, and swap partitions become the volumes rootvol, usr, var, and swapvol, and a private region is added to the disk.]
What Is Rootability?
Rootability, or root encapsulation, is the process of placing the root file system,
swap device, and other file systems on the boot disk under VxVM control. VxVM
converts existing partitions of the boot disk into VxVM volumes. The system can
then mount the standard boot disk file systems (that is, /, /usr, and so on) from
volumes instead of disk partitions.
Failed startup:
vxvm: vxconfigd: Error: System startup failed

Root plex not valid:
vxvm: vxconfigd: Error: System boot disk does not have a valid root plex
Alternate boot disks are listed.
Solaris Note
This command changes the volume entries in /etc/vfstab to the underlying
disk partitions for the rootvol, swapvol, usr, and var volumes. The
command also modifies /etc/system and prompts for a reboot so that disk
partitions are mounted instead of volumes for the root, swap, usr, and var
volumes.
Note: Upgrade procedures are documented in the VERITAS Volume Manager 4.0
Installation Guide. Follow all instructions in the installation guide when
performing any upgrade. This training provides guidelines for a successful
upgrade; refer to the installation documentation for detailed steps.
To upgrade:
1 Bring system to single-user mode.
2 Load VERITAS CD-ROM.
3 Check for upgrade issues.
4 Run upgrade_start.
5 Reboot to single-user mode.
6 Upgrade your operating system.
7 Reboot to single-user mode.
8 Load VERITAS CD-ROM.
9 Run upgrade_finish.
After Upgrading
After completing the upgrade and rebooting, confirm the following:
1 Confirm that key VxVM processes (vxconfigd, vxnotify, and
vxrelocd) are running by using the command:
# ps -ef | grep vx
2 Verify the existence of the boot disk’s volumes by using vxprint:
# vxprint -ht
At this point, your preupgrade configuration is in effect, and any file systems
previously defined on volumes are defined and mounted.
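Step 1 can be scripted so that each required daemon is checked by name. A sketch (the captured process-listing text below stands in for live ps -ef output; the daemon list comes from the step above):

```shell
# Verify that the key VxVM daemons appear in process-listing output.
# A captured sample stands in for `ps -ef` here.
ps_out='root 101 1 0 00:00:00 vxconfigd
root 102 1 0 00:00:00 vxnotify
root 103 1 0 00:00:00 vxrelocd'

for d in vxconfigd vxnotify vxrelocd; do
    if echo "$ps_out" | grep -q "$d"; then
        echo "$d running"
    else
        echo "$d MISSING"
    fi
done
```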
Note: If you prefer to perform an upgrade without using the upgrade_start
and upgrade_finish scripts, you can use the vxunroot command to convert
volumes back to partitions. See the VERITAS Volume Manager Installation Guide
and visit http://support.veritas.com for more information.
Summary
This lesson described the disk encapsulation process and how to encapsulate the
boot disk on your system. Methods for creating an alternate boot disk and
unencapsulating a boot disk were covered.
Next Steps
The next lesson introduces basic recovery operations.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager Installation Guide
This guide provides information on installing and initializing VxVM and the
VERITAS Enterprise Administrator graphical user interface.
Recovery Essentials
Encapsulation and Rootability
Reconfiguring Volumes Online
Configuring Volumes
Creating Volumes
Managing Disks and Disk Groups
Installation and Interfaces
Virtual Objects
Introduction
Overview
This lesson introduces basic recovery concepts and techniques. This lesson
describes how data consistency is maintained after a system crash and how hot
relocation restores redundancy to failed VxVM objects. This lesson also describes
how to manage spare disks, replace a failed disk, and recover a volume.
Importance
VxVM protects systems from disk failures and helps you to recover from disk
failures. You can use the techniques discussed in this lesson to recover from a
variety of disk- and volume-related problems that may occur.
Outline of Topics
• Maintaining Data Consistency
• Hot Relocation
• Managing Spare Disks
• Replacing a Disk
• Unrelocating a Disk
• Recovering a Volume
• Protecting the VxVM Configuration
[Diagram: after a system crash, VxVM must determine whether all writes completed and whether all mirrors contain the same data; if they do not, the volume is resynchronized.]
Resynchronization methods:
• Atomic-copy resynchronization
• Read-writeback resynchronization
Atomic-Copy Resynchronization
Atomic-copy resynchronization refers to the sequential writing of all blocks of the
volume to a plex. This operation is used anytime a new mirror is added to a
volume, or an existing mirror is in stale mode and has to be resynchronized.
Read-Writeback Resynchronization
Read-writeback resynchronization is a process where two or more plexes have the
same data, but there may have been outstanding writes to the volume when the
system crashed.
Because the application must ensure that all writes are completed, the application
must fix any writes that are not completed. The responsibility of VxVM is to
guarantee that the mirrors have the same data.
• A database (as an application) usually does this by writing the original data
back to the disk.
• A file system checks to ensure that all of its structures are intact. The
applications using the file system must do their own checking.
• If you resize a volume, the log size does not change. To resize
the log, you must delete the log and add it back after resizing
the volume.
RAID-5 Logging
Dirty region logging is used for mirrored volumes only. RAID-5 volumes use
RAID-5 logs to keep a copy of the data and parity currently being written. You
were introduced to RAID-5 logging when you created a volume with a log.
Without logging, data not involved in any active writes can be lost or silently
corrupted if both a disk in a RAID-5 volume and the system fail. If this double-
failure occurs, there is no way of knowing if the data being written to the data
portions of the disks or the parity being written to the parity portions have actually
been written.
RAID-5 logging is used to prevent corruption of data during recovery by
immediately recording changes to data and parity to a log area on a persistent
device (such as a disk-resident volume or nonvolatile RAM). The new data and
parity are then written to disk.
Logs are associated with a RAID-5 volume by being attached as log plexes. More
than one log plex can exist for each RAID-5 volume, in which case the log areas
are mirrored.
Hot Relocation
What Is Hot Relocation?
Hot relocation is a feature of VxVM that enables a system to automatically react to
I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore
redundancy and access to those objects. VxVM detects I/O failures on objects and
relocates the affected subdisks. The subdisks are relocated to disks designated as
spare disks or to free space within the disk group. VxVM then reconstructs the
objects that existed before the failure and makes them redundant and accessible
again.
[Diagram: hot relocation. Subdisks from a failed VM disk are relocated to spare disks, and the volumes are recovered.]
1. vxrelocd detects disk failure.
2. The administrator is notified by e-mail.
3. Subdisks are relocated to a spare.
4. Volume recovery is attempted.
1 Physical replacement: Replace the corrupt disk with a new disk.
2 Logical replacement: Replace the disk in VxVM.
Replacing a Disk
Disk Replacement Tasks
Replacing a failed or corrupted disk involves both physically replacing the disk
and then logically replacing the disk and recovering volumes in VxVM:
• Disk replacement: When a disk fails, you replace the corrupt disk with a new
disk. The disk used to replace the failed disk must be either an uninitialized
disk or a disk in the free disk pool. The replacement disk cannot already be in a
disk group. If you want to use a disk that exists in another disk group, then you
must remove the disk from the disk group and place it back into the free disk
pool before you can use it as the replacement disk.
• Volume recovery: When a disk fails and is removed for replacement, the plex
on the failed disk is disabled, until the disk is replaced. Volume recovery
involves starting disabled volumes, resynchronizing mirrors, and
resynchronizing RAID-5 parity.
After successful recovery, the volume is available for use again. Redundant
(mirrored or RAID-5) volumes can be recovered by VxVM. Nonredundant
(unmirrored) volumes must be restored from backup.
Note: This lesson only discusses disks that have failed completely. When hot
relocation takes place, VxVM removes the disk from VxVM control and marks the
disk as FAILED. Partial disk failure, that is, disks marked with a status of
FAILING, is covered in another lesson.
Unrelocating a Disk
The vxunreloc Utility
The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk,
and recovers the plex associated with the subdisk.
VxVM also provides a utility that unrelocates a disk—that is, moves relocated
subdisks back to their original disk. After hot relocation moves subdisks from a
failed disk to other disks, you can return the relocated subdisks to their original
disk locations after the original disk is repaired or replaced.
Unrelocation is performed using the vxunreloc utility, which restores the system
to the same configuration that existed before a disk failure caused subdisks to be
relocated.
Select: The original disk that contained the subdisks before hot relocation
Navigation path: Actions—>Undo Hot Relocation
Note: This option is only available after hot relocation or hot
sparing has occurred.
Input: Select the disk that contained the subdisks before relocation
occurred
Note: It is not possible to return relocated subdisks to their original disks if their
disk group’s relocation information has been cleared.
CLI:
vxreattach [-bcr] [device_tag]
• Reattaches disks to a disk group after a transient failure, such as when
a drive is turned off and then turned back on
• -r attempts to recover stale plexes using vxrecover.
Recovering a Volume
Recovering a Volume: VEA
Examples
After replacing the failed disk datadg01 in the datadg disk group, and adding
the new disk c1t1d0s2 in its place, you can attempt to recover the volume
datavol:
# vxrecover -bs -g datadg datavol
To recover, in the background, any detached subdisks or plexes that resulted from
replacement of the disk datadg01 in the datadg disk group:
# vxrecover -b -g datadg datadg01
To monitor the operations during the recovery, you add the -v option:
# vxrecover -v -g datadg datadg01
Protecting the VxVM configuration:
vxconfigbackup diskgroup       (back up the disk group configuration)
vxconfigrestore -p diskgroup   (precommit: restore the configuration for inspection)
vxconfigrestore -c diskgroup   (commit the restored configuration)
Summary
This lesson introduced basic recovery concepts and techniques. This lesson
described how data consistency is maintained after a system crash and how hot
relocation restores redundancy to failed VxVM objects. This lesson also described
how to manage spare disks, replace a failed disk, and recover a volume.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide
This guide provides detailed information on procedures and concepts
involving volume management and system administration using VERITAS
Volume Manager.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise
Administrator
This guide describes how to use the VERITAS Enterprise Administrator
graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS
Volume Manager and VERITAS Enterprise Administrator.
Lab Prerequisites
Record the following information to be provided by your instructor:
root password
Host name
My Boot Disk:
My Data Disks:
Location of VERITAS
Volume Manager packages:
Location of VERITAS
Storage Foundation license
keys:
Preinstallation
1 Determine if there are any VRTS packages currently installed on your system.
2 Before installing VxVM, save the following important system files into backup
files named with a “.preVM” extension. Also, save your boot disk
information to a file for later use (do not store the file in /tmp). You may need
the boot disk information when you bring the boot disk under VxVM control in
a later lab.
3 Are any VERITAS license keys installed on your system? Check for installed
licenses.
2 Using the VERITAS Volume Manager installation script, install and perform
initial configuration of VxVM. During the installation:
– If you do not have Storage Foundation licenses on your system, install
licenses when prompted. Your instructor will provide licensing
information.
– Install all optional and required packages.
– Do not use enclosure-based naming.
– Do not set a default disk group.
– Start VxVM.
3 Check in /.profile to ensure that the following paths are present on your
system.
/opt/VRTS/bin
/opt/VRTSvxfs/sbin
Note: On some systems, you may need to configure the system to use the
appropriate display. For example, if the display is pc1:0, before you run
VEA, type:
# DISPLAY=pc1:0
# export DISPLAY
It is also important that the display itself is configured to accept connections
from your client. If you get permission errors when you try to start VEA, in a
terminal window on the display system, type:
# xhost system (or # xhost + to allow connections from any host)
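A minimal sketch of the display setup, using the example value pc1:0 from the note above and falling back to it only when DISPLAY is not already set:

```shell
# Default DISPLAY to the example value pc1:0 when it is unset, then
# export it so that VEA (and other X clients) can find the display.
DISPLAY="${DISPLAY:-pc1:0}"
export DISPLAY
echo "DISPLAY=$DISPLAY"
```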
3 Connect to your system as root. Your instructor provides you with the
password.
11 Test the new account. After you have tested the new account, exit VEA.
Exploring vxdiskadm
1 From the command line, invoke the text-based VxVM menu interface.
3 From the command line, invoke the VxVM manual pages and read about the
vxdisk command.
5 From the command line, invoke the VxVM manual pages and read about the
vxdg command.
7 From the command line, invoke the VxVM manual pages and read about the
vxprint command.
2 What is the version and revision number of the installed version of VxVM?
3 What daemons are running after the system boots under VxVM control?
3 Create a new disk group by adding a disk from the free disk pool, or an
uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized)
and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
4 Add one more disk to your disk group. Initialize the disk and view all the disk
devices on the system.
5 Remove all of the disks from your disk group. What happens to your disk
group?
6 Create a new disk group by adding a disk from the free disk pool, or an
uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized)
and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data1dg and data2dg.
7 Deport your disk group. Do not give it a new owner. View all the disk devices
on the system.
8 Import your datadg disk group and view all the disk devices on the system.
10 Import the disk group and change its name to data3dg. View all the disk
devices on the system.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data3dg and data4dg.
13 Now import data3dg and overwrite the disk group lock. What did you have
to do to import it and why?
2 Add one uninitialized disk to the free disk pool and view the status of the disk
devices to verify your action.
3 Remove the disk from the free disk pool and return the disk to an uninitialized
state. View the status of the disk devices to verify your action.
4 Add four data disks to the free disk pool as sliced disks and view the status of
the disk devices to verify your action.
Note: It is important to create sliced disks and use a non-CDS disk group as
specified in the instructions, so that you can practice upgrading the disk group
version later in this exercise.
5 Create a non-CDS disk group data4dg with at least one drive. Verify your
action.
6 Deport disk group data4dg, then import the disk group back to your machine.
Verify your action.
10 How would you check that you have upgraded the version?
11 Add two more disks to the disk group data4dg. You should now have three
disks in your disk group. Verify your action.
12 Remove a disk from the disk group data4dg. Verify your action.
13 Deport disk group data4dg and assign the host name as the host name of your
machine. Verify your action.
14 View the status of the disks in the deported disk group using vxdisk list
device_tag. What is in the hostid field?
17 Try again to remove a disk from data4dg. Does it work this time?
18 Deport the disk group data4dg and do not assign a host name. Verify your
action.
19 View the status of the disk in the deported disk group using vxdisk list
device_tag. What is in the hostid field?
22 Destroy the disk group and send any initialized disks back to an uninitialized
state.
Setup
A minimum of four disks is required to perform this lab, not including the root
disk.
Creating Volumes
1 Add four initialized disks to a disk group called datadg. Verify your action
using vxdisk -o alldgs list.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data1dg and data2dg.
3 Display the volume layout. What names have been assigned to the plex and
subdisks?
5 Create a 50-MB striped volume on two disks and specify which two disks to
use in creating the volume.
What names have been assigned to the plex and subdisks?
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K.
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K. Exclude at least one disk from use.
9 Create the same volume specified in step 7, but without the mirror.
What names have been assigned to the plex and subdisks?
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number
of drives in the disk group.
Was the volume created?
1 First, remove any volumes that you created in the previous lab.
a Using the vi editor, create a file called vxassist that includes the
following:
# when mirroring create three mirrors
nmirror=3
b Using the vi editor, create a file called alt_vxassist that includes the
following:
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
3 View the layout of these volumes using VEA and by using vxprint. What do
you notice?
4 Remove any vxassist default files that you created in this optional lab
section. The presence of these files can impact subsequent labs where default
behavior is assumed.
Setup
Before you begin this lab, ensure that any volumes created in previous labs have
been removed. Create a new disk group that contains four disks only.
2 Display the volume layout. How are the disks allocated in the volume? Which
disk devices are used?
3 Remove the volume you just made, and re-create it by specifying the four disks
in an order different from the original layout.
4 Display the volume layout. How are the disks allocated this time?
6 Remove one of the two mirrors, and display the volume layout.
7 Add a mirror to the existing volume, and display the volume layout.
8 Add a dirty region log to the existing volume and specify the disk to use for the
DRL. Display the volume layout.
9 Change the volume read policy to round robin, and display the volume layout.
2 Create a VERITAS file system on the datavol volume using the default
options.
4 Mount the newly created file system on the mount point, and use all default
options.
5 Using the newly created file system, create, modify, and remove files.
6 Display the content of the mount point directory, showing hidden entries, inode
numbers, and block sizes of the files.
8 How many disk blocks are defined within the file system and are used by the
file system?
12 Display details of the file system that were set when it was created.
13 Check the structural integrity of the file system using the default log policy.
2 Repeatedly copy /opt to the file system using a new target directory name
each time until the file system is approximately 85 percent full.
# for i in 1 2 3
> do
> cp -r /opt /fs_test/opt$i
> done
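To decide when the file system has reached roughly 85 percent, you can read the capacity column from df output. A minimal sketch; the df line below is a made-up sample standing in for live output:

```shell
# Extract the capacity percentage (column 5 of df -k style output) so a
# script can stop copying once the file system passes 85 percent full.
sample_df_line='/dev/vx/dsk/datadg/datavol 102400 88064 13312 87% /fs_test'
pct=$(echo "$sample_df_line" | awk '{sub("%", "", $5); print $5}')
echo "capacity: ${pct}%"
if [ "$pct" -ge 85 ]; then
    echo "file system is at least 85% full"
fi
```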
5 Repeat steps two and three using values 4 5 for i in the loop. Fragmentation
of both free space and directories will result.
6 Repeat step two using values 6 7 for i. Then delete all files that are smaller
than 64K to release a reasonable amount of space.
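Deleting the files smaller than 64K can be done with find. A minimal sketch; the lab's target is /fs_test, but the demonstration here runs against a scratch directory:

```shell
# Remove every regular file smaller than 64K (65536 bytes) under a tree.
# Demonstrated on a scratch directory instead of the lab's /fs_test.
mkdir -p /tmp/frag_demo
dd if=/dev/zero of=/tmp/frag_demo/small bs=1024 count=16 2>/dev/null   # 16K
dd if=/dev/zero of=/tmp/frag_demo/large bs=1024 count=128 2>/dev/null  # 128K
find /tmp/frag_demo -type f -size -65536c -exec rm {} \;
ls /tmp/frag_demo
```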
7 Defragment the file system and display the results. Run fragmentation reports
both before and after the defragmentation and display summary statistics after
each pass. Compare the fsadm report from step 4 with the final report from
the last pass in this step.
8 Unmount the file systems and remove the volumes used in this lab.
Setup
To perform this lab, you should have at least four disks in the disk group that you
are using. You can use either the VEA interface or the command line interface,
whichever you prefer. The solutions for both methods are covered in the Lab
Solutions appendix. If you use object names other than the ones provided,
substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step
to view the underlying command that was issued.
Resizing a Volume
1 If you have not already done so, remove the volumes created in the previous
lab.
2 Create a 20-MB concatenated mirrored volume with a file system /myfs, and
mount the volume.
4 Add data to the volume and verify that the file has been added.
3 Unmount the file system on the volume and remove the volume.
2 Create a VERITAS file system on the volume by using the mkfs command.
Specify the file system size as 40 MB.
3 Create a mount point /reszmnt on which to mount the file system.
4 Mount the newly created file system on the mount point /reszmnt.
5 Verify disk space using the df command. Observe that the available space is
smaller than the size of the volume.
6 Expand the file system to the full size of the underlying volume using the
fsadm -b newsize option.
8 Make a file on the file system mounted at /reszmnt (using mkfile), so that
the free space is less than 50 percent of the total file system size.
9 Shrink the file system to 50 percent of its current size. What happens?
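The fsadm -b option takes the new size in sectors, so shrinking to 50 percent means halving the current sector count. A minimal sketch of the arithmetic, using the 40-MB size from step 2 as the example (the echoed command line follows the -b usage mentioned in step 6):

```shell
# A 40-MB file system is 81920 sectors (512-byte sectors).
fs_mb=40
sectors=$((fs_mb * 1024 * 1024 / 512))
half=$((sectors / 2))
echo "current: ${sectors} sectors, target: ${half} sectors"
echo "fsadm -b ${half} /reszmnt"
```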
2 Display a description of Storage Expert rule vxse_drl1. What does this rule
do?
4 From the command line, create a 100-MB mirrored volume with no log. Create
and mount a file system on the volume.
7 Run Storage Expert rule vxse_drl1 again on the disk group containing the
volume. What does Storage Expert report?
9 Run Storage Expert rule vxse_drl1 again on the disk group containing the
volume. What does Storage Expert report?
10 What are the attributes and parameters that Storage Expert uses in running the
vxse_drl1 rule?
12 Run Storage Expert rule vxse_drl1 again. When running the rule, specify
that you want Storage Expert to test the mirrored volume against a
mirror_threshold of 100 MB. What does Storage Expert report?
13 Unmount the file system and remove the volume used in this exercise.
3 Slow down the task progress rate to insert an I/O delay of 100 milliseconds.
View the layout of the volume in the VEA interface.
5 In another terminal window, abort the task to simulate a crash during relayout.
View the layout of the volume in the VEA interface.
2 After the reboot, use vxdiskadm to add a disk that will be used for the mirror
of rootdisk. If your system has two internal disks, use the second internal
disk on your system for the mirror. (This is required due to the nature of the
classroom configuration.) When setting up the disk, make sure that the disk
layout is sliced. Use altboot as the name of your disk.
3 Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk
that you added, altboot.
4 After the mirroring operation is complete, verify that you now have two disks
in systemdg: rootdisk and altboot, and that all volumes are mirrored.
In what order are the volumes mirrored?
Check to determine if rootvol is enabled and active. Hint: Use vxprint
and examine the STATE fields.
5 From the command line, set the eeprom variable to enable VxVM to create a
device alias in the openboot program.
6 To disable the boot disk, place the plex rootvol-01 in a disabled, offline
state by using the vxmend command. This command makes changes to
configuration records; here, you use it to take the plex offline. For more
information about this command, see the vxmend(1M) manual page.
# vxmend -g systemdg off rootvol-01
Use the boot disk alias vx-altboot to boot up from the alternate boot disk.
For example:
OK> boot vx-altboot
12 To boot up from the original boot disk, reboot again using init 6.
You have now booted up from the original boot disk.
13 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt,
and home (that is, remove the newer plex from each volume in systemdg).
14 Run the command to convert the root volumes back to disk partitions.
16 Verify that the mount points are now slices rather than volumes.
17 At the end of this lab, leave your boot disk unencapsulated and remove any
other disks from systemdg.
Setup
For this lab, you should have at least four disks (datadg01 through datadg04)
in a disk group called datadg. If you use object names other than the ones
provided, substitute the names accordingly in the commands.
10 After the system is running again, check the state of the volumes to ensure that
neither of the volumes is in the sync/needsync mode.
11 Run the vxstat command. This utility displays statistical information about
volumes and other VxVM objects. For more information on this command, see
the vxstat(1M) manual page.
# vxstat -g diskgroup -fab vollog volnolog
15 Unmount both file systems and remove the volumes vollog and volnolog.
2 Display the properties of the volume. In the table, record the device and disk
media name of the disks used in this volume.
5 From the command line, check that the state of one of the plexes is
DISABLED and REMOVED.
7 Check the status of the disks. What is the status of the disks?
9 In VEA, what is the status of the disks? What is the status of the volume?
10 From the command line, recover the volume. During and after recovery, check
the status of the plex in another command window and in VEA.
11 At the end of this lab, destroy your disk group and send your data disks back to
an uninitialized state. In the next exercises, you will use sliced disks and non-
CDS disk groups to practice recovery operations.
2 Stop vxrelocd using ps and kill, in order to stop hot relocation from
taking place. Verify that the vxrelocd processes are killed before you
continue.
Note: There are two vxrelocd processes. You must kill both of them at the
same time.
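One way to catch both PIDs in a single kill, so that neither process can respawn before the other is gone. A minimal sketch; the ps output here is a canned sample, since vxrelocd is not running on the system this example was written on:

```shell
# Pull every vxrelocd PID out of ps-style output and kill them together.
sample_ps='  314 ?  0:00 vxrelocd root
  318 ?  0:00 vxrelocd root'
pids=$(echo "$sample_ps" | awk '/vxrelocd/ { print $1 }' | tr '\n' ' ')
echo "kill $pids"
# On the lab system you would feed real output instead:
#   kill $(ps -e | awk '/vxrelocd/ { print $1 }')
```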
3 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate
disk device name for one of the disks in use by recvol, for example
c1t2d0s2. The fmthard -d argument has the form
partition:tag:flag:start:size; zeroing slices 3 and 4 of a sliced disk destroys
its private and public regions.
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
# fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
4 An error occurs when you start I/O to the volume. You can view the error
on the console or with tail -f /var/adm/messages. A summary of the
notification mail can be viewed in /var/mail/root.
Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/recvol &
5 When the error occurs, view the status of the disks from the command line.
9 Recover the disk by replacing the private and public regions on the disk:
Note: This method for recovering the disk is needed only because of the way
the disk was failed (by writing over the private and public regions). In
most real-life situations, you do not need to perform this step.
13 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered.
3 Attempt to create the same volume again, but this time specify two disks to
use. Do not clear any spare flags on the disks.
10 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate
disk device name:
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
# fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
11 An error occurs when you start I/O to the volume. You can view the error on
the console or with tail -f /var/adm/messages. A summary of the
notification mail can be viewed in /var/mail/root.
Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you
saved earlier. What has occurred?
13 In VEA, view the disks. Notice that the disk is in the disconnected state.
17 View the status of the disks and the volume from the command line.
18 Recover the disk by replacing the private and public regions on the disk.
19 Bring the disk back under VxVM control and into the disk group.
23 Turn off any spare flags from your disks that you set during this lab.
Lab Prerequisites
Record the following information to be provided by your instructor:
root password
Host name
My Boot Disk:
My Data Disks:
Location of VERITAS
Volume Manager packages:
Location of VERITAS
Storage Foundation license
keys:
Preinstallation
1 Determine if there are any VRTS packages currently installed on your system.
Solaris
# pkginfo | grep -i VRTS
HP-UX
# swlist -l product | grep VRTS
AIX
# lslpp -l 'VRTS*'
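The three platform variants above can be selected automatically from uname -s. A minimal sketch; the command strings are exactly those listed above:

```shell
# Map an operating system name to the matching VRTS package query.
vrts_pkg_query() {
    case "$1" in
        SunOS) echo "pkginfo | grep -i VRTS" ;;
        HP-UX) echo "swlist -l product | grep VRTS" ;;
        AIX)   echo "lslpp -l 'VRTS*'" ;;
        *)     echo "unknown platform: $1" >&2; return 1 ;;
    esac
}

vrts_pkg_query SunOS   # prints: pkginfo | grep -i VRTS
```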
2 Before installing VxVM, save the following important system files into backup
files named with a “.preVM” extension. Also, save your boot disk
information to a file for later use (do not store the file in /tmp). You may need
the boot disk information when you bring the boot disk under VxVM control in
a later lab.
Solaris
# cp /etc/system /etc/system.preVM
# cp /etc/vfstab /etc/vfstab.preVM
# prtvtoc /dev/rdsk/device_name > /etc/bootdisk.preVM
AIX
# cp /etc/filesystems /etc/filesystems.preVM
# cp /etc/vfs /etc/vfs.preVM
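The copies above all follow one pattern — append .preVM to each file name — so they can be wrapped in a small loop. A minimal sketch, demonstrated against scratch copies rather than the live /etc files:

```shell
# Back up each listed file alongside itself with a .preVM extension.
backup_pre_vm() {
    for f in "$@"; do
        [ -f "$f" ] && cp "$f" "$f.preVM"
    done
    return 0
}

# Scratch demonstration (the real lab uses /etc/system, /etc/vfstab, ...):
mkdir -p /tmp/prevm_demo
echo 'forceload: drv/vxio' > /tmp/prevm_demo/system
backup_pre_vm /tmp/prevm_demo/system
ls /tmp/prevm_demo
```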
3 Are any VERITAS license keys installed on your system? Check for installed
licenses.
# vxlicrep
3 Connect to your system as root. Your instructor provides you with the
password.
– Hostname: (For example, train13)
– Username: root
– Password: (Your instructor provides the password.)
11 Test the new account. After you have tested the new account, exit VEA.
# vea
Hostname: (For example, train13)
User: admin1
Password: (Type the password that you created for admin1.)
Select File—>Exit.
3 From the command line, invoke the VxVM manual pages and read about the
vxdisk command.
# man vxdisk
5 From the command line, invoke the VxVM manual pages and read about the
vxdg command.
# man vxdg
7 From the command line, invoke the VxVM manual pages and read about the
vxprint command.
# man vxprint
2 What is the version and revision number of the installed version of VxVM?
Solaris
# pkginfo -l VRTSvxvm
In the output, look at the Version field.
HP-UX
# swlist -l product | grep -i vxvm
The version is in the second column of the output.
AIX
# lslpp -l VRTSvxvm
In the output, look under the column named Level.
3 What daemons are running after the system boots under VxVM control?
# ps -ef|grep -i vx
vxconfigd, vxrelocd, vxnotify, vxcached, vxesd,
vxconfigbackupd
3 Create a new disk group by adding a disk from the free disk pool, or an
uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized)
and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
Select the Disk Groups node and select Actions—>New Disk Group.
In the New Disk Group wizard, do not select a disk group organization
principle. Type a name for the disk group, select a disk to be placed in the
disk group, and click Add.
Click Next, confirm your selection, and click Finish.
4 Add one more disk to your disk group. Initialize the disk and view all the disk
devices on the system.
Select an unused disk and select Actions—>Add Disk to Disk Group. In
the Add Disk to Disk Group Wizard, select the disk group name, and
verify or change the list of disks under Selected disks.
Click Next, confirm your selection, and click Finish.
5 Remove all of the disks from your disk group. What happens to your disk
group?
6 Create a new disk group by adding a disk from the free disk pool, or an
uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized)
and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
Select the Disk Groups node and select Actions—>New Disk Group.
In the New Disk Group Wizard, type a name for the disk group, select a
disk to be placed in the disk group, and click Add.
Click Next, confirm your selection, and click Finish.
7 Deport your disk group. Do not give it a new owner. View all the disk devices
on the system.
Select the disk group and select Actions—>Deport Disk Group.
Confirm your request when prompted in the Deport Disk Group dialog
box.
8 Import your datadg disk group and view all the disk devices on the system.
Select the disk group and select Actions—>Import Disk Group. In the
Import Disk Group dialog box, click OK.
9 Deport datadg and assign your machine name, for example, train5, as the
New Host.
Select the disk group and select Actions—>Deport Disk Group. Confirm
your request. In the Deport Disk Group dialog box, type your machine
name in the New Host field and click OK.
10 Import the disk group and change its name to data3dg. View all the disk
devices on the system.
Note: If you are sharing a disk array, each participant should select a different
disk group name, such as data3dg and data4dg.
Select the disk group and select Actions—>Import Disk Group. Confirm
your request. In the Import Disk Group dialog box, type data3dg in the
New Name field, and click OK.
13 Now import data3dg and overwrite the disk group lock. What did you have
to do to import it and why?
Select the disk group and select Actions—>Import Disk Group. In the
Import Disk Group dialog box, mark the Clear host ID check box, and
click OK.
2 Add one uninitialized disk to the free disk pool and view the status of the disk
devices to verify your action.
# vxdisksetup -i device_tag
# vxdisk -o alldgs list
3 Remove the disk from the free disk pool and return the disk to an uninitialized
state. View the status of the disk devices to verify your action.
# vxdiskunsetup -C device_tag
# vxdisk -o alldgs list
5 Create a non-CDS disk group data4dg with at least one drive. Verify your
action.
# vxdg init diskgroup data4dg01=device_tag cds=off
# vxdisk -o alldgs list
6 Deport disk group data4dg, then import the disk group back to your machine.
Verify your action.
# vxdg deport diskgroup
# vxdg import diskgroup
# vxdisk -o alldgs list
8 Create a new non-CDS disk group data4dg with an older disk group version
assigned to it. Verify your action.
# vxdg -T 20 init diskgroup data4dg01=device_tag cds=off
# vxdisk -o alldgs list
10 How would you check that you have upgraded the version?
# vxdg list diskgroup
11 Add two more disks to the disk group data4dg. You should now have three
disks in your disk group. Verify your action.
# vxdg -g diskgroup adddisk data4dg02=device_tag
# vxdg -g diskgroup adddisk data4dg03=device_tag
# vxdisk -o alldgs list
13 Deport disk group data4dg and assign the host name as the host name of your
machine. Verify your action.
# vxdg -h host_name deport diskgroup
# vxdisk -o alldgs list
14 View the status of the disks in the deported disk group using vxdisk list
device_tag. What is in the hostid field?
# vxdisk list device_tag
The hostid is the name of your machine.
17 Try again to remove a disk from data4dg. Does it work this time?
# vxdg -g diskgroup rmdisk data4dg03
The operation is successful, because the disk group is imported.
18 Deport the disk group data4dg and do not assign a host name. Verify your
action.
# vxdg deport diskgroup
# vxdisk -o alldgs list
19 View the status of the disk in the deported disk group using vxdisk list
device_tag. What is in the hostid field?
# vxdisk list device_tag
The hostid field is now empty.
Setup
A minimum of four disks is required to perform this lab, not including the root
disk.
3 Display the volume layout. What names have been assigned to the plex and
subdisks?
To view the assigned names, view the volume using:
# vxprint -g diskgroup -thr | more
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K.
# vxassist -g diskgroup make vol03 20m
layout=mirror-stripe ncol=2 stripeunit=128k
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K. Exclude at least one disk from use.
# vxassist -g diskgroup make vol04 20m
layout=mirror-stripe ncol=2 stripeunit=128k !datadg03
8 Create a 20-MB striped volume with a mirror that has one column fewer (3)
than the number of drives.
# vxassist -g diskgroup -b make vol04 20m
layout=mirror-stripe ncol=3 datadg01 datadg02 datadg03
9 Create the same volume specified in step 7, but without the mirror.
# vxassist -g diskgroup -b make vol05 20m layout=stripe
ncol=3 datadg01 datadg02 datadg03
What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
# vxprint -g diskgroup -thr | more
3 Display the volume layout. Notice the naming convention of the plex and
subdisk.
5 Create a 50-MB striped volume on two disks, and specify which two disks to
use in creating the volume.
Select a disk group, and select Actions—>New Volume.
In the New Volume wizard, select “Manually select disks for use by this
volume.” Move two disks into the Included box, and then click Next. Type
the name of the volume, and specify a size of 50 MB.
Select the Striped option in the Layout region. Verify that the number of
columns is 2.
Complete the wizard by accepting all remaining defaults to create the
volume.
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K.
Select a disk group, and select Actions—>New Volume.
In the New Volume wizard, let VxVM decide which disks to use. Type the
name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Verify that the number of
columns is 2. Set the Stripe unit size to 256 (sectors), or 128K.
Mark the Mirrored check box in the Mirror Info region.
Complete the wizard by accepting all remaining defaults to create the
volume.
View the volume. Notice that you now have a second plex.
Select the volume, and select Actions—>Volume View. Close the Volumes
window when you are satisfied.
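The wizard expresses the stripe unit in 512-byte sectors, which is why 128K appears as 256 in the step above. The conversion:

```shell
# Convert a stripe unit in kilobytes to 512-byte sectors.
stripe_kb=128
sectors=$((stripe_kb * 1024 / 512))
echo "${stripe_kb}K = ${sectors} sectors"   # prints: 128K = 256 sectors
```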
8 Create a 20-MB striped volume with a mirror that has one column fewer than
the number of drives.
Select a disk group, and select Actions—>New Volume.
In the New Volume wizard, let VxVM decide which disks to use. Type the
name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Change the number of
columns to 3.
Mark the Mirrored check box in the Mirror Info region.
You receive an error and are not able to complete the wizard.
9 Create the same volume specified in step 7, but without the mirror.
Select a disk group, and select Actions—>New Volume.
In the New Volume wizard, let VxVM decide which disks to use. Type the
name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Change the number of
columns to 3.
Complete the wizard by accepting all remaining defaults to create the
volume.
Was the volume created?
Yes, the volume is created this time.
1 First, remove any volumes that you created in the previous lab.
To remove a volume, highlight a volume in the main window, and select
Actions—>Remove Volume.
a Using the vi editor, create a file called vxassist that includes the
following:
# when mirroring create three mirrors
nmirror=3
b Using the vi editor, create a file called alt_vxassist that includes the
following:
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
3 View the layout of these volumes using VEA and by using vxprint. What do
you notice?
– The first volume should show three plexes rather than the standard
two.
– The second volume should show a stripe size of 256K instead of the
standard 64K.
4 Remove any vxassist default files that you created in this optional lab
section. The presence of these files can impact subsequent labs where default
behavior is assumed.
Setup
Before you begin this lab, ensure that any volumes created in previous labs have
been removed. Create a new disk group that contains four disks only.
2 Display the volume layout. How are the disks allocated in the volume? Which
disk devices are used?
# vxprint -htr
Notice which two disks are allocated to the first plex and which two disks
are allocated to the second plex and record your observation.
3 Remove the volume you just made, and re-create it by specifying the four disks
in an order different from the original layout.
# vxassist -g diskgroup remove volume volume_name
# vxassist -g diskgroup -o ordered make volume_name 20m
layout=mirror-stripe ncol=2 datadg04 datadg03 datadg02
datadg01
4 Display the volume layout. How are the disks allocated this time?
# vxprint -htr
The plexes are now allocated in the order specified on the command line.
6 Remove one of the two mirrors, and display the volume layout.
# vxplex -g diskgroup -o rm dis plex_name
# vxprint -rth
7 Add a mirror to the existing volume, and display the volume layout.
# vxassist -g diskgroup mirror volume_name
# vxprint -rth
8 Add a dirty region log to the existing volume and specify the disk to use for the
DRL. Display the volume layout.
# vxassist -g diskgroup addlog volume_name logtype=drl
disk_name
# vxprint -rth
9 Change the volume read policy to round robin, and display the volume layout.
# vxvol -g diskgroup rdpol round volume_name
# vxprint -rth
11 Mount the file system at the mount point /mydirectory and add files. Verify
that the files were added to the new volume.
Create a mount point:
# mkdir /mydirectory
Mount the file system:
# mount -F vxfs /dev/vx/dsk/diskgroup/volume_name
/mydirectory
2 Display the volume layout. How are the disks allocated in the volume? Which
disk devices are used?
Highlight the volume and click each of the tabs in the right pane and
notice the information under the Mirrors, Logs, and Subdisks tabs.
Select Actions—>Volume View, click the Expand button, and compare the
information to the information in the main window.
3 Remove the volume you just made, and re-create it by specifying the four disks
in order of highest target first (for example, datadg04, datadg03,
datadg02, datadg01, where datadg04=c1t15d0,
datadg03=c1t14d0, and so on).
When you create the volume, select “Manually select disks to use for this
volume.” Move the disks into the Included box in the desired order, mark
the Ordered check box, click Next, and click Finish.
4 Display the volume layout. How are the disks allocated this time?
Highlight the volume and click each of the tabs in the right pane. Notice
the information in the Mirrors, Logs, and Subdisks tabs.
Select Actions—>Volume View, click the Expand button, and compare the
information to the information in the main window.
10 Add files to the new volume. Verify that the files were added to the new
volume.
After adding files to the file system, you can verify that files were added by
displaying file system information. Expand the File Systems node in the
object tree, and right-click the file system in the right pane, and select
Properties.
Using the VEA interface, open the Volume to Disk Mapping window and
display the subdisk information for each disk.
Highlight the disk group and select Actions -> Disk/Volume Map.
2 Repeatedly copy /opt to the file system using a new target directory name
each time until the file system is approximately 85 percent full.
# for i in 1 2 3
> do
> cp -r /opt /fs_test/opt$i
> done
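A fixed three-iteration loop may not reach 85 percent on every system. A hedged sketch of a loop that keeps copying until the target utilization is reached follows; the `used_pct` and `fill_to_85` helper names and the `df -kP` parsing are illustrative additions, not part of the lab:

```shell
# Sketch only: keep copying /opt into the file system until it is ~85% full.
# /fs_test and /opt come from the lab; the helpers below are illustrative.

# Return the use% of the file system containing $1 (POSIX df, 5th field).
used_pct() {
    df -kP "$1" | awk 'NR == 2 { gsub(/%/, "", $5); print $5 }'
}

fill_to_85() {
    fs=$1
    i=1
    while [ "$(used_pct "$fs")" -lt 85 ]; do
        cp -r /opt "$fs/opt$i"      # same copy as the lab loop
        i=$((i + 1))
    done
}
```

Calling `fill_to_85 /fs_test` repeats the cp from the lab until df reports 85 percent or more in use.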
5 Repeat steps 2 and 3 using the values 4 and 5 for i in the loop. Fragmentation
of both free space and directories will result.
7 Defragment the file system and display the results. Run fragmentation reports
both before and after the defragmentation and display summary statistics after
each pass. Compare the fsadm report from step 4 with the final report from
the last pass in this step.
# fsadm -e -E -d -D -s /fs_test
8 Unmount the file systems and remove the volumes used in this lab.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name
Setup
To perform this lab, you should have at least four disks in the disk group that you
are using. You can use either the VEA interface or the command line interface,
whichever you prefer. The solutions for both methods are covered in the Lab
Solutions appendix. If you use object names other than the ones provided,
substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step
to view the underlying command that was issued.
Resizing a Volume
1 If you have not already done so, remove the volumes created in the previous
lab.
VEA:
For each volume in your disk group, highlight the volume, and select
Actions -> Delete Volume.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name
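Running umount against a file system that is not mounted fails noisily. A hedged guard you could wrap around the cleanup commands above; the `is_mounted` helper name and the `df -kP` parsing are illustrative additions, not part of the lab:

```shell
# Sketch only: succeed only if the argument is itself a mount point,
# by comparing it to the mount point that df reports for it.
is_mounted() {
    df -kP "$1" 2>/dev/null | awk 'NR == 2 { print $6 }' | grep -qx "$1"
}
```

With this guard, the cleanup becomes: `is_mounted /filesystem && umount /filesystem` before the vxedit -rf rm.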
2 Create a 20-MB concatenated mirrored volume with a file system /myfs, and
mount the volume.
VEA:
Highlight the disk group, and select Actions -> New Volume. Specify a
volume name, the size, a concatenated layout, and select mirrored. Ensure
that “Enable logging” is not checked. Add a VxFS file system and set a
mount point.
CLI:
# vxassist -g diskgroup make volume_name 20m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name
# mkdir /myfs
# mount -F vxfs /dev/vx/dsk/diskgroup/volume_name /myfs
CLI:
# vxprint -rth
4 Add data to the volume and verify that the file has been added.
# echo "hello myfs" > /myfs/hello
CLI:
# vxresize -g diskgroup volume_name 100m
CLI:
3 Unmount the file system on the volume and remove the volume.
VEA:
Highlight the volume, and select Actions -> Delete Volume.
In the Delete Volume dialog box, click Yes.
In the Unmount File System dialog box, click Yes.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name
2 Display a description of Storage Expert rule vxse_drl1. What does this rule
do?
# vxse_drl1 info
This rule checks for large mirrored volumes that do not have an
associated log.
4 From the command line, create a 100-MB mirrored volume with no log. Create
and mount a file system on the volume.
5 Run Storage Expert rule vxse_drl1 on the disk group containing the volume.
What does Storage Expert report?
# vxse_drl1 -g diskgroup run
Storage Expert reports information; the mirrored volume is skipped,
since the volume is less than the size of volumes tested by the rule.
7 Run Storage Expert rule vxse_drl1 again on the disk group containing the
volume. What does Storage Expert report?
# vxse_drl1 -g diskgroup run
Storage Expert reports a violation, since the large mirrored volume does
not have a log.
9 Run Storage Expert rule vxse_drl1 again on the disk group containing the
volume. What does Storage Expert report?
# vxse_drl1 -g diskgroup run
Storage Expert reports that the volume passes the test, since the large
mirrored volume now has a log.
10 What are the attributes and parameters that Storage Expert uses in running the
vxse_drl1 rule?
# vxse_drl1 list
The attribute is mirror_threshold. Storage Expert warns if a mirrored
volume is larger than this size and the volume does not have a log.
# vxse_drl1 check
The default mirror_threshold value is 1 GB.
13 Unmount the file system and remove the volume used in this exercise.
# umount /sefs
# vxedit -g diskgroup -rf rm volume_name
CLI:
# vxassist -g diskgroup -b -t task_name make
volume_name 1g layout=mirror-stripe
CLI:
# vxtask list task_name
or
# vxtask monitor
CLI:
# vxtask set slow=100 task_name
View the layout of the volume in the VEA interface.
4 After the volume has been created, use vxassist to relayout the volume to
stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the
process to the above task tag.
VEA:
Highlight the volume and select Actions -> Change Layout. In the Change
Volume Layout dialog box, select a Striped Mirrored layout. Change the
stripe unit size value to 512 (sectors, which is equivalent to 256K).
CLI:
# vxassist -g diskgroup -t task_name relayout
volume_name layout=stripe-mirror stripeunit=256k ncol=2
5 In another terminal window, abort the task to simulate a crash during relayout.
VEA:
In the Relayout status monitor window, click Abort.
CLI:
# vxtask abort task_name
View the layout of the volume in the VEA interface.
CLI:
# vxrelayout -g diskgroup reverse volume_name
View the layout of the volume in the VEA interface.
2 After the reboot, use vxdiskadm to add a disk that will be used for the mirror
of rootdisk. If your system has two internal disks, use the second internal
disk on your system for the mirror. (This is required due to the nature of the
classroom configuration.) When setting up the disk, make sure that the disk
layout is sliced. Use altboot as the name of your disk.
Select the vxdiskadm option, “Add or initialize one or more disks,” and
follow the steps to add a disk to the systemdg disk group. Select the
second internal disk as the device to add. Add the disk to the systemdg
disk group. Specify a sliced format when prompted. Specify the name of
the disk as altboot.
3 Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk
that you added, altboot.
Select the vxdiskadm option, “Mirror volumes on a disk,” and follow the
steps to mirror the volumes. Specify the disk containing the volumes to be
mirrored as rootdisk. Specify the destination disk as altboot.
4 After the mirroring operation is complete, verify that you now have two disks
in systemdg: rootdisk and altboot, and that all volumes are mirrored.
# vxprint -g systemdg -htr
5 From the command line, set the eeprom variable to enable VxVM to create a
device alias in the OpenBoot PROM.
# eeprom use-nvramrc?=true
6 To disable booting from the original boot disk, use the vxmend command to
place the rootvol-01 plex in the DISABLED and OFFLINE state. The vxmend
command makes changes to configuration records; here, you use it to place the
plex in an offline state. For more information about this command, see the
vxmend (1m) manual page.
# vxmend -g systemdg off rootvol-01
Use the boot disk alias vx-altboot to boot up from the alternate boot disk.
For example:
OK> boot vx-altboot
12 To boot up from the original boot disk, reboot again using init 6.
# init 6
13 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt,
and home (that is, remove the newer plex from each volume in systemdg).
For each volume in systemdg, remove all of the newly created mirrors.
More specifically, for each volume, two plexes are displayed, and you
should remove the newer (-02) plexes from each volume. To remove a
mirror, highlight a volume and select Actions -> Mirror -> Remove.
14 Run the command to convert the root volumes back to disk partitions.
# vxunroot
16 Verify that the mount points are now slices rather than volumes.
# df -k
17 At the end of this lab, leave your boot disk unencapsulated and remove any
other disks from systemdg.
Setup
For this lab, you should have at least four disks (datadg01 through datadg04)
in a disk group called datadg. If you use object names other than the ones
provided, substitute the names accordingly in the commands.
10 After the system is running again, check the state of the volumes to ensure that
neither of the volumes is in the SYNC or NEEDSYNC state.
# vxprint -thf vollog volnolog
11 Run the vxstat command. This utility displays statistical information about
volumes and other VxVM objects. For more information on this command, see
the vxstat (1m) manual page.
# vxstat -g diskgroup -fab vollog volnolog
The output shows how many I/Os it took to resynchronize the mirrors.
Compare the number of I/Os for each volume. What do you notice?
You should notice that fewer I/O operations were required to
resynchronize vollog. The log keeps track of data that needs to be
resynchronized.
15 Unmount both file systems and remove the volumes vollog and volnolog.
# umount /vollog
# umount /volnolog
# vxedit -g diskgroup -rf rm vollog volnolog
2 Display the properties of the volume. In the table, record the device and disk
media name of the disks used in this volume.
# vxprint -thf
For example, the volume recvol uses datadg02 and datadg04:
5 From the command line, check that the state of one of the plexes is DISABLED
and REMOVED.
# vxprint -thf
7 Check the status of the disks. What is the status of the disks?
# vxdisk -o alldgs list
The status of the disks is ONLINE.
9 In VEA, what is the status of the disks? What is the status of the volume?
The disk is reconnected, and VEA shows that the disk contains a volume
that can be recovered. Select the volume in the left pane, and click the
Mirrors tab in the right pane. The plex is marked as recoverable.
11 At the end of this lab, destroy your disk group and send your data disks back to
an uninitialized state. In the next exercises, you will use sliced disks and non-
CDS disk groups to practice recovery operations.
2 Stop vxrelocd using ps and kill, in order to stop hot relocation from
taking place. Verify that the vxrelocd processes are killed before you
continue.
# ps -e | grep vx
# kill -9 pid1 pid2
# ps -e | grep vx
Note: There are two vxrelocd processes. You must kill both of them at the
same time.
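Because the two vxrelocd processes watch each other, killing them one at a time can let the survivor respawn its partner. A hedged alternative to the ps/kill pair above, assuming pgrep is available on the system; the `kill_all` helper name is an illustrative addition:

```shell
# Sketch only: kill every process with the given name in one shot, so that
# neither vxrelocd process survives long enough to respawn the other.
kill_all() {
    pids=$(pgrep -x "$1")               # exact-name match
    [ -n "$pids" ] && kill -9 $pids     # a single kill hits all PIDs at once
    return 0                            # "nothing to kill" is not an error
}
```

`kill_all vxrelocd` would replace the manual ps | grep and kill -9 pid1 pid2 step; verify with ps -e | grep vx afterward as the lab instructs.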
3 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate
disk device name for one of the disks in use by recvol, for example
c1t2d0s2.
4 An error occurs when you start I/O to the volume. You can view the error
on the console or in tail -f /var/adm/messages. A summary of the
mail can be viewed in /var/mail/root.
Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/recvol &
5 When the error occurs, view the status of the disks from the command line.
# vxdisk -o alldgs list
The physical device is no longer associated with the disk media name and
the disk group.
9 Recover the disk by replacing the private and public regions on the disk:
# vxdisksetup -i c1t2d0 format=sliced
Note: This method for recovering the disk is needed only because of the way
in which the disk was failed (by writing over the private and public
regions). In most real-life situations, you do not need to perform this step.
3 Attempt to create the same volume again, but this time specify two disks to
use. Do not clear any spare flags on the disks.
# vxassist -g diskgroup make sparevol 100m
layout=mirror datadg03 datadg04
Notice that VxVM overrides its default and applies the two spare disks to
the volume, because the two disks were specified by the administrator.
9 Display the properties of the volume. In the table, record the device and disk
media name of the disks used in this volume. You are going to simulate disk
failure on one of the disks. Decide which disk you are going to fail. Open a
console screen.
For example, the volume spare2vol uses datadg02 and datadg04:
10 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate
disk device name:
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
# fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
11 An error occurs when you start I/O to the volume. You can view the error on
the console or in tail -f /var/adm/messages. A summary of the mail
can be viewed in /var/mail/root.
Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you
saved earlier. What has occurred?
Hot relocation has taken place. The failed disk has a status of NODEVICE.
VxVM has relocated the mirror of the failed disk onto the designated
spare disk.
13 In VEA, view the disks. Notice that the disk is in the disconnected state.
17 View the status of the disks and the volume from the command line.
# vxdisk -o alldgs list
# vxprint -thf
18 Recover the disk by replacing the private and public regions on the disk.
# vxdisksetup -i c1t2d0 format=sliced
19 Bring the disk back under VxVM control and into the disk group.
# vxdg -g diskgroup -k adddisk datadg02=c1t2d0
21 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered.
# vxdisk -o alldgs list
# vxprint -thf
23 Turn off any spare flags from your disks that you set during this lab.
# vxedit -g diskgroup set spare=off datadg04
Disk Operations
Task Command
Initialize disk vxdisksetup -i device (CDS disk)
vxdisksetup -i device format=sliced (sliced disk)
or
vxdiskadm option, “Add or initialize one or more disks”
Uninitialize disk vxdiskunsetup device
List disks vxdisk -o alldgs list
List disk header vxdisk -g diskgroup list diskname|device
Evacuate a disk vxevac -g diskgroup from_disk to_disk
Rename a disk vxedit -g diskgroup rename oldname newname
Set a disk as a spare vxedit -g diskgroup set spare=on|off diskname
Unrelocate a disk vxunreloc -g diskgroup original_diskname
Task Command
Create disk group vxdg init diskgroup diskname=device
(CDS disk group)
vxdg init diskgroup diskname=device cds=off
(non-CDS disk group)
Add disk to disk group vxdg -g diskgroup adddisk diskname=device
Deport disk group vxdg deport diskgroup
Import disk group vxdg import diskgroup
Destroy disk group vxdg destroy diskgroup
List disk groups vxdg list
List specific disk group vxdg list diskgroup
details
Remove disk from disk vxdg -g diskgroup rmdisk diskname
group
Upgrade disk group version vxdg [-T version] upgrade diskgroup
Move an object between vxdg move sourcedg targetdg object...
disk groups
Split objects between disk vxdg split sourcedg targetdg object...
groups
Join disk groups vxdg join sourcedg targetdg
Subdisk Operations
Task Command
Create a subdisk vxmake -g diskgroup sd subdiskname diskname
offset length
Remove a subdisk vxedit -g diskgroup rm subdisk_name
Display subdisk info vxprint -st
vxprint -l subdisk_name
Associate a subdisk to a plex vxsd assoc plex_name subdisk_name
Dissociate a subdisk vxsd dis subdisk_name
Plex Operations
Task Command
Create a plex vxmake -g diskgroup plex plex_name
sd=subdisk_name,…
Associate a plex (to a vxplex -g diskgroup att vol_name plex_name
volume)
Dissociate a plex vxplex dis plex_name
Remove a plex vxedit -g diskgroup rm plex_name
List all plexes vxprint -lp
Detach a plex vxplex -g diskgroup det plex_name
Attach a plex vxplex -g diskgroup att vol_name plex_name
Task Command
Create a volume vxassist -g diskgroup make vol_name size
layout=format diskname
or
vxmake -g diskgroup -U fsgen vol vol_name
len=size plex plex_name
Remove a volume vxedit -g diskgroup -rf rm vol_name
or
vxassist -g diskgroup remove volume vol_name
Display a volume vxprint -g diskgroup -vt vol_name
vxprint -g diskgroup -l vol_name
Change volume attributes vxedit -g diskgroup set attribute=value
vol_name
vxvol -g diskgroup set attribute=value
vol_name
Resize a volume vxassist -g diskgroup growto vol_name
new_length
vxassist -g diskgroup growby vol_name
length_change
vxassist -g diskgroup shrinkto vol_name
new_length
vxassist -g diskgroup shrinkby vol_name
length_change
vxresize -g diskgroup vol_name [+|-]length
Resize a dynamic LUN vxdisk -g diskgroup resize disk_name
length=value
Change volume read vxvol -g diskgroup rdpol round vol_name
policy
vxvol -g diskgroup rdpol prefer vol_name
preferred_plex_name
vxvol -g diskgroup rdpol select vol_name
Start/Stop volumes vxvol start vol_name
Start all volumes vxvol startall
Start all volumes in a dg vxvol -g diskgroup startall
Stop a volume vxvol stop vol_name
Stop all volumes vxvol stopall
Recover a volume vxrecover -sn vol_name
List unstartable volumes vxinfo [vol_name]
Task Command
Manage tasks vxtask list
vxtask monitor
Manage device discovery
layer (DDL)
Discover new devices vxdisk scandisks new
List supported disk arrays vxddladm listsupport
Exclude support for an array vxddladm excludearray libname=library
vxddladm excludearray vid=vid pid=pid
Reinclude support vxddladm includearray libname=library
vxddladm includearray vid=vid pid=pid
List excluded arrays vxddladm listexclude
List supported JBODs vxddladm listjbod
Add/remove JBOD support vxddladm addjbod vid=vid pid=pid
vxddladm rmjbod vid=vid pid=pid
Add a foreign device vxddladm addforeign blockdir=path
chardir=path
Create a disk group named datadg and add the six disks:
# vxdg init datadg datadg01=c1t0d0 datadg02=c1t1d0
datadg03=c1t2d0
# vxdg -g datadg adddisk datadg04=c2t0d0 datadg05=c2t1d0
datadg06=c2t2d0
Change the permissions of the volume so that dba is the owner and dbgroup
is the group:
# vxedit -g datadg set user=dba group=dbgroup mode=0744 datavol02
Destroy the volume and remove the disks from the disk group datadg. Also,
remove disks from Volume Manager control:
# vxedit -g datadg -rf rm datavol02
# vxdg -g datadg rmdisk datadg01 datadg02 datadg03
datadg04 datadg05
# vxdg deport datadg
# vxdiskunsetup c1t1d0
# vxdiskunsetup c1t2d0
# vxdiskunsetup c1t3d0...
Advanced vxmake Operation: Create a RAID 0+1 volume with a DRL Log:
# vxmake -g acctdg sd sd01 acctdg01,0,194560
# vxmake -g acctdg sd sd02 acctdg02,0,194560
# vxmake -g acctdg sd sd03 acctdg03,0,194560
# vxmake -g acctdg sd sd04 acctdg04,0,194560
# vxmake -g acctdg sd logsd acctdg01,194560,2
# vxmake -g acctdg plex plex1 layout=stripe ncolumn=2
stwidth=64k sd=sd01:0/0,sd02:1/0
# vxmake -g acctdg plex plex2 layout=stripe ncolumn=2
stwidth=64k sd=sd03:0/0,sd04:1/0
# vxmake -g acctdg plex logplex log_sd=logsd
# vxmake -g acctdg -U fsgen vol datavol06
plex=plex1,plex2,logplex
# vxvol -g acctdg start datavol06
Task Command
Create a VERITAS file system mkfs [fstype] [generic_options] [-o specific_options]
special [size]
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
Options
-o N Check VxFS structure without writing to device.
-o version=n Create VxFS with different layout version.
-o bsize=size Create VxFS with a specific block size. size is the
block size in bytes.
-o logsize=size Create VxFS with a specific logging area size. size is
the number of file system blocks to be used for the
intent log.
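Because -o logsize= expects a count of file system blocks rather than bytes, the conversion is a simple division. A hedged arithmetic sketch follows; the `log_blocks` helper name is an illustrative addition:

```shell
# Sketch only: convert a desired intent-log size in bytes into the block
# count that mkfs -F vxfs -o logsize= expects, for a given block size.
log_blocks() {
    bytes=$1
    bsize=$2
    echo $(( bytes / bsize ))
}

log_blocks 16777216 1024    # a 16-MB log at a 1-KB block size prints 16384
```

The result would be passed as, for example, mkfs -F vxfs -o bsize=1024,logsize=16384 special.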
Mount a VERITAS file system mount [fstype] [generic_options] [-r]
[-o specific_options] special mount_point
# mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata
List mounted file systems mount -v
List mounted file systems in the mount -p
file system table format
Unmount a mounted file system umount special|mount_point
# umount /mnt
Unmount all mounted file systems umount -a
Determine the file system type fstyp [-v] special
# fstyp /dev/dsk/c0t6d0s0
Report free disk blocks and inodes df [-F vxfs] [generic_options] [-o s] [special|mount]
# df -F vxfs /mnt
Check the consistency of and fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] special
repair a file system # fsck -F vxfs /dev/vx/rdsk/datadg/datavol
Online Administration
Task Command
Resize a VERITAS file system fsadm [-b newsize] [-r rawdev] mount_point
# /usr/lib/fs/vxfs/fsadm -b 1024000
-r /dev/vx/rdsk/datadg/datavol /mnt
Benchmarking
Task Command
Create different vxbench -w workload [options] filename . . .
combinations of I/O # vxbench -w write -i iosize=8,iocount=131072
workloads /mnt/testfile01
# vxbench -w rand_write -i iosize=8,iocount=131072,
maxfilesize=1048576 /mnt/testfile01
List vxbench command vxbench -h
options
Workloads
read, write, rand_read, rand_write, rand_mixed, mmap_read, mmap_write
Options
-h Display help
-P Use processes and threads (default)
-p Use processes
-t Use threads
-m Lock I/O buffers in memory
-s Print summary results
-v Print per-thread results
-k Print throughput in kbytes/sec
-M Print throughput in mbytes/sec
-i Specify suboptions
-i Suboptions
nrep=n Repeat the I/O loop n times
nthreads=n Specify the number of threads
iosize=n Specify I/O size (in kilobytes)
iocount=n Specify the number of I/Os
fsync Perform an fsync on the file
remove Remove each file after the test
reserveonly Only reserve space for the file
maxfilesize=n Maximum offset for random I/O
randseed=n Seed value for the random number generator
rdpct=n Read percentage of the job mix
Managing Extents
Task Command
List file names and inode ff [-F vxfs] [generic_options] [-o s] special
information
Generate path names from ncheck [-F vxfs] [generic_options] [-o options] special
inode numbers for a VxFS
file system
Set extent attributes setext [-e extent_size] [-f flags] [-r reservation] file
Options
-e Specify a fixed extent size.
-r Preallocate, or reserve, space for a file.
-f Set allocation flags.
Flags
align Align extents to the start of allocation units.
chgsize Add the reservation into the file.
contig Allocate the reservation contiguously.
noextend File may not be extended after reservation is used.
noreserve Space reserved is allocated only until the close of the file, and
then is freed.
trim Reservation is reduced to the current file size after the last close.
Intent Logging
Task Command
Check the consistency of and repair a VERITAS file system. By default, the
fsck utility replays the intent log instead of doing a full structural file
system check.
fsck [-F vxfs] [generic_options] [-y|Y] [-n|N]
[-o full,nolog] [-o p] special
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
Options
-m Checks, but does not repair, a file system before mounting.
-n|N Assumes a response of no to all prompts by fsck.
-V Echoes the command line, but does not execute.
-y|Y Assumes a response of yes to all prompts by fsck.
-o full Performs a full file system check after log replay.
-o nolog Does not perform log replay.
-o p Checks two file systems in parallel.
Perform a full file system check without the intent log:
# fsck -F vxfs -o full,nolog special
# fsck -F vxfs -o full,nolog /dev/vx/rdsk/datadg/datavol
Task Command
Alter the way in which mount -F vxfs [generic_options] -o mincache=suboption
VxFS handles buffered I/O special mount_point
operations mount -F vxfs -o mincache=closesync
/dev/vx/dsk/datadg/datavol /mnt
Options
mincache=closesync
mincache=direct
mincache=dsync
mincache=unbuffered
mincache=tmpcache
Alter the way in which VxFS mount -F vxfs [generic_options] -o convosync=suboption
handles I/O requests for files special mount_point
opened with the O_SYNC and mount -F vxfs -o convosync=closesync
O_DSYNC flags /dev/vx/dsk/datadg/datavol /mnt
Options
convosync=closesync
convosync=direct
convosync=dsync
convosync=unbuffered
convosync=delay
Task Command
Set tuning parameters for vxtunefs [-ps] [-f filename] [-o parameter=value]
mounted file systems [{mount_point | block_special}]...
# vxtunefs -o write_pref_io=32768 /mnt
Options
-f filename Specifies a parameters file other than the default
/etc/vx/tunefstab
-p Prints tuning parameters
-s Sets new tuning parameters
Tuning Parameters
read_ahead Enables enhanced read ahead to detect patterns.
read_pref_io Preferred read request size. Default is 64K.
read_nstream Desired number of parallel read requests to
have outstanding at one time. Default is 1.
write_pref_io Preferred write request size. Default is 64K.
write_nstream Desired number of parallel write requests to
have outstanding at one time. Default is 1.
discovered_direct_iosz I/O requests larger than this value are handled
as discovered direct I/O. Default is 256K.
hsm_write_prealloc Improves performance when using HSM
applications with VxFS
initial_extent_size Default initial extent size, in file system blocks.
max_direct_iosz Maximum size of a direct I/O request.
max_diskq Maximum disk queue generated by a single file.
Default is 1M.
max_seqio_extent_size Maximum size of an extent. Default is 2048 file
system blocks.
qio_cache_enable Enables or disables caching on Quick I/O for
Databases files. Default is disabled. To enable
caching, set qio_cache_enable=1.
write_throttle Limits dirty pages per file that a file system
generates before flushing pages to disk.
Display current tuning vxtunefs mount_point
parameters # vxtunefs /mnt
Set read-ahead size Use vxtunefs to set the tuning parameters read_pref_io and
read_nstream.
Read-ahead size = (read_pref_io x read_nstream)
Set write-behind size Use vxtunefs to set the tuning parameters write_pref_io and
write_nstream.
Write-behind size = (write_pref_io x write_nstream)
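The two formulas above can be checked with simple shell arithmetic. The tunable values below are illustrative examples, not recommended settings:

```shell
# Sketch only: effective read-ahead and write-behind sizes from the tunables.
read_pref_io=65536       # preferred read request size (64K), illustrative
read_nstream=4           # parallel read requests outstanding, illustrative
write_pref_io=65536      # preferred write request size (64K), illustrative
write_nstream=2          # parallel write requests outstanding, illustrative

read_ahead=$((read_pref_io * read_nstream))      # 262144 bytes = 256K
write_behind=$((write_pref_io * write_nstream))  # 131072 bytes = 128K
echo "read-ahead=$read_ahead write-behind=$write_behind"
```

In practice, you would set the corresponding values with vxtunefs as shown earlier in this table.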
Task Command
Create the quotas files touch /mount_point/quotas
touch /mount_point/quotas.grp
Turn on quotas for a mounted vxquotaon [-u|-g] mount_point
file system # vxquotaon -u /mnt
Mount a file system and turn mount -F vxfs -o quota|usrquota|grpquota special
on quotas at the same time mount_point
# mount -F vxfs -o quota /dev/dsk/c0t5d0s2 /mnt
Invoke the quota editor vxedquota username|UID|groupname|GID
# vxedquota rsmith
Modify the quota time limit vxedquota -t
View quotas for a user vxquota -v username|groupname
# vxquota -v rsmith
Display summary of quotas vxrepquota mount_point
and disk usage # vxrepquota /mnt
Display a summary of vxquot mount_point
ownership and usage # vxquot /mnt
Turn off quotas for a mounted vxquotaoff [-u|-g] mount_point
file system # vxquotaoff /mnt
Set or modify an ACL for a setfacl [-r] -s acl_entries file
file setfacl [-r] -md acl_entries file
setfacl [-r] -f acl_file file
# setfacl -m user:bob:r-- myfile
# setfacl -d user:scott myfile
# setfacl -s user::rwx,group::r--,user:maria:r--,
mask:rw-,other:--- myfile
Options
-s Set an ACL for a file.
-m Add new or modify ACL entries to a file.
-d Remove an ACL entry for a user.
Elements in an ACL Entry
entry_type:[uid|gid]:permissions
entry_type Entry type: user, group, other, or mask.
uid|gid User or group name or identification number.
permissions Read, write, and/or execute indicated by rwx.
Display ACL entries for a file getfacl filename
# getfacl myfile
Copy existing ACL entries getfacl file1 | setfacl -f file2
from one file to another file # getfacl myfile | setfacl -f - newfile
Task Command
Create a volume or format a vxassist -g diskgroup make qlog_volume size [vxvm_disk]
disk partition to contain the # vxassist -g datadg make qvol01 32m
QuickLog device
Build the QuickLog volume qlogmk -g diskgroup vxlog[x] qlog_volume
layout # qlogmk -g datadg vxlog1 qvol01
Enable a QuickLog device # mount -F vxfs -o qlog= special mount_point
# mount -F vxfs -o qlog= /dev/vx/dsk/datadg/datvol
/mnt
Or
qlogenable [qlog_device] mount_point
# qlogenable /mnt
Disable logging by QuickLog qlogdisable mount_point
without unmounting a # qlogdisable /mnt
VERITAS File System
Detach a QuickLog volume qlogrm qlog_volume
from its QuickLog device # qlogrm qvol01
Remove the QuickLog vxedit -g diskgroup -rf rm qlog_volume
volume from the underlying # vxedit -g datadg -rf rm qvol01
VxVM volume
Display status of QuickLog qlogprint
devices, QuickLog volumes,
and VxFS file systems
Print statistical data for qlogstat [-dvf] [-l qlogdev] [-i interval] [-c count]
QuickLog devices, QuickLog Options
volumes, and VxFS file
-d Report statistics for all QuickLog devices only.
systems
-v Report statistics for all QuickLog volumes only.
-f Report statistics for all logged VxFS file systems only.
-l qlogdev Report statistics for a specified QuickLog device only.
-i interval Print the change in statistics after every interval
seconds. Default is 10 seconds.
-c count Stop after printing statistics count times.
Default is 1.
Task Command
Enable Quick I/O at mount mount -F vxfs -o qio mount_point
time
Disable Quick I/O mount -F vxfs -o noqio mount_point
Treat a file as a raw character filename::cdev:vxfs:
device mydbfile::cdev:vxfs:
Create a Quick I/O file qiomkfile [-h [headersize]] [-a] [-s size] [-e|-r size]
through a symbolic link file
# qiomkfile -s 100m /database/dbfile
Options
-h For Oracle database files. Creates a file with additional
space allocated for the Oracle header.
-s Preallocates space for a file
-e For Oracle database files. Extends the file by a specified
amount to allow Oracle tablespace resizing.
-r For Oracle database files. Increases the file to a specified
size to allow Oracle tablespace resizing.
-a Creates a symbolic link with an absolute pathname.
Default behavior creates relative pathnames.
Obtain Quick I/O statistics qiostat [-i interval] [-c count] [-l] [-r] file...
# qiostat -i 5 /database/dbfile
Options
-c count Stop after printing statistics count times.
-i interval Print updated I/O statistics after every interval seconds.
-l Print the statistics in long format. Also prints the caching
statistics when Cached Quick I/O is enabled.
-r Reset statistics instead of printing them.
Enable Cached Quick I/O for vxtunefs -s -o qio_cache_enable=1 mount_point
all files in a file system # vxtunefs -s -o qio_cache_enable=1 /oradata
Disable Cached Quick I/O for qioadmin -S filename=OFF mount_point
a file # qioadmin -S /oradata/sal/hist.dat=OFF /oradata
Feature Description
New product naming Foundation Suite has been renamed as Storage Foundation; other
related products (DBE, DBEAC, SPFSHA) are also renamed.
New product packaging There are new licensing bundles, including Storage Foundation
and licensing bundles QuickStart, Standard, and Enterprise.
New software packages Several new software packages have been added to support the new
VM and FS features. These include VRTSalloc, VRTSap,
VRTScpi, VRTSddlpr, VRTSfppm, VRTSperl, VRTStep.
New installation scripts New installation scripts have been added to simplify installation of
multiple VERITAS products as well as individual VERITAS
products. The Installer script available in 3.5 is still available and is
the easiest way to install multiple VERITAS products from within
one menu system. New individual product installation scripts
(installvm, installfs, installsf) have been added to
simplify installation of individual products. These scripts include
vxinstall, which has been considerably reduced in the scope of
what it sets up.
Simplified upgrade You can now upgrade a product through the installation scripts
procedures (installvm, installfs, installsf).
Display/Change default A new task is available in the vxdiskadm menu interface that
disk layout attributes enables you to change or display default disk layout attributes, such
as default private region size, and the offset of the private and public
regions.
Dynamic LUN You can resize a LUN while preserving the existing data. You do not
expansion have to stop using the volume, so you can proceed without
disrupting the application. VxVM can keep access open at all times
to all volumes using the device that is being resized.
Removal of rootdg The rootdg disk group is no longer required for VxVM to
requirement function. The vxinstall script no longer prompts you to set up
rootdg.
Reserved disk groups There are three disk group names that are reserved and cannot be
used to name any disk groups that you create: bootdg,
defaultdg, and nodg.
Configuration Backup Using the disk group configuration backup and restoration (CBR)
and Restore functionality, you can back up and restore all configuration data for
VxVM disk groups and VxVM objects such as volumes that are
configured within the disk groups. After the disk group
configuration has been restored, and the volume enabled, the user
data in the public region is available again without the need to
restore this from backup media.
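As a hedged sketch of a CBR cycle (datadg is a placeholder disk group name):

```shell
# Save the disk group configuration (stored by default under /etc/vx/cbr/bk):
vxconfigbackup datadg
# Precommit the restoration so it can be inspected first:
vxconfigrestore -p datadg
# Commit it (or abort the precommitted restore with: vxconfigrestore -d datadg):
vxconfigrestore -c datadg
```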
Glossary

B

back-rev disk group A disk group created using a version of VxVM released prior to the release of CDS. Adding CDS functionality rolls the disk group over to the latest disk group version number.

block The minimum unit of data transfer to or from a disk or array.

Block-Level Incremental Backup (BLI Backup) A VERITAS backup capability that does not store and retrieve entire files. Instead, only the data blocks that have changed since the previous backup are backed up.

boot disk A disk used for booting purposes. This disk may be under VxVM control for some operating systems.

boot disk group A disk group that contains the disks from which the system may be booted.

bootdg A reserved disk group name that is an alias for the name of the boot disk group.

browse dialog box A dialog box that is used to view and/or select existing objects on the system. Most browse dialog boxes consist of a tree and grid.

buffered I/O During a read or write operation, data usually goes through an intermediate file system buffer before being copied between the user buffer and disk. If the same data is repeatedly read or written, this file system buffer acts as a cache, which can improve performance. See direct I/O and unbuffered I/O.

C

capability A feature that is provided by a volume. For example, a volume may exhibit capabilities, such as performance and reliability, to various degrees. Applies to the ISP feature of VxVM.

CDS disk A disk whose contents and attributes are such that the disk can be used for CDS as part of a CDS disk group. In contrast, a non-CDS disk can neither be used for CDS nor be part of a CDS disk group.

CDS disk group A VxVM disk group whose contents and attributes are such that the disk group can be used to provide cross-platform data sharing. In contrast, a non-CDS disk group (that is, a back-rev disk group or a current-rev disk group) cannot be used for cross-platform data sharing. A CDS disk group can only contain CDS disks.

CFS VERITAS Cluster File System.

check box A control button used to select optional settings. A check mark usually indicates that a check box is selected.

children Objects that belong to an object group.

clean node shutdown The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased.

clone pool A storage pool that contains one or more full-sized instant volume snapshots of volumes within a data pool. Applies to the ISP feature of VxVM.

cluster A set of host machines (nodes) that share a set of disks.

command log A log file that contains a history of VEA tasks performed in the current session and previous sessions. Each task is listed with the task originator, the start/finish times, the task status, and the low-level commands used to perform the task.

concatenation A layout style characterized by subdisks that are arranged sequentially and contiguously.

configuration copy A single copy of a configuration database.

D

data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO log volume. Both a DCO object and a DCO log volume must be associated with a volume to implement Persistent FastResync on that volume.

data pool The first storage pool that is created within a disk group. Applies to the ISP feature of VxVM.

data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.
Index

C
CDS 1-13
CDS disk 1-14
CDS disk layout 1-13
cfgmgr 8-17
CFS 2-15
chfs 5-27
CLI 2-29
CLI commands in VEA 2-33
cluster 2-15
cluster environment 3-37
Cluster File System 2-15, 2-16
cluster functionality 2-15
cluster group 3-15
cluster management 3-6
Cluster Volume Manager
    licensing 2-16
col_switch 5-21, 5-23
column 4-5
command line interface 2-29, 2-36
command log file 2-34, 6-24
complete plex 1-16

D
data change object 4-31
data consistency
    maintaining 8-4
data redundancy 1-20
databases on file systems 2-15
default disk group 2-18
defaultdg 2-18, 3-9
defragmentation
    scheduling 5-48
defragmenting a file system 5-46
defragmenting directories 5-47
defragmenting extents 5-46
delaylog mount option 5-54
deporting a disk group
    and renaming 3-33
    CLI 3-35
    to new host 3-33
    VEA 3-34
    vxdiskadm 3-35
destroying a disk group 3-45
    CLI 3-45
    VEA 3-45
devalias 7-18
P
packages 2-12
    listing 2-26
    space requirements 2-12
parallel log replay 5-51
parent task 6-26
parity 1-20, 4-7
partial disk failure 8-11
partition 1-5, 1-8
partitions
    after encapsulation 7-12
PATH 5-28, A-17, B-33
pausing online relayout 6-20
physical disk
    naming 1-8
Physical Volume Reserved Area 1-5
pkgadd 2-19, 2-24
pkginfo 2-26
plex 1-16, 4-6
    definition 1-16
    log 1-17
    naming 1-16
    sparse 1-17
    types 1-17
plex name
    default 1-16
Preferences window 2-32
preferred plex 5-13

Q
Quick I/O 2-15, 2-16
QuickLog 2-16, 5-56
quotas files 5-41

R
RAID 1-18
RAID levels 1-18
RAID-0 1-19
RAID-0+1 1-19
RAID-1 1-19
RAID-1+0 1-19
RAID-5 1-19, 4-13
    advantages 4-9
    disadvantages 4-9
    logging 5-10
RAID-5 column 4-7
    default size 4-13
RAID-5 log 8-10
RAID-5 logging 8-7
RAID-5 volume 1-20, 4-7
    creating 4-19
raw device file 5-25
read policy 5-13
    changing in CLI 5-14
    changing in VEA 5-14