P/N 300-004-184
Rev A08
Version 5.6.47
December 2009
Contents
Introduction to Celerra Replicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Cautions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Celerra Replicator concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Local replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Remote replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Activating the destination file system as read/write . . . . . . . . . . . . .13
Communication between Celerra Network Servers. . . . . . . . . . . . . .17
How resynchronization works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
How suspend works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
How replication relationship restarts . . . . . . . . . . . . . . . . . . . . . . . . .20
System requirements for Celerra Replicator. . . . . . . . . . . . . . . . . . . . . . .22
Local replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
Remote replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
Upgrading from previous Celerra Network Server versions . . . . . . . . . .24
Upgrade from a version earlier than 5.5.39.2 . . . . . . . . . . . . . . . . . . .24
Upgrade from Celerra Network Server version 5.5.39.2 or later. . . .25
Planning considerations for Celerra Replicator . . . . . . . . . . . . . . . . . . . .29
Replication policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
SavVol size requirements for remote replication . . . . . . . . . . . . . . .33
Determine the number of replications per Data Mover . . . . . . . . . . .34
Configuration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
User interface choices for Celerra Replicator. . . . . . . . . . . . . . . . . . . . . .37
Roadmap for Celerra Replicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Initiating replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
Task 1: Establish communication . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Task 2: Verify communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Task 3: Create SnapSure checkpoint of source file system. . . . . . .42
Task 4: Create the destination file system . . . . . . . . . . . . . . . . . . . . .45
Task 5: Copy checkpoint to the destination file system . . . . . . . . . .45
Task 6: Begin replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
Task 7: Create a second checkpoint of the source file system . . . .50
Task 8: Copy incremental changes. . . . . . . . . . . . . . . . . . . . . . . . . . .52
Task 9: Verify file system conversion . . . . . . . . . . . . . . . . . . . . . . . . 54
Task 10: Check replication status . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Task 11: Create restartable checkpoints . . . . . . . . . . . . . . . . . . . . . . 59
Recovering replication data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Task 1: Replication failover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Task 2: Resynchronize the source and destination sites . . . . . . . . 66
Task 3: Replication reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Abort Celerra Replicator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Suspend a replication relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Verify the suspended replication relationship . . . . . . . . . . . . . . . . . 88
Restarting a replication relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Verify that the replication relationship is not synchronized . . . . . . 90
Restart replication relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Extending the size of a file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Extend file system size automatically . . . . . . . . . . . . . . . . . . . . . . . . 98
Extend file system size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Resetting replication policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
High water mark and time-out policies . . . . . . . . . . . . . . . . . . . . . . 105
Modify replication policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Change flow-control policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Set bandwidth size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Set policies using parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Reverse the direction of a replication relationship . . . . . . . . . . . . . . . . 111
Verify the reverse direction of replication relationship . . . . . . . . . 113
Monitor replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Checking playback service and outstanding delta sets . . . . . . . . . . . . 115
Task 1: Determine playback service status . . . . . . . . . . . . . . . . . . . 115
Task 2: Playback delta set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Task 3: Verify delta set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Events for Celerra Replicator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Change the Celerra Replicator SavVol default size . . . . . . . . . . . . . . . . 123
Change the passphrase between Celerra Network Servers . . . . . . . . . 124
Managing and avoiding IP replication problems . . . . . . . . . . . . . . . . . . 125
Preventive measures to avoid IP replication problems . . . . . . . . . 125
Replication restart methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Recovering from a corrupted file system . . . . . . . . . . . . . . . . . . . . 130
Managing anticipated destination site or network outages . . . . . . 131
Managing unanticipated destination site or network outages. . . . 132
Managing unanticipated source site outages . . . . . . . . . . . . . . . . . 133
Managing expected source site outages . . . . . . . . . . . . . . . . . . . . . 133
Mount the destination file system read/write temporarily . . . . . . . 133
Recovering from an inactive replication state. . . . . . . . . . . . . . . . . 135
Creating checkpoints on the destination site . . . . . . . . . . . . . . . . . 136
Copy file system to multiple destinations with fs_copy. . . . . . . . . 136
Transporting replication data using disk or tape . . . . . . . . . . . . . . . . . . 139
Disk transport method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Tape transport method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Setting up the CLARiiON disk array . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Review the prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Run the setup script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Create data LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Troubleshooting Celerra Replicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Where to get help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Terminology
This section defines terms important to understanding replication on the Celerra
Network Server. The EMC Celerra Glossary provides a complete list of Celerra
terms.
Automatic File System Extension: A configurable Celerra file system feature that
automatically extends a file system created or extended with Automatic Volume
Management (AVM).
Restrictions
The following restrictions apply to Celerra Replicator:
◆ Celerra Data Migration Service (CDMS) is unsupported (an mgfs file system
cannot be replicated).
◆ Multi-Path File System (MPFS) is supported on the source file system, but not
on the destination file system.
◆ EMC E-Lab™ Interoperability Navigator provides information about disaster
recovery replication products such as EMC SRDF®/Synchronous (SRDF/S) and
SRDF/Asynchronous (SRDF/A).
◆ For EMC TimeFinder®/FS:
• A business continuance volume (BCV) cannot be a source or a destination
file system for replication. You can replicate the underlying source file
system, but not the BCV.
• Do not use the TimeFinder/FS -Restore option for a replicated source file
system. Replication will be unaware of any changes because these changes
occur at the volume level. However, you can restore on a single-file basis
using an NFS/CIFS client that has access to the source file system and the
BCV of the source file system.
• Do not use TimeFinder/FS with a file system that was created on a slice
volume. Creating a file system using the samesize option slices the volume.
TimeFinder does not recognize sliced partitions. Using TimeFinder/FS,
NearCopy, and FarCopy with EMC Celerra further details this feature.
◆ For TimeFinder/FS Near Copy and Far Copy:
• A BCV cannot be a source or a destination file system for replication. You
can replicate the underlying source file system, but cannot replicate the BCV.
◆ Do not extend the source file system while fs_copy is running.
◆ On a per-Data-Mover basis, the total size of all file systems, the size of all
SavVols used by SnapSure, and the size of all SavVols used by the Celerra
Replicator feature must be less than the total supported capacity of the Data
Mover. The EMC Celerra Network Server Release Notes, available at
http://Powerlink.EMC.com, the EMC Powerlink® website, provide a list of Data
Mover capacities.
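The per-Data-Mover capacity restriction above amounts to simple arithmetic. The sizes and the capacity figure in this sketch are invented examples; the real supported capacity for your Data Mover model comes from the Release Notes.

```shell
# Back-of-the-envelope check of the per-Data-Mover capacity restriction.
# All figures below are invented examples, not documented limits.
FS_TOTAL_GB=8000       # total size of all file systems on the Data Mover
SNAP_SAVVOL_GB=600     # total size of all SavVols used by SnapSure
REPL_SAVVOL_GB=900     # total size of all SavVols used by Celerra Replicator
DM_CAPACITY_GB=16000   # assumed supported capacity of the Data Mover

USED_GB=$((FS_TOTAL_GB + SNAP_SAVVOL_GB + REPL_SAVVOL_GB))
if [ "$USED_GB" -lt "$DM_CAPACITY_GB" ]; then
    echo "within capacity: ${USED_GB} GB of ${DM_CAPACITY_GB} GB"
else
    echo "over capacity: ${USED_GB} GB exceeds ${DM_CAPACITY_GB} GB"
fi
```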
Local replication
Replication produces a read-only copy of the source file system for use by a Data
Mover in the same Celerra cabinet. The source and destination file systems are
stored on separate volumes. Local replication can use different Data Movers or the
same Data Mover.
Local replication process
Figure 1 on page 10 and subsequent steps show the processing that occurs when
using local replication for the first time.
[Figure 1: Local replication, showing the primary and secondary Data Movers, the source and destination file systems, and the SavVol within one storage unit (CNS-000765)]
Remote replication
Remote replication creates and periodically updates a read-only copy of a source
file system at a remote (destination) site. This is done by transferring changes
made to a source file system at a local site to a file system replica (destination) at
the destination site over an IP network. These transfers are automatic and are
based on user-definable replication policies.
[Figure 2: Remote replication, showing the source and destination Data Movers, file systems, and SavVols (CNS-000762)]
[Figure 3: Source and destination Data Movers, file systems, and SavVols (CNS-000763)]
[Figure 4 diagram: source and destination Data Movers, file systems, and SavVols (CNS-000764)]
Figure 4 Failover
3. The destination site can be enabled to allow read/write access to the
destination file system from network clients (local or remote scenario). "After a
failover or reversal" on page 78 provides more information. This optional step is
done manually.
[Figure 5 diagram: source and destination file systems and SavVols (CNS-000761)]
Figure 5 Resynchronization
[Reversal diagram: source and destination Data Movers, file systems, and SavVols (CNS-000766)]
Note: A reversal requires both sites to be available and results in no data loss. During the
reversal phase, the source and destination file systems are set as read-only while the last
updates are transferred.
The time on the Data Movers involved in a replication relationship and the Control
Stations at both sites must be synchronized with a maximum allowable skew of 10
minutes. Take into account time zones and daylight saving time, if applicable,
when using the Network Time Protocol (NTP) to synchronize the time.
Configuring EMC Celerra Time Services offers more information.
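The 10-minute skew limit above reduces to a comparison of clock readings from the two sites. This sketch uses invented epoch timestamps in place of values gathered from each Control Station.

```shell
# Sketch of the 10-minute maximum clock skew check between sites.
# The epoch values are illustrative stand-ins, not real readings.
SRC_TIME=1260400000    # seconds since epoch at the source Control Station
DST_TIME=1260400180    # seconds since epoch at the destination Control Station
MAX_SKEW=600           # 10 minutes, expressed in seconds

SKEW=$((SRC_TIME - DST_TIME))
[ "$SKEW" -lt 0 ] && SKEW=$((-SKEW))
if [ "$SKEW" -le "$MAX_SKEW" ]; then
    echo "clock skew ${SKEW}s is within the 10-minute limit"
else
    echo "clock skew ${SKEW}s exceeds the 10-minute limit"
fi
```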
To establish communication, you must have root privileges, and each site must
be active and configured for external communication. Table 1 on page 18 shows
information about the source and destination sites used in these examples.
Table 1 Source and destination sites information
Note: If this incremental resynchronization fails, restarting replication using a full file system
copy might take considerable time and resources. Plan carefully before using this option.
CAUTION
Any data on the source file system not played back to the destination file system
prior to the failover is permanently lost.
The replication and playback services are no longer running after the suspend
action is complete.
Note: Before you restart a replication, make sure that all checkpoints are mounted.
Otherwise, a full data copy will be initiated instead of an incremental copy.
Local replication
Table 2 on page 22 lists the system requirements for local replication.
Table 2 Local replication system requirements
Network:
• IP addresses configured for the primary and secondary Data Movers.
Storage:
• Sufficient storage space available for the source and destination file systems.
• Sufficient SavVol space available for use by Celerra Replicator and SnapSure.
Remote replication
Table 3 on page 22 lists the system requirements for remote replication.
Table 3 Remote replication system requirements (page 1 of 2)
Software:
• Celerra Network Server version 5.6, with the same Celerra version on the
source and destination Celerra Network Servers.
Network:
• IP addresses configured for the source and destination Data Movers (ports
8888 and 8887 are used by replication for transferring data and internal
operations; contact Customer Service to change this setting).
• HTTPS connection between the Control Station on the source site and the
Control Station on the destination site (port 443, which cannot be changed).
• Internet Control Message Protocol (ICMP), used to verify that a destination
Celerra Network Server is accessible from a source Celerra Network Server.
ICMP reports errors and provides control data about IP packet processing.
Storage:
• Sufficient storage space available for the source and destination file systems.
• Sufficient SavVol space available for use by Celerra Replicator and SnapSure.
Note: You do not have to abort replication when upgrading the Celerra Network
Server version.
[Table 4 column headings: Command; replication state (Active, Inactive, Suspended, Failed-over, Not configured)]
Prerequisites
Before upgrading, make sure that the Celerra to be upgraded:
◆ Is running version 5.5.39.2 or later.
◆ If the Celerra to be upgraded is a destination, has active replication
sessions whose source is running version 5.5.39.2 or later.
◆ If the Celerra to be upgraded is a source, has active replication sessions
whose destination is running version 5.6.47 or later.
◆ Has no fs_copy sessions running.
◆ Is running under minimum load. A high rate of I/O during the upgrade may
cause replications to become inactive.
◆ Is not hosting both the source and destination sides of active replications
running version 5.5. If bi-directional sessions exist on the Celerra to be
upgraded, do the following:
a. Suspend all replication sessions in one direction. (Either the sessions
running from A to B or the sessions running from B to A.)
b. Upgrade the Celerra that is hosting only the destination side of the active
replication sessions.
c. Data transfer will continue but with limited replication management capability.
d. Upgrade the source Celerra.
e. Restart all the suspended replication sessions.
Procedure
To upgrade to Celerra Network Server version 5.6.47:
Step Action
1. Upgrade the destination Celerra from NAS code version 5.5 to 5.5.39.2.
2. Upgrade all of the source Celerras from NAS code version 5.5 to 5.5.39.2.
3. Upgrade the destination Celerra from NAS code version 5.5.39.2 to 5.6.47.
Data continues to transfer between the source and destination sites, but there is limited
replication management capability until you upgrade the source Celerra to 5.6.47. For
example, you cannot start a new replication, restart a suspended or inactive replication,
resync a failed-over replication, suspend or reverse a replication, or perform a copy (full or
differential) of a file system. Table 4 on page 26 provides the out-of-family replication
command support matrix.
Replication policies
Most replication policies can be established for one replication relationship (using
the fs_replicate command) or all replication relationships on a Data Mover by
setting a parameter. Celerra Replicator has policies to:
◆ Control delta-set generation using a time-out interval and high water mark.
◆ Control how to handle data if network connectivity is lost (flow control). "Celerra
Replicator flow control" on page 31 describes this action.
◆ Set the maximum IP bandwidth size used by a replication session. "Set
bandwidth size" on page 109 details this policy.
◆ Set the amount of data to be sent across the IP network before an
acknowledgment is required from the receiving side. This is controlled by a
parameter for the TCP window size (tcpwindow). "Accommodating network
concerns" on page 127 provides more details.
IP Alias with IP replication
Celerra Network Server versions 5.5.28.1 and later support IP Alias with IP
replication. All restrictions on Control Station failover also apply to IP Alias with IP
replication configurations. The following guidelines apply to this feature:
◆ When using IP replication for the first time, or on new systems, configure IP
Alias first, and use IP Alias in the -ip <ipaddr> option of nas_cel -create
command.
◆ For existing systems with existing IP replication sessions, the current slot_0 IP
address (primary Control Station IP address) must be used. For example:
nas_config -IPalias -create 0
Do you want slot_0 IP address as your alias [yes or no] yes
◆ If the Control Station fails over while IP replication is running, the IP replication
command (fs_replicate) might need to be re-executed manually. Check the logs
(/nas/log/cmd_log*, server_log command output, and so on) to determine how to
proceed. Keep the fs_replicate command output for the resync, suspend, restart,
failover, and reverse options in case of failure, and then execute the remaining
steps based on the instructions in the command output.
◆ When IP Alias is deleted using the nas_config -IPalias -delete command, the IP
address of the primary or the secondary Control Station is not changed.
Changes to the IP address of the primary or the secondary Control Station must
be made separately. IP replication depends on communication between the
source and destination Control Stations.
Note: A delta set may not be generated or copied if a flow control is triggered. "Celerra
Replicator flow control" on page 31 provides further information.
Freeze the source file system: Temporarily halts all I/Os to the source file
system until sufficient space is available on the source SavVol. During this
time, the file system is inaccessible to network clients. When space is
available on the Celerra Replicator SavVol, the source file system is mounted
and begins accepting I/Os.
• For one session: set an option using fs_replicate -modify -option
autofreeze=yes. "How time-out and high water mark policies work" on page 31
describes this policy.
• For all sessions on a Data Mover: set the VRPL freeze parameter as described
in the EMC Celerra Network Server Parameters Guide.
Mount the source file system read-only: Temporarily stops writing data to the
source file system by mounting it read-only. Users still have read-only access
to the source file system. When space becomes available on the Celerra
Replicator SavVol, the source file system is remounted read/write and begins
accepting all I/Os to the source file system.
• For one session: set an option using fs_replicate -modify -option autoro=yes.
"How time-out and high water mark policies work" on page 31 describes this
policy.
• For all sessions on a Data Mover: set the VRPL read-only parameter as
described in the EMC Celerra Network Server Parameters Guide.
As shown in Table 5 on page 32, these policies can be set for one replication
session or for all replication sessions on a Data Mover. If there is a conflict between
these two policies, the one defined for a single replication session takes
precedence.
You can set up alerts to notify you when these events occur. "Events for Celerra
Replicator" on page 121 details Celerra Replicator events, and Configuring EMC
Celerra Events and Notifications describes how to use them.
Before starting any replication processing: change the default size of each
Celerra Replicator SavVol from 10% of the source file system. By default, the
system allocates 10% of the size of the source file system for the replication
SavVol on the source and destination sites. "Change the Celerra Replicator
SavVol default size" on page 123 describes this procedure.
At the start of a replication instance: control the SavVol size for a file
system by specifying a specific SavVol size, using the savsize option of
fs_replicate -start. Task 6: "Begin replication" on page 48 describes this
procedure.
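The 10% default sizing rule can be sketched numerically; the 200 GB source file system size is an invented example.

```shell
# Sketch of the default SavVol sizing rule: 10% of the source file system,
# allocated on both the source and destination sites. Figures are invented.
SRC_FS_GB=200
SAVVOL_GB=$((SRC_FS_GB * 10 / 100))
echo "default replication SavVol: ${SAVVOL_GB} GB on each site"
```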
Note: If you plan to run loopback replications, remember that each loopback replication
counts as two replication sessions because each session encapsulates outgoing and
incoming replications.
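The session-count arithmetic in the note above can be sketched as follows; the session counts are invented examples.

```shell
# Each loopback replication counts as two sessions (outgoing plus incoming),
# so it consumes two of the Data Mover's replication session slots.
REMOTE_SESSIONS=3      # ordinary remote replication sessions (invented)
LOOPBACK_SESSIONS=2    # loopback replications (invented)

TOTAL=$((REMOTE_SESSIONS + 2 * LOOPBACK_SESSIONS))
echo "replication sessions counted against the Data Mover: ${TOTAL}"
```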
Note: To provide a stable network transfer rate for delta sets, it is strongly
recommended that you configure a dedicated network port for Data Mover transfers.
4. Verify that the Data Mover can handle all replication sessions and production
I/Os. You can also monitor memory usage and CPU usage using the
server_sysstat command. This command shows total memory utilization, not
just Celerra Replicator and SnapSure memory usage.
Note: Use Celerra Manager to monitor memory and CPU usage by creating a new
notification on Celerras > Notifications > Data Mover Load tab.
Configuration considerations
Before setting the replication policy triggers, consider the following:
◆ To keep the source and destination file systems from becoming out of sync, do
not allow the replication service to create delta sets significantly faster than it
can copy them to the destination file system. Set the delta-set creation
replication policy to a higher number (for example, 1200 seconds) than the
delta-set playback number (for example, 600 seconds). The replication policies
you establish for creating and replaying delta sets depend on the size and
number of transactions processed on the source file system.
◆ Determine if the network bandwidth can effectively transport the production
change data generated at the source site to the destination site.
◆ During the delta-set playback on the destination file system, network clients can
access the destination file system. However, at the beginning of the delta-set
playback for CIFS (Common Internet File System) clients, there is a temporary
freeze/thaw period that may cause a network disconnect. Consequently, do not set
the replication policy to a low number, because this reduces the availability of
the destination file system.
To eliminate this freeze/thaw period, create a checkpoint of the destination file
system and mount it for client access at the destination site. However, this
checkpoint will not contain the most up-to-date production data.
◆ Carefully evaluate the infrastructure of the destination site by reviewing items
such as:
• Subnet addresses
• Unicode configuration
• Availability of name resolution services; for example, WINS, DNS, and NIS
• Availability of WINS/PDC/BDC/DC in the correct Microsoft Windows NT, or
Windows Server domain
• Share names
◆ Local groups are not supported on replicated file systems unless you use
VDMs. Replicating EMC Celerra CIFS Environments (V1) describes this
consideration more fully.
◆ The replication SavVol for the delta sets must be large enough to store and
process all the delta-set write I/Os, and the SnapSure SavVol for the
checkpoints must be able to store all the source file system block changes for
the initial synchronization.
◆ The destination file system can only be mounted on one Data Mover, even
though it is read-only.
◆ At the application level, as well as the operating system level, some
applications might have limitations on the read-only destination file system due
to caching and locking.
◆ If you are planning on enabling international character sets (Unicode) on your
source and destination sites, you must first set up translation files on both sites
before starting Unicode conversion on the source site. Using International
Character Sets with EMC Celerra covers this consideration.
◆ The Celerra FileMover feature supports replicated file systems. This is
described in Using EMC Celerra FileMover.
◆ The Celerra File-Level Retention capability supports replicated file systems.
Using File-Level Retention on EMC Celerra provides additional configuration
information.
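The first consideration above, that delta-set creation should run slower than playback, reduces to a comparison of the two policy intervals. The values below are the examples from the text.

```shell
# Sketch of the creation-versus-playback policy check: create delta sets
# every 1200 seconds, play them back every 600 seconds (the text's examples).
CREATE_INTERVAL=1200
PLAYBACK_INTERVAL=600

if [ "$CREATE_INTERVAL" -gt "$PLAYBACK_INTERVAL" ]; then
    echo "policy OK: playback runs faster than delta-set creation"
else
    echo "policy risk: delta sets may accumulate on the source SavVol"
fi
```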
Action
To establish a trusted relationship at each site, logged in as root, use this command syntax:
# nas_cel -create <cel_name> -ip <ip> -passphrase <passphrase>
where:
<cel_name> = name of the remote (destination) Celerra Network Server in the configuration
<ip> = IP address of the remote Control Station in slot 0
<passphrase> = secure passphrase used for the connection, which must be 6 to 15
characters long and be the same on both sides of the connection
Example:
To set up a trust relationship, type the following commands at both sites:
[source_site]# nas_cel -create eng25271 -ip 172.24.252.71 -passphrase nasadmin
[destination_site]# nas_cel -create eng25246 -ip 172.24.252.46 -passphrase nasadmin
Note: If you need to change the passphrase later, follow the procedure described in "Change the
passphrase between Celerra Network Servers" on page 124.
Output
From source site eng25271, to set up relationship with destination site eng25246:
operation in progress (not interruptible)...
id = 3
name = eng25271
owner = 0
device =
channel =
net_path = 172.24.252.71
celerra_id = 0001901003890010
passphrase = nasadmin
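The passphrase rule from the syntax above (6 to 15 characters, identical on both sides) can be checked before running nas_cel -create. This sketch validates the length of the passphrase used in the example.

```shell
# Sketch of the passphrase length rule for nas_cel -create: 6 to 15
# characters. "nasadmin" is the passphrase from the example above.
PASSPHRASE="nasadmin"
LEN=${#PASSPHRASE}

if [ "$LEN" -ge 6 ] && [ "$LEN" -le 15 ]; then
    echo "passphrase length ${LEN} is valid"
else
    echo "passphrase length ${LEN} is outside the 6-15 character range"
fi
```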
Action
To verify that the source and destination sites can communicate with each other, type the
command at each site:
[source_site]$ nas_cel -list
Output
Note
The sample output shows the source site can communicate with the destination site, cs110.
Action
To verify that the source and destination sites can communicate with each other, at each site, type:
[destination_site]$ nas_cel -list
Output
Note
The sample output shows the destination site can communicate with the source site, cs100.
Action
To view the passphrase of a Celerra Network Server, use this command syntax:
$ nas_cel -info id=<cel_id>
where:
<cel_id> = Celerra ID
Note: The Celerra ID is assigned automatically. To view the ID of a remote system, use the
nas_cel -list command. You can also use the hostname instead of the ID.
Example:
To view the passphrase of the Celerra system, type:
$ nas_cel -info id=5
Output
id = 5
name = cs110
owner = 503
device =
channel =
net_path = 192.168.168.102
celerra_id = APM000446038450000
passphrase = nasadmin
CAUTION
◆ When creating checkpoints, be careful not to exceed your system’s limit. Celerra
permits 96 checkpoints per PFS, regardless of whether the PFS is replicated, for
all systems except the Model 510 Data Mover (which permits 32 checkpoints with
PFS replication and 64 checkpoints without). This limit counts existing
checkpoints, including those already created by a schedule, and might also count
two restartable checkpoints as well as a third checkpoint created by certain
replication operations on either the PFS or SFS.
◆ If you are at the limit, delete existing checkpoints to create space for newer
checkpoints, or do not create new checkpoints if existing ones are more
Action
To create a SnapSure checkpoint of the source file system, use this command syntax:
$ fs_ckpt <fs_name> -Create
where:
<fs_name> = file system name on which a checkpoint is created
Remote replication example:
To create a checkpoint of the source file system src_ufs1, type:
$ fs_ckpt src_ufs1 -Create
Local replication example:
To create a checkpoint of the source file system local_src, type:
$ fs_ckpt local_src -Create
id = 90
name = src_ufs1_ckpt1
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp246
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= src_ufs1 Mon Feb 7 06:58:10 EST 2005
used = 1%
full(mark)= 90%
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
Step Action
3. Mount the file system as read-only on the destination Data Mover by typing:
$ server_mount server_2 -option ro dst_ufs1 /dst_ufs1
Local replication example:
$ server_mount server_2 -option ro local_dst /local_dst
Note: The destination file system can only be mounted on one Data Mover, even though it
is read-only.
Note: If the primary file system is extended while the fs_copy command is running and
before replication starts in Task 6: "Begin replication" on page 48, you must extend the
destination file system manually to keep the file system sizes identical. Use the nas_fs -xtend
command.
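The note above implies a simple size comparison between the source and destination file systems. This sketch uses invented sizes and only prints the suggested action rather than running nas_fs -xtend.

```shell
# Sketch of the size check behind the note: if the source grew during
# fs_copy, the destination must be extended to match. Sizes are invented.
SRC_SIZE_MB=102400     # current size of the source file system
DST_SIZE_MB=51200      # current size of the destination file system

if [ "$SRC_SIZE_MB" -gt "$DST_SIZE_MB" ]; then
    DELTA_MB=$((SRC_SIZE_MB - DST_SIZE_MB))
    echo "extend the destination by ${DELTA_MB} MB (nas_fs -xtend)"
else
    echo "file system sizes already match"
fi
```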
To copy a checkpoint to the destination file system, use this command syntax:
$ fs_copy -start <srcfs> <dstfs>:cel=<cel_name> -option
convert=no,monitor=off
where:
<srcfs> = source file system checkpoint.
<dstfs> = destination file system.
<cel_name> = destination Celerra Network Server name.
-option convert=[yes|no] = allows conversion of the <dstfs> to uxfs after the file system
copy is executed. If no is specified, the <dstfs> remains a rawfs file system type when the
copy has completed. The default is yes.
-option monitor=off = by default, the progress of the copy is printed to the screen. The off
option forces the command to run as a background process.
Remote replication example:
To copy the checkpoint, src_ufs1_ckpt1, to the destination file system without converting it to uxfs,
type:
$ fs_copy -start src_ufs1_ckpt1 dst_ufs1:cel=cs110 -option
convert=no,monitor=off
Local replication example:
To copy the checkpoint, local_src_ckpt1, to the destination file system, local_dst, type:
$ fs_copy -start local_src_ckpt1 local_dst -option convert=no,
monitor=off
Output
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Mon Feb 7 06:58:10 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Output
Note
The fs_copy session is not listed in the output, indicating that the copy is complete.
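The completion check described above can be sketched as a test against the session listing; the listing text here is an invented stand-in for fs_copy -list output.

```shell
# Sketch of the completion check: the copy is done once the session name no
# longer appears in the listing. LISTING is an invented stand-in for the
# output of fs_copy -list with no active sessions.
LISTING="id    source              destination"
DONE=no

if ! printf '%s\n' "$LISTING" | grep -q "src_ufs1_ckpt1"; then
    DONE=yes
    echo "copy complete: src_ufs1_ckpt1 is no longer listed"
fi
```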
Action
To start replication for the first time, use this command syntax:
$ fs_replicate -start <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = source file system
<dstfs> = destination file system
<cel_name> = destination Celerra Network Server
Note: Multiple fs_replicate -start processes must be executed sequentially, not in parallel. Run
only one fs_replicate -start command at a time.
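Because fs_replicate -start commands must never run in parallel, a wrapper script can enforce the one-at-a-time rule. The sketch below is a hypothetical example: the file system names and the Celerra name cs110 are illustrative, and echo stands in for actually executing the command.

```shell
# Hypothetical helper: build the fs_replicate -start command line for one
# source/destination pair. All names here are example values.
start_cmd() {
    # $1 = source fs, $2 = destination fs, $3 = destination Celerra name
    printf 'fs_replicate -start %s %s:cel=%s\n' "$1" "$2" "$3"
}

# Dry run: process the pairs strictly one after another, never in parallel.
# Replace echo with actual execution (and stop on failure) in real use.
for pair in "src_ufs1 dst_ufs1" "src_ufs2 dst_ufs2"; do
    set -- $pair
    start_cmd "$1" "$2" cs110
done
```

Running the loop prints one fully formed command per pair, in order, which is the sequencing the note above requires.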
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Mon Feb 7 06:58:10 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Action
To create a second checkpoint of the source file system, which is compared to the initial
checkpoint, use this command syntax:
$ fs_ckpt <fs_name> -Create
where:
<fs_name> = file system name for which a checkpoint is created
Remote replication example:
To create a SnapSure checkpoint of source file system src_ufs1, type:
$ fs_ckpt src_ufs1 -Create
Local replication example:
To create a SnapSure checkpoint of source file system local_src, type:
$ fs_ckpt local_src -Create
Note: The following output shown is for the remote replication example.
id = 97
name = src_ufs1_ckpt2
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp246
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= src_ufs1 Mon Feb 7 07:05:00 EST 2005
used = 3%
full(mark)= 90%
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
Action
To copy the incremental changes (or the delta set) between the two source file system checkpoints
to the destination file system, use this command syntax:
$ fs_copy -start <new_check_point> <dstfs>:cel=<cel_name> -fromfs
<previous_check_point> -option monitor=off
where:
<new_check_point> = last checkpoint taken as described in Task 7: "Create a second
checkpoint of the source file system" on page 50.
<dstfs> = destination file system.
<cel_name> = Celerra Network Server where the destination file system resides.
<previous_check_point> = first checkpoint taken.
-option monitor=off = progress of the copy is printed to the screen by default. The off
option forces the command to run as a background process.
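Since the incremental copy command combines four names in a fixed pattern, scripting it reduces typing errors. This is a sketch under the document's naming example; the checkpoint and file system names are illustrative, and the function only prints the command rather than running fs_copy.

```shell
# Hypothetical helper: compose the incremental (delta) fs_copy command from
# the new checkpoint, the destination, and the previous checkpoint.
incremental_copy_cmd() {
    # $1 = new checkpoint, $2 = destination fs, $3 = destination Celerra,
    # $4 = previous checkpoint
    printf 'fs_copy -start %s %s:cel=%s -fromfs %s -option monitor=off\n' \
        "$1" "$2" "$3" "$4"
}

# Dry run with the names used in the remote replication example.
incremental_copy_cmd src_ufs1_ckpt2 dst_ufs1 cs110 src_ufs1_ckpt1
```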
Note: The following output shown is for the remote replication example.
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Mon Feb 7 07:05:00 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Output
Note
The fs_copy session is not listed in the output which indicates that the copy is complete.
Action
To verify that the file system is converted to a uxfs type file system, type:
[destination_site]$ nas_fs -info dst_ufs1
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Mon Feb 7 07:05:00 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
Source:
Local Source Filesystems
Id Source FlowCtrl State Destination FlowCtrl State Network
138 src_ufs1 inactive active dst_ufs1:cs110 inactive active alive
Destination:
Local Source Filesystems
Id Source FlowCtrl State Destination FlowCtrl State Network
Note
For local replication, the Destination FlowCtrl status and Network status always contain N/A (not
applicable).
Action
To check the replication status and generate historical data about the replication up to the number
of lines specified, use this command syntax:
$ fs_replicate -info <fs_name> -verbose <number_of_lines>
where:
<fs_name> = name of the file system (in the example it is the source file system).
<number_of_lines> = lines to display historical replication data. The maximum number is 128.
Remote replication example:
To check the replication status of the replication relationship, type:
$ fs_replicate -info src_ufs1 -verbose 10
Local replication example:
To check the replication status of the replication relationship, type:
$ fs_replicate -info local_src -verbose 10
Note: The following output shown is for the remote replication example. "Appendix A: fs_replicate
-info output fields" on page 164 gives the output definition description for the fs_replicate -info
command.
id = 88
name = src_ufs1
fs_state = active
type = replication
replicator_state = active
source_policy = NoPolicy
high_water_mark = 600
time_out = 600
current_delta_set = 3
current_number_of_blocks = 1
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 3
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 786432 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 169984 Kbits/second
source_ip = 192.168.168.18
source_port = 57273
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
| Source | Destination
Delta|Create Time Dur Blocks|Playback Time Dur Blocks DSinGroup
-----|-------------- ------ ------|-------------- ------ ------ ---------
2 2005-02-08 00:20 1
1 2005-02-08 00:20 128 2005-02-08 00:20 128 2
0 2005-02-08 00:10 333 2005-02-08 00:10 333 1
Action
To create checkpoints for use when a replication relationship falls out-of-sync, use this command
syntax:
$ fs_ckpt <fs_name> -name <name> -Create
where:
<fs_name> = file system name on which a checkpoint is created
<name> = name of the restart checkpoint, which must follow the naming convention
<source_fs_name>_repl_restart_1 or <source_fs_name>_repl_restart_2
Example:
To create the checkpoints of the source file system for use when a replication relationship falls out-
of-sync, type:
$ fs_ckpt src_ufs1 -name src_ufs1_repl_restart_1 -Create
$ fs_ckpt src_ufs1 -name src_ufs1_repl_restart_2 -Create
The output only shows the creation of the first checkpoint.
id = 100
name = src_ufs1_repl_restart_1
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp246
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= src_ufs1 Mon Feb 7 07:14:26 EST 2005
used = 4%
full(mark)= 90%
delta_number= 1
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
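The strict _repl_restart_1/_repl_restart_2 naming convention is easy to mistype (as the transposed example above shows), so deriving both names from the source file system name is safer. A minimal sketch, assuming the source file system name is the only input; echo stands in for running fs_ckpt.

```shell
# Hypothetical helper: derive the two restart-checkpoint names required by
# the <source_fs_name>_repl_restart_1/_2 convention.
restart_ckpt_names() {
    printf '%s_repl_restart_1\n%s_repl_restart_2\n' "$1" "$1"
}

# Dry run: print the fs_ckpt commands for src_ufs1 instead of executing them.
for ckpt in $(restart_ckpt_names src_ufs1); do
    echo "fs_ckpt src_ufs1 -name ${ckpt} -Create"
done
```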
Table 7 on page 61 shows the relationships between the source and destination file
systems when replication is initiated, and during failover, resynchronization, and
reversal.
Table 7 Replication file system relationships

Begin replication
  Source site: Source file system is read/write.
  Destination site: Destination file system is read-only.
  Explanation: Normal replication processing establishes the source file system as
  read/write and the destination file system as read-only.

Failover
  Source site: Source file system becomes read-only, if the source site is still
  available.
  Destination site: Destination file system becomes read/write.
  Explanation: Changes which file system acts as the source file system and which
  acts as the destination file system. Brings the destination file system to
  read/write to service the I/O in the case of disaster.

Reversal
  Source site: The read-only file system becomes read/write.
  Destination site: The read/write file system becomes read-only.
  Explanation: Changes which file system acts as the source file system and which
  acts as the destination file system. Perform a reversal from whichever file
  system is read/write. When used after a failover, restores the direction of
  replication to what it was prior to the failover. Both sites must be available.

Failover options:

default
  When used: The source site is totally corrupt or unavailable.
  Behavior: Plays back all available delta sets at the destination site before
  failing over.
  Site: Destination site

now
  When used: The source site is totally corrupt or unavailable.
  Behavior: Initiates an immediate failover; no delta sets are played back.
  Note: If you perform a failover using this option and delta sets are in the
  SavVol at the destination site, an incremental resynchronization might not be
  possible in all cases.
  Site: Destination site

sync
  When used: The source site is still available.
  Behavior: Fails over without any data loss by making the source file system
  read-only, the destination file system read/write, and creating a restart
  checkpoint of the destination file system. Synchronized failover takes longer
  to invoke and cannot be performed if the source site is unavailable. It is more
  suited to a maintenance-related failover as part of a failover plan.
  Site: Source and destination
Using the failover command when the source site is unavailable results in data loss
because delta sets cannot be transferred from the source site to the destination
site. Using the default failover option reduces the amount of data loss by replaying
any available delta sets (pending data) at the destination before initiating failover.
Failover processing creates a file system checkpoint to use to resynchronize the
replication relationship. You must include this checkpoint when determining the
maximum number of checkpoints per file system used with replication.
When a failover completes, replication is stopped and the destination file system
becomes read/write. Replication is no longer running because the source site is
usually unavailable when a failover is initiated. If the source site becomes available,
reestablish replication in the opposite direction (from the destination site to the
source site) by resynchronizing the source and destination file systems.
For replication failover:
◆ "Verify status of destination file system" on page 64
◆ "Initiate replication failover" on page 64
◆ "Verify file system is read/write" on page 66
To check the status of the destination file system and verify that it is mounted as read-only, use
this command syntax:
$ server_mount <movername>
where:
<movername> = name of the Data Mover on which the file system is mounted
Example:
To verify the status of the destination file systems on server_2, type:
$ server_mount server_2
Output
server_2 :
root_fs_2 on / uxfs,perm,rw
fs1 on /fs1 uxfs,perm,rw
ckpt1 on /ckpt1 ckpt,perm,ro
fsk on /fsk ckpt,perm,ro
root_fs_common on /.etc_common uxfs,perm,ro
dst_ufs1 on /dst_ufs1 uxfs,perm,ro
To initiate a failover from the destination site, use this command syntax:
[destination_site]$ fs_replicate -failover <srcfs>:cel=<cel_name>
<dstfs> -option now|sync
where:
<srcfs> = source file system name
<cel_name> = Celerra Network Server name of the source site
<dstfs> = destination file system name
Example:
To fail over the source file system to the destination file system, type:
$ fs_replicate -failover src_ufs1:cel=cs100 dst_ufs1
id = 88
name = src_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10:cs100
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
ckpts = src_ufs1_repl_restart_1:cs100,src_ufs1_repl_restart_2:cs100
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
done
To verify that the file system is mounted as read/write and is accessible to the network clients,
type:
$ server_mount server_2
Output
server_mount server_2
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
dst_ufs1 on /dst_ufs1 uxfs,perm,rw
Note: If you need to increase your file system’s size and plan to resynchronize your source
and destination sites after a failover, you must complete the resynchronization (fs_replicate
-resync) before increasing the size of your destination file system.
To attempt to resynchronize the source and destination file systems and restart replication from
the destination site, use this command syntax:
[destination_site]$ fs_replicate -resync <dstfs>[:cel=<cel_name>]
<srcfs> [-option autofullcopy=yes]
where:
<dstfs> = current read-only file system name
<cel_name> = name of Celerra Network Server of the original source site
<srcfs> = current read/write file system name
-option autofullcopy=yes = executes a full copy of the file system if an incremental
resynchronization does not complete.
Example:
To resynchronize the file systems and resume replication, type:
$ fs_replicate -resync src_ufs1:cel=cs100 dst_ufs1
For the system to automatically perform a full copy of the source file system if the incremental
resynchronization fails, type:
$ fs_replicate -resync src_ufs1:cel=cs100 dst_ufs1 -option
autofullcopy=yes
A full copy of the file system using autofullcopy=yes can be time-consuming. Consider
carefully when to run this command.
Note: If a disaster occurs during the transfer, some delta sets might be lost. As a result, the
replication process cannot completely replicate the source file system at the destination
site.
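The two resync variants above differ only in the trailing option, so a small wrapper can produce either form. This is a hypothetical sketch using the example names from the text (src_ufs1, cs100, dst_ufs1); the function prints the command instead of executing fs_replicate.

```shell
# Hypothetical helper: build the resync command, optionally appending
# autofullcopy=yes when an automatic full copy fallback is acceptable.
resync_cmd() {
    # $1 = fs that becomes the destination, $2 = its Celerra name,
    # $3 = current read/write fs, $4 = "full" to allow an automatic full copy
    cmd="fs_replicate -resync $1:cel=$2 $3"
    [ "$4" = "full" ] && cmd="$cmd -option autofullcopy=yes"
    printf '%s\n' "$cmd"
}

# Dry run: incremental-only resync, then the full-copy fallback variant.
resync_cmd src_ufs1 cs100 dst_ufs1
resync_cmd src_ufs1 cs100 dst_ufs1 full
```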
id = 88
name = src_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10:cs100
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = dst_ufs1 Mon Feb 7 07:25:11 EST 2005
ckpts = src_ufs1_repl_restart_2:cs100,src_ufs1_repl_restart_1:cs100
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
done
Starting replication...
operation in progress (not interruptible)...id = 126
name = dst_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
ckpts = root_restart_ckpt_88_2
ip_copies = src_ufs1:cs100
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
id = 88
name = src_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10:cs100
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = dst_ufs1 Mon Feb 7 07:25:11 EST 2005
ckpts = src_ufs1_repl_restart_2:cs100,src_ufs1_repl_restart_1:cs100
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
done
id = 140
name = root_new_ckpt_dst_ufs1
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp284
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= dst_ufs1 Mon Feb 7 07:34:31 EST 2005
used = 3%
full(mark)= 90%
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2 server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2 server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2 server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2 server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1 server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1 server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1 server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1 server=server_2
id = 88
name = src_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10:cs100
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = dst_ufs1 Mon Feb 7 07:34:31 EST 2005
ckpts = src_ufs1_repl_restart_2:cs100,src_ufs1_repl_restart_1:cs100
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
done
Deleting root_restart_ckpt_88_2...
id = 133
name = root_restart_ckpt_88_2
acl = 0
in_use = False
type = ckpt
worm = off
volume =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
Deleting root_new_ckpt_dst_ufs1...
id = 140
name = root_new_ckpt_dst_ufs1
acl = 0
in_use = False
type = ckpt
worm = off
volume =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
Operation complete
done
Note: This can only be done when both sites are operational.
Action
Note: Use this command on the site where the file system is read/write. In this example, the
destination file system is read/write because of the failover.
Output
fs_replicate -list
Local Source Filesystems
Id Source FlowCtrl State Destination FlowCtrl State Network
7 dst_ufs1 inactive active src_ufs1:cs100 inactive active alive
Note
Notice that the file system named dst_ufs1 is acting as the source file system (read/write) and the
file system named src_ufs1 is acting as the destination file system (read-only).
To initiate a reversal from the original destination site, use this command syntax:
[original_destination_site]$ fs_replicate -reverse
<dstfs>:cel=<cel_name> <srcfs>
where:
<dstfs> = current read-only file system name. This file system will become the read/write file
system.
<cel_name> = name of Celerra Network Server at the original source site.
<srcfs> = current read/write file system name. This file system will become the read-only file
system.
Example:
To reverse the direction of the replication process and return the src_ufs1 file system to read/write
and the dst_ufs1 file system to read-only, type:
$ fs_replicate -reverse src_ufs1:cel=cs100 dst_ufs1
id = 88
name = src_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10:cs100
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = dst_ufs1 Mon Feb 7 07:34:31 EST 2005
ckpts = src_ufs1_repl_restart_2:cs100,src_ufs1_repl_restart_1:cs100
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
done
• After this command completes, the original source file system is read/write and the original
destination file system is read-only. Replication is now running in the direction it was before the
failover. You can verify this by using the server_mount server_x command.
• When replication is reversed, default values are used for the replication policies. For example,
high water mark and timeout are set to 600. You can specify new policies when you restart
replication using -option <options>.
Note: Aborting replication does not delete the underlying file systems.
Note: Multiple fs_replicate -abort processes must be executed sequentially, not in parallel. Run
only one fs_replicate -abort command at a time.
Action
To abort replication on source and destination file systems simultaneously, use this command
syntax from the source site:
$ fs_replicate -abort <srcfs>,<dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the remote Celerra Network Server
Example:
To stop replication for the replication relationship, type:
[source_site]$ fs_replicate -abort src_ufs1,dst_ufs1:cel=cs110
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Action
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 4
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 192.168.168.18
source_port = 62817
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
done
id = 125
name = root_suspend_ckpt_126_5
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp246
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= src_ufs1 Tue Feb 8 13:51:01 EST 2005
used = 6%
full(mark)= 90%
delta_number= 5
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 4
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 786432 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 192.168.168.18
source_port = 62817
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
done
id = 126
name = dst_ufs1
type = playback
playback_state = active
high_water_mark = 0
time_out = 10
current_delta_set = 6
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 786432 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 192.168.168.18
source_port = 62817
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
done
name = dst_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Converting filesystem type
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
Operation complete
done
To verify that the replication relationship no longer exists, type either of the following
commands. The -list output no longer displays the replication pair:
$ fs_replicate -list
$ fs_replicate -info src_ufs1
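Checking that a pair has disappeared from the listing can be automated by scanning saved fs_replicate -list output for the file system name. A minimal sketch, assuming the listing format shown earlier in the text; the sample output below is illustrative, not captured from a live system.

```shell
# Hypothetical helper: succeed when the named file system does NOT appear in
# fs_replicate -list output read from stdin.
replication_absent() {
    # $1 = file system name
    ! grep -q "$1"
}

# Sample listing with no active pairs, mirroring the format shown earlier.
sample_list='Local Source Filesystems
Id   Source      FlowCtrl  State   Destination     FlowCtrl  State   Network'

if printf '%s\n' "$sample_list" | replication_absent src_ufs1; then
    echo "src_ufs1 is no longer replicating"
fi
```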
Output
id = 88
name = src_ufs1
fs_state = active
type = replication
replicator_state = inactive
source_policy = NoPolicy
high_water_mark = 600
time_out = 600
current_delta_set = 0
current_number_of_blocks = 0
flow_control = active
total_savevol_space = 1048576 KBytes
savevol_space_available = 0 KBytes (Before Flow Control)
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 146
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
communication_state = down
current_transfer_rate = ~ 0 Kbits/second
avg_transfer_rate = ~ 0 Kbits/second
source_ip = 0.0.0.0
source_port = 0
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
| Source | Destination
Delta|Create Time Dur Blocks|Playback Time Dur Blocks DSinGroup
-----|-------------- ------ ------|-------------- ------ ------ ---------
To restart a replication relationship, use this command syntax from the source site:
$ fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the destination Celerra Network Server
Example:
To restart a replication relationship, type:
$ fs_replicate -restart src_ufs1 dst_ufs1:cel=cs110
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
Starting replication...
operation in progress (not interruptible)...id = 88
name = src_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
ckpts = src_ufs1_repl_restart_2,src_ufs1_repl_restart_1
ip_copies = dst_ufs1:cs110
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Tue Feb 8 08:49:46 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
id = 115
name = root_new_ckpt_src_ufs1
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp246
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= src_ufs1 Tue Feb 8 08:50:13 EST 2005
used = 4%
full(mark)= 90%
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = rawfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Tue Feb 8 08:50:13 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
Deleting root_new_ckpt_src_ufs1...
id = 115
name = root_new_ckpt_src_ufs1
acl = 0
in_use = False
type = ckpt
worm = off
volume =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
Operation complete
done
Note: You cannot extend file systems using non-sliced volumes if replication is running.
When using Automatic File System Extension, the slice option must be enabled for an
Automatic Volume Management (AVM) pool when replication is running. Managing EMC
Celerra Volumes and File Systems with Automatic Volume Management describes how to
configure slices in detail.
Note: The virtual size of a source file system, but not that of a destination file system, is
visible from the NFS/CIFS client.
To automatically extend the size of a file system (setting a high water mark, maximum size, and
virtual provisioning), use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend yes -hwm <50-99>% -max_size
<integer>[T|G|M] -vp <yes|no>
where:
<fs_name> = name of the file system.
<50-99> = percentage of file system usage that must be reached before the file system is
automatically extended.
<integer> = file system’s maximum size (entered in TB, GB, or MB).
-vp = used with a specified Maximum Capacity value to report the anticipated or actual file system
size. When turned on, the virtual size is reported to clients.
Example:
To automatically extend the source file system src_ufs1 with a high water mark of 50% and a
maximum size of 70 MB with virtual provisioning enabled, type:
$ nas_fs -modify src_ufs1 -auto_extend yes -hwm 50% -max_size 70M -vp
yes
Output
id = 2707
name = pfs001
acl = 0
in_use = True
type = uxfs
worm = off
volume = v7283
pool = clarata_archive
member_of = root_avm_fs_group_10
rw_servers= 123secnfs
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=50%,max_size=16777216M,virtual_provision=yes
ckpts = pfs001_ckpt60,pfs001_ckpt61,pfs001_ckpt62,
pfs001_ckpt63,pfs001_ckpt6
stor_devs = APM00043200225-0029,APM00043200225-002C,
APM00043200225-0027,APM00043
disks = d35,d21,d34,d20,d32,d16,d30,d14,d33,d17,d31,
d15,d38,d41,d36,d22,d29,d13,d39,d42,d28,d12,d37,d23
disk=d35 stor_dev=APM00043200225-0029 addr=c16t2l9
server=123secnfs
[nasadmin@lnsgc123 nasadmin]$
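The high water mark arithmetic can be checked with a quick calculation. The following is an illustrative sketch, not a Celerra command; the total size value is assumed for illustration, and the 50% high water mark matches the example above:

```shell
# Sketch only: compute the usage level (in MB) at which auto-extension
# triggers, given a total size and a high water mark percentage.
# total_mb is a hypothetical value for illustration.
total_mb=102400
hwm_pct=50
trigger_mb=$(( total_mb * hwm_pct / 100 ))
echo "extension triggers at ${trigger_mb} MB used"
```

For a 102400 MB file system with a 50% high water mark, extension triggers once 51200 MB are in use.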
Step Action
2. Find the size of the source and destination file system, and compare the values using this
command syntax:
$ nas_fs -size <fs_name>
The following example shows sample output:
total = 100837 avail = 99176 used = 1660 ( 1% ) (sizes in MB) (
blockcount = 209715200 )
volume: total = 102400 (sizes in MB) ( blockcount = 209715200 )
Note: Because the total, available, and used values are generated by the operating system
and are not updated until the destination file system is refreshed, they can differ from the
total volume data derived by the Control Station. Block counts, however, agree for the
same file system and can be used to accurately compare totals on the source and
destination file systems.
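Because block counts are the reliable basis for comparison, it can help to convert a blockcount into MB. The following is a sketch, not a Celerra command, assuming 512-byte blocks (which matches the sample output above, where 209715200 blocks correspond to 102400 MB):

```shell
# Sketch: extract the blockcount from sample nas_fs -size output and
# convert it to MB assuming 512-byte blocks, so source and destination
# sizes can be compared on block counts rather than usage totals.
line='volume: total = 102400 (sizes in MB) ( blockcount = 209715200 )'
blockcount=$(echo "$line" | sed 's/.*blockcount = \([0-9]*\).*/\1/')
size_mb=$(( blockcount * 512 / 1024 / 1024 ))
echo "$size_mb MB"
```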
3. To extend the source file system, type the following command from the source site. For
size, type the difference between source and destination file systems determined in step 2:
$ nas_fs -xtend <fs_name> size=<integer>[T|G|M] -option src_only
Note: Because replication always extends the destination first, the source size will never
be larger than the destination size. If you specify the wrong size when you run this
command, the command will report the error and you must rerun the command with the
correct size.
Action
id = 88
name = src_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v243
pool = clar_r5_performance
member_of = root_avm_fs_group_10
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
ckpts = src_ufs1_repl_restart_2,src_ufs1_repl_restart_1
ip_copies = dst_ufs1:cs110
stor_devs = APM00034000068-001F,APM00034000068-001E
disks = d21,d15
disk=d21 stor_dev=APM00034000068-001F addr=c16t1l14
server=server_2
disk=d21 stor_dev=APM00034000068-001F addr=c0t1l14
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c0t1l13
server=server_2
disk=d15 stor_dev=APM00034000068-001E addr=c16t1l13
server=server_2
Note: Both file systems have the same block # count (6291456).
The nas_fs -xtend command extends the destination file system and then the source file system. If
either file system fails to extend, an error message displays the cause. If neither file system
extends, the following sample output appears:
Error 5008: Remote command failed:
remote celerra = cs0
remote exit status = 5
remote error = 0
remote message = CLSTD : volume(s) are not available
Note: Step 3 of "Recover from Automatic File System Extension failure" on page 100 explains how
to extend both file systems.
If only the destination file system extends, the following output appears:
Error 5008: pfsAvolume(s) are not available
PFS extension failed. Please extend PFS with “nas_fs -xtend... -option
src_only”
Note: Go to step 3 of the "Recover from Automatic File System Extension failure" on page 100 to
extend only the source file system.
Step Action
1. Verify that the destination file system type is set to rawfs. If it is set to uxfs, convert the
destination file system from uxfs to rawfs by using this command syntax:
$ nas_fs -Type rawfs <dstfs> -Force
where:
<dstfs> = name of the destination file system
Example:
To convert the destination file system dst_ufs1 to rawfs, type:
$ nas_fs -Type rawfs dst_ufs1 -Force
Note: A read-only file system must be set to rawfs prior to extending a file system or
restarting a replication.
2. Extend the destination file system manually using the same size as the source file system
by using this command syntax:
$ nas_fs -xtend <dstfs> size=<integer>[T|G|M] -option <options>
where:
<dstfs> = name of the destination file system
<integer> = size of the destination file system in terabytes, gigabytes, or megabytes
<options> = comma-separated options, such as slice={y|n}, which specifies
whether the disk volumes used by the file system can be shared with other file systems
by using a slice
Example:
To extend the destination file system dst_ufs1 by 2 MB, using the slice option, to match the
source file system extension to the same size, type:
$ nas_fs -xtend dst_ufs1 size=2M slice=y
3. Restart the replication relationship by using this command syntax from the source site:
$ fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the destination Celerra Network Server
Example:
To restart a replication relationship, type:
$ fs_replicate -restart src_ufs1 dst_ufs1:cel=cs110
Step Action
1. Verify that the source file system is mounted read-only. If it is not, do so now using this
command syntax:
$ server_mount <movername> -option ro <srcfs> /<srcfs_mountpoint>
2. Convert the source file system from uxfs to rawfs using this command syntax:
$ nas_fs -Type rawfs <srcfs> -Force
3. Manually extend the source file system and match the extended destination file system
size using this command syntax:
$ nas_fs -xtend <srcfs> size=<bytes>
4. Convert the source file system back to uxfs using this command syntax:
$ nas_fs -Type uxfs <srcfs> -Force
5. Resynchronize the destination and original source file systems using this command
syntax:
$ fs_replicate -resync <srcfs>:cel=<cel_name> <dstfs>
6. Reverse the replication and return to the original configuration using this command syntax:
$ fs_replicate -reverse <dstfs> <srcfs>:cel=<cel_name>
Option Behavior
-modify To specify values for the source and destination sites, include both file systems in
the command syntax. The values take effect the next time a trigger for these
policies is reached. For example, if the high water mark policy is changed from
600 to 300, the trigger changes to 300 the next time replication reaches 600. If
you set source high water mark and time-out interval values without specifying
values for the destination, the source values are applied to the destination site.
Note: Multiple fs_replicate -refresh processes must be run sequentially, not concurrently.
Run only one fs_replicate -refresh command at a time.
To modify the replication policy for a source and destination file system, use this command syntax:
$ fs_replicate -modify <srcfs> -option hwm=<high_water_mark>,
to=<timeout>,dhwm=<high_water_mark>,dto=<timeout>
where:
<srcfs> = name of the source file system
hwm=<high_water_mark> = source high water mark policy in megabytes
to=<timeout> = source time-out policy in seconds
dhwm=<high_water_mark> = destination high water mark policy in megabytes
dto=<timeout> = destination time-out policy in seconds
Example:
To reset the high water mark for source and destination file systems, type:
$ fs_replicate -modify src_ufs1,dst_ufs1:cel=cs110 -option
hwm=300,dhwm=300
Output
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600 (Pending: 300)
time_out = 600
current_delta_set = 11
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 172.24.168.123
source_port = 62815
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
Note: All times are in GMT. Block size is 8 KBytes.
done
Any change to the time-out interval or high water mark takes effect when the next trigger point
is reached, so the Pending entry shown above is removed when the policy value changes.
Action
To change flow-control policies for a file system, use this command syntax:
$ fs_replicate -modify <fs_name>:cel=<cel_name> -option <options>
where:
<fs_name> = name of the source file system.
<cel_name> = name of the remote Celerra Network Server.
<options> = flow-control setting for the source file system. To freeze all I/O to the source file
system use autofreeze=yes. To allow users to continue read-only access to the source file
system use autoro=yes.
Example:
To freeze all I/O to the source file system, src_ufs1, specify the option autofreeze=yes. Type:
$ fs_replicate -modify src_ufs1 -option autofreeze=yes
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 29
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 192.168.168.18
source_port = 62819
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
done
Note
• Determine when the file system is in a read-only or freeze situation by using the fs_replicate -info
command and checking the fs_state field.
• When the file system is read-only, the fs_state field shows romounted; when no I/O is
allowed, it shows frozen.
Action
To specify the maximum bandwidth used for a replication session, use this command syntax:
$ fs_replicate -modify <fs_name>:cel=<cel_name> -option qos=<bandwidth>
where:
<fs_name> = name of the source file system
<cel_name> = name of the remote Celerra Network Server
qos=<bandwidth> = maximum bandwidth in kilobits per second
Example:
To set the maximum bandwidth for this replication session, type:
$ fs_replicate -modify src_ufs1 -option qos=8000
Note: This setting takes effect the next time data is sent across the IP network.
id = 126
name = dst_ufs1:cs110
type = playback
playback_state = active
high_water_mark = 600
time_out = 600
current_delta_set = 30
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 917504 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 13312 Kbits/second
source_ip = 192.168.168.18
source_port = 62819
destination_ip = 192.168.168.20
destination_port = 8888
QOS_bandwidth = 8000 kbits/sec
done
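As a sanity check on a qos value, the kilobits-per-second figure can be converted to megabytes per second. The following is an illustrative sketch, not a Celerra command, using the 8000 kbits/sec value from the example above:

```shell
# Sketch: convert a QOS_bandwidth value (kilobits per second, as reported
# in fs_replicate -info output) to megabytes per second.
qos_kbits=8000
mb_per_s=$(awk -v k="$qos_kbits" 'BEGIN { printf "%g", k / 8 / 1000 }')
echo "${mb_per_s} MB/s"
```

8000 kbits/sec corresponds to 1 MB/s (decimal).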
Action
id = 126
name = dst_ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v272
pool = clar_r5_performance
member_of = root_avm_fs_group_3:cs110
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
backup_of = src_ufs1 Tue Feb 8 13:53:59 EST 2005
stor_devs = APM00044603845-0008,APM00044603845-0007
disks = d8,d9
disk=d8 stor_dev=APM00044603845-0008 addr=c0t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c32t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c16t1l2
server=server_2
disk=d8 stor_dev=APM00044603845-0008 addr=c48t1l2
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c16t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c48t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c0t1l1
server=server_2
disk=d9 stor_dev=APM00044603845-0007 addr=c32t1l1
server=server_2
done
• When this command completes, the current read/write file system (src_ufs1) becomes read-only
and the current read-only file system (dst_ufs1) becomes read/write.
• If you try to run this command from the incorrect (read-only) side, this error message appears:
Error 2247: this command must be issued on the current source site:cs100
Output
$
id = 18
name = src
fs_state = active
type = replication
replicator_state = active
source_policy = NoPolicy
high_water_mark = 60000
time_out = 3600
current_delta_set = 92
current_number_of_blocks = 1
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 393216 KBytes (Before Flow Control)
id = 30
name = dest:eng168102
type = playback
playback_state = active
high_water_mark = 300
time_out = 600
current_delta_set = 87
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 393216 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 18140.4 Kbits/second
source_ip = 10.168.0.11
source_port = 59068
destination_ip = 10.168.0.180
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
| Source | Destination
Delta|Create Time Dur Blocks|Playback Time Dur Blocks DSinGroup
-----|-------------- ------ ------|-------------- ------ ------ ---------
91 06/04 10:10:52 0 1
90 06/04 10:08:54 0 1
89 06/04 09:58:54 0 1
88 06/04 09:58:43 0 1
87 06/04 09:48:43 9 14457
86 06/04 09:39:25 0 1 06/04 09:44:35 0 1 1
85 06/04 09:29:25 0 1 06/04 09:34:35 0 1 1
84 06/04 09:19:25 0 1 06/04 09:24:35 0 1 1
83 06/04 09:09:25 0 1 06/04 09:14:35 0 1 1
82 06/04 08:59:25 0 1 06/04 09:04:34 0 1 1
81 06/04 08:49:25 0 1 06/04 08:54:34 0 1 1
80 06/04 08:39:25 0 1 06/04 08:44:34 0 1 1
79 06/04 08:29:25 0 1 06/04 08:34:34 0 1 1
78 06/04 08:19:25 0 1 06/04 08:24:34 0 1 1
77 06/04 08:09:25 0 1 06/04 08:14:34 0 1 1
76 06/04 07:59:25 0 1 06/04 08:04:34 0 1 1
75 06/04 07:49:25 0 1 06/04 07:54:34 0 1 1
74 06/04 07:39:25 0 1 06/04 07:44:34 0 1 1
73 06/04 07:29:25 0 1 06/04 07:34:34 0 1 1
72 06/04 07:19:25 0 1 06/04 07:24:34 0 1 1
Note: All times are in GMT. Block size is 8 KBytes.
To play back all delta sets up to a specified delta-set number, use this command syntax:
$ fs_replicate -refresh dest -option playuntildelta=91
Note: In this example, the system plays back all delta sets up to 91. Any delta sets greater than
that number will not replay.
Output
id = 30
name = dest
type = playback
playback_state = active
high_water_mark = 300
time_out = 600
current_delta_set = 92
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 393216 KBytes (Before Flow Control)
done
Output
id = 18
name = src
fs_state = active
type = replication
replicator_state = active
source_policy = NoPolicy
high_water_mark = 60000
time_out = 3600
current_delta_set = 92
current_number_of_blocks = 1
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 393216 KBytes (Before Flow Control)
id = 30
name = dest:eng168102
type = playback
playback_state = active
high_water_mark = 300
time_out = 600
current_delta_set = 92
flow_control = inactive
total_savevol_space = 1048576 KBytes
savevol_space_available = 393216 KBytes (Before Flow Control)
communication_state = alive
current_transfer_rate = ~ 13312 Kbits/second
avg_transfer_rate = ~ 18140.4 Kbits/second
source_ip = 10.168.0.11
source_port = 59068
destination_ip = 10.168.0.180
destination_port = 8888
QOS_bandwidth = 0 kbits/sec
| Source | Destination
Delta|Create Time Dur Blocks|Playback Time Dur Blocks DSinGroup
-----|-------------- ------ ------|-------------- ------ ------ ---------
91 06/04 10:10:52 0 1 06/04 10:58:42 2 1 5
90 06/04 10:08:54 0 1 06/04 10:48:42 2 1 5
89 06/04 09:58:54 0 1 06/04 10:38:42 2 1 5
88 06/04 09:58:43 0 1 06/04 10:28:42 2 1 5
87 06/04 09:48:43 9 14457 06/04 10:18:42 2 14457 5
86 06/04 09:39:25 0 1 06/04 09:44:35 0 1 1
85 06/04 09:29:25 0 1 06/04 09:34:35 0 1 1
84 06/04 09:19:25 0 1 06/04 09:24:35 0 1 1
83 06/04 09:09:25 0 1 06/04 09:14:35 0 1 1
82 06/04 08:59:25 0 1 06/04 09:04:34 0 1 1
81 06/04 08:49:25 0 1 06/04 08:54:34 0 1 1
80 06/04 08:39:25 0 1 06/04 08:44:34 0 1 1
ID Description
0 Not an event
12 Last Delta Replayed on Destination Filesystem. Issued at the end of a failover and reverse.
Step Action
CAUTION
Do not edit anything in /nas/sys/, as these settings are overwritten with
each code upgrade.
Note: Do not change any other lines in this file without a thorough knowledge of the
potential effects on the system. Contact EMC Customer Service for guidance.
4. Change the parameter to represent the percentage of space you want to allocate.
Note: Changing this value does not require a Control Station restart.
Step Action
1. At each site, you can review the current passphrase using this command syntax:
$ nas_cel -info <cel_name>
where:
<cel_name> = name of Celerra Network Server
2. At each site, establish the new passphrase using this command syntax:
$ nas_cel -modify <cel_name> -passphrase <passphrase>
where:
<cel_name> = name of Celerra Network Server
<passphrase> = new secure passphrase to be used for the connection, which must be
6 to 15 characters long and be the same on both sides of the connection
Example:
# nas_cel -modify cs110 -passphrase nas_replication
operation in progress (not interruptible)...
id = 5
name = cs110
owner = 503
device =
channel =
net_path = 192.168.168.102
celerra_id = APM000446038450000
passphrase = nas_replication
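Because the passphrase must fall within the 6-to-15-character range and match on both sites, it can be worth checking the length before running nas_cel -modify on each side. This is an illustrative sketch, not part of the Celerra CLI; the value matches the example above:

```shell
# Sketch: validate a candidate passphrase length before calling
# nas_cel -modify on each site.
passphrase="nas_replication"
len=${#passphrase}
if [ "$len" -ge 6 ] && [ "$len" -le 15 ]; then
  echo "passphrase length $len is valid"
else
  echo "passphrase length $len is out of range" >&2
fi
```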
If restartable checkpoints do not exist, create them. If they do exist, but their
timestamps indicate they are not refreshing with the replication updates, check that
the names are correct and replication is healthy.
"Out-of-sync replication relationship" on page 20 provides more information about
these special checkpoints.
Controlling delta set size
Controlling the size of delta sets is integral to managing efficient replications.
Version 5.4 and later enforce an 8 GB delta-set limit. Manageable delta sets are
preferable to large deltas because:
◆ Large delta sets dramatically increase failover time (blockmap recovery).
◆ Large delta sets consume more operating system resources.
◆ Playback and create times for large delta sets do not rise proportionally with
size. Reasonably sized delta sets are created and replayed faster.
Enlarging SavVol size
Before beginning replication, determine whether the default source side SavVol
size (10 percent), as related to the file system size, is sufficient for the anticipated
delta sets.
Find the SavVol size by calculating 10 percent of the file system size obtained by
using nas_fs -size <fs_name>. If you previously changed the SavVol size, you can
learn its present size using the fs_replicate -info <fs_name> command and
checking the total_savevol_space field. To manage a large network outage or
account for brief intervals when the incoming modification rate significantly exceeds
the network’s ability to send changes to the destination site, you can increase the
size of the replication SavVol. For example, a 500 GB file system that incurs 20 GB
of change daily will, with a 50 GB SavVol, accommodate approximately two and
one-half days of outage. If replication has already begun, change the SavVol size
as follows:
Note: If you are starting replication, specify the SavVol size rather than use the default
value.
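The sizing arithmetic above reduces to a simple division: the number of outage days a SavVol can absorb is its size divided by the daily change rate. A sketch (not a Celerra command), with values matching the 500 GB example (20 GB of change per day, 50 GB SavVol):

```shell
# Sketch: estimate how many days of outage a SavVol can absorb.
savvol_gb=50
daily_change_gb=20
days=$(awk -v s="$savvol_gb" -v d="$daily_change_gb" 'BEGIN { printf "%.1f", s / d }')
echo "approximately $days days of outage coverage"
```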
3. Restart replication with the revised SavVol and old parameters using this command
syntax:
$ fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
savsize=<MB> -sav <srcsavvol_name> -option to=<value>,hwm=<value>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the Celerra Network Server for the file system
savsize=<MB> = size of the SavVol in MB
<srcsavvol_name> = name of the source file system SavVol
-option to=<value> = time-out interval in seconds
-option hwm=<value> = high water mark in MB
Note: This setting may actually increase network traffic. Make the change cautiously,
recognizing it might not improve performance.
◆ To ensure that a stable network transfer rate for delta-set transfers on a Data
Mover, use a dedicated network port.
◆ Correctly set the TCP window size for network latency. Configuring the
tcpwindow parameter sets the window size used by replication (and fs_copy).
This value indicates the data load that can be sent before acknowledgment by
the receiving site. Increasing the value is most effective on high-latency links.
Window size is calculated by multiplying the round-trip delay by the desired
transfer rate.
For example, to send 10 MB/s across an IP network with a round-trip delay of
100 ms, a window size of 1 MB (0.1 sec x 10 MB/s = 1 MB) is needed. Use
server_param <movername> -facility rcp tcpwindow=bytes to configure the
setting.
EMC Celerra Network Server Parameters Guide provides more information on
setting tcp and rcp facilities.
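The window-size calculation above (transfer rate multiplied by round-trip delay) can be sketched as follows; the 10 MB/s and 100 ms values are taken from the example, and this is an illustrative calculation rather than a Celerra command:

```shell
# Sketch: compute the TCP window size needed to sustain a given
# transfer rate over a link with a given round-trip delay.
rate_mb_per_s=10
rtt_ms=100
window_mb=$(awk -v r="$rate_mb_per_s" -v t="$rtt_ms" 'BEGIN { printf "%g", r * t / 1000 }')
echo "required window: ${window_mb} MB"
```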
Note: If you restart a replication session and there are unmounted checkpoints, a full data
copy will be initiated instead of a differential copy.
Step Action
2. Create a checkpoint of the source file system using this command syntax:
$ fs_ckpt <srcfs> -Create
where:
<srcfs> = name of the source file system
3. Copy the checkpoint to the destination file system using this command syntax:
$ fs_copy -start <src_ckpt> <dstfs>:cel=<cel_name>
where:
<src_ckpt> = source checkpoint copied to the destination
<dstfs> = name of the destination file system
<cel_name> = name of the destination Celerra Network Server
4. Convert the destination file system to rawfs using this command syntax:
$ nas_fs -Type rawfs <dstfs> -Force
where:
<dstfs> = name of the destination file system
5. Start replication from the source to destination file system using this command syntax:
$ fs_replicate -start <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the destination Celerra Network Server
6. Create a second checkpoint of the source file system using this command syntax:
$ fs_ckpt <srcfs> -Create
where:
<srcfs> = name of the source file system
8. Check the copy’s progress and completion using this command syntax:
$ fs_copy -info <srcfs>
or
$ fs_copy -list
where:
<srcfs> = name of the source file system
Note: You cannot use nas_fsck if the destination file system is corrupted as a result of an
improper fs_copy operation. File system replication fails due to pending nas_fsck.
Step Action
1. Modify time-out and HWM values to zero using this command syntax:
$ fs_replicate -modify <srcfs> -option hwm=0,to=0
where:
<srcfs> = name of the source file system
Note: Setting trigger points to zero causes Celerra to keep replication active and track
changes, but not cut delta sets.
2. Run nas_fsck on the source file system to replicate and replay changes on the destination
file system using this command syntax:
$ nas_fsck -start <srcfs>
where:
<srcfs> = name of the source file system
Note: Running nas_fsck repairs corruption on the source file system, bringing it into a
consistent, but not original, state. While nas_fsck runs, the file system is not mounted to
avoid system instability. When the command is complete and inconsistencies addressed,
the file system is brought back online.
3. Revert to your previous time-out and HWM values using this command syntax:
$ fs_replicate -modify <srcfs> -option hwm=<value>,to=<value>
where:
<srcfs> = name of the source file system
hwm=<value> = original high water mark value in MB
to=<value> = original time-out interval in seconds
Step Action
Step Action
Note: The suspend operation lets Celerra track changes in a SavVol that automatically
expands. Less total storage is needed because tracking is done only once in the
checkpoint SavVol, not twice in the checkpoint and replication SavVol. Also, checkpoints
retain only one overall changed block instead of one per delta set.
2. After all replication sessions are suspended, check that the sessions you suspended no
longer appear in the fs_replicate -list output by typing:
$ fs_replicate -list
Note: Use this command on source and destination Celerra systems to verify that no
sessions are running.
3. Verify the size of the suspend checkpoint to ensure that there is enough disk space to
expand the SavVol. The suspend checkpoint, root_suspend_ckpt, is used to restart
replication and is added by the -suspend option. Verify the size by typing:
$ nas_fs -size root_suspend_ckpt
4. Restart replication after the outage is over using this command syntax:
$ fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of the destination Celerra Network Server
Note: If replication failed and corrupted the destination file system, or you mistakenly
suspended replication, mounted the destination read/write, and restarted replication, you
must abort the session and restart using a full fs_copy. See "Replication restart methods" on
page 128 for more information.
Step Action
2. Assign the file system type to default using this command syntax:
$ nas_fs -Type uxfs <dstfs> -Force
where:
<dstfs> = name of the destination file system
4. Mount the destination file system read-write using this command syntax:
$ server_mount <movername> -option rw <dstfs> /<dstfs_mountpoint>
where:
<movername> = name of the Data Mover
<dstfs> = name of the destination file system
<dstfs_mountpoint> = point at which the file system is mounted
5. Perform DR testing with an appropriate program after both sites are read/write. A
number of diagnostics help to ensure that your database can start up correctly, such as:
• Reading/writing to a single file or every file
• Creating new files
• Modifying existing files
• Reading or deleting file systems
6. Make the now-writable destination file system available to clients either by exporting it
through NFS, or sharing it through CIFS (assuming a CIFS server is active on the
destination side) using this command syntax:
$ server_export <movername> -Protocol nfs -option <options>
<pathtoexport>
or
$ server_export <movername> -Protocol cifs -name <sharename>
-option <options> <pathtoshare>
where:
<movername> = name of the Data Mover
7. Unmount the destination file system after testing ends using this command syntax:
$ server_umount <movername> -perm <dstfs>
where:
<movername> = name of the Data Mover
<dstfs> = name of the destination file system
8. Mount the destination file system as read-only using this command syntax:
$ server_mount <movername> -option ro <dstfs> /<dstfs_mountpoint>
where:
<movername> = name of the Data Mover
<dstfs> = name of the destination file system
<dstfs_mountpoint> = point at which the file system is mounted
9. Convert the destination file system to rawfs after the restore ends using this command
syntax:
$ nas_fs -Type rawfs <dstfs> -Force
where:
<dstfs> = name of the destination file system
10. Restart replication at the source site using this command syntax:
$ fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
where:
<srcfs> = name of the source file system
<dstfs> = name of the destination file system
<cel_name> = name of Celerra Network Server for the file system
Note: It is highly recommended that slice volumes be used to create and extend file
systems for this fs_copy solution. Managing EMC Celerra Volumes and File Systems with
Automatic Volume Management provides more information about slice volumes.
1. Copy a file system to multiple destinations with fs_copy and create a checkpoint of the
source file system using this command syntax:
$ fs_ckpt <srcfs> -name <src_ckpt1> -Create
where:
<srcfs> = name of the source file system
<src_ckpt1> = first checkpoint on source
Note: If the primary file system extends before checkpoint creation, the fs_copy command
will fail. In this case, you must extend the destination file system manually to keep file
system sizes identical. First convert the file system to rawfs using the nas_fs -Type rawfs
command. Then use the nas_fs -xtend command. If a slice volume is not used, an
incremental copy might fail and you might need to run a full fs_copy from scratch when the
source file system is extended.
2. Copy it to the destination A file system after the checkpoint is created using this command
syntax:
$ fs_copy -start <src_ckpt1> <dstfs>:cel=<cel_name>
where:
<src_ckpt1> = first checkpoint on source
<dstfs> = destination name for file system A
<cel_name> = name of Celerra Network Server for file system A
Note: While an fs_copy is running, the destination file system is inaccessible. To make it
accessible, you must create a checkpoint before the data transfer starts.
3. After the fs_copy operation ends, create a checkpoint of destination file system using this
command syntax:
$ fs_ckpt <dstfs> -name <dst_ckpt1> -Create
where:
<dstfs> = destination file system name A
<dst_ckpt1> = first checkpoint on destination file system A
6. Perform a differential copy between checkpoints 1 and 2 using this command syntax:
$ fs_copy -start <src_ckpt2> <dstfs>:cel=<cel_name> -fromfs
<src_ckpt1>
where:
<src_ckpt2> = second checkpoint on the source
<dstfs> = destination name for file system A
<cel_name> = name of Celerra Network Server for file system A
<src_ckpt1> = first checkpoint on the source
7. After the fs_copy command completes again, refresh the first checkpoint of destination file
system A using this command syntax:
$ fs_ckpt <dst_ckpt1> -refresh
where:
<dst_ckpt1> = first checkpoint on destination file system A
8. Perform steps 2 through 7 for destinations B and C. The source file system is now saved
on destinations A, B, and C.
9. To refresh the copy of the source file system at destinations A, B, and C, refresh the first
source checkpoint using this command syntax:
$ fs_ckpt <src_ckpt1> -refresh
where:
<src_ckpt1> = first checkpoint on source
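The per-destination sequence in steps 2 through 7 can be driven by a simple loop. This is only a sketch: the destination and Celerra names (dstA:celA, and so on) are hypothetical placeholders, and the fs_copy commands are echoed rather than executed:

```shell
# Sketch: build the fs_copy command for each destination in turn.
# All names here are hypothetical; replace them with real file system
# and Celerra Network Server names.
src_ckpt1="src_ckpt1"
for dest in dstA:celA dstB:celB dstC:celC; do
  fs=${dest%%:*}     # destination file system name
  cel=${dest##*:}    # destination Celerra Network Server name
  echo "fs_copy -start ${src_ckpt1} ${fs}:cel=${cel}"
done
```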
[Figure CNS-000767: Transporting data by disk or tape. A file system on a CLARiiON or
Symmetrix storage system at the source site is copied to disk or tape and transported by
truck to the CLARiiON or Symmetrix storage system at the destination site.]
Note: Use the IP network to make the initial copy of the root file system for a VDM.
Note: You can use any qualified CLARiiON storage system for this transfer.
Step Action
1. List the disks attached to the Celerra Network Server using the nas_disk -list command.
Keep this list to use for comparison later in this procedure.
2. Attach a supported CLARiiON array (for example a CX300) with the appropriately bound
LUNs to the Celerra Network Server.
This procedure assumes you will use a dedicated array. "Setting up the CLARiiON disk
array" on page 147 describes preparing the CLARiiON array.
5. List the disks that are attached to the Celerra Network Server using the nas_disk -list
command. Then perform a diff command between this list and the one created in step 1 of
"Capture data from the source site on disk" on page 141. For example:
> 378 n 260607 APM00034402893-0000 CLSTD d378 2
> 379 n 260607 APM00034402893-0001 CLSTD d379 2
6. Create a file system on the CLARiiON array that is the same size as the source file
system. To do so:
a. Create a user-defined pool by typing:
$ nas_pool -create -name transport_disks -volumes d378,d379
b. Create a file system by typing:
$ nas_fs -name transport_fs -type rawfs -create samesize=src
pool=transport_disks -option mover=server_2
c. Create a mountpoint for the file system.
d. Mount the file system read-only.
Note: Ensure that you create the file system as rawfs and use the samesize= option to
ensure that it is identical in size to the source file system.
When creating the pool, ensure that the disks are added in the same order on source and
destination sites. If you are creating more than one file system, ensure that they are
created in the same order on source and destination sites. EMC recommends that you
create the largest file system first.
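Steps 6a through 6d can be collected into one dry-run script. The commands are echoed rather than executed so the sequence is safe to review; the mountpoint path /transport_fs and the exact server_mountpoint and server_mount invocations for steps c and d are assumptions, so verify them against your command reference before running:

```shell
# Dry-run helper: print each Celerra command instead of executing it.
run() { echo "$@"; }

# a. Create the user-defined pool (disks in the same order as the source).
run nas_pool -create -name transport_disks -volumes d378,d379
# b. Create the rawfs file system, sized identically to the source.
run nas_fs -name transport_fs -type rawfs -create samesize=src \
    pool=transport_disks -option mover=server_2
# c. Create a mountpoint (path /transport_fs is an assumed example).
run server_mountpoint server_2 -create /transport_fs
# d. Mount the file system read-only.
run server_mount server_2 -option ro transport_fs /transport_fs
```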
8. Copy the source file system checkpoint to the file system created on the new disks by
typing:
$ fs_copy -start src_ckpt1 transport_fs -option convert=no,
monitor=off
11. Verify that the disks were removed by using the nas_disk -list command. The results you
obtain from this step should be the same as those derived from the first step of "Capture
data from the source site on disk" on page 141.
12. Disconnect and uninstall the CLARiiON array from the source site.
Step Action
1. List the disks that are attached to the Celerra Network Server using the nas_disk -list
command. Keep this list to use for comparison later in this procedure.
5. List the disks that are attached to the Celerra Network Server using the nas_disk -list
command. Then perform a diff command between this list and the one created in step 1 of
"Transfer data to the destination site from disk" on page 143. For example:
> 375 n 260607 APM00034402893-0000 CLSTD d375 1
> 376 n 260607 APM00034402893-0001 CLSTD d376 1
Note: Ensure that you create the file system as rawfs and use the samesize= option to
ensure that it is identical in size to the source file system. When creating the pool ensure
that the disks are added in the same order on source and destination sites. If you are
creating more than one file system ensure that they are created in the same order on
source and destination sites.
7. Create a destination file system (in this example the destination Celerra file system is
attached to a Symmetrix storage system). To do so:
a. Create a file system by typing:
$ nas_fs -name dest -type rawfs -create samesize=src:cel=eng16853
pool=symm_std
b. Create a mountpoint and mount the file system.
Note: Ensure that you create the file system as rawfs and use the samesize= option to
ensure that it is identical in size to the source file system.
8. Copy the file system on the transport disk array to the destination file system, created in
step 7 of "Transfer data to the destination site from disk" on page 143:
$ fs_copy -start transport_fs dest -option convert=no
9. The destination file system is now rawfs and contains a copy of the source file system
checkpoint.
10. Delete the disks from the transport disk array. To do so:
a. Unmount the file system transport_fs (server_umount)
b. Delete the mountpoint (server_mountpoint)
c. Delete the file system (nas_fs -delete)
d. Delete the pool (nas_pool -delete)
e. Delete the disks identified by the diff comparison in step 5 of "Transfer data to the
destination site from disk" on page 143 (nas_disk -delete).
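The cleanup in step 10 can likewise be sketched dry-run style. The disk names d375 and d376 come from the diff example in step 5; the exact nas_disk -delete form should be verified against your command reference before executing:

```shell
# Dry-run helper: print each cleanup command instead of executing it.
run() { echo "$@"; }

run server_umount server_2 /transport_fs              # a. unmount transport_fs
run server_mountpoint server_2 -delete /transport_fs  # b. delete the mountpoint
run nas_fs -delete transport_fs                       # c. delete the file system
run nas_pool -delete transport_disks                  # d. delete the pool
run nas_disk -delete d375                             # e. delete each disk found
run nas_disk -delete d376                             #    by the diff in step 5
```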
11. Verify that the disks were removed by using the nas_disk -list command. The results you
obtain from this step should be the same as those derived from step 1 of "Transfer data to
the destination site from disk" on page 143.
12. Disconnect and uninstall the CLARiiON array from the destination site.
13. Continue with the next step for setting up replication, Task 6: "Begin replication" on
page 48, in Using EMC Celerra Replicator (V1).
Note: This special backup is used only for transporting replication data.
CAUTION
Backing up file systems from a Unicode-enabled Data Mover and restoring to an
ASCII-enabled Data Mover is not supported. I18N mode (Unicode or ASCII) must be
the same on the source and destination Data Movers.
Step Action
Note: You can back up only a checkpoint of an IP replication read-only target file system
using the NDMP backup feature. If you attempt to back up the replication read-only target
file system itself, the NDMP backup fails while replication is applying changes. Celerra
Network Server version 5.5.27 and later supports NDMP backup of integrated checkpoints
and manually created checkpoints of a target replication file system.
2. Set the NDMP environment variable for your backup software. For example, set the
VLC=y NDMP environment variable before you run the backup.
The NDMP technical module for your particular backup software provides information
about environment variables. For information about how to set this variable, read your
backup software vendor’s documentation.
Note: The source file system and the checkpoint must be mounted on the NDMP Data
Mover.
3. Use your normal backup procedure to back up the source file system checkpoint.
Step Action
1. When the tapes are on the destination Celerra Network Server, create a file system (rawfs)
that is the same size as the source file system. Create the file system on a metavolume,
create a mount point, and then mount the file system. Managing EMC Celerra Volumes
and File Systems Manually describes how to create a file system.
Ensure that you create the file system as rawfs and use the samesize= option to ensure
that it is identical in size to the source file system.
2. Determine the volume number of the destination file system created in step 1. In this
example, the volume number for the rawfs file system is 66:
$ nas_fs -list
id inuse type acl volume name server
1 y 1 0 66 rawfs server_2
2 y 1 0 68 new server_3
3 y 1 0 70 last server_4
3. Using your normal NDMP restore procedure, restore the backup using the following as the
file system name:
/.celerra_vol_<fs_volume_ID>
where:
<fs_volume_ID> = volume number of the rawfs file system (66 in the example)
Note: The file system must be restored to the NDMP Data Mover.
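The restore path in step 3 is simply the string /.celerra_vol_ followed by the volume number. A one-line sketch using the example value 66:

```shell
# Build the NDMP restore path from the rawfs volume ID reported by
# nas_fs -list (66 in the example above).
vol_id=66
restore_path="/.celerra_vol_${vol_id}"
echo "$restore_path"    # /.celerra_vol_66
```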
4. The destination file system is now rawfs and contains the source file system checkpoint.
Start the replication between source and destination. Follow the procedure, Task 6: "Begin
replication" on page 48, to set up remote replication.
5. Create a second checkpoint. Follow the procedure, Task 7: "Create a second checkpoint
of the source file system" on page 50.
6. Perform an incremental copy and allow the destination system to convert to uxfs. Follow
the procedure, Task 8: "Copy incremental changes" on page 52.
Be sure to specify the -force option, for example:
$ fs_copy -start src_ufs1_ckpt2 dst_ufs1:cel=cs110 -fromfs
src_ufs1_ckpt1 -Force -option monitor=off
Note: Use the appropriate setup and cabling guide depending on the disk array used.
When configuring the CLARiiON disk array for transporting replication data, run the
appropriate setup script after setting up zoning for the network switches.
To prepare the CLARiiON disk array to receive the copy of the source file system:
◆ "Review the prerequisites" on page 147
◆ "Run the setup script" on page 149
◆ "Create data LUNs" on page 151
Step Action
1. Cable and zone the CXxxx disk array to the Celerra Network Server.
2. Ensure that the required software components are installed on the CLARiiON disk array:
• CXxxx Base Array (EMC FLARE®)
• EMC Navisphere®_ArrayAgent
• Navisphere Management UI
• EMC Access Logix™
Read the E-Lab Interoperability Navigator for the most recent Celerra software and FLARE
microcode compatibility specifications.
5. Create the PPP link by adding a modem and creating a connection as described in the
EMC CLARiiON CX300, CX500, and CX700 Storage Systems Initialization Guide.
Step Action
1. Using the NULL modem cable, connect the service computer to the SP A serial port.
Serial port
CNS-000760
4. Select Connect.
5. Open a Command Prompt Window by selecting Start > Programs > Accessories >
Command Prompt.
The Command Prompt window opens.
7. In the Command Prompt window, change to the \clariion directory on the CD-ROM by
typing the drive letter and then typing cd \clariion.
For example, type:
D:
cd \clariion
Checking 128.221.252.1
SP at 128.221.252.1 has responded. Waiting 60 seconds to
ensure it stays up.
SP at 128.221.252.1 is up.
Configuring cache
Cache Configured.
When the script completes, type exit to close the Command Prompt window.
Step Action
Output:
# nas//setup_clariion APM00035106458
1. CX_All_4Plus1_Raid_5
2. CX_Standard_Raid_5
3. CX_Standard_Raid_1
4. CX_Standard_Raid_5_Legacy
5. CX_Standard_Raid_1_Legacy
Please select a template in the range of 1-5 or 'q' to quit: 1
Summary:
2 disk group(s) are created. 8,9
5 spare(s) are created. 200,201,202,203,204
Enclosure(s) 0_0 are installed in the system.
Enclosure info:
----------------------------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
----------------------------------------------------------------
0_0: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
*8 *8 *8 *8 *8 *HS *9 *9 *9 *9 *9 *HS *HS *HS *HS
----------------------------------------------------------------
"*" indicates a diskgroup/spare which will be configured
Size Type Disks Spares
-------------------------
146 FC 15 5
Enclosure 0_0.
Created disk group 8, luns 16,17
Created spare 200
Created disk group 9, luns 18,19
Created spare 201
Created spare 202
Created spare 203
Created spare 204
Binding complete.
All luns are created successfully!
3. Enclosure info:
----------------------------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
----------------------------------------------------------------
0_0: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
8 8 8 8 8 HS 9 9 9 9 9 HS HS HS HS
----------------------------------------------------------------
Configuration completed!
4. Register the World Wide Names using the Connectivity Status window in Navisphere.
Note: Do not request a specific support representative unless one has already been
assigned to your particular system problem.
v: Volume ID of the source file system. In the above example, 363 is the volume ID.
ad: Address at the block level (1 block = 512 bytes) on the SavVol where the delta
set is created.
g: Chunk number on the SavVol. If all delta sets consist of one chunk, this number
is the same as the delta-set number.
nc: Number of chunks in the delta set. One chunk equals 128 MB.
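Given the sizes stated above (1 block = 512 bytes, 1 chunk = 128 MB), the number of SavVol blocks a chunk spans can be checked with a line of shell arithmetic:

```shell
# One chunk is 128 MB and one SavVol block is 512 bytes, so a chunk
# spans 128 * 1024 * 1024 / 512 = 262144 blocks.
blocks_per_chunk=$(( 128 * 1024 * 1024 / 512 ))
echo "$blocks_per_chunk"    # 262144
```

This is why ad: values in the server_log advance in large block-address increments from one chunk to the next.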
Table 14 on page 155 shows an example message generated in the server_log for
a Data Mover performing replication at the destination site:
Playback: v:361, Delta:3557, g:3557, ad:263168, nDelta:7
Table 14 Sample server_log message from destination site
Delta: First delta-set number in the group replayed to the destination file system.
g: First chunk number in the first delta set. If all the delta sets consist of one
chunk, this number is the same as the delta-set number.
ad: Address on the SavVol where the first delta set in the group is located.
nDelta: Number of delta sets in the group replicated to the destination file system.
4 Not applicable.
8 Not applicable.
Return code  Error ID  Message description
0 10246 This warning indicates that the time skew between the local
and remote Celerra may have exceeded 10 minutes or that
there is a passphrase mismatch between the local and remote
Celerra.
0 10247 This warning indicates that the remote Celerra could not be
reached with an http connection.
1 2102 This CLI usage error indicates that an invalid command option,
such as the file system name, Celerra system name, interface
name, or IP address, was specified.
6 2103 This CLI usage error occurred during argument verification and
indicates that an invalid command option, such as convert,
autofullcopy, qos, or resync, was specified.
6 2207 This error occurred when the transaction was aborted due to
an internal failure or abnormal termination of the command.
6 2227 This error indicates that the specified destination file system
has FLR status of Enterprise. FLR-C enabled file systems
(source or destination) are not supported for fs_copy or Celerra
Replicator (V1).
6 2241 This error indicates that the command executed on the remote
system failed.
6 2245 This error indicates that the file system type is invalid.
6 3128 This error indicates that the source file system was restored
from the checkpoint.
6 3134 This error indicates that the checkpoint specified is not newer
than the destination file system.
6 3136 This error occurred because the destination file system has
replication set up.
6 3138 This error indicates that the file system is already part of
another copy session.
6 3139 This error indicates that the file system is not part of a copy
session.
6 4019 This error occurred when polling the progress of the copy
session and indicates that the command failed to complete.
6 4103 This error occurred because the file system is not mounted.
6 4109 This error occurred because the file system is not mounted.
6 4424 This error occurred because fsck is being executed on the file
system.
6 4425 This error occurred because aclck is being executed on the file
system.
6 10233 This error indicates that the query executed on the remote
system failed.
6 10272 This error occurred because the destination file system could
not be found.
6 10274 This error occurred because either the user aborted the copy
session or there is a problem with the Data Mover.
6 10277 This error indicates that the checkpoint specified is older
than the replication configured on the source file system.
6 10311 This error indicates that the clean up process for the copy
session on destination failed.
6 10312 This error occurred because the size of the source and
destination file systems does not match.
7 10310 This error occurred because the system was unable to retrieve
copy session information from the Data Mover.
Field Description
current_delta_set Current delta set being processed. This reflects the current
delta set being tracked in memory.
current_number_of_blocks Number of modified blocks in the current delta set. One block
size is 8 KB.
Field Description
current_delta_set Lists the next delta set to replay to the destination file system.
outstanding delta sets The delta sets on the destination SavVol that the replication
service has not yet played back to the destination file system.
Field Description
current_transfer_rate The rate at which the last 128 MB of data was sent across the
IP network.
avg_transfer_rate The average rate at which the last 128 sets of data were sent
across the IP network.
When using the verbose option with the fs_replicate -info command, the replication
service generates this additional output. Table 20 describes these output fields.
Table 20 fs_replicate -info -verbose output fields (page 1 of 2)
Field Description
Create Time Date and start time the delta set was created.
Field Description
Playback Time Date and start time the delta set replayed to the destination file
system.
Dur Duration of time (in seconds) to replay the delta set or DSinGroup.
DSinGroup Number of delta sets in the group played back. In some instances,
the playback service can play back more than one delta set at
once. In this case, the Dur and Blocks fields refer to the group as a
whole, not an individual delta set.
B
bandwidth size
  changing policy 109
  modifying 109

C
cautions
  graceful shutdown 9
  serial replication sessions 9
  system 9
  Unicode to ASCII replication 9
Celerra Replicator
  cautions 9
  checking status of 55, 114
  log files 154
  restarting replication 89
  restrictions 7
  starting replication 48
  system requirements 22
  upgrade considerations 24, 25
checkpoint
  definition 6
Commands
  fs_replicate, flow control options 32
  nas_fs, calculate SavVol size 33
commands
  fs_ckpt, copy checkpoints 50
  fs_ckpt, creating 43
  fs_ckpt, using 59
  fs_copy, checkpoint copy 46
  fs_copy, copy checkpoints 45
  fs_copy, copy incremental changes 52
  fs_copy, events for 122
  fs_copy, using 46, 52
  fs_replicate, aborting 79
  fs_replicate, changing bandwidth 109
  fs_replicate, check status 55
  fs_replicate, control SavVol size 34
  fs_replicate, description of output fields 165
  fs_replicate, failover 64
  fs_replicate, failure 50
  fs_replicate, output definitions 165
  fs_replicate, resynchronize 68
  fs_replicate, reverse 76
  fs_replicate, starting 48
  fs_replicate, suspending 81
  fs_replicate, using 57

D
data flow control 31
delta set
  checking status 115
  definition 6
  minimum size 30
  overview 30
delta-set
  transport failure 156
disk transport 140

E
exit codes for fs_copy 157
extending file system
  after replication failover 104

F
failover
  initiate 64
  options 62
failure
  delta-set transport 156
failure, fs_replicate 50
file system
  automatically extending size 98
  events 121
  manually extending size 101
flow control
  freeze 107
  read-only 107
fs_copy return codes 157

H
high water mark
  definition 6
  resetting policy 105
HTTP communication 17

I
information, related 163
Initial Copy
  transporting by disk 144
initial copy
  transporting by disk 140
  transporting by tape 144
U
upgrading 24, 25
V
Virtual Data Mover, definition 7
virtual provisioning
definition 7