
SnapMirror

SnapMirror allows you to mirror volumes or qtrees.

FEATURES AND BENEFITS OF SNAPMIRROR


SnapMirror provides a fast and flexible enterprise solution for replicating data over local area, wide area,
and Fibre Channel networks. SnapMirror addresses multiple application areas, such as mission-critical
data protection and business continuance in case of a disaster.
Data migration from one storage system to another can be done without interrupting network service.
SnapMirror replication allows the distribution of large amounts of data to remote sites as a read-only
replica. Remote data access provides fast access to data by local clients.

If critical data is replicated to a different location, in case of a disaster at the source site, the replica can be
made available to clients across the network until the damage caused by the disaster is repaired.
Additionally, as the source data can be replicated at a time chosen by system administrators, the solution
minimizes network utilization.
SnapMirror is also used for backup offloading. SnapMirror technology attaches the off-site storage device
to the SnapMirror destination system, offloading tape backup overhead from production servers.

REPLICATION MODES
The Data ONTAP SnapMirror feature enables an administrator to replicate data either asynchronously or
synchronously.
The SnapMirror Async mode replicates Snapshot copies from a source volume or qtree to a destination
volume or qtree. Incremental updates are based on a schedule or are performed manually using the
snapmirror update command. Async mode works with both volume SnapMirror and qtree
SnapMirror.
SnapMirror Sync mode replicates writes from a source volume to a destination volume at the same time they
are written to the source volume. SnapMirror Sync is used in environments that have zero tolerance for
data loss.
SnapMirror Semi-Sync provides a middle-ground solution that keeps the source and destination systems
more closely synchronized than Async mode, but with less impact on performance.

VOLUME AND QTREE SNAPMIRROR

VOLUME SNAPMIRROR
Volume SnapMirror enables block-for-block replication. The entire volume, including its qtrees and all
the associated Snapshot copies, is replicated to the destination volume.

Volume SnapMirror can be initialized using a tape device, and volume replicas can be cascaded in a
series.
Volume SnapMirror can be asynchronous, synchronous, or semi-synchronous.
The source volumes are visible, accessible, and writable by the clients.
The destination volumes are visible, accessible, and read-only and usually on a separate system to which
the source volumes are replicated.

INITIAL TRANSFER AND REPLICATION

To initialize a volume SnapMirror relationship, you first have to restrict the destination volume in which the replica will reside.
During the baseline transfer, the source storage system takes a Snapshot copy of the volume. All data
blocks referenced by this Snapshot copy, including volume metadata such as language translation settings,
as well as all Snapshot copies of the volume, are transferred and written to the destination volume.
After the initialization completes, the source and destination file systems have one Snapshot copy in
common. Updates occur from this point and are based on the schedule specified in a flat-text
configuration file known as the snapmirror.conf file or by using the snapmirror update command.

REQUIREMENTS AND LIMITATIONS


With volume SnapMirror, the destination must run a version of Data ONTAP that is equal to or more
recent than the source. In addition, the source and destination must be on the same Data ONTAP release.
Volume SnapMirror replication can only occur with volumes of the same type: both traditional volumes
or both flexible volumes.
Volume SnapMirror requires that the size of the destination volume be equal to or greater than the size of
the source volume. Administrators can thin provision the destination so that it appears to be equal to or
greater than the size of the source volume.
The checksum type (block or zone checksum) of the source volume's disks must be identical to the
checksum type of the destination volume's disks.
NOTE: For traditional volumes, disk checksum type, size, and geometry must be identical: the
destination volume must contain the same number of disks, and disks of the same size, as the source
volume. This limitation applies only to traditional volumes.
Quotas cannot be enabled on the destination volume.
It is recommended that you allow a range of TCP ports from 10565 to 10569.

QTREE SNAPMIRROR
Qtree SnapMirror is a logical replication: all the files and directories in the source file system are created
in the destination qtree.
Qtree SnapMirror replication occurs between qtrees regardless of the type of the volume (traditional or
flexible). You can mirror qtrees using SnapMirror from a traditional volume to a flexible volume and
from a flexible volume to a traditional volume. Note that volume size and the disk geometry do not make
any difference for qtree SnapMirror.
Qtrees from different sources can be replicated to a destination and qtree SnapMirror replication can
occur between different releases of Data ONTAP.
With qtree SnapMirror, the source volume and qtree are online and writable. The destination qtree is read-only, while the destination volume remains writable and may contain replicated qtrees from multiple
source volumes, as well as qtrees or non-qtree data not managed by SnapMirror.
NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the destination
volume be equal to or greater than the size of the source qtree.

INITIAL TRANSFER AND REPLICATION


To initialize a qtree SnapMirror relationship, you do not need to create a destination qtree; the qtree is automatically created when
the baseline transfer starts. For the baseline transfer, qtree SnapMirror creates a Snapshot
copy of the source volume that contains the qtree to be replicated. This Snapshot copy contains all the
source volume data, including both the data in the qtree to be replicated and the metadata.
After the initialization completes, the source and destination file systems have one Snapshot copy in
common. Updates occur from this point and are based on the schedule specified in a flat-text
configuration file known as the snapmirror.conf file or by using the snapmirror update command.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have
changed and then looking through the changed inodes of the replicated qtree for changed data blocks.
Only new or changed blocks associated with the designated qtree are transferred to the destination. Qtree
SnapMirror does not transfer the Snapshot copy from the source to the destination. When the transfer
completes, qtree SnapMirror creates a Snapshot copy of the destination volume associated with the
replicated qtree.

REQUIREMENTS AND LIMITATIONS

Qtree SnapMirror is available in asynchronous mode only. The destination volume must contain 5% more
free space than the source qtree. The /etc qtree cannot be a destination qtree. Qtree SnapMirror cannot be
initialized using a tape device and does not support cascading of mirrors. Qtree SnapMirror performance
is impacted by deep directory structures and large numbers (tens of millions) of small files replicated.
To determine changed data, qtree SnapMirror looks at the inode file and defines which inodes are in the
qtree of interest and which inodes have changed. If the inode file is large, but the inodes of interest are
few, qtree SnapMirror spends a lot of time going through the inode file to find very few changes.

LICENSING SNAPMIRROR
SnapMirror requires a snapmirror license on both the source and the destination storage systems. If
the SnapMirror source and destination are on the same storage system, only one license needs to be installed.
SnapMirror Sync and Semi-Sync require an additional snapmirror_sync license; this license is free, and
the license key is published in the Data ONTAP Data Protection Online Backup and Recovery Guide on the NOW site.

CONFIGURATION PROCESS
The SnapMirror configuration process consists of the following four steps:
1. Install the SnapMirror license on the source and destination systems: license add <code>
2. On the source, specify the host name or IP address of the SnapMirror destination systems you wish to
authorize to replicate this source system.
options snapmirror.access host=dst_hostname1,dst_hostname2
3. For each source volume or qtree to replicate, perform an initial baseline transfer. For volume
SnapMirror, restrict the destination volume first: vol restrict dst_vol
Then initialize the volume SnapMirror baseline, using the following syntax on the destination:
snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree SnapMirror baseline transfer, use the following syntax on the destination:
snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
4. After the initial transfer completes, set the SnapMirror mode of replication by creating the
/etc/snapmirror.conf file in the destination's root volume.
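A minimal end-to-end sketch of these four steps, using the hypothetical systems fas1 (source) and fas2 (destination), a volume named vol1, and a placeholder license code; adapt the names and code to your environment:
fas1> license add XXXXXXX
fas2> license add XXXXXXX
fas1> options snapmirror.access host=fas2
fas2> vol restrict vol1
fas2> snapmirror initialize -S fas1:vol1 fas2:vol1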

SNAPMIRROR.CONF
The snapmirror.conf configuration file entries define the relationship between the source and the
destination, the mode of replication, and the arguments that control SnapMirror when replicating data.
The syntax for entries in the snapmirror.conf file is as follows:
src_system:/vol/src_vol[/src_qtree] dest_system:/vol/dest_vol[/dest_qtree] arguments schedule

The arguments field lets you define the transfer speed and the restart mode. In this field you can also
enable checksums, set the synchronicity level, and set the visibility interval. A dash (-) indicates that the
default values for all arguments apply. The schedule consists of four space-separated fields in order: minute,
hour, day_of_month, and day_of_week.
An asterisk (*) in a field matches all possible values for that field. A single dash (-) means never and prevents this
schedule entry from executing.
NOTE: SnapMirror updates can be scheduled to occur as frequently as every minute.

SNAPMIRROR.CONF FILE EXAMPLES


src:/vol/vol1/q1 dst:/vol/vol1/q1 - 15 * * *
The source qtree q1 is replicated to the destination qtree q1 at 15 minutes past every hour, every day of the
week, and every day of the month.
src:vol2 dst:vol2 kbs=2000 10 8,20 * *
The source volume vol2 is replicated to the destination volume vol2 at 10 minutes past 8:00 a.m. and 8:00
p.m. every day of the month and every day of the week. In other words, the source volume vol2 is
replicated at 8:10 a.m. and 8:10 p.m. Data ONTAP can use a maximum of 2,000 kilobytes per second to
transfer data.


MONITORING TRANSFER
Use the snapmirror status command either from the source or the destination storage system to
monitor the transfer progress, check relationship state and status, and check the transfer lag (the age of
the backup).
Command syntax:
snapmirror status [options] [[system:]path]
The value for options can be -l or -q
The -l option displays the long format of the output.
The -q option displays which volumes or qtrees are quiesced or quiescing.
The snapmirror quiesce command waits for all existing transfers to both volumes and qtrees to complete
and blocks any further updates. If a qtree is not in a stable state (is in transition), the snapmirror
quiesce command forces it into a stable state. You can quiesce only volumes and qtrees that are online
and that are SnapMirror destinations. You cannot quiesce a restricted or offline volume or a qtree in a
restricted or offline volume.
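For example, on a hypothetical destination system, the following sequence displays a relationship in detail, quiesces the destination volume before maintenance, and then resumes transfers (the volume name vol1 is illustrative):
dst> snapmirror status -l vol1
dst> snapmirror quiesce vol1
dst> snapmirror resume vol1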


LISTING SNAPSHOT COPIES


Use the snap list command to list all Snapshot copies, including the SnapMirror Snapshot copies
stored in the source and the destination volumes.
SnapMirror Snapshot copies are distinguished from system Snapshot copies by a more elaborate naming
convention. The default name of a SnapMirror volume Snapshot copy is as follows:
dest_system(sysid)_name.number
dest_system is the host name of the destination storage system
sysid is the destination system ID number
name is the name of the destination volume
number is the number of successful transfers for the Snapshot copy, starting at 1. Data ONTAP
increments this number for each transfer.
The snap list command displays the keyword snapmirror next to the Snapshot copies that SnapMirror requires.
CAUTION: Do not delete Snapshot copies that SnapMirror creates in the source volume. The most
recent SnapMirror Snapshot copy is referred to as the newest common Snapshot copy. Incremental
changes to the destination depend on this Snapshot copy. If SnapMirror cannot find the required Snapshot
copy on the source, it cannot perform incremental changes to the destination. The affected relationship
will have to be reinitialized.
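For example, on a hypothetical destination system named fas2 with system ID 0099909262 and a destination volume vol1, the fifth successful transfer would leave a Snapshot copy named:
fas2(0099909262)_vol1.5
The system ID shown here is purely illustrative; Data ONTAP fills in the actual values.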

LOG FILES
The SnapMirror logs record whether the transfer finished successfully or failed. If there is a problem with
the updates, it is useful to look at the log file to see what has happened since the last successful update.
The logs include the start and end of each transfer, along with the amount of data transferred.
Use the option snapmirror.log.enable to record SnapMirror data transfer logs. By default, the
option is on.
options snapmirror.log.enable [on|off]
Log files are stored in the root volume of the source and the destination storage systems, in the
/etc/log directory.

A new log file is generated every week as snapmirror.0. Older log files are renamed snapmirror.[1-5] and
the oldest log file is deleted.
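Assuming default settings, you can display the current log on either system with the rdfile command (the path shown is an example):
system> rdfile /etc/log/snapmirror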

NEARSTORE PERSONALITY
NearStore Personality allows you to utilize FAS systems as secondary systems. This feature requires the
nearstore_option license.
When enabled, the nearstore_option license increases the number of possible concurrent
destination qtree SnapMirror and SnapVault replications by optimizing the transfer resources required
for those replications. This license should not be installed on storage systems that are intended to
handle primary application workloads.


CONCURRENT TRANSFERS
Each storage system model supports a maximum number of simultaneous replication operations.
In Data ONTAP 7.3, the volume SnapMirror and qtree SnapMirror maximum stream counts are increased
for certain platforms. This enhancement allows customers to accommodate the use of large numbers of
flexible volumes and multiple concurrent transfers.
The new concurrent stream counts apply only to the snapmirror initialize and the snapmirror update Data
ONTAP commands.

MANAGING TRANSFERS
In Data ONTAP 7.3, you can specify the number of volume SnapMirror transfers for which you want
resources by using the following Data ONTAP option:
options replication.volume.reserved_transfers <n>
<n> is the number of volume SnapMirror transfers for which you want resources reserved.
The default value is 0.
Note that when transfers are reserved, the reserved resources are not available for other replication
types, such as qtree SnapMirror or SnapVault transfers.
The stream count setting was increased in Data ONTAP 7.3. You can revert back to the previous stream
count setting by using the following option:
options replication.volume.transfer_limits [current | previous]
If the value is set to current, the maximum for the current release will be used. If the value is set to
previous, the maximum for Data ONTAP 7.2.0 will be used. The default value is current.


THROTTLING NETWORK
Network throttling can be configured on a per-transfer basis, using the kbs argument in the
snapmirror.conf file.
Dynamic throttle allows you to change the throttle value for a SnapMirror relationship while the transfer
is active. This feature is available from Data ONTAP 7.1 and later.
snapmirror throttle <n> dst_hostname:dst_path
<n> is the new throttle value in kilobytes per second
System-wide throttling is available from Data ONTAP 7.2 and later and limits the total bandwidth used
by all transfers at any time (SnapMirror and SnapVault transfers).
There are three options.
Enable or disable system-wide throttling on all systems: replication.throttle.enable [on|off]
Set maximum bandwidth for all incoming transfers: replication.throttle.incoming.max_kbs <value>
Set maximum bandwidth for all outgoing transfers: replication.throttle.outgoing.max_kbs <value>
The default value is unlimited, which means there is no limit on total bandwidth used. Valid transfer rate
values are 1 to 125,000 kilobytes per second.
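For example, to cap all outgoing replication traffic at 10,000 KB/s system-wide and then lower one active qtree transfer to 2,000 KB/s (host and path names are illustrative):
options replication.throttle.enable on
options replication.throttle.outgoing.max_kbs 10000
snapmirror throttle 2000 dst:/vol/vol1/q1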


SPACE GUARANTEE
In Data ONTAP 7.3 and later, space is guaranteed for SnapMirror destination flexible volumes. This new
feature preallocates space in the aggregate for the volume. However, it is still possible for transfers to fail
when the aggregate is full. When you create a flexible volume, by default its volume guarantee is enabled.
Support for space guarantee on the SnapMirror destination volume allows for maintenance of this
guarantee after the initial baseline transfer is completed. The destination volume is set to an internal
guarantee type called Replica (RAID label). The guarantee is also maintained when the volume is offline.
When the SnapMirror destination storage system is upgraded to Data ONTAP 7.3 or later, you can enable
or disable the space guarantee on the existing SnapMirror destination volumes using the vol options
command.
Note that file guarantee is not supported on SnapMirror destination volumes.
dst> vol options vol_name guarantee [volume | none]
An upgraded_replica volume option is displayed as part of the vol status command output to indicate
whether a volume is an upgraded SnapMirror destination.

SYNCHRONOUS SNAPMIRROR

SYNC MODE

SnapMirror in synchronous mode is a mode of replication that sends updates from the source to the
destination as they occur, rather than according to a predetermined schedule. This guarantees that data
written on the source system is protected on the destination even if the entire source system fails.
SnapMirror Sync is built on asynchronous volume SnapMirror. The first step in SnapMirror Sync
mode replication is a one-time baseline transfer of the source volume. When the baseline transfer is
completed, SnapMirror transitions through a series of states, becoming more and more synchronous until
the relationship reaches the synchronous state.
To configure SnapMirror Sync mode, the schedule field is replaced by sync in the snapmirror.conf
configuration file.
Example:
SystemA:vol1 SystemB:vol1 sync
Note that when changes are made to the snapmirror.conf file for a SnapMirror Sync entry, the SnapMirror
relationship will go out of sync momentarily and then attempt to return to synchronous state.

CP AND NVLOG FORWARDING


SnapMirror Sync forwards all consistency point (CP) writes to the destination to keep consistency
between the source and the destination volumes.
A CP is generally taken when the NVRAM is half full, when the timer generates a CP (every 10 seconds),
or when a Snapshot copy is created, although other events can also cause a CP.
Before Data ONTAP 7.2.2, the source CP would not complete until the destination had completed its CP.
With Data ONTAP 7.2.2 and later, CPs are not synchronized; instead, they are forwarded to the destination and the data
is saved in memory.
SnapMirror Sync forwards the NVRAM logs to the destination to ensure that NVRAM operations on
the source are replicated to the destination for replay in case of a disaster on the source system.
NVLOG data is treated as a stream of writes to a pair of special files, named the NVLOG files:
/etc/sync_snapmirror_nvlog/<dstfsid>.log[0|1]


Before Data ONTAP 7.2.2, NVLOG files are written in the root volume of the destination system. With
Data ONTAP 7.2.2 and later, NVLOG files are written in the parent aggregate of the destination volume.

THEORY OF OPERATION
Client writes are acknowledged after the writes have been logged to the NVRAM on the source and to the
NVLOG files on the destination. Because the source waits for each transaction to be acknowledged from
the destination before moving forward, it may impact client application performance.
The source system receives a write request from a client. The request is journaled in the system's
NVRAM and recorded in cache memory.
The request and the NVLOG metadata are forwarded to the SnapMirror destination system where they are
also journaled in NVRAM and cache memory.
The destination responds to the source system.
Data ONTAP acknowledges the write to the client system, and the application that requested the write is
free to continue processing.
When a consistency point is triggered, Data ONTAP uses the transaction data in cache memory to build a
list of data block changes that need to be written to disk.
This list of data blocks is sent to the destination, which initiates its own write to disk and returns an
acknowledgment to the WAFL software on the source system.


SEMI-SYNC MODE
Versions of Data ONTAP earlier than 7.3 allowed Semi-Sync mode to be configured with different
synchronicity levels, trading synchronicity against performance.
The field in the configuration file that controls the synchronicity level is the outstanding argument. This
variable allows you to modify the amount of time or the number of operations a destination waits before
sending a write acknowledgment to the source.
outstanding={x ops | x ms | x s}
Default is no value: SnapMirror operates in a fully synchronous manner.
The ops suffix allows x number of outstanding write operations before forcing the clients to wait for an
acknowledgment.
s or ms defines the amount of time (seconds or milliseconds) a destination waits before sending a write
acknowledgment.
When the outstanding value is less than 10 seconds, the source forwards the NVLOG like it would in
sync mode, but it does not wait for the acknowledgment from the destination system. This provides
performance improvement for the client writing to the source volume. However, there is a possibility of
data loss during that interval should the source crash prior to forwarding the NVLOG.

When the outstanding value is set to 10 seconds or more, only the CP streams are forwarded to the
destination; NVLOG data is not forwarded. Eliminating NVLOG forwarding reduces the load on the
storage systems.
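A pre-7.3 snapmirror.conf entry using the outstanding argument might look like the following, with hypothetical host and volume names; here the destination may lag the source by up to roughly five seconds of writes:
src:vol1 dst:vol1 outstanding=5s sync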

SEMI-SYNC MODE (CONT.)


In Data ONTAP 7.3, the outstanding argument is deprecated. To configure Semi-Sync mode, replace the
schedule field with semi-sync in the snapmirror.conf file.
Example:
src:vol1 dst:vol1 semi-sync
Only the CP streams are forwarded to the destination. NVLOG data is not forwarded. This mode
minimizes performance impact on client applications, reduces storage systems processing power, and
saves bandwidth.


DEPLOYMENT EXAMPLES
Example 1: Qtree SnapMirror
src:/vol/vol1/q1 dst:/vol/vol1/q1 - 15 * * *
The source qtree q1 is replicated to the destination qtree q1 at 15 minutes past every hour, every day of the week and
every day of the month.
Example 2: Volume SnapMirror
src:vol2 dst:vol2 kbs=2000 10 8,20 * *
The source volume vol2 is replicated to the destination volume vol2 at 10 minutes past 8:00 a.m. and 8:00
p.m. every day. In other words, the source volume vol2 is replicated daily at 8:10 a.m. and 8:10 p.m. Data
ONTAP can use a maximum of 2,000 kilobytes per second to transfer data for this relationship.
Example 3: SnapMirror Sync
src:vol3 dst:vol3 sync
The source volume vol3 is synchronously replicated to the destination volume vol3.
Example 4: SnapMirror Semi-Sync
src:vol4 dst:vol4 semi-sync
The source volume vol4 is replicated to the destination volume vol4 in Semi-Sync mode. Only the CP forwarding mechanism is used; NVLOG data is not forwarded to the destination.


PERFORMANCE TUNING
In Sync mode, changes are shown on the destination only after the source takes a Snapshot copy of the
source volume (every three minutes by default) and then transfers it to the destination. To control the view
of the data on the destination, you use the visibility_interval argument in the snapmirror.conf
file. If visibility_interval is set too low, the source system will be kept very busy creating
Snapshot copies and this can impact performance. Changing the visibility_interval default
value of three minutes is not recommended.
In versions of Data ONTAP earlier than 7.2.2, NVLOG data is written in the root volume. This may
affect how quickly NVLOG data can be committed to disk. For traditional volumes, ensure that the
root volume spans enough disks. Note that with Data ONTAP 7.2.2 and later, the NVLOG files are
written in the destination volume's parent aggregate.
A checksum algorithm is used to protect data transmitted by SnapMirror. Cyclic redundancy check
checksums, also known as crc32c, are computed by the CPU on the destination storage system and may
have undesired effects on performance. TCP checksums are computed directly on the network interface
card, or NIC, and are less CPU-intensive; therefore, TCP checksum computation is the recommended
method.
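For illustration, a synchronous snapmirror.conf entry that raises the visibility interval to five minutes (host and volume names are hypothetical) could look like this:
src:vol1 dst:vol1 visibility_interval=5m sync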


REQUIREMENTS AND LIMITATIONS


SnapMirror Sync can only be used on volumes, not qtrees.
SnapMirror Sync follows the same volume type matrix as volume SnapMirror. Replication must be from
traditional volume to traditional volume or flexible volume to flexible volume.
Bidirectional SnapMirror Sync is supported on storage systems using Data ONTAP 7.2.1 and later.
Replications are allowed only between identical storage system platforms running the same major Data
ONTAP release.
One source system cannot have SnapMirror Sync relationships to multiple destination systems.
Cascading sync relationships is not supported.
The source and destination of the SnapMirror Sync relationship cannot be on the same storage system,
such as: SystemA:src_vol->SystemA:dst_vol
A high availability configuration cannot have SnapMirror Sync relationships from one half of the
configuration to the other half of the configuration.


SNAPMIRROR BREAK SCENARIOS



In normal operations, primary data is being replicated from the source to the destination storage system.
You use the snapmirror break command when testing or when failing over to the destination
storage system.
You use the snapmirror resync command when resuming normal operations after a test or failover
scenario.
Re-establishing normal operations from failover mode involves taking care that production data is not
erased:
1. Because the source volume has been offline, the destination volume has more up-to-date data. You
must move the production data that has been written to the disaster recovery site over to the data center
storage system with a snapmirror resync command executed from the data center storage system.
The SnapMirror relationship is reversed at that point. The source is the new data written to the disaster
recovery site. The destination is the data center storage system that had been offline.
2. With the SnapMirror relationship reversed, the second step is to resume normal operations with a
snapmirror resync command executed from the disaster recovery site. The original source and
destination relationship will be restored and normal operations can resume.

NORMAL OPERATIONS

Normal operations have been established with the snapmirror initialize command executed
from the Disaster Recovery Site storage system.
The syntax of the command is:
destination>snapmirror initialize -S source:source_vol destination:destination_vol

TEST MODE
To test the mirror, you break the SnapMirror relationship, continue writing to the source, and test writing
to the destination volume.
The syntax of the command is:
destination>snapmirror break destination_vol


RESUME NORMAL OPERATIONS


The snapmirror resync command will resume a SnapMirror relationship and resynchronize the
contents of the source to the destination volume or qtree, without repeating the initial transfer.
The command is executed from the destination storage system. The snapmirror resync command
compares the newest common Snapshot copy and overwrites all new data on the destination volume or
qtree to replicate the data that is on the source. In test scenarios, you have not written valuable data to
the destination storage system that you need to preserve.
The syntax of the snapmirror resync command is:
destination>snapmirror resync destination_vol

DISASTER MODE
Disaster strikes. In this example, a backhoe has dug up the network cables that connect the Data Center to
clients. The Data Center volume (dc_vol) is unavailable.
From the Disaster Recovery Site, break the mirror and the SnapMirror replica becomes writable.
The syntax of the snapmirror break command is:
destination>snapmirror break destination_vol
After breaking the mirror, direct clients to the Disaster Recovery volume (dr_vol), and they continue
reading and writing their data.
The Data Center volume is offline and becoming out of date. The last shared Snapshot copy is preserved,
however. After the problem is fixed, a combination of snapmirror resync and snapmirror
break commands will help you to resume normal operations.


RE-ESTABLISH NORMAL OPERATIONS 1


With the problem fixed, you can now move the new production data to the Data Center with the
snapmirror resync command executed from the Data Center storage system.
The syntax of the snapmirror resync command is:
destination>snapmirror resync destination_vol
Executing the snapmirror resync command from the Data Center storage system has the effect of
reversing the direction of the SnapMirror relationship. The Data Center storage system is now the
destination storage system; the Data Center volume (dc_vol) is now the destination volume. The Disaster
Recovery Center volume is now the source of the new production data written while the Data Center
storage system was offline.
While the Data Center source volume is receiving data from the snapmirror resync operation,
clients are still accessing their data from the disaster recovery site.


RE-ESTABLISH NORMAL OPERATIONS 2


The next step in re-establishing normal operations is to stop user access to the disaster recovery volume
and complete the update of the Data Center volume with any production data written since the beginning
of the snapmirror resync command.
Execute the snapmirror update command from the Data Center storage system.
The syntax of the command is:
destination>snapmirror update -S source:source_volume destination_vol
(In our example, the Data Center storage system became the destination when the snapmirror
resync command was used to move the Disaster Recovery production data to the Data Center storage
system.)


RE-ESTABLISH NORMAL OPERATIONS 3


The Data Center volume (dc_vol) has all of the production data; however, at this point dc_vol is a read-only SnapMirror replica of dr_vol, the Disaster Recovery volume. You must now reverse the direction of
the SnapMirror relationship by breaking the mirror with the snapmirror break command executed
from the Data Center storage system.
The syntax of the snapmirror break command is:
destination>snapmirror break destination_vol

RE-ESTABLISH NORMAL OPERATIONS 4


The final step to resuming normal operations is to execute the snapmirror resync command from
the Disaster Recovery site.
The syntax of the snapmirror resync command is:
destination>snapmirror resync destination_vol
This final snapmirror resync command, executed from the Disaster Recovery Site, returns
SnapMirror to the original source and destination relationship.
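Using the hypothetical systems dc (Data Center) and dr (Disaster Recovery Site) with volumes dc_vol and dr_vol, the complete failback sequence sketched above could therefore be:
dc> snapmirror resync -S dr:dr_vol dc_vol
dc> snapmirror update -S dr:dr_vol dc_vol
dc> snapmirror break dc_vol
dr> snapmirror resync -S dc:dc_vol dr_vol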


ADVANCED FEATURES

SNAPMIRROR OVER MULTIPLE PATHS


SnapMirror supports up to two paths for a particular SnapMirror relationship. The paths can be Ethernet,
Fibre Channel, or a combination of Ethernet and Fibre Channel.
Multiple paths are supported by SnapMirror Async and Sync replication modes.
The two paths can be used in one of two modes:
Multiplexing mode: SnapMirror uses both paths at the same time, essentially load balancing the transfers.
If one path fails, the transfers occur on the remaining path. After the failed path is repaired, the transfers
resume using both paths.
Failover mode: SnapMirror uses the first specified path as the desired path and uses the second specified
path only after the first path fails.


CONFIGURING MULTIPLE PATHS


To implement multiple paths between the source and destination storage system, edit the snapmirror.conf
file to add a connection name line that defines the mode of the connection and what the two connections
are. Then, edit the schedule entry to reflect the new connection name as the source system.
In the following illustration, the source volume vol1 on the storage system FAS1 is synchronously
replicated to the destination volume vol1 on the storage system FAS2. Two gigabit Ethernet paths are
configured and replication occurs using both connections in multiplexing mode as specified in the
snapmirror.conf file.
FAS1_conf = multi (FAS1-e0a,FAS2-e0a) (FAS1-e0b,FAS2-e0b)
The first entry defines the connection name (FAS1_conf), the mode of the connection (multi) and what
the two connections are (FAS1-e0a connected to FAS2-e0a and FAS1-e0b connected to FAS2-e0b).
FAS1_conf:vol1 FAS2:vol1 sync
The second entry defines the sync relationship for volume vol1.
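To use the same two links in failover mode instead, only the mode keyword changes; an illustrative variant of the entries above:
FAS1_conf = failover (FAS1-e0a,FAS2-e0a) (FAS1-e0b,FAS2-e0b)
FAS1_conf:vol1 FAS2:vol1 sync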

SNAPMIRROR OVER FIBRE CHANNEL


SnapMirror over Fibre Channel enables you to use the SnapMirror Async and Sync features over a Fibre
Channel SAN environment.
SnapMirror over FC requires at least one Fibre Channel switch in the data path. The supported switch
vendors are Cisco or Brocade. To comply with SnapMirror over Fibre Channel certification, use only
switches from one vendor in the SnapMirror data path. Supported switches and firmware versions are
specified in Requirements for SnapMirror Over Fibre Channel Transport (Asynchronous, Synchronous,
and Semi-synchronous modes) on the NOW site.
The storage system must be configured with a NetApp X1024 FC HBA (two 2-Gb ports) or X1124 FC HBA (two 4-Gb ports) for IP over Fibre Channel functionality. The adapters translate SnapMirror IP packets to and
from FC frames and support multiple-path configurations.

To begin the SnapMirror over FC configuration process, first configure the FC NIC adapters and connect
the systems to the Fibre Channel switches. Optionally, you can set up multiple SnapMirror traffic zones.
SnapMirror traffic must be configured within dedicated zones. After the switches have been set up,
configure SnapMirror and baseline the initial transfers. For details on SnapMirror over Fibre Channel
configuration steps, refer to the latest Data ONTAP Data Protection Online Backup and Recovery Guide
on the NOW site.

SNAPMIRROR TO TAPE
SnapMirror to tape is a deployment that supports SnapMirror replication over low-bandwidth connections
by accommodating the initial transfer between the source and the destination systems using a physically
transported tape. After the baseline transfer has been carried out with the local tape device, incremental
SnapMirror updates can be performed over the network. The SnapMirror-to-tape function is available for
volume replication only.
On the source system using Data ONTAP 8.0 7-Mode and later, use the smtape backup command to
copy all volume Snapshot copies, including the base Snapshot copy, to tape. If more than one backup tape
is necessary, use the smtape continue command to continue the copying on a new tape. NOTE: For
systems using earlier versions of Data ONTAP, use the snapmirror store command to copy all volume
Snapshot copies to tape.
Physically transport the backup tapes from the source system to the destination system.
On the destination system, use the vol create and vol restrict commands to set up a
SnapMirror target volume.
For systems using Data ONTAP 8.0 7-Mode and later, use the smtape restore command to copy
the initial SnapMirror tape to the destination system. If the backup spans more than one tape, use the
smtape continue command to continue the restore on a subsequent tape. NOTE: For systems using
earlier versions of Data ONTAP, use the snapmirror retrieve command to restore the data from
tape and the snapmirror use command to continue when more than one backup tape is required.

Use the snapmirror update command to trigger an incremental update from the source to the
destination system over the low-bandwidth connection, or edit the snapmirror.conf file to set up an
incremental update schedule from the source to the destination volume.
Finally, use the snapmirror release command to eliminate the source-to-tape relationship and
associated Snapshot copy.
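A possible command sequence on Data ONTAP 8.0 7-Mode, with hypothetical systems src and dst, volume vol1, and tape device rst0a (destination volume creation omitted):
src> smtape backup /vol/vol1 rst0a
(ship the tape to the destination site)
dst> vol restrict vol1
dst> smtape restore /vol/vol1 rst0a
dst> snapmirror update -S src:vol1 dst:vol1
src> snapmirror release vol1 dst:vol1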

MIGRATING SNAPMIRROR VOLUMES


SnapMirror can migrate data between volumes and redirect NFS clients to the new volume without
rebooting the storage system or remounting the volume on NFS clients. The migration must be run on two
volumes that are currently the source volume and destination volume in a SnapMirror relationship.
You use the snapmirror migrate command on the storage system that holds the source volume.
snapmirror migrate src_hostname:src_volume dst_hostname:dst_volume
The SnapMirror migration process does the following:
Stops NFS and CIFS services to the source volume
Performs a SnapMirror incremental transfer to the destination volume
Migrates NFS file handles to the destination volume
Makes the source volume restricted
Makes the destination volume read-write
Note that SnapMirror does not transfer IP addresses, license keys, or quota information. You must
remount the volume on the NFS clients. SnapMirror does not migrate CIFS clients. You must reestablish CIFS client
sessions after migrating data to the destination volume.


CASCADING SNAPMIRROR VOLUMES


Instead of propagating data from one central master site to many destinations, which would require
expensive network connections and excessive CPU time, you can propagate data from one volume to
another volume and from that one to the next, in a series.
In a volume cascade, SnapMirror creates and retains the Snapshot copies on the original source volume.
The SnapMirror Snapshot copies are cascaded down the line to enable replication of the volumes on each
destination system.
In this illustration, volume vol1 on storage system FAS1 is replicated to seven storage systems. To set up
cascading volumes on each storage system as shown in the diagram, the snapmirror.conf entries would
look like this:
FAS1:vol1 FAS2:vol1 - 15 * * 1,2,3,4,5
FAS1:vol1 FAS5:vol1 - 15 * * 1,2,3,4,5
FAS2:vol1 FAS3:vol1 - 35 * * 1,2,3,4,5
FAS3:vol1 FAS4:vol1 - 45 * * 1,2,3,4,5
FAS5:vol1 FAS6:vol1 - 25 * * 1,2,3,4,5
To remove a destination from the cascade, use the snapmirror release command from the
immediate source. SnapMirror will delete the Snapshot copies associated with that destination.
The cascading support matrix can be reduced to two simple rules:
1. Synchronous SnapMirror is allowed only on the source.
2. Qtree SnapMirror cannot cascade more than one hop.


SNAPMIRROR INTERACTIONS


SNAPMIRROR PERFORMANCE

VOLUME SNAPMIRROR PERFORMANCE


Volume SnapMirror performance is centered on the update frequency, the network bandwidth, and the
storage system utilization. Volume SnapMirror Async performance is particularly affected by the volume
size, the rate of data changed, and the disk geometry for traditional volumes.
Disk geometry
For versions of Data ONTAP earlier than 7.0 and traditional volumes, it is recommended that the source
and destination volumes contain disks of the same size, and be organized in the same RAID group
configuration to gain optimal performance. For flexible volumes, disk geometry matching is no longer a
consideration.
Snapshot copy creation and update frequency

SnapMirror creates a Snapshot copy before every update and deletes a Snapshot copy at the end. On
heavily loaded storage systems, Snapshot copy creation time can stretch out and restrict the frequency of
SnapMirror updates. Stretched SnapMirror schedules result in SnapMirror creating many Snapshot copies
on the source storage system at the same time, which can impact client access. For this reason, staggered
SnapMirror schedules are recommended to avoid system bottlenecks.
Volume size and changed blocks
To perform an incremental update, the block map in the new Snapshot copy is compared to the block map
in the baseline Snapshot copy. The time required to determine the block changes depends on the volume
size. With Data ONTAP 7.0 and later, you can use the snap delta command to determine the rate of
data change between Snapshot copies on a volume.
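For example, to check the amount of changed data between two hypothetical Snapshot copies on vol1:
src> snap delta vol1 nightly.1 nightly.0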

QTREE SNAPMIRROR PERFORMANCE


Qtree SnapMirror performance is impacted by deep directory structure and large numbers, such as tens of
millions, of small files replicated.
Directory structures and large numbers of small files
To determine changed data, qtree SnapMirror looks at the inode file and defines which inodes are in the
qtree of interest and which inodes have changed. If the inode file is large, but the inodes of interest are
few, qtree SnapMirror spends a lot of time going through the inode file to find very few changes. Disk
I/Os used to access the data become small and inefficient.
Transfer size
When a qtree SnapMirror update is transferring, the snapmirror status -l command shows how
many kilobytes have been transferred so far; the value may be greater than the expected delta (changes
expected). This overhead is due to metadata transfer, for example: 4-KB header, file creation, deletion,
ACLs, and so on.
When the update has completed, you can use the Data ONTAP df command for the destination volume
to verify that the expected change size is correct.

CONCURRENT TRANSFER LIMITATION


The transfer fails when the system reaches the maximum number of simultaneous replication operations.
Each transfer beyond the limit reattempts once per minute.


To optimize SnapMirror deployment, it is recommended that the schedules be staggered. For qtree
SnapMirror, if there are too many qtrees per destination volume, the solution is to re-baseline those qtrees
to another volume.

CPU UTILIZATION
SnapMirror consumes available CPU cycles on a storage system.
Even when the source storage system shows CPU utilization of up to 100%, this does not necessarily mean that
system performance or SnapMirror throughput is degraded.
SnapMirror may have some impact, but in the majority of cases, it is not very significant.
You can monitor storage system CPU using Operations Manager Performance Advisor or the Data
ONTAP sysstat command.

SYSTEM ACTIVITIES
On heavily loaded systems, SnapMirror competes with other processes and may impact response times.
To address this problem you can set the system priority to High or Very High on dedicated storage
systems for SnapMirror replication using FlexShare software.
You can also schedule SnapMirror updates at times when NFS or CIFS traffic is low and reduce the
frequency of updates.
Finally, consider upgrading to a more powerful NetApp controller when system resources become the
bottleneck.

NETWORK DISTANCE AND BANDWIDTH


When deploying SnapMirror, you have to consider the round-trip travel time of a packet from the source
to the destination storage system, because network distance causes write latency. The round trip has a
latency of approximately 2 milliseconds if the source and the destination storage systems are 100 miles
apart.
Networking issues impacting SnapMirror performance can be addressed by limiting the bandwidth using
the system-wide or per-transfer network throttle features.

Networking issues can also be addressed by using a dedicated path for SnapMirror transfers or using
multiple paths for load balancing and failover.
If the network still does not perform up to expectations, look for typical network problems. For example,
duplex mismatches can cause networks to be very slow.

Data protection using SnapMirror
SnapMirror is a feature of Data ONTAP that enables you to replicate data. SnapMirror enables you to replicate data from specified source volumes or qtrees to specified destination volumes or qtrees, respectively. You need a separate license to use SnapMirror.
You can use SnapMirror to replicate data within the same storage system or with different storage systems.
After the data is replicated to the destination storage system, you can access the data on the destination to perform the following actions:
You can provide users immediate access to mirrored data in case the source goes down.
You can restore the data to the source to recover from disaster, data corruption (qtrees only), or user error.
You can archive the data to tape.
You can balance resource loads.
You can back up or distribute the data to remote sites.
You can configure SnapMirror to operate in one of the following modes:
Asynchronous mode: SnapMirror replicates Snapshot copies to the destination at specified, regular intervals.
Synchronous mode: SnapMirror replicates data to the destination as soon as the data is written to the source volume.
Semi-synchronous mode: SnapMirror replication at the destination volume lags behind the source volume by 10 seconds. This mode is useful for balancing the need for synchronous mirroring with the performance benefit of asynchronous mirroring.
SnapMirror can be used with traditional volumes and FlexVol volumes.
How SnapMirror works
SnapMirror replicates data from a source volume or qtree to a partner destination volume or qtree, respectively, by using Snapshot copies. Before using SnapMirror to copy data, you need to establish a relationship between the source and the destination.
You can specify a SnapMirror source and destination relationship between volumes or qtrees by using one of the following options:
The /etc/snapmirror.conf file
The snapmirror.access option
The /etc/snapmirror.allow file
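For example, a source system can authorize two hypothetical destination systems with the snapmirror.access option, or, when that option is set to legacy, by listing them one per line in the /etc/snapmirror.allow file on the source:
src> options snapmirror.access host=dst1,dst2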
The SnapMirror feature performs the following operations:
1. Creates a Snapshot copy of the data on the source volume.
2. Copies it to the destination, which can be a read-only volume or qtree.
3. Updates the destination to reflect incremental changes on the source, as per the schedule you specify.
The result of this process is an online, read-only volume or qtree that contains the same data as the source at the time of the most recent update.
Each of the following replication methods consists of a pair of operations, one operation each at the source storage system and the destination storage system:
Volume SnapMirror replication
Qtree SnapMirror replication
SnapVault replication
If a storage system is the source for one replication and the destination for another replication, it uses two replication operations. Similarly, if a storage system is the source as well as the destination for the same replication, it uses two replication operations.

What synchronous SnapMirror is
In the synchronous mode, SnapMirror enables you to replicate data to the destination as soon as it is written to the source volume.
Synchronous SnapMirror is a feature of SnapMirror. You can use synchronous SnapMirror to replicate data between systems situated at remote sites, using either an IP or a Fibre Channel connection.
You can use synchronous SnapMirror only with volumes, not with qtrees. The source and destination volumes must be of the same type: traditional volumes or FlexVol volumes.
Synchronous SnapMirror modes
There are two modes available for synchronous SnapMirror replication: sync and semi-sync. The semi-sync mode helps in achieving a balance between the benefits of synchronous and asynchronous replication.
Note: You cannot set up a synchronous or semi-synchronous SnapMirror relationship between the two nodes of an HA pair.
You can specify either of the following two modes when defining a SnapMirror relationship in the snapmirror.conf file:
sync: The source system acknowledges a client write operation only after both the source and destination systems have completed the write operation. The sync option provides a recovery point objective of 0 seconds.
semi-sync: The source system acknowledges the client write operation immediately after the source receives the data. The destination system is synchronized with the source at intervals of approximately 10 seconds. The semi-sync option provides a recovery point objective of about 10 seconds. This means that if the source becomes unavailable, you might lose up to 10 seconds worth of data changes. The semi-sync mode provides a performance advantage over the sync mode.
Note: If neither of these two modes is specified, then the SnapMirror relationship is set as asynchronous.

How SnapMirror replicates data synchronously
Before Data ONTAP saves data to the disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.
As data is added or changed, the data is not directly written to the disk. Changes or additions to data are temporarily stored in the NVRAM. Then, at a consistency point, SnapMirror writes the data to the disks on the destination system. When the synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk. Finally, the source system waits for the destination system to acknowledge that it sent data to disk before continuing with the next write.

How synchronous SnapMirror handles network issues
If SnapMirror encounters any network issues that restrict the operation of synchronous replication, SnapMirror goes into the asynchronous mode.
The source and destination systems communicate with each other continuously. If a network failure disrupts the communication, SnapMirror initiates the following process:
1. SnapMirror sets the replication to the asynchronous mode.
2. In the asynchronous mode, the source system tries to communicate with the destination system once a minute.
3. When the source system reestablishes communication with the destination system, the source system asynchronously replicates data to the destination.
4. SnapMirror gradually transitions the replication relationship to the synchronous mode.
If the latest common Snapshot copy is deleted from the source, SnapMirror does not transition back from asynchronous to synchronous mode. The SnapMirror relationship should be broken and resynchronized, by using the snapmirror break and snapmirror resync commands. However, you can avoid this situation and let the relationship go to synchronous mode automatically, by setting the replication.volume.use_auto_resync option to on. The default value of this option is off.
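For example, to let a hypothetical source system return to synchronous mode automatically after such an interruption:
src> options replication.volume.use_auto_resync on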

Prerequisites for SnapMirror
You must purchase and enable the SnapMirror license. If the SnapMirror source and destination are on different systems, you must enable the SnapMirror license on each system.
For SnapMirror volume replication, you must create a restricted volume to be used as the destination volume. SnapMirror does not automatically create a volume.
For SnapMirror volume replication, the destination system must use a version of Data ONTAP that is the same as or later than that of the SnapMirror source system.
For SnapMirror qtree replication, you must not create a qtree to be used as a destination qtree; the snapmirror initialize command creates the destination qtree automatically.
Note: If you upgrade your system to a later version of Data ONTAP, you must upgrade the SnapMirror destination before upgrading the SnapMirror source.
Restrictions on using SnapMirror
The source volume must be online.
For SnapMirror volume replication, the capacity of the destination volume must be greater than or equal to the capacity of the source volume.
To support SnapMirror qtree replication, the destination volume must contain 5 percent more free space than the source qtree consumes.
The SnapMirror destination volume cannot be the root volume of a storage system. The SnapMirror source volume, however, can be the root volume.
A destination qtree can be on the root volume, but the /etc qtree cannot be a destination qtree.
Points of caution while using SnapMirror
Do not delete Snapshot copies that SnapMirror creates in the source volume before copying the data to the destination. The most recent SnapMirror Snapshot copy is referred to as the newest common Snapshot copy (NCS). Incremental changes to the destination depend on the NCS. If SnapMirror cannot find the required Snapshot copy on the source, it cannot perform incremental changes to the destination.
Do not use the snapmirror release or snapmirror break command on the destination volume or qtree unless you no longer need to copy incremental changes from the source. The destination must be actively functioning as a destination to receive incremental updates.
Do not restrict or take the destination volume offline while SnapMirror is configured to transfer. Taking the destination offline prevents SnapMirror from performing updates to the destination.

Possible conflicts between SnapMirror operation and Snapshot copy schedule
Some of the operations of SnapMirror might conflict with the actions of a Snapshot copy management schedule. Certain steps enable you to avoid these conflicts.
If the SnapMirror feature is scheduled to perform Snapshot copy management at the same time as a snap sched activity, then the Snapshot copy management operations scheduled using the snap sched command might fail, generating syslog messages such as: Skipping creation of hourly snapshot and Snapshot already exists.
To avoid this situation, you should stagger the Snapshot copy update schedules so that SnapMirror activity does not begin or end at the exact minute that a snap sched command operation attempts to create a Snapshot copy.
Destination accessibility when using CIFS with SnapMirror
Before copying a directory on a SnapMirror volume that supports CIFS clients, you should ensure that the directories are in the Unicode format. This ensures that the read-only directory copied on the destination is in the Unicode format. This also enables requests through CIFS to access the directory and its files on the destination, and prevents Access denied errors.
You can ensure that both source volume and destination volume directories are in the Unicode format by using one of the following methods:
Method 1
On the system console for the source volume, enter these two commands:
vol options vol_name convert_ucode on
Use this command to convert any existing directories in a volume to the Unicode format.
vol options vol_name create_ucode on
Use this command to ensure that any new directories created in a volume are in the Unicode format.
Considerations before using synchronous SnapMirror
You need to consider certain issues when planning to use SnapMirror for synchronous replication.
One source volume cannot have synchronous SnapMirror relationships to multiple destination volumes.
You cannot create a synchronous SnapMirror relationship between FlexVol volumes within the same system or within the same HA pair.
You must ensure that the source and destination have the same version of Data ONTAP installed.
You cannot use synchronous or semi-synchronous SnapMirror to replicate volumes that use deduplication.
A volume with a synchronous or a semi-synchronous SnapMirror relationship should not be placed in the same aggregate as a volume using deduplication.
Synchronous SnapMirror needs three Snapshot copies to get into synchronization. Therefore, you should limit the combined total of Snapshot copies retained on any one volume to 252 or fewer.

Comparison between volume SnapMirror and qtree SnapMirror
You can configure SnapMirror replication for either entire volumes or individual qtrees on a volume. You should consider the differences between the two options.

SnapMirror commands
How the snapmirror initialize command matches source and destination
volume size
When you use the snapmirror initialize command to initialize a volume replication,
SnapMirror sets the vol options fs_size_fixed option to on. This option forces the file
system
on the destination volume to remain the same size as the file system on the source
volume.
What you can do if an initial SnapMirror transfer fails
If an initial SnapMirror transfer fails, you can resume the transfer by re-entering the
snapmirror
initialize command, under certain conditions.
If the following conditions are met, you can resume the initialization of a SnapMirror
relationship:
The value for restart mode in the /etc/snapmirror.conf file is set to always or is set
to the
default, and the next scheduled update has not begun.
The output of the snapmirror status command displays that the process has a
restart
checkpoint.
The Snapshot copy used for the initial SnapMirror transfer still exists.
The disk geometry has not changed.
Note: If these conditions are not satisfied, you cannot resume the initial transfer. You need to start the initial SnapMirror transfer again.
SnapMirror does not automatically retry to initialize a destination.
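As an illustration, if the conditions above are met for a hypothetical relationship from systemA:vol0 to systemB:vol2 (names are examples only), re-entering the original initialization command on the destination resumes the transfer from its restart checkpoint rather than starting over:
snapmirror initialize -S systemA:vol0 systemB:vol2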
What the snapmirror.conf file does
The snapmirror.conf file is used to specify the details related to the copying of data,
by using
SnapMirror, from the source to the destination. This file resides on the destination
storage system.
The /etc/snapmirror.conf file defines:
The relationship between the source and the destination.
The schedule used by the destination to copy data.
The arguments that control SnapMirror when copying data.
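As a simple illustration, an entry of the following form (using hypothetical system and volume names) defines a relationship from s_systemA:vol0 to d_systemA:vol1, passes no extra arguments, and schedules an update at 30 minutes past every hour, every day:
s_systemA:vol0 d_systemA:vol1 - 30 * * *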
Converting asynchronous SnapMirror replication to synchronous
You can change an asynchronous volume SnapMirror relationship to replicate data
synchronously by
editing the snapmirror.conf file on the destination system.
About this task
Synchronous replication is not supported for qtree SnapMirror relationships.
Therefore, a qtree
SnapMirror relationship cannot be converted to a synchronous SnapMirror
relationship.
Step
1. To convert an asynchronous SnapMirror relationship to a synchronous SnapMirror relationship, on the administration host, edit the snapmirror.conf file on the destination system to change the schedule to sync.
What the quiesce command does
The snapmirror quiesce command waits for all volume and qtree SnapMirror
transfers to
complete, and blocks any further updates. If a qtree is not in a stable state (is in
transition), the
snapmirror quiesce command forces it into a stable state.
You can quiesce only volumes and qtrees that are online and that are SnapMirror
destinations. You
cannot quiesce a restricted or offline volume or a qtree in a restricted or offline
volume.
The snapmirror quiesce command stops a volume or qtree from acting as a
SnapMirror
destination, but does not prevent it from acting as a SnapMirror source.
You can use the snapmirror resume command to restore the capability for data transfer to a volume or qtree you have quiesced.

Releasing partners from a SnapMirror relationship


To permanently end a SnapMirror relationship between a source and destination pair
of volumes or
qtrees, you need to use the snapmirror release command on the source and the
snapmirror
break command on the destination.
About this task
Releasing a source from a destination volume or qtree allows the source to delete
its base Snapshot
copy for the SnapMirror relationship. After breaking the relationship, you need to
take additional
steps to scrub the destination. Unless these extra steps are performed, the
Snapshot copies associated
with the broken relationship remain stored on the destination system, and a
snapmirror status
command continues to list the former destination object as a current destination
object.

SnapMirror data transfer logs


You can use the options snapmirror.log.enable command to check SnapMirror data transfer logs. You can find out whether transfers are occurring as planned, how long the transfers take, and how well the system setup works. You find this information in the SnapMirror log file.
The SnapMirror log file provides the following information:
The start time and the end time of the SnapMirror logging process.
The start time, end time, and size of each transfer.
Any abnormal termination and restart of a transfer.
Other SnapMirror-related activities.
You can use the raw information provided to do the following:
Calculate the average transfer size.
Calculate the average transfer time.

Look at the number of successful transfers and the failure rate.
Tune the schedule.
Create a notifier for aborted transfers.
Monitor performance on a per-volume level.
Be assured that things are working as planned.

Managing SnapMirror operations through the OnCommand System Manager
You can use the OnCommand System Manager to perform different SnapMirror
operations, such as
creating, deleting, and managing SnapMirror relationships.
You can perform the following tasks as part of managing SnapMirror relationships:
Initializing SnapMirror destinations
Updating SnapMirror relationships
Quiescing SnapMirror relationships
Resuming quiesced SnapMirror relationships
Breaking SnapMirror relationships
Resynchronizing and reverse resynchronizing SnapMirror relationships
Aborting SnapMirror data transfers

Troubleshooting of SnapMirror issues


When using SnapMirror, you might face issues when you change the name of the destination volume.
There can also be issues when SnapMirror Snapshot copies are deleted.

What happens if you change a SnapMirror destination volume name
If you change the name of a SnapMirror destination volume, you need to manually
correct the
SnapMirror relationships affected by the change. SnapMirror is unable to replicate
source volume
data to a newly named destination volume whose configuration information is
incomplete.
In the following case, the destination volJobak was renamed to volStatbak. After the renaming, the snapmirror status command does not display the source. Instead, the entry is shown with a dash (-) in the source column.
If you change the volume name of a SnapMirror source or destination, you need to
make the
following changes.
1. Update the snapmirror.conf file, if there is an old entry.
2. Use the snapmirror release command to update the old destination name, and
SnapMirror
releases the soft lock and the old Snapshot copy.
3. Use the snapmirror update command on the new volume name, and the status registry is updated with the new volume name.
4. Update the /etc/exports file.
Note: If a system is running at its limit of concurrent transfers, and you attempt to initiate more transfers by using the snapmirror update command, the attempted transfer will fail.
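As an illustrative sketch of steps 2 and 3, assume the source is src_system:src_vol and the destination volume volJobak was renamed volStatbak (the system and source volume names here are hypothetical); the commands might look like the following:
src_system> snapmirror release src_vol dst_system:volJobak
dst_system> snapmirror update -S src_system:src_vol dst_system:volStatbak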
Accidental deletion of SnapMirror Snapshot copies
SnapMirror Snapshot copies stored on either the SnapMirror source or destination location must not be deleted. If the base Snapshot copy (the most recent common Snapshot copy) is accidentally deleted from either the source or destination location, you can attempt recovery.
You might be able to recover without reinitializing the destination by breaking the
SnapMirror
relationship and then resynchronizing the source and the destination.
As long as there is at least one Snapshot copy common to both the source and the
destination,
resynchronization will succeed.
If there is no Snapshot copy common to both the source and the destination, you
need to use the
snapmirror initialize command over the network. Or, if the source and destination
are
volumes, you must use the smtape backup command to store the source volume on
tape and then
use the smtape restore command to restore the volume from the tape to the
destination.
Space issues when volume space guarantee is enabled for a destination volume
When volume space guarantee is enabled on a SnapMirror destination volume, if the
destination
volume is larger than the source volume, the destination volume consumes more
space than the
source volume. You can reduce the size of the destination volume by using the vol
size command.
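For example, if a hypothetical destination volume dst_vol has grown larger than its source, you could shrink it back to the same size as the source (say, 400 GB; the name and size here are illustrative) on the destination system:
vol size dst_vol 400g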

Applications of SnapMirror
SnapMirror is used to replicate data. Its qualities make SnapMirror useful in several scenarios,
including disaster recovery, data backup, and data restoration.
You can copy or use the data stored on a SnapMirror destination. The additional advantages of SnapMirror make it useful in several data retrieval situations, such as disaster recovery, data backup, and data restoration.

Guidelines for growing an aggregate with a synchronous SnapMirror destination volume
When increasing the size of an aggregate that contains a synchronous SnapMirror destination
volume, you need to follow several guidelines.
Add a minimum of four disks.
Ensure that any new RAID group created by the addition of new disks has at least four data disks.
Ensure that the RAID group size is 16 or fewer disks.

Turning SnapMirror on
Before using SnapMirror you need to enable the SnapMirror license on both the source and the
destination systems.
Steps

1. To turn SnapMirror on, enter the following command on both the source system and destination system:
options snapmirror.enable on
Alternatively, you can use the snapmirror on command to turn SnapMirror on.
Note: This setting persists across reboots.
2. Depending on whether you use the snapmirror.access option or the /etc/snapmirror.allow file to specify allowed destinations, specify on the source system the destination systems that are allowed access, as illustrated in the sketch below.
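The following sketch shows both methods, using a hypothetical destination system named d_systemA; use whichever mechanism your site relies on. With the option:
options snapmirror.access host=d_systemA
With the /etc/snapmirror.allow file, add the destination host name on its own line, for example:
d_systemA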

Recommended actions while using SnapMirror


While using SnapMirror, you can increase the efficiency of data copying by performing certain
actions. This includes the staggering of Snapshot copy schedules and SnapMirror update schedules.
To optimize performance, stagger your Snapshot copy update schedules so that SnapMirror activity does not begin or end at the exact minute a snap sched command operation attempts to create a Snapshot copy.
If the SnapMirror feature is scheduled to perform Snapshot copy management at the same time as a snap sched activity, then the Snapshot copy management operations scheduled using the snap sched command might fail with syslog messages such as "Skipping creation of hourly snapshot" and "Snapshot already exists."
For optimum SnapMirror volume replication performance, ensure that the SnapMirror source volume and destination volume contain disks of the same size, organized in the same RAID configuration.

If the SnapMirror source and destination are FlexVol volumes, the RAID configurations do
not make a difference.
If the SnapMirror source and destination are qtrees, volume size and configuration do not
make any difference.

Deduplication with volume SnapMirror


Starting with Data ONTAP 8.1, two copies of the deduplication metadata are maintained for a
FlexVol volume. One copy of the metadata resides in the volume while the other copy resides in
the aggregate.
When replicating data using volume SnapMirror, the deduplication metadata for the volume is replicated along with the volume. The data in the volume is usable both on the source and the destination.
Note: When configuring volume SnapMirror and deduplication, you should ensure that deduplication and volume SnapMirror operations do not run at the same time. You should start the volume SnapMirror transfer of a deduplicated volume after the deduplication operation is complete. This prevents any impact on replication performance while deduplication is in progress, and avoids sending undeduplicated data and additional temporary deduplication metadata files over the network.
To achieve maximum space savings on the destination volume, you must scan the entire file system to re-create the deduplication metadata for the destination volume. Use the sis start -s command to do so.
Note: The destination volume is accessible for read-write operations when the deduplication

scan is in progress.
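For example, on the destination system, the scan for a hypothetical destination volume dst_vol could be started as follows:
sis start -s /vol/dst_vol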

Data compression with qtree SnapMirror


Because qtree SnapMirror operates at the logical level, when data compression is enabled on the
source system, the data is uncompressed in memory before being replicated to the destination
system.
If data compression is enabled on the secondary system, then all transfers are compressed on the
secondary system.
When data compression is enabled on the source volume, no bandwidth savings are achieved
over the network because the data is uncompressed on the source volume before it is sent for
replication.
If inline compression is enabled on the destination volume, the data is compressed inline at the
destination before it is written to the disk. If inline compression is not enabled on the destination
volume, you must manually compress the data after the qtree SnapMirror transfer is completed to
achieve storage space savings in the destination volume.
Note: Inline compression does not guarantee compression of all the data that is being transferred

using qtree SnapMirror. The space savings at the destination and the source systems are the same
if inline compression is enabled on the source system.

SnapMirror deployment variations


Source to destination to tape variation: A common variation to the basic SnapMirror backup deployment adds a tape backup of the destination volume. By running a tape backup off the SnapMirror destination volume, you do not subject the heavily accessed source volume to the performance degradation and complexity of a direct tape backup.
Source to tape to destination variation: A SnapMirror deployment that supports SnapMirror replication over low-bandwidth connections accommodates an initial mirroring between a source and destination volume using physically transported tape. After the large base Snapshot copy has been replicated, smaller, incremental Snapshot copy updates can be carried out over a low-bandwidth connection.

Cascading destinations variation: A variation on the basic SnapMirror deployment and function
involves a writable source volume replicated to multiple read-only destinations. The function of this
deployment is to make a uniform set of data available on a read-only basis to users from various
locations throughout a network and to allow for updating that data uniformly at regular intervals.

Migration from traditional volumes to FlexVol volumes


If you use SnapMirror for this type of migration, you can use only qtree SnapMirror to migrate data from traditional volumes to FlexVol volumes. You cannot use volume SnapMirror for this purpose because it cannot replicate to a different type of volume.
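As an illustrative sketch, migrating a qtree from a traditional volume tradvol1 on systemA to a FlexVol volume flexvol1 on systemB (all names hypothetical) would use the same qtree SnapMirror initialization shown elsewhere in this guide:
systemB> snapmirror initialize -S systemA:/vol/tradvol1/qtree1 systemB:/vol/flexvol1/qtree1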

SnapMirror files
SnapMirror uses configuration files, log files, and other files.


Setting up a basic SnapMirror operation


Before initiating SnapMirror operations, you must enable the appropriate licenses on the source
and destination systems. Also, you need to specify the destination systems that can access the
source system for updates.
Before you begin

If your source volumes contain directories that are accessed by CIFS clients, you should have
ensured that those directories are in the Unicode format before replicating the volume using
SnapMirror.
Also, you should have ensured that you have appropriate SnapMirror licenses for both the source
and destination systems.
Steps

1. On both the source and the destination system consoles, enter the following command to enable the SnapMirror license on the source and destination systems:
license add snapmirror_license_key
2. On the source system console, use the options snapmirror.access command to specify the host names of systems that are allowed to copy data directly from the source system.
Example
options snapmirror.access host=d_systemA
3. On the destination system, create or edit the /etc/snapmirror.conf file to specify the volumes and qtrees to be copied and the schedule (minute hour day_of_month day_of_week or sync) on which the destination is updated.
Example
The following entry specifies Snapshot copy replication from vol0 of s_systemA to vol1 of d_systemA at a maximum of 2,000 kilobytes per second, 15 minutes past every hour, Monday through Friday:
s_systemA:vol0 d_systemA:vol1 kbs=2000,restart=always 15 * * 1,2,3,4,5
To synchronously mirror vol0 to vol1, you must use the following entry:
s_systemA:vol0 d_systemA:vol1 - sync
For more information about schedule entries in the /etc/snapmirror.conf file of the destination system, see the na_snapmirror.conf(5) man page.


4. On both the source and destination system consoles, use the snapmirror on command to enable SnapMirror on the source and destination systems.
5. Prepare the destination system appropriately, depending on whether you are setting up SnapMirror volume or qtree replication.
6. On the destination system console, use the snapmirror initialize command to create an initial complete (baseline) copy of the source on the destination and start the mirroring process.
For SnapMirror volume replication:
Example
Invoking the following command transfers a complete copy of the source volume (vol0 on systemA) to the destination volume (vol2 on systemB):
snapmirror initialize -S systemA:vol0 systemB:vol2
The destination volume must be configured as restricted and read-only.
For SnapMirror qtree replication:
Example
The following command creates a destination qtree (qtree4 on vol1 on systemB) and transfers a complete copy of the source qtree (qtree4 on vol1 on systemA) to that destination qtree:
snapmirror initialize -S systemA:/vol/vol1/qtree4 systemB:/vol/vol1/qtree4
The volume in which the destination qtree is created must be online and writable.
After using the snapmirror initialize command, the scheduled Snapshot copy replication that you specified in Step 3 automatically updates the destination volume or qtree at the specified times.
After you finish
If the SnapMirror source volume or qtree becomes unavailable, you can use the snapmirror break command to make the destination volume or qtree writable. This enables you to provide continued access to data for the clients who are no longer able to access the unavailable source.

Firewall usage with SnapMirror


SnapMirror uses the typical socket/bind/listen/accept sequence on a TCP socket.
SnapMirror source binds on port 10566. The destination storage system contacts the SnapMirror
source storage system at port 10566 using any of the available ports assigned by the system. The
firewall must allow requests to this port of the SnapMirror source storage system.
Synchronous SnapMirror requires additional TCP ports to be open. The source storage system listens
on TCP ports 10566 and 10569. The destination storage system listens on TCP ports 10565, 10567,
and 10568. Therefore, you should ensure that the firewall allows a range of TCP ports from 10565 to
10569.

How the snapmirror initialize command copies volumes


When the snapmirror initialize command copies a volume, it creates a Snapshot copy of all the data on the source and transfers it to the destination. The destination is a volume that you have already created and marked restricted. After SnapMirror finishes transferring the data, it brings the destination online in a read-only state. This version of the destination is the baseline for the first incremental update.
While the initial data transfer is taking place, the destination is marked invalid in the output of a vol status command. The volume becomes valid and goes online after the initial transfer is complete.
Note: Any attempt to bring this volume online manually will only succeed after the initial transfer is complete.

How the snapmirror initialize command copies qtrees


To use SnapMirror to copy a qtree, you do not create a destination qtree because the snapmirror initialize command creates it. The volume where you want the destination qtree to be must be online. After the destination qtree is initialized, it is no longer writable. However, the rest of the volume where that qtree resides is still writable.
The destination Snapshot copy created by qtree initialization is marked busy in the output of the snap list command until the next transfer is complete.

What happens after SnapMirror makes the initial copy to the destination
After you initialize a SnapMirror volume replication, the files and Snapshot copies in the source
volume are available on the destination. After you initialize a SnapMirror qtree replication, the files
on the source qtree are available on its destination qtree.
You can export the destination for NFS mounting or add a share corresponding to the destination for
CIFS sharing.

How to check the initialization of a volume


To check that a destination volume has been initialized, you can use the snapmirror status command.
If you specify no options or arguments, the snapmirror status command displays the status of the volumes in the system. You can also use the vol status or the qtree command to check whether the volume or qtree is a SnapMirror destination.

How the snapmirror initialize command matches source and destination volume size
When you use the snapmirror initialize command to initialize a volume replication, SnapMirror sets the vol options fs_size_fixed option to on. This option forces the file system on the destination volume to remain the same size as the file system on the source volume.

What happens after SnapMirror makes incremental updates to the destination
The destination reflects the changes on the source after SnapMirror completes the transfer. If the SnapMirror transfer is incomplete or interrupted, the changes on the destination are not visible until the transfer is complete. After SnapMirror completes the destination update, you can see the changes when you open the file.


Note: SnapMirror automatically deletes old Snapshot copies that are no longer necessary for

updating data.

SnapMirror over multiple paths


You might want more than one physical path for a SnapMirror relationship. SnapMirror supports up
to two paths for a particular SnapMirror relationship.
When using multiple paths, you need to set up the configuration in one of the following ways:
Set up static routes to ensure different routes are used for different IP connections.
Use different subnets for the two connections.
The paths can be Ethernet, Fibre Channel, or a combination of Ethernet and Fibre Channel. The two
paths can be used in one of these two modes:
Multiplexing mode: SnapMirror uses both paths at the same time, essentially load balancing the transfers. If one path fails, the transfers occur on the remaining path. After the failed path is repaired, the transfers resume using both paths.
Failover mode: SnapMirror uses the first specified path as the desired path and uses the second specified path only after the first path fails.
Note: The failover mode using only one pair of connections is not supported with SnapMirror

network compression.

Setting up a multipath SnapMirror relationship


You can use multiple paths between the source and destination systems for baseline initialization.
About this task

You can set up SnapMirror to use multiple paths at the outset. You can also convert a single path
SnapMirror relationship to use multiple paths.
Steps

1. Ensure that you have two valid paths by using the ping command from the source system to each of the IP addresses on the destination system.
2. On the source system console, use the options snapmirror.access command to specify the host names of systems that are allowed to copy data directly from the source system.
Example
options snapmirror.access host=d_systemA
3. Edit the snapmirror.conf file on the destination system to add an entry that defines the mode of the connection and what the two connections are. The format of the entry is as follows:
name=mode(src_system-e0,dst_system-e0)(src_system-e1,dst_system-e1)
where mode is either multi or failover. See the na_snapmirror.conf(5) man page for details.
4. Edit the /etc/snapmirror.conf file on the destination system to specify the volumes and qtrees to be copied and the schedule (minute, hour, day_of_month, day_of_week, sync, or semisync) on which the destination is updated. Use the connection name specified in the previous step as the source system.
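The following is an illustrative sketch of such a configuration, using hypothetical system names, interface names, and schedule; adjust them for your environment:
SM_vol1=multi(src_systemA-e0a,d_systemA-e0a)(src_systemA-e0b,d_systemA-e0b)
SM_vol1:vol1 d_systemA:vol1 - 15 * * *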


Checking SnapMirror data transfer status


You need to check the data transfer status, by using the snapmirror status command, to determine the status of all existing SnapMirror relationships on the system.
Step
1. Enter the following command:
snapmirror status [options] [[system:][path] ...]
options can be one of the following:
-l displays the long format of the output, which contains more detailed information.
-q displays which volumes or qtrees are quiesced or quiescing.
-t displays which volumes or qtrees are active.
system is the name of the source system.
path is the name of the source volume or the path to and name of the source qtree.
Note: When you use the -t option, the output displays the active relationships. A relationship is considered active if the source or destination is involved in one of the following:
Data transfer to or from the network.
Reading or writing to a tape device.
Waiting for a tape change.
Performing local on-disk processing or cleanup.
Result

If no arguments or options are given, SnapMirror displays a message that indicates whether a
transfer
is in progress, how much of the data transfer has been completed, the state of the destination, and
the
amount of time since the last Snapshot copy was created and transferred successfully.


Adjusting the TCP window size for a SnapMirror relationship


The TCP window size for SnapMirror might have an impact on SnapMirror performance. You can change the default value of the TCP window size to suit the network configuration. You can specify the window size for a particular SnapMirror relationship by modifying the corresponding entry in the snapmirror.conf file.
Before you begin

Ascertain the round-trip time between the source and the destination for a SnapMirror
relationship.
Determine the bandwidth available for the SnapMirror relationship.
The default TCP window size for a SnapMirror relationship is 1,994,752 bytes.
Adjustment of the TCP window size is applicable only for asynchronous SnapMirror
relationships.
For qtree SnapMirror relationships, TCP window sizes higher than the default value are not
supported.
Note: You should only adjust the TCP window size for a SnapMirror relationship if there are

throughput issues related to bandwidth utilization.


About this task

The TCP window size specifies the amount of data that a source can send through a connection before it requires an acknowledgement from the destination for the data received. A larger TCP window size can increase SnapMirror throughput in certain scenarios. You can change the TCP window size to optimize SnapMirror transfers for the network in use.
Note: When using higher TCP window sizes than the default, the system might not be able to

achieve the maximum concurrent replication operations specified for the system. This is due to
increased resource utilization by the higher TCP window sizes.
The maximum TCP window size that you can specify for a SnapMirror relationship depends on the connection type, as given in the following table.
Connection type    Default TCP window size    Maximum TCP window size
Single path        1,994,752 bytes            7,340,032 bytes (7 MB)
Multipath          1,994,752 bytes            14,680,064 bytes (14 MB)
Note: To limit the network bandwidth used by a particular SnapMirror relationship, use the kbs parameter for the relationship entry in the snapmirror.conf file.

1. Calculate the required TCP window size from the round-trip time and the available bandwidth for the relationship (required window size = round-trip time x available bandwidth).
2. Specify the required TCP window size by adding the following option to the SnapMirror relationship entry in the snapmirror.conf file:
wsize=value
value is the required TCP window size (in bytes), as calculated in the preceding step.
Example
The following entry specifies a TCP window size of 3,250,000 bytes for the SnapMirror relationship:
src_system:src_vol dst_system:dst_vol wsize=3250000 * * * *
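As an illustrative calculation for step 1 (the bandwidth and round-trip time here are made-up figures): if the available bandwidth is 100 Mbps and the round-trip time is 60 ms, then
Required window size = 0.060 s x (100,000,000 / 8) bytes per second = 750,000 bytes
Because 750,000 bytes is below the single-path maximum of 7,340,032 bytes, you could specify wsize=750000 for that relationship.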

About moving SnapMirror sources

Whether you are moving a volume SnapMirror source or qtree SnapMirror source to new systems or
newer drives, as long as there is a Snapshot copy in common on the source and destination, the
transition goes smoothly.
Volume SnapMirror transfers all of the Snapshot copies as part of the SnapMirror replication process.
For qtree SnapMirror, the source and destination have only one Snapshot copy in common. Different
qtree SnapMirror destinations have no common Snapshot copy, unless the Snapshot copy is
specifically replicated.
In a production environment, you should perform the process of moving SnapMirror relationships
from one volume or system to another only in a maintenance or out-of-service window. You should
also ensure that new data is not added to the original source during the move.

Moving volume SnapMirror sources


You can move a volume SnapMirror source volume to another source volume.
Before you begin

Ensure that the Data ONTAP release on the destination system is from a release family that is the
same as, or later than, the Data ONTAP release on the new source system.
About this task

The following terms are used in the task description for moving volume SnapMirror sources.
oldsource: The original system on which the source resides.
newsource: The system to which you are moving the source.
destination: The system to which the source is replicated.
oldsourcevol: The original source volume.
newsourcevol: The new source volume to which you are moving.
destinationvol: The volume to which the source is replicated.
Steps

1. Copy the original source to the new source by using the following command:
newsource> snapmirror initialize -S oldsource:oldsourcevol newsource:newsourcevol
Note: This might take some time to finish.
2. Make oldsource read-only before continuing.
3. Create a manual Snapshot copy on the oldsource system by using the following command:
oldsource> snap create oldsourcevol common_Snapshot
4. Update newsource and destination based on oldsource by using the following commands:
newsource> snapmirror update -S oldsource:oldsourcevol newsource:newsourcevol
destination> snapmirror update -S oldsource:oldsourcevol destination:destinationvol
Note: The common_Snapshot Snapshot copy is on all volumes because all Snapshot copies are mirrored using volume SnapMirror.
5. Quiesce and break the SnapMirror relationship between oldsource and destination, and oldsource and newsource, by using the following commands:
destination> snapmirror quiesce destinationvol
destination> snapmirror break destinationvol
newsource> snapmirror quiesce newsourcevol
newsource> snapmirror break newsourcevol

6. Using an editor, update the /etc/snapmirror.conf file on the destination for the new relationship by replacing the oldsource information with newsource information.
Before edit
oldsource:oldsourcevol destination:destinationvol restart=always 0 * * *
After edit
newsource:newsourcevol destination:destinationvol restart=always 0 * * *

7. Establish the new SnapMirror relationship by using the following command:
destination> snapmirror resync -S newsource:newsourcevol destination:destinationvol
Note: The SnapMirror relationship discards any Snapshot copies older than common_Snapshot, namely, the ones used for the last SnapMirror update. This is expected, and no data is lost if you ensure that no new data was added to the original source volume during the move.
The new SnapMirror relationship automatically picks the newest Snapshot copy in common to mirror. This is the common_Snapshot Snapshot copy.
8. Verify that the SnapMirror relationship is resynchronizing by using the following command:
destination> snapmirror status

Moving qtree SnapMirror sources


With qtree SnapMirror you must create a Snapshot copy on the source and force its propagation
to the destination and new source. This behavior is unlike volume SnapMirror, in which all of the
Snapshot copies from the source are replicated to the destination.
About this task

The process of moving the qtree SnapMirror source involves creating a Snapshot copy on the
original source and then replicating the Snapshot copy on the destinations, both the new source
and the existing destination. After this is done, the Snapshot copy is common on all volumes,
allowing for the SnapMirror relationship to be broken from the original source and established
between the new source and the existing destination.
The following terms are used in the task description for moving qtree SnapMirror sources:
oldsource: The original system on which the source resides.
newsource: The system to which you are moving the source.
destination: The system to which the source is replicated.
oldsourcevol: The original source volume.
newsourcevol: The new source volume to which you are moving.
destinationvol: The volume to which the source is replicated.
Steps


1. Copy the original source to the new source by using the following command:
newsource> snapmirror initialize -S oldsource:/vol/oldsourcevol/qtree newsource:/vol/newsourcevol/qtree
Note: This might take some time to finish.
2. Create a manual Snapshot copy on the oldsource system by using the following command:
oldsource> snap create oldsourcevol common_Snapshot
3. Update the destinations by using the following commands:
newsource> snapmirror update -c common_Snapshot -s common_Snapshot -S oldsource:/vol/oldsourcevol/qtree newsource:/vol/newsourcevol/qtree
destination> snapmirror update -c common_Snapshot -s common_Snapshot -S oldsource:/vol/oldsourcevol/qtree destination:/vol/destinationvol/qtree
The -s option of the snapmirror update command synchronizes newsource with oldsource and destination with oldsource based on common_Snapshot. The -c option of the snapmirror update command creates the common_Snapshot Snapshot copy on the destination systems.
4. Quiesce and break the SnapMirror relationship between oldsource and destination, and oldsource and newsource, by using the following commands:
destination> snapmirror quiesce /vol/destinationvol/qtree
destination> snapmirror break /vol/destinationvol/qtree
newsource> snapmirror quiesce /vol/newsourcevol/qtree
newsource> snapmirror break /vol/newsourcevol/qtree

5. Using an editor, update the /etc/snapmirror.conf file on the destination for the new relationship by replacing the oldsource information with newsource information.
Before edit
oldsource:/vol/oldsourcevol/qtree destination:/vol/destinationvol/qtree restart=always 0 * * *
After edit
newsource:/vol/newsourcevol/qtree destination:/vol/destinationvol/qtree restart=always 0 * * *

6. Establish the new SnapMirror relationship by using the following command on the destination system:
snapmirror resync -S newsource:/vol/newsourcevol/qtree destination:/vol/destinationvol/qtree
Note: SnapMirror discards any Snapshot copies older than the common Snapshot copy, namely, the ones used for the latest SnapMirror update. This is expected, and no data is lost if you ensure that no new data was added to the original source volume during the move.
The new SnapMirror relationship automatically picks the newest common Snapshot copy for replication. This is the common Snapshot copy.
7. Verify that the SnapMirror relationship is resynchronizing by using the following command:
destination> snapmirror status

Methods to migrate data between volumes


You can migrate data between volumes by using the snapmirror migrate command or by performing the volume move operation to move data nondisruptively.

Migrating data between volumes by using SnapMirror


SnapMirror can migrate data between volumes and redirect NFS clients to the new volume without rebooting the system or remounting the volume on NFS clients.
Before you begin
If you are migrating data within the same storage system and the source volume contains LUNs, then you must have unmapped the source LUNs by using the lun unmap command.
About this task

The migration must be run on two volumes which are currently the source volume and
destination volume in a SnapMirror relationship. When you start the migration process,
SnapMirror does the following:
Performs a SnapMirror incremental transfer to the destination volume.
Stops NFS and CIFS services on the entire system with the source volume.
Migrates NFS file handles to the destination volume.
Makes the source volume restricted.
Makes the destination volume read-write.
SnapMirror does not transfer IP addresses, license keys, or quota information. You must remount
on the NFS clients unless one of the following is true:
The IP address of the source system is transferred to the destination system independently after
the migration.
The source and destination volumes reside on the same system, in which case, the IP address to
access either volume is the same.
SnapMirror does not migrate CIFS clients. You must reestablish CIFS client sessions after
migrating data to the destination volume.
Step

1. Enter the following command:


snapmirror migrate [src_system:]src_vol [dst_system:]dst_vol
src_system is the source system.
src_vol is the source volume.
dst_system is the destination system.
dst_vol is the destination volume.
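For example, the following command (with hypothetical system and volume names) migrates the contents of vol1 on systemA to vol2 on systemB:
snapmirror migrate systemA:vol1 systemB:vol2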

Conversion of a destination to a writable volume or qtree


You can use the snapmirror break command to convert a SnapMirror destination, with read-only status, to a writable volume or qtree.
You might want to convert a destination to a writable volume or qtree to perform one of the following tasks:
Data migration: Moving data from one volume or qtree (original source) to another volume or qtree (present destination) and making the data on the destination accessible and writable.
Disaster recovery: If your source becomes unavailable, you want your present destination to substitute as the users' retrieval and input source.
Application testing: You want to make your current destination volume or qtree writable to test a new application on a mirrored replication of your current data rather than risk corruption of original data on the source volume or qtree.
Converting the destination to a writable volume or qtree enables you to use data on the destination, especially when the original source is unavailable.

Quota restrictions
Quotas are always disabled on a SnapMirror volume destination, regardless of whether quotas are
enabled on the source volume. If you try to enable quotas on a volume destination, SnapMirror
displays an error message. Quotas are not disabled on SnapMirror destination qtrees.
If the source volume or qtree and the destination reside on different storage systems, and you want the same quota restrictions to be applied after you make the destination writable, the destination system must have an /etc/quotas file that includes all the entries from the /etc/quotas file used by the source system.
If you use SnapMirror replication for data migration, you can copy the /etc/quotas entries from the source system to the /etc/quotas file of the destination system before you use the snapmirror break command to make the destination writable.
If you use SnapMirror replication for backup and potential disaster recovery, you must keep a copy on the destination system of all /etc/quotas entries used by the source system at all times. That way, you can apply the quota entries to the destination volume or qtree if the source system becomes unavailable.

Converting a SnapMirror destination to a writable volume or qtree
You can convert a SnapMirror destination to a writable volume or qtree.
Steps

1. On the destination system, use the snapmirror break command to make the destination volume or qtree writable.
To make a destination volume writable, enter the following command on the destination system:
snapmirror break volume_name
To make a destination qtree writable, enter the following commands on the destination system:
snapmirror quiesce /vol/volume_name/qtree_name
snapmirror break /vol/volume_name/qtree_name

2. If you want to enable quotas on the former destination volume, carry out the following steps:
a. Edit the /etc/quotasfile on the former destination system so that, after the conversion, the
former destination includes the same quota restrictions as the source volume.
If the original source volume uses per-volume quotas, replace the original source volume
name with the former destination name in the quota entries.
b. Enter the following command to enable quotas on the former destination:
quota on volume_name

3. Consider the following optional measures:


If you want to stop a SnapMirror source from trying to update a broken-off destination, you can delete or comment out the entry in the /etc/snapmirror.conf file. Otherwise, SnapMirror continues to try to update the destination.
You might also want to use the vol options vol_name fs_size_fixed off command to turn off the option that restricts the size of the file system on a destination volume.
Note: If you set fs_size_fixed to off, the ability of the destination and source volumes to resync is not guaranteed.

After using the snapmirror break command


After using the snapmirror break command to temporarily break a SnapMirror relationship between a source and destination, you can use other SnapMirror commands to either make the break permanent, or restore or redefine the SnapMirror relationship.
Use the snapmirror release command to make the break permanent.
Use the snapmirror resync command to restore or redefine the SnapMirror relationship.

Resizing a SnapMirror source and destination volume pair for a FlexVol volume
You can increase the size of the source volume for a FlexVol volume. After you increase the size
of a source volume and perform a SnapMirror transfer, the size of the destination volume is
automatically increased to the same size as that of the source volume, provided the destination
aggregate has sufficient space to contain the resized volume.
Steps

1. To increase the size of the SnapMirror source volume, enter the following command:
vol size vol_name size
vol_name is the name of the SnapMirror source volume.
size is the required size of the SnapMirror source volume.
Note: SnapMirror updates the size of the destination volume to match the source in the next SnapMirror transfer.
2. On the destination system, enter the following command to check the size of the destination volume and the file system:
vol status vol_name -b
vol_name is the name of the SnapMirror destination volume.
The size of the destination volume must have increased to the same size as that of the source volume.
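For example, to grow a hypothetical source volume src_vol to 300 GB, you would enter the following on the source system; after the next SnapMirror transfer, the destination volume is grown to match:
vol size src_vol 300g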

Resizing a SnapMirror source and destination volume pair for traditional volumes
You can increase the size of the source volume for traditional volumes. If the destination volume is
not large enough to contain the resized source volume, you have to manually increase the size of the
destination volume to the same size as that of the resized source volume.
About this task

Before increasing the size of a source volume, you must compare the size of the source and
destination volumes for a specific volume SnapMirror relationship. If the destination volume is not
large enough to contain the larger source volume, you must manually resize the destination volume.
Note: In an active volume SnapMirror relationship, the size of the source and destination file

systems is identical. However, the size of the destination volume can be larger than or equal to the
size of the source volume. The size of the file systems must be identical to allow the source to be
restored from the destination.
Steps

1. On the source system, enter the following command to check whether the fs_size_fixed option is off:
vol status vol_name -v
vol_name is the name of the source volume for SnapMirror.
Note: If the fs_size_fixed option is set to off, the size of the destination volume's file system might be different from that of the source volume. To restore the source volume from the SnapMirror destination, the size of the source and destination file systems should be identical. When the status of a volume SnapMirror relationship is Idle, and the size of the destination volume is increased, then you can change the fs_size_fixed option to on.
3. On the source system, enter the following command to find the size of the source volume and the size of the file system:
vol status vol_name -b
vol_name is the name of the SnapMirror source volume.


7. To increase the size of the SnapMirror destination volume, enter the following command:
vol add vol_name disks
vol_name is the name of the SnapMirror destination volume.
disks is the number of disks that you want to add to the traditional volume.
You can use the vol status command to confirm the increase in the volume size.
Note: You can increase the destination volume size to be equal to or greater than the intended increased size of the source volume.
8. Perform Step 7 for the source volume to increase the size of the SnapMirror source volume.
9. On the source system, enter the following command:
vol options vol_name fs_size_fixed on
vol_name is the name of the source volume for SnapMirror.
This option ensures that the sizes of the SnapMirror source and destination file systems are identical.
Note: SnapMirror updates the size of the destination file system to match the source in the next SnapMirror transfer for the volume pair.

Converting asynchronous SnapMirror replication to synchronous
You can change an asynchronous volume SnapMirror relationship to replicate data
synchronously by editing the snapmirror.conffile on the destination system.
About this task

Synchronous replication is not supported for qtree SnapMirror relationships. Therefore, a qtree
SnapMirror relationship cannot be converted to a synchronous SnapMirror relationship.
Step

1. To convert an asynchronous SnapMirror relationship to a synchronous SnapMirror relationship, on the administration host, edit the snapmirror.conf file on the destination system to change the schedule to sync.
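As an illustrative sketch, with hypothetical system and volume names, the change amounts to replacing the schedule field of the existing entry with the keyword sync:
Before edit
s_systemA:vol0 d_systemA:vol1 - 0 * * *
After edit
s_systemA:vol0 d_systemA:vol1 - sync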

Stabilizing destinations before a Snapshot copy


You might need to temporarily stop transfers to a destination by using the snapmirror quiesce command. For example, if you want to create a Snapshot copy of a SnapMirror destination volume or qtree that contains a database, you need to ensure that its contents are stable during the Snapshot copy.
Step

1. Enter the following command on the system on which you want to block transfers:
snapmirror quiesce {dest_volume | /vol/volume_name/qtree_name}
dest_volume is the name of the destination volume.
qtree_name is the name of a qtree in volume_name.
Example
systemA> snapmirror quiesce vol1
snapmirror quiesce: in progress.
snapmirror quiesce: vol1: successfully quiesced
SnapMirror stops any further data transfers to vol1.
Example
systemA> snapmirror quiesce vol2
snapmirror quiesce: in progress.
This can be a long-running operation. Use Control-C to interrupt.
......................................
snapmirror quiesce: vol2: successfully quiesced
SnapMirror waits for a transfer to finish and stops any further data transfers to vol2.
Example
systemA> snapmirror quiesce /vol/vol1/qtree1
SnapMirror stops data transfers to qtree1 in vol1.


If you use the snapmirror break command on a destination that is quiesced, the quiesce condition is automatically cleared when the destination becomes writable.
Note: If you decide to abort a SnapMirror quiesce operation, press Ctrl-C or enter the snapmirror resume command at any time.
A SnapMirror destination volume might have been deleted after the volume was quiesced. If you want to create a SnapMirror destination volume with the same name as the deleted volume, first use the snapmirror release command. This step would ensure that the SnapMirror relationship is set up properly.

What the quiesce command does


The snapmirror quiesce command waits for all volume and qtree SnapMirror transfers to complete, and blocks any further updates. If a qtree is not in a stable state (is in transition), the snapmirror quiesce command forces it into a stable state.
You can quiesce only volumes and qtrees that are online and that are SnapMirror destinations. You cannot quiesce a restricted or offline volume or a qtree in a restricted or offline volume.
The snapmirror quiesce command stops a volume or qtree from acting as a SnapMirror destination, but does not prevent it from acting as a SnapMirror source.
Note: The quiesced state persists across reboots.

Resuming transfers after quiescing a destination


You can use the snapmirror resume command to restore the capability for data transfer to a volume or qtree you have quiesced.
Step
1. Enter the following command for the system on which you want to resume transfers:
snapmirror resume {dest_volume | /vol/vol_name/qtree_name}
dest_volume is the name of the destination volume.
qtree_name is the name of a qtree in vol_name.
Example
systemA> snapmirror resume vol2
snapmirror resume: vol2: Successfully resumed
SnapMirror resumes normal data transfer capability for vol2.

Aborting a SnapMirror transfer


You can use the snapmirrorabortcommand to abort a volume or qtree replication
operation before the transfer is complete. You can abort a scheduled update, a manual update, or
an initial SnapMirror transfer.
About this task


You should consider the following issues before aborting a SnapMirror transfer:
If you abort a copy operation, data transfer stops and SnapMirror is put in a restartable mode.
If you use the -h (hard abort) option with the snapmirror abort command, you cannot restart the transfer.
Step
1. From either the source or the destination system, enter the following command:
snapmirror abort [-h] {[dest_system:]dest_volume | [dest_system:]/vol/volume_name/qtree_name ...}
-h specifies a hard abort; the transfer cannot be restarted. SnapMirror stops the transfer and clears the restartable transfer log. This option applies only to the SnapMirror destination.
dest_system is the name of the destination system.
dest_volume is the destination volume.
/vol/volume_name/qtree_name is the path name of a destination qtree.
Note: If no destination system is specified, the local host's name is used for the system name.
You can enter more than one destination volume.
You can obtain the destination system and volume from the snapmirror status output.
Note: If no destination volume or qtree is specified, the command returns an error message; it does not abort all transfers. To abort all transfers, use the snapmirror off command.
If you enter an invalid SnapMirror destination (one that is not displayed in the output of the snapmirror status command), the command fails and displays an error message.
Example
systemA> snapmirror abort vol1 systemB:vol2 systemC:/vol/vol3/qtree3
snapmirror abort: Aborting transfer to vol1 systemB:vol2 systemC:/vol/vol3/qtree3
SnapMirror aborts the transfer to vol1 on systemA, where the command was entered, and aborts the transfer to vol2 on systemB and the transfer to qtree3 in vol3 on systemC.

Releasing partners from a SnapMirror relationship


To permanently end a SnapMirror relationship between a source and destination pair of volumes or qtrees, you need to use the snapmirror release command on the source and the snapmirror break command on the destination.
About this task

Releasing a source from a destination volume or qtree allows the source to delete its base Snapshot copy for the SnapMirror relationship. After breaking the relationship, you need to take additional steps to scrub the destination. Unless these extra steps are performed, the Snapshot copies associated with the broken relationship remain stored on the destination system, and a snapmirror status command continues to list the former destination object as a current destination object.
Steps

1. On the source system, enter the following command:
snapmirror release {source_volume | qtree_path} [dest_system:]{dest_volume | qtree_path}
source_volume or qtree_path is the name of the source volume or path to the qtree that you want to release from the destination.
dest_system is the name of the system where the destination is located.
dest_volume or qtree_path is the name of the volume or path to the qtree that is the destination.
If you do not enter the name of the destination system, SnapMirror uses the name of the system on which you entered the command.
Example
For a SnapMirror volume relationship:
systemA> snapmirror release vol0 systemB:vol2
Example
For a SnapMirror qtree relationship:
systemA> snapmirror release /vol/vol1/qtree2 systemB:/vol/vol2/qtree5
SnapMirror frees all resources on the source system that had been dedicated to the SnapMirror relationship.
2. On the destination system, enter the following command to break the SnapMirror relationship between the source and destination objects:
snapmirror break {vol_name | qtree_path}
vol_name is the name of the volume that you want to release from the relationship.
qtree_path is the path of the qtree that you want to release from the relationship.
3. On the destination system, use the snapmirror status -l command to determine which Snapshot copy basename is associated with the SnapMirror relationship that you just broke.
For a broken SnapMirror volume relationship:
snapmirror status -l dest_vol
For a broken SnapMirror qtree relationship:
snapmirror status -l /vol/dest_vol/dest_qtree
In the detailed output that is displayed, note the Snapshot copy basename associated with the SnapMirror relationship that you just broke.
4. On the destination system, use the following command to delete the Snapshot copy set that you displayed in the previous step:
snap delete dest_vol snapshot_basename
5. Through the Admin host client, edit the /etc/snapmirror.conf file on the destination system. Locate and delete the entry that specifies the SnapMirror relationship you want to end.

SnapMirror data transfer logs


You can use the options snapmirror.log.enable command to check SnapMirror data transfer logs. You can find out whether transfers are occurring as planned, how long the transfers take, and how well the system setup works. You find this information in the SnapMirror log file.
The SnapMirror log file provides the following information:
The start time and the end time of the SnapMirror logging process.
The start time, end time, and size of each transfer.
Any abnormal termination and restart of a transfer.
Other SnapMirror-related activities.
You can use the raw information provided to do the following:
Calculate the average transfer size.
Calculate the average transfer time.
Look at the number of successful transfers and the failure rate.
Tune the schedule.
Create a notifier for aborted transfers.
Monitor performance on a per-volume level.
Be assured that things are working as planned.

Checking for SnapMirror logging


SnapMirror logging is on by default. However, you can find out whether SnapMirror logging is on by using the snapmirror.log.enable option.
Step
1. Enter the following command on the system for which you want the information:
options snapmirror.log.enable
SnapMirror reports whether logging is enabled.
Example
systemA> options snapmirror.log.enable
snapmirror.log.enable on

Turning SnapMirror logging on


You can turn on SnapMirror logging by setting the snapmirror.log.enable option to on.
About this task
SnapMirror keeps the current log on the root volume of the system as /etc/log/snapmirror.0. A new log file is generated every week as /etc/log/snapmirror.0. Older log files are renamed /etc/log/snapmirror.[1-5], and the oldest log file is deleted. You can read the log files using a text editor.
Step
1. Enter the following command on the system for which you want the log:
options snapmirror.log.enable on
Note: This setting is persistent across reboots.
Result
SnapMirror enables the logging of transfers for the system.

Location of SnapMirror logs


Starting with Data ONTAP 8.2.2, an option is provided to enable or disable consolidation of all SnapMirror logging, including that of non-default vFiler units, into vfiler0. In releases prior to Data ONTAP 8.2.2, the SnapMirror logs of each non-default vFiler unit were stored in the respective non-default vFiler root volume, separate from the logs of vfiler0 and the other non-default vFiler units.
The option snapmirror.vfiler0.logging.enable is on by default in Data ONTAP 8.2.2. All SnapMirror logging activity in the vfiler0 and non-default vFiler contexts is now saved in a single SnapMirror log file, /vol/vol0/etc/log/snapmirror.
If you prefer the earlier behavior, you can turn the snapmirror.vfiler0.logging.enable option off.

Format of SnapMirror log files


Understanding the format of SnapMirror log files can help you better handle issues related to
SnapMirror transfers.
The log file is in the following format:
type timestamp source_system:source_path dest_system:dest_path event_info
type can be one of the following: src, dst, log, cmd. type specifies whether the record is for the source side (src) or destination side (dst) of the transfer. Certain events apply to only one side. The type log indicates a record about the logging system itself, for example, Start_Logging and End_Logging. The type cmd indicates a record of user commands, for example, Release_command and Resync_command.
timestamp is expressed in ctime format, for example: Fri Jul 27 20:41:09 GMT.
event_info includes the following event names:
Request (IP address | transfer type), Start, Restart (@ num KB), End (num KB done), Abort (error_msg), Defer (reason), Rollback_start, Rollback_end, Rollback_failed, Start_Logging, End_Logging, Wait_tape, New_tape, Snapmirror_on, Snapmirror_off, Quiesce_start, Quiesce_end, Quiesce_failed, Resume_command, Break_command, Release_command, Abort_command, Resync_command, Migrate_command
The Request event on the source side includes the IP address of the system that made the transfer request; the Request event on the destination side includes the type of transfer. At the end of each successful transfer, the End event also reports the total size of the transfer in KB. Error messages are included with the Abort and Defer events.
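The following lines are an illustrative sketch (not output captured from a real system) of what destination-side entries in this format might look like, using hypothetical system and volume names:
dst Fri Jul 27 20:41:09 GMT s_systemA:vol0 d_systemA:vol1 Request (Initialize)
dst Fri Jul 27 20:41:32 GMT s_systemA:vol0 d_systemA:vol1 Start
dst Fri Jul 27 20:45:31 GMT s_systemA:vol0 d_systemA:vol1 End (52976 KB)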

Turning SnapMirror logging off


You can turn off the SnapMirror log process by setting the snapmirror.log.enableoption
to off.

Step

1. Enter the following command on the system for which you want to disable SnapMirror
logging:
options snapmirror.log.enable off

Listing SnapMirror Snapshot copies


You can use the snap list command to list all Snapshot copies, including the SnapMirror-specific Snapshot copies that are stored on the system.
Step

1. In the console of either your source or destination system, enter the following command:

snap list vol_name
Result

A list of all Snapshot copies stored on your system is displayed. SnapMirror Snapshot copies are
distinguished from system Snapshot copies by a more elaborate naming convention and the label
snapmirror in parentheses.

Naming conventions for Snapshot copies used by SnapMirror

When you run the snap list command, you can distinguish SnapMirror Snapshot copies from
the regular system Snapshot copies by their naming conventions.
For volume replication, SnapMirror creates a Snapshot copy of the entire source volume that is
copied to the destination volume.
A SnapMirror volume Snapshot copy name is in the following format:
dest_system(sysid)_name.number
dest_system is the host name of the destination system.
sysid is the destination system ID number.
name is the name of the destination volume.
number is the number of successful transfers for the Snapshot copy, starting at 1. Data
ONTAP increments this number for each transfer.


Note: In the output of the snap list command, SnapMirror Snapshot copies are followed by
the SnapMirror name in parentheses.


Volume example
systemA(0016791363)_vol0.9(snapmirror)

For qtree replication, SnapMirror creates Snapshot copies of one or more source qtrees on the
source volume that are copied to a qtree on the destination volume.
A qtree SnapMirror Snapshot copy name is in the following format:
dest_system(sysid)_name-src|dst.number
dest_system is the host name of the destination system.
sysid is the destination system ID number.
name is the name of the destination volume or qtree path.
src|dst is the source or destination name.
number is an arbitrary start point number for the Snapshot copy. Data ONTAP increments this
number for each transfer.


Qtree example
systemA(0016789302)_vol1_qtree3-dst.15 (snapmirror)
Attention: You should not manually delete Snapshot copies marked snapmirror in the
output of the snap list command. Otherwise, later SnapMirror updates might fail.


Use of the snap list command to display SnapMirror updates on the destination volume

The snap list command displays information for each Snapshot copy on a storage system.
Along with the name of the Snapshot copy, it displays when the Snapshot copy was created and
the size of the Snapshot copy.
Example
The following example describes SnapMirror Snapshot copies that are created on a source
volume and copied to a destination volume. In this example, data is copied from vol1 of
systemA (the source) to vol2 of systemB (the destination).
To create a baseline version of a destination volume, systemA creates a Snapshot copy named
systemB(0016782130)_vol2.1 on systemA. All Snapshot copies in vol1 of systemA, including
systemB(0016782130)_vol2.1, are transferred to vol2 of systemB. When replicating a qtree,
SnapMirror transfers only the qtree's data in the Snapshot copy for the qtree.
If the administrator runs the snap list command on the destination systemB after the
systemB(0016782130)_vol2.1 Snapshot copy is transferred from systemA to systemB, a listing
similar to the following example is generated.
systemB> snap list vol2
working.....
  %/used       %/total   date           name

 0% ( 0%)     0% ( 0%)  Nov 17 10:50   systemB(0016782130)_vol2.1 (snapmirror)
 1% ( 0%)     0% ( 0%)  Nov 17 10:00   hourly.0
 1% ( 0%)     0% ( 0%)  Nov 17 00:00   nightly.0
 1% ( 0%)     0% ( 0%)  Nov 15 16:00   hourly.1
 1% ( 0%)     1% ( 0%)  Nov 15 15:00   hourly.2
 2% ( 0%)     1% ( 0%)  Nov 15 14:00   hourly.3
 2% ( 0%)     1% ( 0%)  Nov 15 13:00   hourly.4
 2% ( 0%)     1% ( 0%)  Nov 15 12:00   hourly.5

When it is time to update the destination, another Snapshot copy is created on systemA.
The snap list command on systemA generates the following display after the
systemB(0016782130)_vol2.2 Snapshot copy is created on systemA.
systemA> snap list vol1
working....
  %/used       %/total   date           name

 0% ( 0%)     0% ( 0%)  Nov 17 10:52   systemB(0016782130)_vol2.2 (snapmirror)
 0% ( 0%)     0% ( 0%)  Nov 17 10:51   systemB(0016782130)_vol2.1 (snapmirror)
 1% ( 0%)     0% ( 0%)  Nov 17 10:00   hourly.0
 1% ( 0%)     0% ( 0%)  Nov 17 00:00   nightly.0
 1% ( 0%)     0% ( 0%)  Nov 15 16:00   hourly.1
 1% ( 0%)     1% ( 0%)  Nov 15 15:00   hourly.2

After the systemB(0016782130)_vol2.2 Snapshot copy is transferred from systemA to
systemB, both Snapshot copies exist on systemB. On systemA, however,
systemB(0016782130)_vol2.1 is no longer needed and is deleted; only
systemB(0016782130)_vol2.2 is retained to be used for the next transfer.


You can see a list of each SnapMirror Snapshot copy on the server, the qtrees it contains,
and the client sources of those qtrees and their timestamps by using the snap list -q
command.
You can use the snap list -o command to display the names, timestamps, and sources (if
they are copies) of the qtrees in a specified volume or at a path name.
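For instance (the volume and qtree names are hypothetical, and exact argument forms may vary by release), the commands are run as:
systemB> snap list -q vol2
systemB> snap list -o /vol/vol2/qtree3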

What SnapMirror restarts and retries are


In SnapMirror, a retry is an automatic attempt to start the transfer process after an interruption,
whether or not any data was successfully transferred. A restart is the resumption of a previous
transfer process from a restart checkpoint.
SnapMirror sets a restart checkpoint every 5 minutes during a transfer. SnapMirror restarts the
previous transfer where it left off, if the following conditions are met:
A restart checkpoint exists.
All Snapshot copies being transferred still exist.
The value for the restart mode in the snapmirror.conf file is set to always or is not set,
and the next scheduled update has not arrived.
If the conditions are not met, SnapMirror creates a new Snapshot copy and starts a new transfer.
If a scheduled transfer fails (for example, due to network failure), SnapMirror automatically
retries the transfer the next minute. If a transfer fails due to an error that renders it unfit for a
retry (for example, if a user aborts the transfer), or if the source denied the transfer for any
reason, the transfer is not retried the next minute. In such cases, an update is always attempted
according to the schedule specified in the snapmirror.conf file.
Note: If a manual update fails, the update is not tried automatically and the user is informed. The

user needs to reissue the command if an update is required.


After a reboot, SnapMirror does not automatically retry a transfer that was interrupted; however,
the next scheduled or manual transfer restarts it at that restart checkpoint, if the checkpoint is still
valid. An initial transfer can be restarted but will not be retried automatically. To restart an initial
transfer, enter the snapmirror initialize command again. Scheduled incremental updates
automatically retry the transfer.
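As a hedged illustration of where this restart mode is configured (the system names, volume names, and schedule are invented for this sketch), an /etc/snapmirror.conf entry might look like:
systemA:vol1 systemB:vol2 restart=always 15 * * *
Here the fields are source, destination, arguments, and a schedule of minute, hour, day-of-month, and day-of-week.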

What the snapmirror resync command does


You can use the snapmirror resync command to reestablish the connection between the source
and the destination. This command is applied after the SnapMirror relationship between the source
and destination is broken.
You can apply the snapmirror resync command to either the original SnapMirror destination or
the original source:
When applied to the original destination, the snapmirror resync command puts a volume or
qtree back into a SnapMirror relationship and resynchronizes its contents with the source without
repeating the initial transfer.
When applied to the source volume, the snapmirror resync command turns the source
volume into a copy of the original destination volume. In this way, the roles of source and
destination are reversed.
Note: Resynchronization is not possible if SnapMirror cannot find a common Snapshot copy on
the source and destination to use as the basis for resynchronization. SnapMirror generates a "No
common snapshot to use as the base for resynchronization" error message and

terminates the command. You must reinitialize the destination to establish the SnapMirror
relationship.
When you run the snapmirror resync command on the source, a reverse relationship from
the destination to the source is established. This resynchronizes all the updated content from the
destination to the source. However, you cannot resynchronize using the Snapshot copies taken on
the destination volume when the destination qtree was in a mirrored state.

Resynchronizing a SnapMirror relationship


You can use the snapmirror resync command to restore or redefine a SnapMirror source or
destination relationship that was broken with the snapmirror break command.
About this task

You might want to resynchronize a source and a destination volume or qtree under the following
circumstances:
When you change the current source to a different volume or qtree.
When you make a destination volume writable for application testing and then want to make it a
SnapMirror destination again.
When you need to recover from a disaster that disabled the source.
When you want to reverse the functions of the source and the destination.
Note: When you perform resynchronization for the destination system, the contents on the

destination are overwritten by the contents on the source.


Steps

1. From the destination system, enter the following command:


snapmirror resync [options] [dest_system:]{dest_volume | /vol/qtree_path}
options can be any of the following:
-n does not execute the resynchronization, but displays what would be done if the snapmirror
resync command were run. You can use this option to find whether you have a Snapshot copy
on the source and the destination that can be used as the newest common Snapshot copy (base
Snapshot copy) so that you can resynchronize a specific SnapMirror relationship.
-f forces the operation to proceed without prompting you for confirmation.
-k n sets the maximum transfer speed to n kilobytes per second. This option has the same
effect as the kbs argument in the /etc/snapmirror.conf file.
-S [source_system:]{source_volume | qtree_path} specifies the system and volume or
qtree you want to use as the source for resynchronization.
The source specified by the -S option must match a source entry in the
/etc/snapmirror.conf file. If entries exist but the source does not match, the operation displays
an error message and terminates. If there is no entry for the specified source, the command
runs.
Note: If the -S option is not set, the source must be specified in the
/etc/snapmirror.conf file. If it is not specified, the operation displays an error message and
terminates.
-c snapshot_name creates a Snapshot copy (with the name snapshot_name) of a qtree on
the destination after the resynchronization transfer completes (so that it does not compete with
any ongoing updates).
SnapMirror does not lock or delete this Snapshot copy.
Note:
snapshot_name cannot be minutely.x, hourly.x, nightly.x, or weekly.x,
because these names are reserved for scheduled Snapshot copies.
This option is valid only for a qtree SnapMirror replication.
-s snapshot_name specifies an existing Snapshot copy of a source qtree to be transferred.
This prevents the normal action of the source creating a Snapshot copy to transfer. SnapMirror
does not lock or delete this Snapshot copy.
Note: This option is valid only for a qtree SnapMirror replication.
dest_system is the name of the destination system.
dest_volume or /vol/qtree_path is the destination volume or qtree. If it is a scheduled
destination as specified in the /etc/snapmirror.conf file, that source volume or qtree is
considered to be the source. If the destination volume or qtree specified is not in a scheduled
relationship, then the -S option must be used to provide a source.
SnapMirror identifies the newest common Snapshot copy, which is to be used as the base for
resynchronization, and generates a list of Snapshot copies on the destination volume that meet the
following criteria:
The Snapshot copies are newer than the base Snapshot copy and are deleted.
The Snapshot copies are older than the base Snapshot copy and have already been deleted
from the source.
Note: For qtree resynchronization, only the common Snapshot copy is displayed.

SnapMirror then prompts you to choose whether to continue.


Example

SnapMirror resynchronization for volumes


systemB> snapmirror resync systemB:vol2
The resync base snapshot will be vol2(0001234567)_d.4
These newer snapshots will be deleted from the destination:
hourly.0
hourly.1
These older snapshots have already been deleted from the source and
will be deleted from the destination:
vol2(0001234567)_d.3
Are you sure you want to resync the volume?
Example

SnapMirror resynchronization for qtrees


systemB> snapmirror resync -S systemA:/vol/vol2/qtreeBob systemB:/vol/vol3/qtreeBak
The resync base snapshot will be vol2(0001234567)_d.4
Data could be lost as a result of this operation.
Are you sure you want to resync the volume?

How the snapmirror resync command helps minimize data loss


The snapmirror resync command enables you to reestablish a broken SnapMirror relationship
without a lengthy baseline transfer.


This command offers the choice of either source or destination to serve as the source in the restarted
SnapMirror relationship. It finds the newest common Snapshot copy (NCS) shared by the two
volumes or qtrees, and removes all newer information on the system on which the command is run.
Note: The snapmirror resync command requires that the two volumes or qtrees have at least
one Snapshot copy in common. You can resynchronize a volume or qtree to any other volume or
qtree as long as both have at least one Snapshot copy in common.
Resynchronization causes the loss of all data written to the destination after the base Snapshot copy
was made. The snapmirror resync command informs you what data might be lost during the
resynchronization and requests permission to proceed. If you want to save the data on the destination,
you can stop the resynchronization, manually copy the desired data elsewhere, and reissue the
snapmirror resync command to the destination.

Resynchronization of FlexVol volumes


If there are any changes on the destination system that you need to replicate to the source, you can
use qtree SnapMirror to resynchronize data.
You can resynchronize the data at the source with any changes made at the destination by using the
snapmirror resync command. When resynchronizing data, the Data ONTAP version restrictions
apply in case of volume SnapMirror.

Retrieving data for disaster recovery: A special use of snapmirror resync

When disaster disables the source of a SnapMirror relationship, you can use the snapmirror
resync command as part of a strategy to update the repaired source and reestablish the original
configuration of the systems.
About this task

In the following example, the original source (the one disabled by the disaster) is systemA:/vol/volA
and the original destination is systemB:/vol/volB. You use a combination of snapmirror
break and snapmirror resync or snapmirror initialize commands to perform the
following tasks:
Temporarily make systemB:volB the source and systemA:volA the destination to restore
mirrored data back to systemA:volA and to update systemA:volA.
Restore systemA:/vol/volA and systemB:volB to their original roles as SnapMirror source
and SnapMirror destination volume.
In this example, all data from the last scheduled SnapMirror Snapshot copy before the source was
disabled and all the data written to systemB:/vol/volB after it was made writable is preserved. Any
data written to systemA:/vol/volA between the last SnapMirror Snapshot copy and the time that
systemA:/vol/volA was disabled is not preserved.
Steps

1. After the source volume (in this case, systemA:volA) is disabled, use the snapmirror break
command to make the destination volume, systemB:volB, writable.
snapmirror break systemB:volB

2. Redirect the clients of source systemA to source systemB.

The former clients of systemA are now accessing and writing to systemB.
3. Temporarily make the original source volume a read-only destination volume.
If systemA:volA is recoverable, and its data is intact, then use the snapmirror resync
command on systemA to resynchronize systemA with systemB.
snapmirror resync -S systemB:volB systemA:volA

If systemA:volA is unrecoverable, make a new volA on systemA, and from systemA,
initialize systemA:volA from systemB.
snapmirror initialize -S systemB:volB systemA:volA
This command also makes systemA:volA a read-only destination.
Note: These commands need to be performed on the original source system.

4. Redirect the clients from systemB to systemA.


The clients cannot access or write to systemA:volA, but they are no longer writing new data to
systemB:volB.
5. Update systemA:volA from systemB to transfer the latest data from systemB.
Example

Perform the following step from systemA:


snapmirror update -S systemB:volB systemA:volA

6. Use the snapmirror break command to make systemA:volA writable. On systemA, enter


the following command:
snapmirror break volA

7. On systemB, use the snapmirror resync command to make systemB, the original


destination, the destination again.
snapmirror resync volB

Comparison between SnapMirror and the vol copy command


You can use SnapMirror or the vol copy command to copy volumes from the source to the
destination. There are some similarities between them. However, there are significant differences
too. The following points list the similarities between SnapMirror and the vol copy command:
Both enable you to copy Snapshot copies from a source to a destination volume.
The source and destination volumes should both be either traditional volumes or FlexVol
volumes.
Note: You can use qtree SnapMirror to replicate data between traditional volumes and FlexVol volumes.

The volumes should be of the same type.


Transfer of LUN clones using qtree SnapMirror

In versions earlier than Data ONTAP 7.3, SnapMirror considers each LUN clone as a new LUN.
Therefore, during the initial transfer of the LUN clone, all data from the LUN clone and the original
Data ONTAP LUN is transferred to the secondary system.
For descriptions of data backup and restore on volumes containing Data ONTAP LUNs, see the Data
ONTAP SAN Administration Guide for 7-Mode.
The transfer of LUN clones using SnapMirror works the same way as the transfer of LUN clones
using SnapVault in the non-optimized mode.
Note: Qtree SnapMirror transfers LUN clones in the non-optimized mode only. Qtree SnapMirror

does not have the option of optimized transfers.


The use of snapmirror resync for restoring data to a source qtree with LUN clones is not
supported. When you replicate qtrees with LUN clones, each LUN clone within the qtree is stored as
a LUN within the destination qtree. Therefore, when you recover data from such a destination qtree,
the original LUN clones are restored as complete LUNs.
Attention: If you attempt to recover data from the destination to a source qtree with LUN clones,

using a snapmirror resync operation, the system displays the following error message:


cannot resync as qtree has one or more lun clones
Aborting resync.

To recover data for a qtree with LUN clones, you can replicate the destination qtree to a new qtree.
Attention: For a qtree with LUN clones, ensure that the volume has enough free space to store the
LUN clones as complete LUNs before you initiate data recovery using qtree SnapMirror.

Managing SnapMirror operations through the NetApp Management Console data protection capability

You can use the NetApp Management Console data protection capability graphical user interface
to perform some of the management tasks in a SnapMirror environment.
You can perform the following tasks by using the NetApp Management Console data protection
capability.
Creating and managing asynchronous SnapMirror relationships.
Creating and managing policies for replication and failover.
Reporting on relationships and lag times.
Configuring alerts about replication state changes.
Scheduling replica updates.
Visualizing relationships.
Simplifying data services recovery after a failure.
For more information, see the OnCommand Unified Manager Guide to Common Provisioning
and Data Protection Workflows for 7-Mode.

Managing SnapMirror operations through the OnCommand System Manager

You can use the OnCommand System Manager to perform different SnapMirror operations, such
as creating, deleting, and managing SnapMirror relationships.


You can perform the following tasks as part of managing SnapMirror relationships:
Initializing SnapMirror destinations
Updating SnapMirror relationships
Quiescing SnapMirror relationships
Resuming quiesced SnapMirror relationships
Breaking SnapMirror relationships
Resynchronizing and reverse resynchronizing SnapMirror relationships
Aborting SnapMirror data transfers

Use of SnapMirror with SnapDrive


If you are using SnapDrive software, you can use SnapMirror to replicate your data.
SnapDrive supports the use of volume SnapMirror in the asynchronous mode.
For volume SnapMirror in the synchronous or semi-synchronous mode, you need to use
SnapDrive 5.0.
SnapDrive does not support qtree SnapMirror replication.

How FlexClone volumes impact SnapMirror


You can create FlexClone volumes from SnapMirror source or destination volumes. However, you
should understand the behavior of the resulting FlexClone volume before creating it.
FlexClone volumes create a nearly instantaneous replica of a volume within the same aggregate.
The following two sections list the key differences between cloning volumes that use qtree
SnapMirror and cloning volumes that use volume SnapMirror.
Volume SnapMirror and FlexClone
When a clone is created on a volume SnapMirror destination system, Data ONTAP locks the
Snapshot copy that the clone is based on. To protect the clone, Data ONTAP does not allow you
to delete this Snapshot copy. Data ONTAP also puts a soft lock on the corresponding Snapshot
copy on the SnapMirror source system.
Although Data ONTAP will not delete this Snapshot copy that is the source of a clone, you can
manually delete this Snapshot copy on the SnapMirror source volume. If you delete the Snapshot
copy on the source volume, the next SnapMirror update will fail because it attempts to delete the
corresponding Snapshot on the destination volume. All SnapMirror updates to the destination
volume continue to fail until the clone is destroyed or split.
Attention: Delete Snapshot copies carefully when SnapMirror and FlexClone are involved.

Always create a clone from the most recent Snapshot copy in the SnapMirror destination, because
that copy is guaranteed to exist in the source volume.
If a FlexClone volume is created from a Snapshot copy in the destination volume that is not the
most recent copy, and that Snapshot copy no longer exists on the source volume, all the
SnapMirror updates to the destination volume will fail until the clone is destroyed or split. This
happens because SnapMirror update attempts to delete the snapshot copy on the destination
system, which is locked due to the creation of a FlexClone volume.

Qtree SnapMirror and FlexClone
Qtree SnapMirror does not maintain the same Snapshot copies of the volume on the source and
destination systems. Because of this characteristic, a FlexClone volume created from a Snapshot
copy on the qtree SnapMirror destination does not lock that Snapshot copy on the source volume.
Accordingly, deleting that Snapshot copy on the source volume has no impact on the replication
or the destination volume. Therefore, the advantage of qtree SnapMirror is that a FlexClone
volume can live for a long time on the SnapMirror destination system without space implications
on the source system.
If a Snapshot copy is not specified when creating a FlexClone volume on the qtree SnapMirror
destination volume, the vol clone command creates a new Snapshot copy on that volume.
If a FlexClone volume is created using the qtree SnapMirror baseline Snapshot copy, the qtree in
the FlexClone volume will be writable.
If a FlexClone volume is created on the qtree SnapMirror destination volume without specifying a
backing Snapshot copy for the clone creation, a separate SnapMirror relationship appears in the
snapmirror status command output.

Setting up a SnapMirror relationship between two FlexClone volumes


A SnapMirror relationship between two FlexClone volumes that have a common base Snapshot copy
lets you establish a SnapMirror relationship without transferring the common Snapshot data again
to the destination system. Both FlexClone volumes should be created from the SnapMirror source
and the corresponding SnapMirror destination volumes with a common base Snapshot copy.
About this task

Setting up a SnapMirror relationship between the two FlexClone volumes does not consume any
extra space on the parent aggregate for shared Snapshot copies. In this way, you save disk space and
network resources. If you delete any of the inherited Snapshot copies from the parent system or
transfer new data from the SnapMirror source clone to the SnapMirror destination clone, additional
disk space is consumed.
Step

1. Establish the SnapMirror relationship between the two FlexClone volumes by entering the
following command on the destination FlexClone volume:
snapmirror resync -S source:sourcevol destination:destinationvol
Note: The snapmirror resync command cannot establish the SnapMirror relationship if the
background scanners are still processing the parent destination volume.

Example for setting up a SnapMirror relationship between two FlexClone


volumes
You have two systems SystemA and SystemB. The SystemA system has a volume volA and
the SystemB system has a volume volB. The volA and volB volumes are in the SnapMirror
relationship where volA is the source volume and volB is the destination volume. A FlexClone
volume cloneA is created from volA and a FlexClone volume cloneB is created from volB with
a common base snapshot.


Establish the SnapMirror relationship between two FlexClone volumes by entering the
following command on SystemB:
snapmirror resync -S SystemA:cloneA cloneB

If you are updating the /etc/snapmirror.conf file on SystemB, add the following entry to
the file:
SystemA:cloneA SystemB:cloneB

After updating the /etc/snapmirror.conf file, the following command establishes the
SnapMirror relationship between the two FlexClone volumes:
snapmirror resync cloneB
Note: The SnapMirror relationship where cloneB is the SnapMirror source and cloneA is the

SnapMirror destination is also supported. The cloneB FlexClone volume is created from the
parent destination volume and the cloneA FlexClone volume is created from the parent
source volume.

How SnapMirror works with the dump command


You can use the dump command to back up data from a SnapMirror destination volume. The dump
command picks the most recent Snapshot copy and copies that to tape.
Note: The use of the dump command to copy data from a synchronous SnapMirror destination

volume is not supported.


You can back up any Snapshot copy displayed by the snap list command on the destination. You
can also create a Snapshot copy on the source volume, copy the Snapshot copy to the destination, and
use the dump command to back up this Snapshot copy from the destination to tape.
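As a rough, hedged sketch only (the tape device rst0a, the volume, and the Snapshot copy name are hypothetical, and dump option usage varies by site), a level-0 backup of a Snapshot copy on the destination could look like:
systemB> dump 0f rst0a /vol/vol2/.snapshot/nightly.0/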

SnapMirror over Fibre Channel


SnapMirror over Fibre Channel enables you to use the SnapMirror feature over Fibre Channel in
a SAN environment.
SnapMirror over Fibre Channel includes all the features that are available with SnapMirror over
Ethernet. The operational concepts and the command interfaces are identical for both. However,
there are a few differences between them.

Hardware requirements for SnapMirror over FC


You must install a Fibre Channel (FC) adapter on the system for using SnapMirror over FC.
You can install any of the following adapters on the SnapMirror source and destination systems:


Troubleshooting of SnapMirror issues


When using SnapMirror, you might face issues when you change the name of the destination volume.
There can also be issues when SnapMirror Snapshot copies are deleted.

What happens if you change a SnapMirror destination volume name


If you change the name of a SnapMirror destination volume, you need to manually correct the
SnapMirror relationships affected by the change. SnapMirror is unable to replicate source volume
data to a newly named destination volume whose configuration information is incomplete.
In the following case, the destination, volJobak, was renamed to volStatbak. After the renaming,
the snapmirror status command does not display the source. Instead, the entry is shown with a
dash (-) in the source column.

systemB> vol rename volJobak volStatbak
volJobak renamed to volStatbak
you may need to update /etc/exports
systemB> snapmirror status volJobak
Snapmirror is on.
systemB> snapmirror status volStatbak
Snapmirror is on.
Source    Destination           State          Lag        Status
-         systemB:volStatbak    Snapmirrored   00:03:22   Idle

If you change the volume name of a SnapMirror source or destination, you need to make the
following changes.
1. Update the snapmirror.conf file, if there is an old entry.
2. Use the snapmirror release command to update the old destination name, and
SnapMirror releases the soft lock and the old Snapshot copy.
3. Use the snapmirror update command on the new volume name, and the status registry is
updated with the new volume name.
4. Update the /etc/exports file.
Note: If a system is running at its limit of concurrent transfers, and you attempt to initiate more
transfers by using the snapmirror update command, the attempted transfer will fail.

Accidental deletion of SnapMirror Snapshot copies


SnapMirror Snapshot copies stored on either the SnapMirror source or destination location must not
be deleted. If the base Snapshot copy (the newest common Snapshot copy) is accidentally deleted from
either the source or destination location, you can attempt recovery.
You might be able to recover without reinitializing the destination by breaking the SnapMirror
relationship and then resynchronizing the source and the destination.
As long as there is at least one Snapshot copy common to both the source and the destination,
resynchronization will succeed.

If there is no Snapshot copy common to both the source and the destination, you need to use the
snapmirror initialize command over the network. Or, if the source and destination are
volumes, you must use the smtape backup command to store the source volume on tape and then
use the smtape restore command to restore the volume from the tape to the destination.

Space issues when volume space guarantee is enabled for a destination volume

When volume space guarantee is enabled on a SnapMirror destination volume, if the destination
volume is larger than the source volume, the destination volume consumes more space than the
source volume. You can reduce the size of the destination volume by using the vol size command.
Before you reduce the size of the destination volume, compare the size of the destination and source
volumes by using the vol status -b command. This is because if volume space guarantee is
enabled on a volume, space is allocated to that volume based on the size of that volume.
For example, if you create a destination volume that is larger than the source volume or you reduce
the size of the source volume after a volume SnapMirror relationship is established, the destination
volume is allocated more space than the source volume. In such a scenario, the VolSize of the
destination volume is larger than the FSSize, or the size of the file system, on the destination
volume.
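As a hedged sketch (system names, volume names, and the new size are hypothetical), you could compare the volumes and then shrink the destination:
systemA> vol status -b vol1
systemB> vol status -b vol2
systemB> vol size vol2 900g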

Top 5 performance killers of NetApp SnapMirror exposed

If you are responsible for helping maintain the health and performance of NetApp SnapMirror
within your storage and/or across your network, here are 5 potential performance killers (and
their cure!) which you should be aware of. They are ...
(1) Latency. (2) Packet Loss. (3) Limited Bandwidth. (4) Competition from Other Apps, and
(5) Competition from Other SnapMirror Transfers.
Understanding the perils and cure for each of these will help ensure that you can sleep better at
night knowing SnapMirror will have the performance it needs to meet your target Recovery
Point Objectives each and every time.
Performance Killer #1. Latency.
We'll start with an easy one... a performance killer that is relatively common and well-understood (both with respect to its effects and its cure): latency.


When using any TCP-based replication application, an improperly-configured TCP window size
can literally kill end-to-end throughput. And the reduction in throughput is increasingly severe
as the latency between source and destination filers increases.

For example, imagine you have 1 Gbps of WAN capacity available to SnapMirror. And let's assume for the moment that it is a perfectly
clean link with 0% packet loss. (We'll get to that next.) If SnapMirror's default TCP window
size (1,994,752 bytes) were left unchanged, the chart on the right shows the maximum
throughput attainable for any given SnapMirror transfer as a function of latency. If, on the other
hand, you were to adjust the TCP window size appropriately (as described by NetApp), you
would be able to sustain up to 1 Gbps. (Again, this assumes you don't have to contend with
performance killers #2 through #5 below.)
Within this topic, there are a couple of important caveats to keep in mind.
1. It is not possible to adjust the TCP window size for qtree-based SnapMirror replication.
2. Increasing the TCP window size restricts the maximum number of SnapMirror sessions
that can be active simultaneously.
One way you can overcome the two obstacles above and completely eliminate the need to tune
TCP window sizes on your filer is to use a WAN optimization device. Such devices give you a
single place to tune the performance of SnapMirror from all filers as well as for any other TCP
applications traversing the WAN. There is no need to adjust the SnapMirror TCP window sizes
away from their default values, giving you the ability to retain the maximum number of
simultaneous SnapMirror flows supported, and WAN performance is improved for all forms of
SnapMirror (including Qtree-based SnapMirror, or QSM).
Performance Killer #2. Packet Loss.
This performance killer is quite a bit harder to contend with from within the storage
infrastructure itself. Quite impossible, actually.


As a quick definition, a "packet loss event" occurs any time a TCP packet fails to arrive at its
destination. If SnapMirror is using WAN infrastructure such as MPLS or VPN over the Internet,
packet loss events are a common occurrence.
The performance challenge associated with these packet loss events is that TCP interprets them
as a sign of congestion. And, by design, TCP will then cut its rate of transmission by 1/2, then
slowly ramp up its transmission rate again until the next packet loss event. The result? Instead
of having a nice consistent rate of transmission that fully utilizes available bandwidth, you end
up with a "saw tooth" pattern for end-to-end throughput, where available WAN capacity can be
severely under-utilized. (This blog post provides a nice explanation of this behavior.)

The chart on the left shows just how dramatically SnapMirror throughput can decrease as a function of increased latency AND packet
loss. (Note: 0.1% packet loss is the typical *average* SLA for MPLS circuits. VPN over the
Public Internet can commonly exhibit 1-5% packet loss.)
Overcoming this performance killer can't really be done within the storage infrastructure, as it
involves transforming the behavior of the TCP protocol itself so that it is impervious to packet
loss events. Here's where a WAN optimization device is basically a "must have".
The data in the table comes from a performance study conducted by a NetApp & Riverbed VAR
to determine how well a Steelhead WAN Optimization device can improve end-to-end
throughput when faced with real-world levels of latency and packet loss. (Note: It's not fun
being on the bad side of a 60x performance delta!)


Performance Killer #3. Limited Bandwidth.


With data growing 2x every 2 years, according to IDC's digital universe study Extracting value
from chaos, and IT budgets remaining flat YoY for most organizations, according to ESG's most
recent IT Spending Intentions Survey (January 2013), you're not alone if you find yourself in a
situation where you just don't have enough bandwidth to meet the ever-increasing throughput
requirements for NetApp SnapMirror.
Here are a few solutions that can help curb the growth of WAN-bound traffic for NetApp
SnapMirror.
1. SnapMirror network compression. This can generally yield anywhere from 1.5-3x
reduction in WAN-bound traffic, but can cost a hefty amount of Filer CPU resource,
depending on the compressibility of the data and the amount of WAN capacity available
to SnapMirror. For example, in one tech post produced by NetApp, it was shown how
55% of a FAS 3070's CPUs would be consumed by SnapMirror compression when
transferring a 3.5x compressible data set over a 155 Mbps WAN link.
2. WAN optimization appliance. Through both data deduplication and compression
techniques, bandwidth savings of 4-8x are typical. Note: It is important to ensure that the
WAN optimization device has been right-sized to handle the ingress requirements for
SnapMirror AND any other WAN traffic it is processing. Also look for devices which
leverage solid-state drives (SSD), as opposed to spinning disk drives (HDD). These tips
will help you attain great results and even unabashed joy, such as that experienced by this
particular blogger.
Performance Killer #4. Competition from Other Apps.
One of the strategies network architects employ to curb the costs of WAN infrastructure is
consolidating multiple WAN links into fewer links which are shared by multiple applications. Too
often, however, this type of sharing does not equal caring when it comes to maintaining the
performance SLAs required by apps which may have previously had their own dedicated WAN
infrastructure (e.g. DR apps).
Network Quality of Service (QoS) mechanisms are the typical solution to this problem. Network
QoS policies related to enforcing minimum bandwidth and/or maximum latency guarantees can
be implemented in multiple places:
1. Within the WAN itself, such as that delivered via MPLS WAN infrastructure. Here, there
are typically a relatively small number of "classes" which you can map a given
application's traffic to (eg. Default, Bulk, Preferred, Critical). Mapping is typically done
by tagging the traffic with predefined DSCP tags understood by the MPLS network.
2. WAN router. Here there is generally more flexibility with regards to the number of types
of traffic classes possible. Traffic is identified and characterized according to its
network attributes.
3. WAN Optimization device. The best Network QoS solutions are capable of identifying
and characterizing traffic types based on their actual payload signatures. This is known
as Deep Packet Inspection (DPI) and carries a number of advantages. First, it eliminates
the need to know what the network attributes (IP address, subnet, IP port, etc.) are which
identify a given application. And further, it allows deeper classification in cases when
multiple applications share the same network attributes. For example, many different
apps utilize HTTP and port 80.
Performance Killer #5. Competition from other SnapMirror transfers.
Sometimes we are our own worst enemy. Well, the same could be said for SnapMirror in certain
circumstances. Consider the following story from one of our customers, a legal firm based in
California.
"The NetApp filer in our headquarters data center hosts a number of applications. Email is
one of our most critical apps, and its replication cycle runs every 1/2 hour. Then we've got a
different set of volumes for user home directories. And then there's another set of volumes which
store data for our legal cases. We like to replicate the volumes which store case data every 1/2
hour as well. But one of our challenges is that when lawyers are working a case, they can
unpredictably put a bunch of new data onto these volumes. This then floods the network with
new case data to be replicated, and in turn causes our email replication to get backed up,
sometimes by several hours. We would be fine to have the replication cycles for the case data
elongated during these spikes. But we don't want to compromise the RPO of the email volumes.
There's just not a good way to manage these spikes. We do use network QoS to dynamically
shape traffic for SnapMirror as a whole amongst a host of other consumers of the WAN (VOIP,
File Sharing, Web). But we can't take that to the next level to shape based on source volume as
well."


A solution to this problem recently emerged with the latest release of the Riverbed Optimization
System (RiOS), the software that powers Steelhead WAN optimization appliances. With RiOS
version 8.5, Network QoS capabilities have been expanded for NetApp SnapMirror such that you
can actually provide fine-grained control of NetApp SnapMirror on a per-volume (or qtree) basis, not just for
SnapMirror as a whole. There's a set of new admin panes designed with storage administrators in
mind, where they can define relative priority and minimum bandwidth guarantees for different
volumes, as appropriate for their business. And since the Steelhead devices typically have
visibility into all the traffic traversing a WAN, they can dynamically expand and contract the
amount of bandwidth allocated to a given volume based on whatever else happens
to be going across the link at that time. There's no need to set "fixed" bandwidth throttles which
can end up being too low during off-hours and too high during peak production periods.
Here is one blogger's take on why this is so cool. :)
To Conclude...
I hope this post has helped highlight some of the more common pitfalls you can encounter when
maintaining and/or expanding your NetApp SnapMirror deployments. And, more importantly, I
hope it has provided you with an understanding of how to effectively combat each one.
Reply with your comments on ways you've been able to ensure consistent and fast performance
for SnapMirror in your environments!
(About the author: Joshua Dobies is Group Product Manager for Steelhead WAN Optimization
products at Riverbed Technology with a focus on Data Center-to-Data Center and DR solutions.)
snapmirror.allow and snapmirror.access

To set up a SnapMirror trust relationship between the source filer and the destination filer, you have to
allow the destination to pull from the source. In other words, the source filer has to allow the
destination filer to replicate the data from the volume or qtree. There are basically two ways
to do it:
snapmirror.access

snapmirror.access is an option that lets us provide the list of filers that have permission to
pull data from the source filer. To print the current setting, just go with:

filerA> options snapmirror.access
snapmirror.access           host=filerB,filerC AND if=vif-10,vif-11
What does it mean? It means that filerB and filerC have access (as SnapMirror destinations) to
pull data from the SnapMirror source volumes/qtrees. The data can be accessed only through the network
interfaces vif-10 and vif-11 (again, it is just an example).


If you would like to set it up by yourself, you can just go with:

filerA> options snapmirror.access host=filerC,10.12.12.13


filerA>
filerA> options snapmirror.access
snapmirror.access           host=filerC,10.12.12.13
snapmirror.allow

snapmirror.allow is a file. The location of the file is /etc/snapmirror.allow


and it can be edited with your favorite wrfile command :). The syntax of the
file is pretty simple:
filerA> rdfile /etc/snapmirror.allow
filerB
filerC
But there is one trick. If you would like to use snapmirror.allow you have to
set the snapmirror.access option, because this is the first thing that is
checked.
filerA> options snapmirror.access legacy
If the snapmirror.access option is not set to legacy, the filer will not check
the snapmirror.allow file at all.
Troubleshoot the access

The first thing would be to check if a proper license is installed on both


source and destination, but I'm sure you already checked that.
If you use the host-name instead of IP make sure that the filer can resolve
the name and the host is reachable, the easiest way is to ping it:
filerA> ping filerC
filerC is alive
If you can ping the host by IP but not by the host-name, make sure the filer
can resolve the name (check /etc/nsswitch.conf and optionally /etc/hosts).


Considerations before using synchronous SnapMirror


You need to consider certain issues when planning to use SnapMirror for synchronous
replication.

One source volume cannot have synchronous SnapMirror relationships to multiple destination
volumes.

You cannot create a synchronous SnapMirror relationship between FlexVol volumes within the
same system or within the same HA pair.

You must ensure that the source and destination have the same version of Data ONTAP
installed.

You cannot use synchronous or semi-synchronous SnapMirror to replicate volumes that use
deduplication.

A volume with a synchronous or a semi-synchronous SnapMirror relationship should not be
placed in the same aggregate as a volume using deduplication.


How to correctly delete a SnapMirror relationship


Perform the following steps to correctly delete a SnapMirror relationship:
7-Mode Systems:
1. Quiesce and break the SnapMirror.
o Quiesce using the following command: snapmirror quiesce destination
o Break using the following command: snapmirror break destination
2. Run the snapmirror status -l command to find what the base snapshot is:
destfiler> snapmirror status -l test_dst
Snapmirror is on.

Source: srcfiler:test_src
Destination: destfiler:test_dst
Status: Broken-off
Progress: -
State: Snapmirrored
Lag: 00:04:20
Mirror Timestamp: Wed Jun 17 15:18:02 EDT 2009
Base Snapshot: srcfiler(0101168974)_test_dst.5 <--- this is the
baseline snapshot
Current Transfer Type: Scheduled

Current Transfer Error: None
Contents: Replica
Last Transfer Type: Scheduled
Last Transfer Size: 76 KB
Last Transfer Duration: 00:00:04
Last Transfer From: srcfiler:test_src

3. Delete the baseline snapshot from the source storage system's volume:
srcfiler> snap delete test_src srcfiler(0101168974)_test_dst.5

-OR-
Run the snapmirror release source_vol destfiler:dest_vol command on the
source. This will delete the baseline snapshot on the source volume as well.
4. Delete the relationship entry by editing the entry from the snapmirror.conf file on the
destination storage system, or delete the schedule from OnCommand System Manager.
After a minute or two, run the snapmirror status command. The relationship should
no longer be present.
destfiler> snapmirror status test_dst
Snapmirror is on.

Often, the baseline snapshot has to be manually deleted from the destination volume as
well.
Occasionally, a stale, orphaned entry remains in snapmirror status on the source
storage system:
Source: sourcefiler:volume
Destination: destinationfiler:volume
Status: Transferring
Progress: 0 KB
State: Source
Lag: -
Mirror Timestamp: -
Base Snapshot: -
Current Transfer Type: -
Current Transfer Error: -
Contents: -
Last Transfer Type: -
Last Transfer Size: -
Last Transfer Duration: -
Last Transfer From: -

Clustered Data ONTAP Systems:


1. Quiesce and break the SnapMirror.
o Quiesce using the following command on the destination cluster: snapmirror
quiesce -destination-path <Destination_Path>
eg:
vs2.example.com::> snapmirror quiesce -destination-path vs2.example.com:dept_eng_mirror1


o Break using the following command on the destination cluster: snapmirror


break -destination-path <Destination_Path>
eg:
vs2.example.com::> snapmirror break -destination-path
vs2.example.com:dept_eng_mirror1

2. Delete the snapmirror relationship by using the following command on the destination
cluster: snapmirror delete -destination-path <Destination_Path>
eg:
vs2.example.com::> snapmirror delete -destination-path vs2.example.com:dept_eng_mirror1

3. To remove source information for the SnapMirror relationship and also delete the base
snapshot, use the following command on the source cluster: snapmirror release -destination-path <Destination_Path>
eg:
vs1.example.com::> snapmirror release -destination-path
vs2.example.com:dept_eng_mirror1

4. Run snapmirror show to confirm that the relationship doesn't exist anymore:
cluster2::> snapmirror show

If the volume has been removed from the SnapMirror destination without following this process,
the solution is to issue the snapmirror release command from the source storage system.
Another workaround is to create the destination volume on the destination storage system. Note:
The re-created volume need not be of the same size as the earlier one. Once it is re-created with
the same name as before, run the snapmirror release command from the source storage
system. When the release command is run from the source storage system, an error stating the
path does not exist is encountered.
If you receive an unknown status after running snapmirror status, you can follow
KB1012157: How to get rid of an unknown entry in SnapMirror status? to clean up the stale
entries.

How to troubleshoot SnapMirror compression


Description

Compression is failing to enable; no compression is displayed when running snapmirror
status -l during an active transfer or when viewing the SnapMirror log files.
There could potentially be a problem with enabling compression even if SnapMirror transfers are
working.
This can be due to a few different reasons, below is a list of possibilities:

1. The syntax of the snapmirror.conf file on the destination is incorrect
2. The ICS connection required for compression to enable isn't able to establish
successfully. This could be due to something along the path between source
and destination filtering on port 10565. Using a VPN tunnel and performing
network address translation can also contribute to such an issue.
3. There is an issue with name resolution between source and destination
4. Trying to enable on a snapmirror relationship that is not supported
Procedure

Scenario 1
The syntax of the destination system should be similar to the following:
connection_name=multi(src_system,dst_system)
connection_name:src_vol dst_system:dst_vol compression=enable * * * *

Note: Create a connection name for ICS and make sure that the name of this connection does not
match either the source or destination system names.
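For example, a pair of entries following this syntax might look like the following (the connection name, system names, volume names, and schedule are purely illustrative):
sm_conn=multi(srcfiler,dstfiler)
sm_conn:vol_src dstfiler:vol_dst compression=enable 0 23 * *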
Scenario 2
If it is suspected that something is filtering traffic between source and destination, then contact
your network administrators and have them examine any firewalls, vpn concentrators, etc. for
any lost traffic between source and destination. Network address translation can affect the
establishment of the multipath connection as well. Oftentimes, the quickest and easiest test to
perform is to gather matching (i.e. simultaneous) packet traces from both source and destination
systems at the same time:
a. Start the trace on both source and destination systems via the pktt command:
pktt start all -d /etc/log

b. Initiate a snapmirror update


c. Stop the trace
pktt stop all

d. Retrieve the .trc file from the storage systems. This can be done via CIFS and NFS.
Note: Using the pktt syntax above puts the packet trace file(s) in the /etc/log directory. Make
sure there is enough room in the volume to accommodate a large trace. If there is not, then
specify another directory with the -d switch or filter based on source and destination IP
addresses. For more information on using pktt refer to the man pages.
If the ICS connection fails, notifications similar to the following is seen in the messages log file:


Source:
[NetApp_Source: snapmirror.src.multipath.connErr:error]: SnapMirror is unable
to set up a multipath/failover connection from SRCvol to
Netapp_Destination:DRvol, SnapMirror is resorting to a single TCP connection.
Please ensure ports 10565 and 10566 are open in the firewall configuration.

Destination:
[Netapp_Destination: snapmirror.dst.multipath.connErr:error]: SnapMirror is
unable to setup a multipath connection to system, resorting to the standard
single TCP connection.

SnapMirror transfers can work while getting the multipath errors, but compression will not.
Scenario 3
If there is not an entry in the /etc/hosts file, update it with the appropriate information. Check the
hosts file by running the following command:
rdfile /etc/hosts

Also, if using IP addresses is desired, turn SnapMirror IP checking on:


options snapmirror.checkip.enable on

Note: Make sure that if network address translation is being used between source and
destination, that the hosts file is updated to reflect the appropriate IP/name mappings on each
end.
Scenario 4
SnapMirror compression is supported on storage systems with Data ONTAP versions 7.3.2 and
later. It is currently only supported with asynchronous volume SnapMirror.

Symptoms
Destination request while SnapMirror or SnapVault is off on the source
Destination request while SnapMirror or SnapVault is not licensed on the source
Destination request with incorrect version
Request from X denied, not listed in /etc/snapmirror.allow or snapvault.access
Source volume requested offline or nonexistent volume


Request denied. Previous request is still processing


Unknown system
Denied by source system
Destination volume too small
There is insufficient memory on this system
Transfer aborted: transfer attempted from busy destination/source
Destination is not in a snapmirrored state
SnapVault update failed because the destination path is not an initialized replica qtree
Unable to translate Unicode pathname
SnapVault qtree replica is quiescing forever

Cause
--

Solution
Common problems and errors on the source storage system

Destination request while SnapMirror or SnapVault is off on the source


Enable the service using the options snapmirror.enable or snapvault.enable.

Destination request while SnapMirror or SnapVault is not licensed on the source


Install the required license(s) on the source storage system.

Destination request with incorrect version


The Data ONTAP version is different between the source and destination storage
systems. The destination of a VSM relationship must run a version of Data ONTAP that is
equal to or more recent than the source.

Volume SnapMirror Source    Volume SnapMirror Destination    Replication Possible?
Data ONTAP 7.2.x            Data ONTAP 7.3.x                 Yes
Data ONTAP 7.3.x            Data ONTAP 7.2.x                 No
Data ONTAP 7.3.x            Data ONTAP 7.3.x                 Yes
Data ONTAP 7.3.x            Data ONTAP 8.0.x                 Yes
Data ONTAP 8.0.x            Data ONTAP 8.0.x                 Yes
Data ONTAP 7.3              Data ONTAP 8.1                   Yes
Data ONTAP 8.0              Data ONTAP 8.1                   Yes
Data ONTAP 8.1              Data ONTAP 7.2                   No
Data ONTAP 8.1              Data ONTAP 7.3                   No
Data ONTAP 8.1              Data ONTAP 8.0                   No
Data ONTAP 8.1              Data ONTAP 8.1                   Yes
Data ONTAP 8.2              Data ONTAP 7.3.x                 No
Data ONTAP 8.2              Data ONTAP 8.0.x                 No
Data ONTAP 8.2              Data ONTAP 8.1.x                 No
Data ONTAP 8.2              Data ONTAP 8.2                   Yes

Request from X denied, not listed in /etc/snapmirror.allow or snapvault.access


The destination hostname is not mentioned in the snapmirror.allow file or in the
snapvault.access options. Make sure that it is listed and that it matches the output of the
hostname command on the destination storage system.
Note: The hostname in the snapmirror.allow file is required even when the mirror source
and destination volumes are on the same storage system.
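For instance (filer names hypothetical), you could compare the two as follows:
destfiler> hostname
destfiler
srcfiler> rdfile /etc/snapmirror.allow
destfiler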

Source volume requested offline or nonexistent volume


The destination requested a transfer from a volume that doesn't exist on the source storage
system. Check the volume list on the source and the name of the source volume configured on the
destination.
Note: /vol is not part of the volume name. The vol status command gives the volume
names.

Request denied. Previous request is still processing


If the source aborted the transfer because the previous request is still processing, the
administrator should stagger the schedule in order to let large transfers complete before the
next scheduled update starts.
If a specific source qtree is actively modified, it is recommended to schedule incremental
updates at a time when the network is not overloaded, or to mirror that qtree in a
"dedicated" destination volume.

Common problems and errors on the destination storage system

Unknown system
The destination is attempting to find the source system but is unable to determine the
system based on the name. Check the /etc/hosts file to make sure it is listed there.

Cannot contact source system



SnapMirror or SnapVault attempted to but could not create a connection to the


source storage system. Check the network connection. Ping the source storage system
(by the name listed in the snapmirror.conf file) from the destination. If that doesn't work,
fix that problem before proceeding.
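A quick check from the destination (a sketch; "source-filer" is a placeholder for the name used in snapmirror.conf):

destination-filer> ping source-filer
destination-filer> rdfile /etc/hosts          (verify the source name resolves to the expected address)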

If the source is an OSSV primary system:

The OSSV agent has a built-in NDMP server that listens on TCP port 10000 by default.
Some other applications, such as Reliaty and SyncSort BEX, also have built-in NDMP
servers, so they listen on port 10000 as well. If an OSSV agent cannot bind to port
10000, it fails to start up and the "cannot connect to source system" error is logged on
the secondary. To remedy the problem, reconfigure the OSSV agent with the
Configurator GUI to listen on a port other than 10000, then stop and start the service
using the Service tab. Alternatively, you can configure the other application to listen on
another port.
NB: any application using these NDMP services must be told about the changed port
number if you do this.

On UNIX systems, you can run netstat -a | grep 10000 to find out whether
any application is listening on port 10000.
Also, if the OSSV primary does not have a static IP address but is served by DHCP, do
not add an entry for this client to the secondary's /etc/hosts file; instead, specify the
primary's hostname (rather than its IP address) on the secondary using the
snapvault.access option.
Ensure that ports 10000 and 10566 are allowed for WAN access through the firewall.

Denied by source system


The source must approve the transfer before the transfer can occur. This message comes
out on the destination system when the source denied the request for a transfer. The source
system console gives a more specific message that describes the reason in greater detail.
Most often this situation occurs because the destination hostname is not listed in the
source's snapmirror.allow file or in the snapmirror.access or snapvault.access options.
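A minimal sketch of granting access on the source (the hostname is a placeholder):

source-filer> options snapmirror.access host=dest-filer
source-filer> options snapvault.access host=dest-filer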

Destination volume too small


For VSM, the destination volume must be the same size as or larger than the source
volume. The output of df takes snap reserve into account. The vol status -r command gives
a more precise size of the volume. Use "vol status -b" to compare the actual block size of
the volumes on source and destination.
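A sketch of that comparison (volume names are placeholders):

source-filer> vol status -b srcvol
destination-filer> vol status -b dstvol          (the destination filesystem size must be equal to or larger than the source)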

There is insufficient memory on this system


Trouble can occur if a destination has much less memory than the source. The problem
comes when the transfer has completed, the destination attempts to bring the volume
online, and there is insufficient memory to bring the volume online. Storage systems have
memory requirements, and this applies to the destination system as well. To work around
this problem, make sure that the memory requirements for the destination system are
sufficient for the volume you are bringing online on the destination.

Transfer aborted: transfer attempted from busy destination/source


You have to determine which process or operation is keeping the affected system busy. If
the system is the destination, check in /etc/log/snapmirror whether a rollback or a
coalescing cycle is running. Such processes can take a while, and you have to wait for
them to complete before an update request can be handled by the destination. If no
SnapMirror or SnapVault background process is running, check in /etc/messages which
operations were running on the system at the time the error was logged. Collecting and
analyzing a perfstat can also help to identify the source of the problem. In most cases,
rebooting the affected system will clear out the hanging or looping process; however, it
may not prevent the issue from recurring. In that case, contact NetApp Technical Support.

Destination is not in a snapmirrored state


SnapMirror updates fail with the above error when the destination replica has been made
writable. Which operations make the destination file system writable?

o The snapmirror break command makes the destination volume/qtree writable.

o When you run wafl iron against a destination volume, the process breaks the
SnapMirror or SnapVault relationships. Once wafl iron has completed, all the
relationships on that volume are in the "Broken-Off" state. To remedy the issue,
resynchronize the affected relationship using the snapmirror resync command.
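A minimal resync sketch, run on the destination (hostnames and volume names are placeholders):

destination-filer> snapmirror resync -S source-filer:srcvol destination-filer:dstvol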

SnapVault update failed because the destination path is not an initialized replica qtree

The above explanation applies to SnapVault as well, and you will have to resynchronize the
affected relationship with the snapvault start -r command from the destination.
However, if the problem is caused by the snapvault stop command (which removes the
qtree and the relationship), you will have to re-baseline the relationship.
Information regarding a re-baseline of the relationship can be found in the Data Protection
and Backup Guide for your specific version of Data ONTAP.
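A sketch of the SnapVault resynchronization, run on the secondary (hostnames and paths are placeholders):

secondary-filer> snapvault start -r -S primary-filer:/vol/srcvol/srcqtree /vol/dstvol/dstqtree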

Unable to translate Unicode pathname


Unicode pathnames that cannot be translated abort SnapMirror, SnapVault and even OSSV
transfers. Directories and files on all source volumes that support CIFS clients must be in
Unicode format before being replicated to a destination. When the volume language differs
between the source and the destination volume, the transfer may fail as well.
Before initializing a relationship, ensure that:

o The volume options create_ucode and convert_ucode are enabled on all source
and destination volumes.

o The volume language is identical on the source and destination volume, with a
character set that uses UTF-8.
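A sketch of the checks on one volume (repeat on source and destination; names are placeholders):

source-filer> vol options srcvol create_ucode on
source-filer> vol options srcvol convert_ucode on
source-filer> vol lang srcvol          (compare the language setting with the destination volume)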


SnapVault qtree replica is 'quiescing' forever


When a qtree is quiescing, it means that SnapVault is running a background process. It
could be:

o A coalescing cycle: this process runs automatically after each transfer update in
order to check the in-core state of the qtree before exporting the file system.

o A rollback: occurs when a transfer is aborted. SnapVault rolls the destination
file system back to the state it was in before the failed transfer started.

To determine which background process is running, check the /etc/log/snapmirror log
file on the destination. During a coalescing cycle or a rollback, qtrees remain quiescing
until the process has ended. Once the process has completed, the qtree status is 'quiesced'
and it is ready for the next update. There is no advantage in interrupting these processes (by
stopping the mirror service or rebooting the storage system), because as soon as the
system comes up or the service is restarted, the process will restart.
A rollback or coalescing cycle can take a long time to complete when:

o A large number of qtrees were transferring when the abort occurred.

o A large amount of data had already been transferred.

o The inode map file is very large (use the df -i command on the
destination to see how many inodes are used per volume).

A coalescing cycle applies to all qtrees in a volume. Thus, if one qtree update has just
completed, the coalescing cycle will run against all SnapVaulted qtrees in that
destination volume.
The transfer size is significantly greater than the actual size of the file system data
being transferred

While a QSM or SnapVault update is transferring, the snapmirror status -l or
snapvault status -l output shows how many KB have been transferred so far.
This value may be greater than the expected delta (the changes you expected); this is due to
known metadata transfer overhead.
SnapVault and qtree SnapMirror sources determine which differences to send to the
destination by comparing corresponding blocks in the previous and new base Snapshot
copies. The two noteworthy places where this happens are in determining which inodes
have changed, by comparing blocks of the inode file, and in determining which entries in a
directory have changed, by comparing directory data blocks.

o For every transferred inode (directory, NT stream directory, NT stream
file, ACLs), a 4KB header is sent.

o All changed data is rounded up to 4KB. Thus, a 1-byte file is much more
expensive than a 0-byte file.

o Any create/delete/rename of a file changes a directory block.

However, once the update has completed, a df on the destination volume should
match the "expected" change size.

Related Link:

3013825: Top 10 SnapMirror issues and solutions

Data ONTAP 8.2 Data Protection Online Backup and Recovery Guide For 7-Mode

2015803: SnapVault start or update results in 'the qtree is not the source for the
replication destination'
3014313: What tools are available to analyze SnapMirror and SnapVault Information?
2012828: Message appears when using Protection Manager, SnapVault and NDMP
Preferred Interface: Error Failed to create relationship

Disclaimer
NetApp provides no representations or warranties regarding the accuracy, reliability, or
serviceability of any information or recommendations provided in this publication, or with
respect to any results that may be obtained by the use of the information or observance of any
recommendations provided herein. The information in this document is distributed AS IS, and
the use of this information or the implementation of any recommendations or techniques herein
is a customer's responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. This document and the information contained
herein may be used solely in connection with the NetApp products discussed in this document.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@

snapmirror problem Transfer aborted: process was aborted


Getting "snapmirror problem Transfer aborted: process was aborted" when you try to initialize?
Try checking the number of Snapshot copies you have on the source volume. We had an issue where
we had already reached the maximum number of Snapshot copies, so the initialize could not create
its baseline copy.
The error message is entirely useless, and had a reasonable amount of information been provided
by NetApp, this would not have taken 2 days of investigation!
Delete a few Snapshot copies and try again; it should be fine.
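A sketch of the check and cleanup on the source (volume and snapshot names are placeholders):

source-filer> snap list srcvol                       (count the existing Snapshot copies)
source-filer> snap delete srcvol old_snapshot_name   (free a slot, then retry the initialize)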
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

snapmirror error 13102 - cannot connect to source filer


I figured it out with the help of Scott Owens. It turns out it's the same issue that has
bitten others before. At Scott's suggestion I ran a packet trace on the destination
filer while kicking off the snapmirror initialization. Traffic was exiting on the wrong
interface. I fixed it by adding a route statement on the destination filer that
specifically directs traffic to the source filer to use the desired interface (in this case,
the iSCSI VIF). Thanks to Scott for pointing me in the right direction.
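A sketch of such a route statement on the destination (the addresses are placeholders for the
source filer's IP and the gateway reachable via the desired interface):

destination-filer> route add host 10.10.10.5 10.10.20.1 1
destination-filer> netstat -rn          (verify the new host route)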
######
you need snapmirror.access set on both sides... so both systems need to allow each
other

It's simply because of the /etc/hosts file. We think that SnapMirror is running fine and
that it should work fine. It will work fine in the case of an SRM failover, but while doing a failback it
actually runs snapmirror resync; in this case our source becomes the destination and the
destination becomes the source, so our /etc/hosts file should be updated accordingly,
meaning it should have the name and IP of both storage filers on both ends. Hope it
will help.
@@@@@@@@@@@@@@@@@@@@@@@

snapmirror error could not read from socket (error 13102)


I have faced a problem with SnapMirror.
We have 2 x FAS2020 running ONTAP 7.3.3, and SnapMirror works fine in both directions.
We have bought one FAS2240-2 and one FAS2240-4 running version 8.1.1.7, and SnapMirror works
fine in both directions.
But when I set up a SnapMirror from one FAS2020 as source to a FAS2240-4 I get the error:
snapmirror error could not read from socket (error 13102)
snapmirror.src.sockErr
Access denied for SnapMirror/SnapVault destination.
I looked at the hosts files and see there is no mismatch.


@@@
1)Just to be certain, I don't think you want host names in your snapmirror.allow unless they are
defined in your /etc/hosts. Try this just to make sure:
SAN1 snapmirror.allow:
10.10.10.20 # Assuming this is for SAN2
10.10.10.44 # Assuming this is for SAN4
2) Another question: Do you have any firewalls or routers between SAN1 and SAN4? Can you
verify that something in the middle isn't stopping SnapMirror traffic between the two? (Ports
10565-10569 should be open.)

3) I solved it: I changed options snapmirror.access to legacy and that solved the problem.
The hosts file was correct and the destination filer was defined in snapmirror.allow.
Soran
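A sketch of that workaround on the source (legacy mode makes access control fall back to the
/etc/snapmirror.allow file):

source-filer> options snapmirror.access legacy
source-filer> rdfile /etc/snapmirror.allow          (the destination filer must be listed here)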
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

Snapmirror (transfer aborted because of network error)


The solution was the /etc/hosts file.
There were wrong entries, with mismatched IP and hostname.
After I changed them to the correct IP/hostname, it works well.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

snapmirror error?
I have a client who has been using SnapMirror for a long time. He increased the space in the source
volume without increasing the destination volume, and the SnapMirror status became
"Unknown". He then deleted the volume from the destination and created a new volume the same
size as the source. But when he runs a snapmirror initialize he gets the following error.
snapmirror status
Snapmirror is on.
Source                     Destination                          State         Lag         Status
sourcebox1:NSG2_Shared     DESTINATIONBOX1:R_NSG2_Shared        Unknown       4493:23:26  Pending with restart checkpoint (at 649 GB)
sourcebox1:Pthecus         DESTINATIONBOX1:R_SGCPthecus         Snapmirrored  13:43:39    Idle
sourcebox1:fresh_archvol   DESTINATIONBOX1:R_mtvfresh_archvol   Snapmirrored  13:43:38    Idle
sourcebox1:test1           DESTINATIONBOX1:test2                Broken-off    8226:41:16  Idle

Could someone please help me resolve this issue?


1) If you have deleted the destination SnapMirror volume, you have to set the relationship up
again as a new one, so please go to the destination filer and create the new relationship.
If you still get error messages, please send me the logs: anil.pawar1988@yahoo.com

destinationfiler> vol create demo_destination aggr01 100G
destinationfiler> vol restrict demo_destination
destinationfiler> snapmirror initialize -S sourcefiler:demo_source destinationfiler:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destinationfiler>
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

Semi-Sync Snapmirror


I have configured Semi-Sync SnapMirror between two vFilers located in different datacenters.
The initial sync was successful, but after that it keeps throwing this error on the destination.
Yes, Async works between the source and destination.
OK... It seems that SnapMirror Sync (and Semi-Sync) is not supported on a vFiler or on a volume
where de-duplication is enabled. SnapMirror Sync (and Semi-Sync) works only on physical
storage (both source and destination).
@@@@@@@@@@@@@@@@@@@@@@@@@@@
Unable to delete SnapMirror relationship

Hi,
Our NetApp FAS2050 filer running Data ONTAP version 7.3.2P4 has the following entry in
FilerView/SnapMirror/Manage:
Source: Filer1:dr_luns
Destination: Filer1:SM_dr_luns
Status: idle
State: source
Lag: 4765:27:46
The destination volume 'SM_dr_luns' is not listed in FilerView/Volumes/Manage or in a
command-line session; therefore we are unable to quiesce or break the SnapMirror relationship,
as these commands return the error message 'destination is offline, is restricted, or does not
exist'.
We inherited this configuration from an IT employee who is no longer with our company, and
have no idea why this SnapMirror was originally configured or how the destination volume could
have been deleted without the SnapMirror relationship being removed.
Is there any way of cleaning up this mess?


1) It could be a couple of things. I honestly can't remember if FilerView shows offline or
restricted volumes in that list, but I think it does. You can verify this on the CLI with the
following:
netapp> vol status
That will show all volumes regardless of state. If the volume exists but is just offline or
restricted you can blow it away and that should help.
However, the more likely case is that the dest volume doesn't exist but a snapmirror baseline on
the source volume does exist and has a snapmirror baseline snapshot. If you delete that snapshot
it should clear up snapmirror status.
Hope this helps.
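A sketch of clearing an orphaned baseline on the source volume (the snapshot name here is a
placeholder; baseline copies usually carry the destination's name and are marked "(snapmirror)"
in snap list output):

Filer1> snap list dr_luns
Filer1> snap delete dr_luns filer1(0123456789)_SM_dr_luns.1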
@@@@@@@@@@@@@@@@@@@@@@@
SnapMirror First Installation and Errors ...

Hi,
I have to set up a SnapMirror from one FAS2020A to a FAS2020.
I have created the relationship between the two FAS systems, but when I try to initialize the
SnapMirror process, I get an error:
SAN1-XXXX:vol1 SRV-BACKUP01:vol1 Abort (transfer from source not possible; snapmirror
may be misconfigured, the source volume may be busy or unavailable)
The volume on my destination is restricted...
My source is authorized to connect to my destination...
Can you help me understand what the problem is?
A)
A couple of things you should check (a quick sketch of item 3 follows below):
1. The Data ONTAP version should be the same for the source and destination systems.
2. The source and target volume should be of the same size. (The target can also be up to 10% larger
than the source.)
3. Check the parameter for options snapmirror.access.
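A minimal check on both systems, using the filer names from the error above as placeholders:

SRV-BACKUP01> options snapmirror.access                     (shows the current setting, e.g. "legacy")
SAN1-XXXX> options snapmirror.access host=SRV-BACKUP01      (authorize the destination on the source)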


@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Snapmirror error "Idle with restart checkpoint" resolved - Netapp Notes
NCDA

Typically, the error "idle with restart checkpoint" occurs when there is an issue with
your network connectivity.
It can also occur when, while a SnapMirror replication is in progress, you modify or delete other
snapshots in the same volume (ones that are not tagged as "snapmirror,busy").
If you are starting a fresh SnapMirror initialization, the destination volume unicode option should
be on.
When your SnapMirror relationship is stopped with a status such as:

Uninitialized   546:41:39   Idle with restart checkpoint (at 51 GB)

you have to execute the command below on the destination filer to continue the replication from
the point where it stopped.
Note: Don't execute the command below if your replication is in a snapmirrored state.

NetappFilerDst> snapmirror initialize -S Source:volumename Dstvolume

If your SnapMirror is stopped with the error "idle with restart checkpoint (at 15GB)",
you have to execute the command below on the destination filer to continue the replication from
the point where it stopped.

NetappFilerDst> snapmirror update -S Source:VolumeName DstVolumeName

Verify your SnapMirror configuration file.
If you have a SnapMirror throttle enabled, change it to unlimited by removing the kb=2505
value.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Qtree SnapMirror update states that file system is full

Symptoms
Qtree SnapMirror update states that file system is full
Snapmirror fails with error: [snapmirror.dst.waflErr:error]: SnapMirror destination
transfer from filer:/vol/srcvol/srcqtree to /vol/dstvol/dstqtree : qtree snapmirror
destination write failed: No space left on device.
Cause of this problem

For qtree snapmirror (QSM), it is possible that the destination filer's volume containing the
destination qtree for a snapmirror relationship will require more space than the volume on the
source filer. This occurs for two reasons:
1. During a qtree snapmirror update, changes must be replicated to the
destination before data can be removed. Thus, at least 5% free space should
be available per QSM relationship to allow for the temporary space needed
during the transfer.
2. Volume-level snapshots independent of the QSM base snapshot (such as the
nightly snapshots) can retain data in the QSM qtrees. Since the volume
snapshots are unique to the destination filer, they may contain data that was
already deleted from the source filer.

If the destination qtree has enough space to hold the data in the source qtree, but it does not have
enough space to contain the source qtree plus the snapshot delta, the QSM update will fail with
the following error:
[snapmirror.dst.waflErr:error]: SnapMirror destination transfer
from srcfiler:/vol/srcvol/srcqtree to /vol/dstvol/dstqtree :
qtree snapmirror destination write failed: No space left on
device.

Solution

Increase the size of the destination volume so that it can hold the sum of the following:

The amount of data stored in the source qtree

The amount of data stored in snapshots

5% free space
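A minimal sketch of growing the destination volume (the volume name and increment are
placeholders; size the increase to cover the qtree data, the snapshot delta, and 5% free space):

dstfiler> vol size dstvol +20g
dstfiler> df -g dstvol          (confirm the new available space)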

The following scenario is an example of how the destination volume for a QSM relationship can
become full even though it is the same size as the source filer's volume. A QSM relationship is
set up from source srcfiler:/vol/srcvol/srcqtree to destination dstfiler:/vol/dstvol/dstqtree. The
srcvol on srcfiler is 100GB in size and contains a qtree using 20 GB. A "df -g" on this filer
shows:
srcfiler> df -g
Filesystem              total   used   avail   capacity
/vol/srcvol/            80GB    20GB   60GB    25%
/vol/srcvol/.snapshot   20GB    0GB    20GB    0%

Volume snapshots are disabled on the source volume /vol/srcvol. The dstvol on dstfiler is also
100GB in size and contains the QSM destination qtree, which holds the same 20GB of data as
the source qtree (srcfiler:/vol/srcvol/srcqtree). A "df -g" on dstfiler shows:
dstfiler> df -g
Filesystem              total   used   avail   capacity
/vol/dstvol/            80GB    20GB   60GB    25%
/vol/dstvol/.snapshot   20GB    0GB    20GB    0%
Volume snapshots are enabled on the destination volume
/vol/dstvol:
dstfiler> snap sched
Volume dstvol: 0 2 6@8,12,16,20
The nightly snapshot has been taken on the destination volume. This snapshot contains the
20GB of data in /vol/dstvol/dstqtree that was replicated over by QSM.
dstfiler> snap list dstvol
Volume dstvol
working...

  %/used       %/total      date          name
----------   ----------   ------------  --------
 25% ( 25%)   10% ( 10%)   Aug 20 16:00  nightly.0


On the source filer, 20 GB of files are added, and 10 GB of files are deleted. The srcvol now
contains 30GB of data:
srcfiler> df -g
Filesystem              total   used   avail   capacity
/vol/srcvol/            80GB    30GB   50GB    38%
/vol/srcvol/.snapshot   20GB    0GB    20GB    0%

A QSM update occurs, and the changes are replicated to the destination. QSM sends delete
information for the files which have been deleted, and complete data for the newly created files.
Thus, the data transferred is equal to 20 GB of new data + (4KB * number of deleted files).
Once the transfer completes, the destination filer now has the following space used:
dstfiler> df -g
Filesystem              total   used   avail   capacity
/vol/dstvol/            80GB    30GB   50GB    38%
/vol/dstvol/.snapshot   20GB    10GB   20GB    50%
Notice that there is space used in the destination filer's snap reserve. This space is held in the
nightly snapshot:
dstfiler> snap list dstvol
Volume dstvol
working...

  %/used       %/total      date          name
----------   ----------   ------------  --------
 25% ( 25%)   10% ( 10%)   Aug 20 16:00  nightly.0

Thus, because the destination volume has snapshots enabled, it can use more space than the
source volume. Therefore, it is important to consider the snapshot retention period and snapshot
delta when sizing the destination volume for QSM relationships.
Last Updated: 24 AUG 2006
Environment

Data ONTAP
All NetApp filers
NearStore
SnapMirror

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
NetApp SnapMirror
Well, every NetApp engineer will be aware of SnapMirror; it's a common and important
feature of NetApp systems, so today I thought of writing something about SnapMirror. Maybe
my blog on SnapMirror can help you understand it better.
Why we need SnapMirror
SnapMirror is the replication feature of NetApp. It is a fast and flexible enterprise solution to
replicate your critical and very precious data over local area, wide area and Fibre Channel
networks to a destination at a different location. It is a very good solution for disaster
recovery and even a good solution for online data migration without any additional
overhead.
SnapMirror has three modes.
Async: Replicates Snapshot copies from a source volume or qtree to a destination volume
or qtree. Incremental updates are based on schedules or are performed manually using the
snapmirror update command. It works at both the volume level and the qtree level.
Sync: Replicates writes from a source volume to a secondary volume at the same time they are
written to the source volume. SnapMirror Sync is used in environments that have zero
tolerance for data loss.
Semi-Sync: Sits between the Async and Sync modes, with less impact on performance. You
can configure a SnapMirror Semi-Sync replication to lag behind the source volume by a
user-defined number of write operations or milliseconds.
Volume SnapMirror enables block-for-block replication. The entire volume, including its
qtrees and all the associated Snapshot copies, is replicated to the destination volume. The
source volume is online/writable and the destination volume is online/read-only; when
the relationship is broken, the destination volume becomes writable.
Initial Transfer and Replication.

To initialize a snapmirror relation, you first have to restrict the destination volume in which
the replica will reside. During the baseline transfer, the source system takes a snapshot
copy of the volume. All data blocks referenced by this snapshot copy, including volume
metadata such as language translation settings, as well as all snapshot copies of the volume
are transferred and written to the destination volume.
After the initialization completes, the source and destination file systems have one snapshot
copy in common. Updates occur from this point and are based on the schedule specified in a
flat-text configuration file known as the snapmirror.conf file, or are run by using the snapmirror
update command.
To identify new and changed blocks, the block map in the new snapshot copy is compared
to the block map of the baseline snapshot copy. Only the blocks that are new or have
changed since the last successful replication are sent to the destination. Once the transfer
has completed, the new snapshot copy becomes the baseline snapshot copy and the old one
is deleted.
Requirements and Limitations
The destination's Data ONTAP version must be equal to or more recent than the source's. In
addition, the source and the destination must be on the same Data ONTAP release.
Volume SnapMirror replication can only occur between volumes of the same type: either both
traditional volumes or both flexible volumes.
The destination volume's capacity must be equal to or greater than the size of the source.
Administrators can thin provision the destination so that it appears to be equal to or greater
than the size of the source volume.
Quotas cannot be enabled on the destination volume.
It is recommended that you allow a range of TCP ports from 10565 to 10569.
Qtree SnapMirror
Qtree SnapMirror is a logical replication. All the files and directories in the source file system
are created in the target destination qtree.
Qtree SnapMirror replication occurs between qtrees regardless of the type of the volume
(traditional or flexible). Qtree replication can even occur between different releases of Data
ONTAP.
The source volume and qtree are online/writable in qtree replication, and the destination volume
is also online/writable (in qtree replication).
NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the
destination volume be equal to or greater than the size of the source qtree.
For the initial baseline transfer you do not need to create the destination qtree; it gets
created automatically upon the first replication.
Requirements and limitations
Supports Async mode only.
The destination volume must contain 5% more free space than the source qtree, and the destination
qtree cannot be /etc.
Qtree SnapMirror performance is impacted by deep directory structures and a large number
(tens of millions) of small files being replicated.
Configuration process of snapmirror
1. Install the snapMirror license
For ex: license add <code>

2. On the source, specify the host name or IP address of the snapMirror destination systems
you wish to authorize to replicate this source system.
For Ex: options snapmirror.access host=dst_hostname1,dst_hostname2
3. For each source volume and qtree to replicate, perform an initial baseline transfer. For
volume SnapMirror, first restrict the destination volume.
For ex: vol restrict dst_volumename
Then initialize the volume SnapMirror baseline, using the following syntax on the
destination:
For ex: snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree SnapMirror baseline transfer, use the following syntax on the destination:
For ex: snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree
dst_hostname:/vol/dst_vol/dst_qtree
4. Once the initial transfer completes, set the SnapMirror mode of replication by creating
the /etc/snapmirror.conf file in the destination's root volume.
Snapmirror.conf
The snapmirror.conf configuration file entries define the relationship between the source
and the destination, the mode of replication, and the arguments that control SnapMirror
when replicating data.
Entries in the snapmirror.conf file look like this:
For ex: Fas1:vol1 Fas2:vol1 - 0 23 * 1,3,5
Fas1:vol1 : source storage system hostname and path
Fas2:vol1 : destination storage system hostname and path
- : Arguments field; it lets you define the transfer speed and restart mode, and a dash
indicates that the default mode is selected
Schedule fields:
0 : update starts at minute 0 of the hour
23 : update runs at 11 PM
* : updates on all applicable days of the month
1,3,5 : updates on Monday, Wednesday and Friday
You can monitor transfers by running the snapmirror status command. This command can be run on
the source as well as on the destination, and it accepts two options, -l and -q:
-l : displays the long format of the output.
-q : displays which volumes or qtrees are quiesced or quiescing.
You can list all the Snapshot copies of a particular volume with the snap list volumename command.
SnapMirror Snapshot copies are distinguished from system Snapshot copies by a more
elaborate naming convention, and the snap list command displays the keyword snapmirror next
to the relevant Snapshot copy.
Log files
SnapMirror logs record whether a transfer finished successfully or failed. If there is a
problem with the updates, it is useful to look at the log file to see what has happened since
the last successful update. The log includes the start and end of each transfer, along with the
amount of data transferred.
For ex: options snapmirror.log.enable (on/off); by default it is on.
Log files are stored in the root volume of the source and the destination storage systems, in
the /etc/log directory (file name snapmirror).
This guides you quickly through the SnapMirror setup and commands.
1) Enable SnapMirror on the source and destination filers
source-filer> options snapmirror.enable
snapmirror.enable            on
source-filer>
source-filer> options snapmirror.access
snapmirror.access            legacy
source-filer>


2) SnapMirror access
Make sure the destination filer has SnapMirror access to the source filer. The destination filer's
name or IP address should be in the source's /etc/snapmirror.allow file. Use wrfile to add entries to
/etc/snapmirror.allow.
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
3) Initializing a Snapmirror relation
Volume SnapMirror: Create a destination volume on the destination NetApp filer, of the same size
as the source volume or greater. For volume SnapMirror, the destination volume should be
in restricted mode. For example, let us consider we are snapmirroring a 100G volume - we
create the destination volume and make it restricted.
destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy
is referred to as the baseline Snapshot copy. After performing an initial transfer of all data
in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have
changed since the last successful replication. When SnapMirror performs an update transfer,
it creates another new Snapshot copy and compares the changed blocks. These changed
blocks are sent as part of the update transfer.
Snapmirror is always destination filer driven. So the snapmirror initialize has to be done on
destination filer. The below command starts the baseline transfer.
destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>
Qtree Snapmirror : For qtree snapmirror, you should not create the destination qtree. The
snapmirror command automatically creates the destination qtree. So just volume creation
of required size is good enough.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes
that have changed, and then examining the changed inodes of the qtree of interest for changed
data blocks. The SnapMirror software then transfers only the new or changed data blocks from
the Snapshot copy that is associated with the designated qtree. On the destination volume, a new
Snapshot copy is then created that contains a complete point-in-time copy of the entire
destination volume, but that is associated specifically with the particular qtree that has been
replicated.
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree
destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
4) Monitoring the status : Snapmirror data transfer status can be monitored either from
source or destination filer. Use "snapmirror status" to check the status.
destination-filer> snapmirror status
Snapmirror is on.
Source                          Destination                           State          Lag   Status
source-filer:demo_source        destination-filer:demo_destination    Uninitialized  -     Transferring (1690 MB done)
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree    Uninitialized  -     Transferring (32 MB done)
destination-filer>
5) SnapMirror schedule: This is the schedule used by the destination filer for updating
the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The
schedule field can either contain the word "sync" to specify synchronous mirroring or a
cron-style specification of when to update the mirror. The cron-style schedule contains four
space-separated fields.
If you want to sync the data on a scheduled frequency, you can set that in the destination filer's
/etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous
SnapMirror schedule in /etc/snapmirror.conf by adding "sync" instead of the cron-style
frequency.
destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * *           # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * *     # This syncs every day at 9:00 pm
destination-filer>
6) Other SnapMirror commands (a quick sketch of the break/resync sequence follows below)

To break a snapmirror relation - do snapmirror quiesce and snapmirror break.

To update snapmirror data - do snapmirror update.

To resync a broken relation - do snapmirror resync.

To abort a relation - do snapmirror abort.
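A minimal sketch of the break and resync sequence on the destination, reusing the demo volume
names from this guide:

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination            (the destination becomes writable)
destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination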

SnapMirror does provide multipath support. More than one physical path between a source
and a destination system might be desired for a mirror relationship. Multipath support
allows SnapMirror traffic to be load balanced between these paths and provides for failover
in the event of a network outage.
Some important points to know about SnapMirror
Clustered failover interaction. The SnapMirror product complements NetApp clustered
failover (CF) technology by providing an additional level of recoverability. If a catastrophe
disables access to a clustered pair of storage systems, one or more SnapMirror volumes can
immediately be accessed in read-only mode while recovery takes place. If read-write access
is required, the mirrored volume can be converted to a writable volume while the recovery
takes place. If SnapMirror is actively updating data when a takeover or giveback operation
is instigated, the update aborts. Following completion of the takeover or giveback operation,
SnapMirror continues as before. No specific additional steps are required for the
implementation of SnapMirror in a clustered failover environment.
Adding disks to SnapMirror environments. When adding disks to volumes in a
SnapMirror environment, always complete the addition of disks to the destination storage
system or volume before attempting to add disks to the source volume.
Note: The df command does not immediately reflect the disk or disks added to the
SnapMirror volume until after the first SnapMirror update following the disk additions.
Logging. The SnapMirror log file (located at /etc/log/snapmirror) records the start and
end of an update as well as other significant SnapMirror events. It records whether the transfer
finished successfully or whether it failed for some reason. If there is a problem with
updates, it is often useful to look at the log file to see what happened since the last
successful update. Because the log file is kept on the source and destination storage
systems, quite often the source or the destination system may log the failure, and the other
partner knows only that there was a failure. For this reason, you should look at both the
source and the destination log files to get the most information about a failure. The log file
contains the start and end time of each transfer, along with the amount of data transferred.
It can be useful to look back and see the amount of data needed to make the update and
the amount of time the updates take.
Note: The time vs. data sent is not an accurate measure of the network bandwidth because
the transfer is not constantly sending data.
Destination volume. For SnapMirror volume replication, you must create a restricted
volume to be used as the destination volume. SnapMirror does not automatically create the
volume.
Destination volume type. The mirrored volume must not be the root volume.
Data change rate. Using the snap delta command, you can display the rate of
change stored between two Snapshot copies as well as the rate of change between a
Snapshot copy and the active file system. Data ONTAP displays the rates of change in two
tables. The first table displays rates of change between successive Snapshot copies. The
second table displays a summary of the rate of change between the oldest Snapshot copy
and the active file system.
Failed updates. If a transfer fails for any reason, SnapMirror attempts a retransfer
immediately, not waiting for the next scheduled mirror time. These retransfer attempts
continue until they are successful, until the appropriate entry in the /etc/snapmirror.conf
file is commented out, or until SnapMirror is turned off. Some events that can cause failed
transfers include:
Loss of network connectivity
Source storage system is unavailable
Source volume is offline
SnapMirror timeouts. There are three situations that can cause a SnapMirror timeout:
Write socket timeout. If the TCP buffers are full and the writing application cannot hand off
data to
TCP within 10 minutes, a write socket timeout occurs. Following the timeout, SnapMirror
resumes
at the next scheduled update.
Read socket timeout. If the TCP socket that is receiving data has not received any data
from the application within 30 minutes, it generates a timeout. Following the timeout,
SnapMirror resumes at the next scheduled update. By providing a larger timeout value for
the read socket timeout, you can be assured that SnapMirror will not time out while waiting
for the source file to create Snapshot copies, even when dealing with extremely large
volumes. Socket timeout values are not tunable in the Data ONTAP and SnapMirror
environment.
Sync timeouts. These timeouts occur in synchronous deployments only. They happen when an
event such as a network outage prevents any ACK from being received from the destination
system, causing the synchronous deployment to revert to asynchronous mode.
Open Files
If SnapMirror is in the middle of a transfer and encounters an incomplete file (a file that an
FTP server is still transferring into that volume or qtree), it transfers the partial file to the
destination. Snapshot copies behave in the same way: a Snapshot copy of the source would
show the file still being transferred, and the destination would show the partial file.
A workaround for this situation is to copy the file to the source under a temporary name. When
the file is complete on the source, rename it to the correct name. This way the partial file has an
incorrect name, and the complete file has the correct name.


@@@@@@@@@@@@@@@@@@@@@@@@
snapmirror
Hope someone can answer this easy question since I am new to this.
I want to copy a volume to a new aggregate and am thinking of using SnapMirror. I know I have to run
snapmirror initialize -S first.
What do I do after the completion?
Is it just snapmirror break <destination vol> at the destination filer?
Or do I have to run an update prior to the break?
A) Running an update before the break will keep the target volume identical to the source volume.
The destination volume will be usable after the break whether you update or not.
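A sketch of the whole move, run on the filer that owns the new volume (volume and aggregate
names are placeholders; this assumes the new volume has already been created and restricted):

filer> snapmirror initialize -S filer:oldvol filer:newvol
filer> snapmirror update -S filer:oldvol filer:newvol        (final catch-up once clients are quiesced)
filer> snapmirror quiesce newvol
filer> snapmirror break newvol                               (newvol becomes writable)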
@@@@@@@@@@@@@@@@@@@@@@@@@@

snapmirror
Filer2> snapmirror initialize -S Filer1:vol34 Filer2:vol25
Fri Sep 28 07:28:13 GMT [Filer2: replication.dst.err:error]: SnapMirror: destination transfer
from Filer1:vol34 to vol25 : could not find source filer.
Transfer aborted: could not find source filer.
Filer2>
Can someone help me with the above? I have entries in both filers, but a ping is not working.
Filer1> rdfile /etc/snapmirror.allow
Filer2
Filer1> ping Filer2
ping: unknown host: Filer2
Filer2> rdfile /etc/snapmirror.allow
Filer1
Filer2> ping Filer1
ping: unknown host: Filer1
A) Have you got relevant records for both filers on your DNS servers? E.g. can you resolve
Filer1 & Filer2 (or ping) from any host sitting on the same subnet?
Also how about DNS on the filer itself? What's the output of:
Filer1> dns info
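If DNS is not available, a sketch of adding static host entries instead (the IP addresses shown are
placeholders for each filer's real address):

Filer1> wrfile -a /etc/hosts 10.0.0.2 Filer2
Filer2> wrfile -a /etc/hosts 10.0.0.1 Filer1
Filer1> ping Filer2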
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

How to remove snapmirror broken-off relationship?


filer02*> snapmirror status
Snapmirror is on.
Source            Destination       State        Lag        Status
filer01:archive   filer02:archive   Broken-off   606:00:59  Idle

but the volume "archive" on the destination storage system filer02 has already been deleted,
so could you tell me how to delete this broken-off snapmirror relationship?
waiting online...
A) filer01 likely has a snapshot for the mirror... if you run "snap list archive" on

filer01, you will see the snapshot you can delete which will clear this. If not, it can
be a snapmirror.conf entry or a registry state entry (not likely) which would require
support to walk you through fixing... but in this case almost positive it is a snapshot
on filer01 with the snapmirror status.

B) Hi Scott,
thanks for your reply. Yes, there is indeed a snapshot containing the keyword "snapmirror" as
shown below, but I can't delete it because the same source "filer01:archive" is replicated to another
destination volume.
filer01> snap list archive
Volume archive
working...

  %/used       %/total      date          name
----------   ----------   ------------  --------
  0% (  0%)    0% (  0%)   Jul 04 23:06  hotnew
  0% (  0%)    0% (  0%)   Jul 04 22:30  filer02(0135033979)_archive_mirror.16276 (snapmirror)
  0% (  0%)    0% (  0%)   Jul 04 00:01  weekly.0
 38% ( 38%)   20% ( 20%)   Jun 27 00:00  weekly.1
filer02*> snapmirror status
Snapmirror is on.
Source            Destination              State         Lag        Status
filer01:archive   filer02:archive          Broken-off    606:00:59  Idle
filer01:archive   filer02:archive_mirror   Snapmirrored  00:30:59   Idle
C) Yes, resolved! Thanks!
I didn't notice that there was an entry in snapmirror.conf. I checked it again and
deleted it, then it worked!
Thanks very much for your help!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

SnapMirror resync clarification


What's the proper way to break and resync SnapMirror relationship in System Manager?
Particularly, does it make any difference from where you actually break it (on source or
destination) and from where you then resync it?
This question came up when I broke the SnapMirror on the destination, then ran a resync from the
destination and got this message: "SnapMirror resync of destination_vol to
source_filer:source_volume". Does this "to" mean that the filer actually did a reverse resync? Or
does it simply mean that the destination volume was synchronized "to" the state of the source
volume?
It just looks a bit confusing.
A) ... snapmirror resync is always executed on the system that will become the
destination after the command completes.
I would argue that, because in System Manager you have 'resync' as well as
'reverse resync' on the source. However, you have only 'resync' on the destination,
and 'reverse resync' is grayed out.
So my guess is that if you run 'resync' on either side, then the normal source-to-destination
resync is executed. And if you run 'reverse resync' on the source (you
can't do that on the destination for some reason), then the data is replicated in the
reverse direction: from the destination to the source.
B) To resize a snapmirrored volume, you don't need to break the mirror.
If you want to grow the source volume:
1) resize the destination volume first
2) then resize the source volume
3) run snapmirror update
4) run vol status -b on the destination volume to confirm that volume size and
filesystem size are the same
If you want to shrink the source volume:
1) resize the source volume
2) run snapmirror update
3) resize the destination volume
4) run vol status -b on the destination volume to confirm that volume size and
filesystem size are the same
PS: With Ontap 8.2, all this will be done automagically.
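A sketch of the grow procedure described above (volume names and the size increment are placeholders):

destination-filer> vol size dst_vol +50g
source-filer> vol size src_vol +50g
destination-filer> snapmirror update dst_vol
destination-filer> vol status -b dst_vol        (volume size and filesystem size should now match)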
C) ... Can you also explain the System Manager behavior further? There is a resync
option on the source and destination, and reverse resync on the source. Am I
understanding correctly that if you run resync either on the source filer or on the
destination filer, it will resync from the source to the destination? And only if you choose
reverse resync will it actually resync from the destination to the source?
@@@
Snapmirror

I have a FAS3210 storage system and I have 16 volumes in sync with my DR storage. When I add the
17th volume it cannot sync; it comes back with the message "Too Many Active Transfers at Once."
Has anyone run into this problem? Is there a limit on how many volumes can be synchronized?
A) Yes, 16 synchronous SnapMirror relationships is the absolute maximum for a FAS3210. Maybe
you could use standard async SnapMirror for some volumes; the upper limit there is larger.

Maximum number of concurrent replication operations


The limit on the number of concurrent replication operations depends on your system model. The
system resources are shared between SnapMirror and SnapVault replication operations. Therefore,
the limit for a particular type of replication is reduced if there are any other types of replication
operations being performed concurrently.
A SnapMirror replication or a SnapVault backup consists of two replication operations: one operation
on the source system and the other on the destination. Therefore, if a system is both a source and a
destination, it uses two replication operations. For example, suppose system A is both the source and
the destination system in a SnapMirror or SnapVault relationship and the total number of replication
operations available for that relationship is 100. Then the total number of available replication
operations, while the transfer is active, is 98.
Each value listed in the following tables indicates the maximum number of concurrent
replication operations allowed for a specific model when using a single type of replication. The
values are exclusive, not cumulative. For example, if you are using the maximum number of
synchronous SnapMirror replication operations for a system as a source, you cannot use any more
replication operations of any type for that system.
The NearStore personality license is enabled by default in Data ONTAP 8.1. However, to enable the
NearStore option, you must set the licensed_feature.nearstore_option.enable option to
on.
The following table lists the maximum number of concurrent replication operations without the
NearStore option enabled:

[Table: maximum concurrent replication operations per platform (FAS2520, FAS3220, FAS6250,
and FAS8080 at maximum configuration) without the NearStore option enabled. The per-column
breakdown by replication type did not survive conversion; refer to the Data Protection Online
Backup and Recovery Guide for your Data ONTAP release for the exact limits.]