SnapMirror
If critical data is replicated to a different location, in case of a disaster at the source site, the replica can be
made available to clients across the network until the damage caused by the disaster is repaired.
Additionally, as the source data can be replicated at a time chosen by systems administrators, the solution
minimizes network utilization.
SnapMirror is also used for backup offloading. SnapMirror technology attaches the off-site storage device
to the SnapMirror destination system, offloading tape backup overhead from production servers.
REPLICATION MODES
The Data ONTAP SnapMirror feature enables an administrator to replicate data either asynchronously or
synchronously.
The SnapMirror Async mode replicates Snapshot copies from a source volume or qtree to a destination
volume or qtree. Incremental updates are based on a schedule or are performed manually using the
snapmirror update command. Async mode works with both volume SnapMirror and qtree
SnapMirror.
SnapMirror Sync mode replicates writes from a source volume to a destination volume at the same time it
is written to the source volume. SnapMirror Sync is used in environments that have zero tolerance for
data loss.
SnapMirror Semi-Sync provides a middle-ground solution that keeps the source and destination systems
more closely synchronized than Async mode, but with less impact on performance.
VOLUME SNAPMIRROR
Volume SnapMirror enables block-for-block replication. The entire volume, including its qtrees, and all
the associated Snapshot copies, are replicated to the destination volume.
Volume SnapMirror can be initialized using a tape device, and a volume's replica can be cascaded in a series.
Volume SnapMirror can be asynchronous, synchronous, or semi-synchronous.
The source volumes are visible, accessible, and writable by the clients.
The destination volumes are visible, accessible, and read-only and usually on a separate system to which
the source volumes are replicated.
To initialize a volume, you first have to restrict the destination volume in which the replica will reside.
During the baseline transfer, the source storage system takes a Snapshot copy of the volume. All data
blocks referenced by this Snapshot copy, including volume metadata such as language translation settings,
as well as all Snapshot copies of the volume, are transferred and written to the destination volume.
After the initialization completes, the source and destination file systems have one Snapshot copy in
common. Updates occur from this point and are based on the schedule specified in a flat-text
configuration file known as the snapmirror.conf file or by using the snapmirror update command.
QTREE SNAPMIRROR
Qtree SnapMirror performs logical replication. All the files and directories in the source file system are created in the destination qtree.
Qtree SnapMirror replication occurs between qtrees regardless of the type of the volume (traditional or
flexible). You can mirror qtrees using SnapMirror from a traditional volume to a flexible volume and
from a flexible volume to a traditional volume. Note that volume size and the disk geometry do not make
any difference for qtree SnapMirror.
Qtrees from different sources can be replicated to a destination and qtree SnapMirror replication can
occur between different releases of Data ONTAP.
With qtree SnapMirror, the source volume and qtree are online and writable. The destination qtree is read-only, while the destination volume remains writable and may contain replicated qtrees from multiple source volumes, as well as qtrees or non-qtree data not managed by SnapMirror.
NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the destination
volume be equal to or greater than the size of the source qtree.
Qtree SnapMirror is available in asynchronous mode only. The destination volume must contain 5% more free space than the source qtree. The /etc qtree cannot be a destination qtree. Qtree SnapMirror cannot be initialized using a tape device and does not support cascading of mirrors. Qtree SnapMirror performance is impacted by deep directory structures and by the replication of large numbers (tens of millions) of small files.
To determine changed data, qtree SnapMirror looks at the inode file and defines which inodes are in the
qtree of interest and which inodes have changed. If the inode file is large, but the inodes of interest are
few, qtree SnapMirror spends a lot of time going through the inode file to find very few changes.
LICENSING SNAPMIRROR
SnapMirror requires a snapmirror license on both the source and the destination storage systems. If the SnapMirror source and destination are on the same storage system, only one license needs to be installed.
SnapMirror Sync and Semi-Sync require an additional free snapmirror_sync license; the license code is available in the Data ONTAP Data Protection Online Backup and Recovery Guide on the NOW site.
CONFIGURATION PROCESS
The SnapMirror configuration process consists of the following four steps:
1. Install the SnapMirror license on the source and destination systems: license add <code>
2. On the source, specify the host name or IP address of the SnapMirror destination systems you wish to
authorize to replicate this source system.
options snapmirror.access host=dst_hostname1,dst_hostname2
3. For each source volume or qtree to replicate, perform an initial baseline transfer. For volume
SnapMirror, restrict the destination volume first: vol restrict dst_vol
Then initialize the volume SnapMirror baseline, using the following syntax on the destination:
snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree SnapMirror baseline transfer, use the following syntax on the destination:
snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
4. After the initial transfer completes, set the SnapMirror mode of replication by creating the
/etc/snapmirror.conf file in the destination's root volume.
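Putting the four steps together, a volume SnapMirror setup might look like the following console sketch. The system names fasA and fasB, the volume name vol1, and the license code are illustrative assumptions, not values from this course:

```
fasA> license add XXXXXXX                           # source: install the snapmirror license
fasB> license add XXXXXXX                           # destination: install the snapmirror license
fasA> options snapmirror.access host=fasB           # authorize fasB to replicate from fasA
fasB> vol restrict vol1                             # destination volume must be restricted first
fasB> snapmirror initialize -S fasA:vol1 fasB:vol1  # baseline transfer, run on the destination
```

After the baseline completes, the replication mode and schedule are defined in /etc/snapmirror.conf on fasB's root volume.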
SNAPMIRROR.CONF
The snapmirror.conf configuration file entries define the relationship between the source and the
destination, the mode of replication, and the arguments that control SnapMirror when replicating data.
The syntax for entries in the snapmirror.conf file is as follows:
src_system:/vol/src_vol/[src_qtree]
dest_system:/vol/dest_vol[/dest_qtree] [arguments] [schedule]
The arguments field lets you define the transfer speed and the restart mode. In this field, you can also enable checksums and set the synchronicity level and the visibility interval. A dash (-) indicates that the default values for all arguments apply. The schedule consists of four space-separated fields in order: minute, hour, day_of_month, and day_of_week.
An asterisk (*) in a field matches all possible values. A single dash (-) means never and prevents the schedule entry from executing.
NOTE: SnapMirror updates can be scheduled to occur as frequently as every minute.
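As an illustrative snapmirror.conf entry (the host and volume names are assumptions), the four schedule fields read minute, hour, day_of_month, and day_of_week:

```
# Replicate fasA:vol1 to fasB:vol1 at 8:30 p.m. on Mondays, Wednesdays, and Fridays.
# The dash means all arguments keep their default values.
fasA:vol1 fasB:vol1 - 30 20 * 1,3,5
```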
MONITORING TRANSFER
Use the snapmirror status command either from the source or the destination storage system to monitor the transfer progress, check relationship state and status, and check the transfer lag (age of the backup).
Command syntax:
snapmirror status [options] [[system:]path]
The value for options can be -l or -q
The -l option displays the long format of the output.
The -q option displays which volumes or qtrees are quiesced or quiescing.
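A hypothetical long-format status check might look like the following. The output is abridged and illustrative; actual fields vary by Data ONTAP release:

```
fasB> snapmirror status -l vol1
Source:            fasA:vol1
Destination:       fasB:vol1
Status:            Idle
State:             Snapmirrored
Lag:               00:12:45
```

A Lag of 00:12:45 indicates that the destination's data is 12 minutes and 45 seconds older than the most recent transfer from the source.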
The snapmirror quiesce command waits for all existing transfers to both volumes and qtrees to complete
and blocks any further updates. If a qtree is not in a stable state (is in transition), the snapmirror
quiesce command forces it into a stable state. You can quiesce only volumes and qtrees that are online
and that are SnapMirror destinations. You cannot quiesce a restricted or offline volume or a qtree in a
restricted or offline volume.
LOG FILES
The SnapMirror logs record whether the transfer finished successfully or failed. If there is a problem with
the updates, it is useful to look at the log file to see what has happened since the last successful update.
The logs include the start and end of each transfer, along with the amount of data transferred.
Use the snapmirror.log.enable option to record SnapMirror data transfer logs. By default, the option is on.
options snapmirror.log.enable [on|off]
Log files are stored in the root volume of both the source and the destination storage systems, in the /etc/log/snapmirror directory.
A new log file is generated every week as snapmirror.0. Older log files are renamed snapmirror.[1-5] and
the oldest log file is deleted.
NEARSTORE PERSONALITY
NearStore Personality allows you to utilize FAS systems as secondary systems. This feature requires the
nearstore_option license.
When enabled, the nearstore_option license increases the number of possible concurrent destination qtree SnapMirror and SnapVault replications by optimizing the transfer resources required for those replications. This license should not be installed on storage systems that are intended to handle primary application workloads.
CONCURRENT TRANSFERS
Each storage system model supports a maximum number of simultaneous replication operations.
In Data ONTAP 7.3, the volume SnapMirror and qtree SnapMirror maximum stream counts are increased
for certain platforms. This enhancement allows customers to accommodate the use of large numbers of
flexible volumes and multiple concurrent transfers.
The new concurrent stream counts apply only to the snapmirror initialize and the snapmirror update Data
ONTAP commands.
MANAGING TRANSFERS
In Data ONTAP 7.3, you can specify the number of volume SnapMirror transfers for which you want
resources by using the following Data ONTAP option:
options replication.volume.reserved_transfers <n>
<n> is the number of volume SnapMirror transfers for which you want resources reserved.
The default value is 0.
Note that reserved resources are not available to other replication types, such as qtree SnapMirror or SnapVault transfers.
The stream count setting was increased in Data ONTAP 7.3. You can revert to the previous stream count setting by using the following option:
options replication.volume.transfer_limits [current | previous]
If the value is set to current, the maximum for the current release will be used. If the value is set to
previous, the maximum for Data ONTAP 7.2.0 will be used. The default value is current.
THROTTLING NETWORK
Network throttling can be configured on a per-transfer basis by using the kbs argument in the snapmirror.conf file.
Dynamic throttling allows you to change the throttle value for a SnapMirror relationship while a transfer is active. This feature is available in Data ONTAP 7.1 and later.
snapmirror throttle <n> dst_hostname:dst_path
<n> is the new throttle value in kilobytes per second.
System-wide throttling is available from Data ONTAP 7.2 and later and limits the total bandwidth used
by all transfers at any time (SnapMirror and SnapVault transfers).
There are three options.
Enable or disable system-wide throttling on all systems: replication.throttle.enable [on|
off]
Set maximum bandwidth for all incoming transfers:
replication.throttle.incoming.max_kbs <value>
Set maximum bandwidth for all outgoing transfers:
replication.throttle.outgoing.max_kbs <value>
The default value is unlimited, which means there is no limit on total bandwidth used. Valid transfer rate
values are 1 to 125,000 kilobytes per second.
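For instance, the per-transfer, dynamic, and system-wide throttles can be combined as follows. The system names, volume names, and rate values are illustrative assumptions:

```
# snapmirror.conf on fasB: cap this relationship at 2,000 KB/s
fasA:vol1 fasB:vol1 kbs=2000 10 8,20 * *

# Change the throttle of an active transfer to 5,000 KB/s
fasB> snapmirror throttle 5000 fasB:vol1

# System-wide cap of 10,000 KB/s on all incoming transfers
fasB> options replication.throttle.enable on
fasB> options replication.throttle.incoming.max_kbs 10000
```

When both limits apply, the transfer is constrained by whichever throttle is more restrictive.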
SPACE GUARANTEE
In Data ONTAP 7.3 and later, space is guaranteed for SnapMirror destination flexible volumes. This new
feature preallocates space in the aggregate for the volume. However, it is still possible for transfers to fail
when the aggregate is full. When you create a flexible volume, by default its volume guarantee is enabled.
Support for space guarantee on the SnapMirror destination volume allows for maintenance of this
guarantee after the initial baseline transfer is completed. The destination volume is set to an internal
guarantee type called Replica (RAID label). The guarantee is also maintained when the volume is offline.
When the SnapMirror destination storage system is upgraded to Data ONTAP 7.3 or later, you can enable
or disable the space guarantee on the existing SnapMirror destination volumes using the vol options
command.
Note that file guarantee is not supported on SnapMirror destination volumes.
dst> vol options vol_name guarantee [volume | none]
An upgraded_replica volume option is added to the vol status command output to show whether a volume is an upgraded SnapMirror destination.
SYNCHRONOUS SNAPMIRROR
SYNC MODE
SnapMirror in synchronous mode is a mode of replication that sends updates from the source to the
destination as they occur, rather than according to a predetermined schedule. This guarantees that data
written on the source system is protected on the destination even if the entire source system fails.
Volume SnapMirror Async is the basis of SnapMirror Sync. The first step in SnapMirror Sync mode replication is a one-time baseline transfer of the source volume. When the baseline transfer is completed, SnapMirror transitions through a series of states, becoming more and more synchronous until the relationship is in synchronous mode.
To configure SnapMirror Sync mode, the schedule field is replaced by sync in the snapmirror.conf
configuration file.
Example:
SystemA:vol1 SystemB:vol1 sync
Note that when changes are made to a SnapMirror Sync entry in the snapmirror.conf file, the SnapMirror relationship goes out of sync momentarily and then attempts to return to a synchronous state.
Before Data ONTAP 7.2.2, NVLOG files were written to the root volume of the destination system. With Data ONTAP 7.2.2 and later, NVLOG files are written to the parent aggregate of the destination volume.
THEORY OF OPERATION
Client writes are acknowledged after the writes have been logged to the NVRAM on the source and to the
NVLOG files on the destination. Because the source waits for each transaction to be acknowledged from
the destination before moving forward, it may impact client application performance.
The source system receives a write request from a client. The request is journaled in the system's NVRAM and recorded in cache memory.
The request and the NVLOG metadata are forwarded to the SnapMirror destination system where they are
also journaled in NVRAM and cache memory.
The destination responds to the source system.
Data ONTAP acknowledges the write to the client system, and the application that requested the write is
free to continue processing.
When a consistency point is triggered, Data ONTAP uses the transaction data in cache memory to build a
list of data block changes that need to be written to disk.
This list of data blocks is sent to the destination, which initiates its own write to disk and returns an
acknowledgment to the WAFL software on the source system.
SEMI-SYNC MODE
Versions of Data ONTAP earlier than 7.3 allowed for Semi-Sync mode, which provided different synchronicity levels to control the trade-off between synchronicity and performance.
The field in the configuration file that controls the synchronicity level is the outstanding argument. This argument allows you to modify the amount of time or the number of operations a destination waits before sending a write acknowledgment to the source.
outstanding={x ops | x ms | x s}
Default is no value: SnapMirror operates in a fully synchronous manner.
The ops suffix allows x number of outstanding write operations before forcing the clients to wait for an
acknowledgment.
s or ms defines the amount of time (seconds or milliseconds) a destination waits before sending a write
acknowledgment.
When the outstanding value is less than 10 seconds, the source forwards the NVLOG as it would in sync mode, but it does not wait for the acknowledgment from the destination system. This improves performance for clients writing to the source volume. However, there is a possibility of data loss during that interval should the source crash before forwarding the NVLOG.
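A pre-7.3 snapmirror.conf entry using the outstanding argument might look like the following sketch. The host and volume names are assumptions:

```
# Allow the destination to lag by up to 5 seconds before client writes must wait.
# NVLOG forwarding still occurs because the value is under 10 seconds.
fasA:vol1 fasB:vol1 outstanding=5s sync
```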
When the outstanding value is set to 10 seconds or more, only the CP streams are forwarded to the
destination; NVLOG data is not forwarded. Eliminating NVLOG forwarding reduces the load on the
storage systems.
DEPLOYMENT EXAMPLES
Example 1: Qtree SnapMirror
src:/vol/vol1/q1 dst:/vol/vol1/q1 15 * * *
The source qtree q1 is replicated to the destination qtree q1 at 15 minutes past every hour, every day of the week and every day of the month.
Example 2: Volume SnapMirror
src:vol2 dst:vol2 kbs=2000 10 8,20 * *
The source volume vol2 is replicated to the destination volume vol2 at 10 minutes past 8:00 a.m. and 8:00 p.m. every day. In other words, the source volume vol2 is replicated daily at 8:10 a.m. and 8:10 p.m. Data ONTAP can use a maximum of 2,000 kilobytes per second to transfer data for this relationship.
Example 3: SnapMirror Sync
src:vol3 dst:vol3 sync
The source volume vol3 is synchronously replicated to the destination volume vol3.
Example 4: SnapMirror Semi-Sync
src:vol4 dst:vol4 semi-sync
The source volume vol4 is replicated to the destination volume vol4 in Semi-Sync mode. Only the CP sync mechanism is used; NVLOG data is not forwarded to the destination.
PERFORMANCE TUNING
In Sync mode, changes are visible on the destination only after the source takes a Snapshot copy of the source volume (every three minutes by default) and transfers it to the destination. To control the view of the data on the destination, use the visibility_interval argument in the snapmirror.conf file. If visibility_interval is set too low, the source system is kept very busy creating Snapshot copies, which can impact performance. Changing the default visibility_interval value of three minutes is not recommended.
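If a longer interval is acceptable, visibility_interval can be raised to reduce Snapshot copy overhead on the source. The entry below is an illustrative sketch with assumed host and volume names:

```
# The destination's view of the data refreshes every 5 minutes
# instead of the 3-minute default.
fasA:vol3 fasB:vol3 visibility_interval=5m sync
```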
In versions of Data ONTAP earlier than 7.2.2, NVLOG data is written to the root volume. This may affect how quickly NVLOG data can be committed to disk. For traditional volumes, ensure that the root volume spans enough disks. Note that with Data ONTAP 7.2.2 and later, the NVLOG files are written to the destination volume's parent aggregate.
A checksum algorithm is used to protect SnapMirror transmitted data. Cyclic redundancy check (CRC) checksums, also known as crc32c, are computed by the CPU on the destination storage system and may have undesired effects on performance. TCP checksums are computed directly on the network interface card (NIC) and are less CPU-intensive; therefore, TCP checksum computation is the recommended method.
NORMAL OPERATIONS
Normal operations have been established with the snapmirror initialize command executed from the Disaster Recovery Site storage system.
The syntax of the command is:
destination> snapmirror initialize -S source:source_vol destination:destination_vol
TEST MODE
To test the mirror, you break the SnapMirror relationship, continue writing to the source, and test writing
to the destination volume.
The syntax of the command is:
destination> snapmirror break destination_vol
DISASTER MODE
Disaster strikes. In this example, a backhoe has dug up the network cables that connect the Data Center to clients. The Data Center volume (dc_vol) is unavailable.
From the Disaster Recovery Site, break the mirror and the SnapMirror replica becomes writable.
The syntax of the snapmirror break command is:
destination> snapmirror break destination_vol
After breaking the mirror, direct clients to the Disaster Recovery volume (dr_vol), and they continue
reading and writing their data.
The Data Center volume is offline and becoming out of date. The last shared Snapshot copy is preserved, however. After the problem is fixed, a combination of the snapmirror resync and snapmirror break commands will help you resume normal operations.
ADVANCED FEATURES
To begin the SnapMirror over FC configuration process, first configure the FC NIC adapters and connect
the systems to the Fibre Channel switches. Optionally, you can set up multiple SnapMirror traffic zones.
SnapMirror traffic must be configured within dedicated zones. After the switches have been set up,
configure SnapMirror and baseline the initial transfers. For details on SnapMirror over Fibre Channel
configuration steps, refer to the latest Data ONTAP Data Protection Online Backup and Recovery Guide
on the NOW site.
SNAPMIRROR TO TAPE
SnapMirror to tape is a deployment that supports SnapMirror replication over low-bandwidth connections by accomplishing the initial transfer between the source and the destination systems using a physically transported tape. After the baseline transfer has been performed with the local tape device, incremental SnapMirror updates can be performed over the network. The SnapMirror-to-tape function is available for volume replication only.
On the source system using Data ONTAP 8.0 7-Mode and later, use the smtape backup command to
copy all volume Snapshot copies, including the base Snapshot copy, to tape. If more than one backup tape
is necessary, use the smtape continue command to continue the copying on a new tape. NOTE: For
systems using earlier versions of Data ONTAP, use the snapmirror store command to copy all volume
Snapshot copies to tape.
Physically transport the backup tapes from the source system to the destination system.
On the destination system, use the vol create and vol restrict commands to set up a
SnapMirror target volume.
For systems using Data ONTAP 8.0 7-Mode and later, use the smtape restore command to copy
the initial SnapMirror tape to the destination system. If the backup spans more than one tape, use the
smtape continue command to continue the restore on a subsequent tape. NOTE: For systems using
earlier versions of Data ONTAP, use the snapmirror retrieve command to restore the data from
tape and the snapmirror use command to continue when more than one backup tape is required.
Use the snapmirror update command to trigger an incremental update from the source to the
destination system over the low-bandwidth connection, or edit the snapmirror.conf file to set up an
incremental update schedule from the source to the destination volume.
Finally, use the snapmirror release command to eliminate the source-to-tape relationship and
associated Snapshot copy.
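The complete tape-seeding sequence for Data ONTAP 8.0 7-Mode and later can be sketched as follows. The system names, volume name, aggregate, size, and tape device name (rst0a) are illustrative assumptions:

```
fasA> smtape backup /vol/vol1 rst0a              # source: write baseline to local tape
fasA> smtape continue rst0a                      # only if a second tape is needed
# ...physically ship the tape(s) to the destination site...
fasB> vol create vol1 aggr1 100g                 # create the target volume
fasB> vol restrict vol1                          # restrict it for SnapMirror
fasB> smtape restore /vol/vol1 rst0a             # restore the baseline from tape
fasB> snapmirror update -S fasA:vol1 fasB:vol1   # incremental update over the network
fasA> snapmirror release vol1 fasB:vol1          # drop the source-to-tape relationship
```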
SNAPMIRROR INTERACTIONS
SNAPMIRROR PERFORMANCE
SnapMirror creates a Snapshot copy before every update and deletes a Snapshot copy at the end. On heavily loaded storage systems, Snapshot copy creation time can stretch out, which restricts the frequency of SnapMirror updates. Stretched SnapMirror schedules can result in SnapMirror creating many Snapshot copies on the source storage system at the same time, which can impact client access. For this reason, staggered SnapMirror schedules are recommended to avoid bottlenecks.
VOLUME SIZE AND CHANGED BLOCKS
To perform an incremental update, the block map in the new Snapshot copy is compared to the block map
in the baseline Snapshot copy. The time required to determine the block changes depends on the volume
size. With Data ONTAP 7.0 and later, you can use the snap delta command to determine the rate of
data change between Snapshot copies on a volume.
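A hypothetical snap delta run might look like the following. The volume name is assumed and the output is abridged and illustrative of the 7-Mode format:

```
fasA> snap delta vol1

From Snapshot    To                   KB changed   Time       Rate (KB/hour)
---------------  -------------------  -----------  ---------  --------------
hourly.0         Active File System   52420        0d 01:00   52420.000
```

A high rate of change between the baseline and the new Snapshot copy predicts a longer incremental transfer.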
To optimize SnapMirror deployment, it is recommended that the schedules be staggered. For qtree
SnapMirror, if there are too many qtrees per destination volume, the solution is to re-baseline those qtrees
to another volume.
CPU UTILIZATION
SnapMirror consumes available CPU cycles on a storage system.
Even when the source storage system shows CPU utilization near 100%, this does not necessarily mean that system performance or SnapMirror throughput is degraded.
SnapMirror may have some impact, but in the majority of cases it is not significant.
You can monitor storage system CPU using Operations Manager Performance Advisor or the Data
ONTAP sysstat command.
SYSTEM ACTIVITIES
On heavily loaded systems, SnapMirror competes with other processes and may impact response times.
To address this problem you can set the system priority to High or Very High on dedicated storage
systems for SnapMirror replication using FlexShare software.
You can also schedule SnapMirror updates at times when NFS or CIFS traffic is low and reduce the
frequency of updates.
Finally, consider upgrading to a more powerful NetApp controller when system resources become the bottleneck.
Networking issues can also be addressed by using a dedicated path for SnapMirror transfers or using
multiple paths for load balancing and failover.
If the network still does not perform up to expectations, look for typical network problems. For example,
duplex mismatches can cause networks to be very slow.
Data protection using SnapMirror
SnapMirror is a feature of Data ONTAP that enables you to replicate data. SnapMirror enables you to replicate data from specified source volumes or qtrees to specified destination volumes or qtrees, respectively. You need a separate license to use SnapMirror.
You can use SnapMirror to replicate data within the same storage system or between different storage systems.
After the data is replicated to the destination storage system, you can access the data on the destination to perform the following actions:
- You can provide users immediate access to mirrored data in case the source goes down.
- You can restore the data to the source to recover from disaster, data corruption (qtrees only), or user error.
- You can archive the data to tape.
- You can balance resource loads.
- You can back up or distribute the data to remote sites.
You can configure SnapMirror to operate in one of the following modes:
- Asynchronous mode: SnapMirror replicates Snapshot copies to the destination at specified, regular intervals.
- Synchronous mode: SnapMirror replicates data to the destination as soon as the data is written to the source volume.
- Semi-synchronous mode: SnapMirror replication at the destination volume lags behind the source volume by 10 seconds. This mode is useful for balancing the need for synchronous mirroring with the performance benefit of asynchronous mirroring.
SnapMirror can be used with traditional volumes and FlexVol volumes.
How SnapMirror works
SnapMirror replicates data from a source volume or qtree to a partner destination volume or qtree, respectively, by using Snapshot copies. Before using SnapMirror to copy data, you need to establish a relationship between the source and the destination.
You can specify a SnapMirror source and destination relationship between volumes or qtrees by using one of the following options:
- The /etc/snapmirror.conf file
- The snapmirror.access option
- The /etc/snapmirror.allow file
The SnapMirror feature performs the following operations:
1. Creates a Snapshot copy of the data on the source volume.
2. Copies it to the destination, which can be a read-only volume or qtree.
3. Updates the destination to reflect incremental changes on the source, as per the schedule you specify.
The result of this process is an online, read-only volume or qtree that contains the same data as the source at the time of the most recent update.
Each of the following replication methods consists of a pair of operations, one operation each at the source storage system and the destination storage system:
- Volume SnapMirror replication
- Qtree SnapMirror replication
- SnapVault replication
If a storage system is the source for one replication and the destination for another replication, it uses two replication operations. Similarly, if a storage system is the source as well as the destination for the same replication, it uses two replication operations.
semi-sync: The source system acknowledges the client write operation immediately after the source receives the data. The destination system is synchronized with the source at intervals of approximately 10 seconds. The semi-sync option provides a recovery point objective of about 10 seconds. This means that if the source becomes unavailable, you might lose up to 10 seconds worth of data changes. The semi-sync mode provides a performance advantage over the sync mode.
Note: If neither of these two modes is specified, then the SnapMirror relationship is set as asynchronous.
2. In the asynchronous mode, the source system tries to communicate with the destination system once a minute.
3. When the source system reestablishes communication with the destination system, the source system asynchronously replicates data to the destination.
4. SnapMirror gradually transitions the replication relationship to the synchronous mode.
If the latest common Snapshot copy is deleted from the source, SnapMirror does not transition back from asynchronous to synchronous mode. The SnapMirror relationship should be broken and resynchronized by using the snapmirror break and snapmirror resync commands. However, you can avoid this situation and let the relationship go to synchronous mode automatically by setting the replication.volume.use_auto_resync option to on. The default value of this option is off.
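The two recovery paths described above can be sketched as follows. The system and volume names are illustrative assumptions:

```
# Manual recovery after the latest common Snapshot copy is lost:
fasB> snapmirror break vol1
fasB> snapmirror resync -S fasA:vol1 fasB:vol1

# Or allow the relationship to resynchronize automatically:
fasB> options replication.volume.use_auto_resync on
```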
The SnapMirror source volume, however, can be the root volume. A destination qtree can be on the root volume, but the /etc qtree cannot be a destination qtree.
Points of caution while using SnapMirror
- Do not delete Snapshot copies that SnapMirror creates in the source volume before copying the data to the destination. The most recent SnapMirror Snapshot copy is referred to as the newest common Snapshot copy (NCS). Incremental changes to the destination depend on the NCS. If SnapMirror cannot find the required Snapshot copy on the source, it cannot perform incremental changes to the destination.
- Do not use the snapmirror release or snapmirror break command on the destination volume or qtree unless you no longer need to copy incremental changes from the source. The destination must be actively functioning as a destination to receive incremental updates.
- Do not restrict or take the destination volume offline while SnapMirror is configured to transfer. Taking the destination offline prevents SnapMirror from performing updates to the destination.
Ensure that the directories are in the Unicode format. This ensures that the read-only directory copied to the destination is in the Unicode format. This also enables requests through CIFS to access the directory and its files on the destination, and prevents Access denied errors.
You can ensure that both source volume and destination volume directories are in the Unicode format by using one of the following methods:
Method 1
On the system console for the source volume, enter these two commands:
vol options vol_name convert_ucode on
Use this command to convert any existing directories in a volume to the Unicode format.
vol options vol_name create_ucode on
Use this command to ensure that any new directories created in a volume are in the Unicode format.
Considerations before using synchronous SnapMirror
You need to consider certain issues when planning to use SnapMirror for synchronous replication.
- One source volume cannot have synchronous SnapMirror relationships to multiple destination volumes.
- You cannot create a synchronous SnapMirror relationship between FlexVol volumes within the same system or within the same HA pair.
- You must ensure that the source and destination have the same version of Data ONTAP installed.
- You cannot use synchronous or semi-synchronous SnapMirror to replicate volumes that use deduplication.
- A volume with a synchronous or a semi-synchronous SnapMirror relationship should not be placed in the same aggregate as a volume that uses deduplication.
- Synchronous SnapMirror needs three Snapshot copies to get into synchronization. Therefore, you should limit the combined total of Snapshot copies retained on any one volume to 252 or fewer.
SnapMirror commands
How the snapmirror initialize command matches source and destination volume size
When you use the snapmirror initialize command to initialize a volume replication, SnapMirror sets the vol options fs_size_fixed option to on. This option forces the file system on the destination volume to remain the same size as the file system on the source volume.
What you can do if an initial SnapMirror transfer fails
If an initial SnapMirror transfer fails, you can resume the transfer by re-entering the
snapmirror
initialize command, under certain conditions.
If the following conditions are met, you can resume the initialization of a SnapMirror
relationship:
The value for restart mode in the /etc/snapmirror.conf file is set to always or is set
to the
default, and the next scheduled update has not begun.
The output of the snapmirror status command displays that the process has a
restart
checkpoint.
The Snapshot copy used for the initial SnapMirror transfer still exists.
The disk geometry has not changed.
Note: If these conditions are not satisfied, you cannot resume the initial transfer.
You need to start
the initial SnapMirror transfer again.
SnapMirror does not automatically retry to initialize a destination.
What the snapmirror.conf file does
The snapmirror.conf file is used to specify the details related to the copying of data,
by using
SnapMirror, from the source to the destination. This file resides on the destination
storage system.
The /etc/snapmirror.conf file defines:
The relationship between the source and the destination.
The schedule used by the destination to copy data.
The arguments that control SnapMirror when copying data.
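As a hedged illustration, a minimal destination-side /etc/snapmirror.conf entry combines those three elements on one line (system and volume names here are hypothetical):

```
# source        destination    arguments   schedule (minute hour day_of_month day_of_week)
srcA:vol1       dstA:vol1      kbs=2000    15 * * *
```

The arguments field throttles or tunes the transfer, and the four trailing fields schedule it; a dash in the arguments field means no arguments.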
Converting asynchronous SnapMirror replication to synchronous
You can change an asynchronous volume SnapMirror relationship to replicate data
synchronously by
editing the snapmirror.conf file on the destination system.
About this task
Synchronous replication is not supported for qtree SnapMirror relationships.
Therefore, a qtree
SnapMirror relationship cannot be converted to a synchronous SnapMirror
relationship.
Step
1. To convert an asynchronous SnapMirror relationship to a synchronous SnapMirror
relationship,
49
on the administration host, edit the snapmirror.conf file on the destination system to
change
the schedule to sync.
What the quiesce command does
The snapmirror quiesce command waits for all volume and qtree SnapMirror
transfers to
complete, and blocks any further updates. If a qtree is not in a stable state (is in
transition), the
snapmirror quiesce command forces it into a stable state.
You can quiesce only volumes and qtrees that are online and that are SnapMirror
destinations. You
cannot quiesce a restricted or offline volume or a qtree in a restricted or offline
volume.
The snapmirror quiesce command stops a volume or qtree from acting as a
SnapMirror
destination, but does not prevent it from acting as a SnapMirror source.
You can use the snapmirror resume command to restore the capability for data transfer
to a volume or qtree you have quiesced.
Look at the number of successful transfers and the failure rate.
Tune the schedule.
Create a notifier for aborted transfers.
Monitor performance on a per-volume level.
Be assured that things are working as planned.
If you change the volume name of a SnapMirror source or destination, you need to
make the
following changes.
1. Update the snapmirror.conf file, if there is an old entry.
2. Use the snapmirror release command to update the old destination name, and
SnapMirror
releases the soft lock and the old Snapshot copy.
3. Use the snapmirror update command on the new volume name, and status registry
is updated
with the new volume name.
4. Update the /etc/exports file.
Note: If a system is running at its limit of concurrent transfers, and you attempt to
initiate more transfers by using the snapmirror update command, the attempted
transfer will fail.
Accidental deletion of SnapMirror Snapshot copies
SnapMirror Snapshot copies stored on either the SnapMirror source or destination
location must not
be deleted. If the base Snapshot copy (the most recent common Snapshot copy) is
accidentally deleted from either the source or destination location, you can attempt
recovery.
You might be able to recover without reinitializing the destination by breaking the
SnapMirror
relationship and then resynchronizing the source and the destination.
As long as there is at least one Snapshot copy common to both the source and the
destination,
resynchronization will succeed.
If there is no Snapshot copy common to both the source and the destination, you
need to use the
snapmirror initialize command over the network. Or, if the source and destination
are
volumes, you must use the smtape backup command to store the source volume on
tape and then
use the smtape restore command to restore the volume from the tape to the
destination.
Space issues when volume space guarantee is enabled for a destination
volume
When volume space guarantee is enabled on a SnapMirror destination volume, if the
destination
volume is larger than the source volume, the destination volume consumes more
space than the
source volume. You can reduce the size of the destination volume by using the vol
size command.
Applications of SnapMirror
SnapMirror is used to replicate data. Its qualities make SnapMirror useful in several scenarios,
including disaster recovery, data backup, and data restoration.
You can copy or use the data stored on a SnapMirror destination. The additional
advantages of SnapMirror make it useful in a variety of data retrieval situations.
Turning SnapMirror on
Before using SnapMirror you need to enable the SnapMirror license on both the source and the
destination systems.
Steps
1. To turn SnapMirror on, enter the following command on both the source system and
destination system:
options snapmirror.enable on
If the SnapMirror source and destination are FlexVol volumes, the RAID configurations do
not make a difference.
If the SnapMirror source and destination are qtrees, volume size and configuration do not
make any difference.
When a deduplicated volume is replicated by using volume SnapMirror, the space savings
are replicated along with the volume. The data in the volume is usable both on the
source and the destination.
Note: When configuring volume SnapMirror and deduplication, you should ensure that
deduplication and volume SnapMirror operations do not run at the same time. You should
start the volume SnapMirror transfer of a deduplicated volume after the deduplication
operation is complete. This avoids degrading replication performance while
deduplication is in progress, and prevents sending undeduplicated data and temporary
deduplication metadata files over the network.
To achieve maximum space savings on the destination volume, you must scan the entire
file system to re-create the deduplication metadata for the destination volume. Use
the sis start -s command to do so.
Note: The destination volume is accessible for read-write operations when the deduplication
scan is in progress.
When you replicate data using qtree SnapMirror, the space savings at the destination
and the source systems are the same if inline compression is enabled on the source
system.
replication over low-bandwidth connections accommodates an initial mirroring between a source and
destination volume using physically-transported tape (as shown in the following illustration). After
the large base Snapshot copy has been replicated, smaller, incremental Snapshot copy updates can be
carried out over a low-bandwidth connection.
Cascading destinations variation: A variation on the basic SnapMirror deployment and function
involves a writable source volume replicated to multiple read-only destinations. The function of this
deployment is to make a uniform set of data available on a read-only basis to users from various
locations throughout a network and to allow for updating that data uniformly at regular intervals.
SnapMirror files
SnapMirror uses configuration files, log files, and other files.
If your source volumes contain directories that are accessed by CIFS clients, you should have
ensured that those directories are in the Unicode format before replicating the volume using
SnapMirror.
Also, you should have ensured that you have appropriate SnapMirror licenses for both the source
and destination systems.
Steps
1. On both the source and the destination system consoles, enter the following command
to enable the SnapMirror license:
license add snapmirror_license_key
The following entry specifies Snapshot copy replication from vol0 of s_systemA to vol1
of d_systemA at a maximum of 2,000 kilobytes per second, 15 minutes past every hour,
Monday through Friday:
s_systemA:vol0 d_systemA:vol1 kbs=2000,restart=always 15 * * 1,2,3,4,5
To synchronously mirror vol0 to vol1, you must use the following entry:
s_systemA:vol0 d_systemA:vol1 - sync
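Each entry, as in the examples above, consists of four whitespace-separated parts: source, destination, arguments, and a four-field schedule (minute, hour, day-of-month, day-of-week). The following sketch shows how such an entry breaks into fields; it is illustrative only, not the actual Data ONTAP parser, and it handles only the comma-separated argument form shown above:

```python
# Illustrative breakdown of an /etc/snapmirror.conf entry into its four
# parts: source, destination, arguments, and a 4-field schedule
# (minute, hour, day_of_month, day_of_week).
def parse_entry(line: str) -> dict:
    fields = line.split()
    src, dst, args = fields[0], fields[1], fields[2]
    return {
        "source": src,
        "destination": dst,
        # e.g. "kbs=2000,restart=always" -> {"kbs": "2000", "restart": "always"}
        "arguments": dict(kv.split("=") for kv in args.split(",")),
        "schedule": dict(zip(("minute", "hour", "day_of_month", "day_of_week"),
                             fields[3:7])),
    }

entry = "s_systemA:vol0 d_systemA:vol1 kbs=2000,restart=always 15 * * 1,2,3,4,5"
print(parse_entry(entry)["schedule"]["minute"])  # prints 15
```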
For more information about schedule entries in the /etc/snapmirror.conf file, see the
na_snapmirror.conf(5) man page.
Invoking the following command transfers a complete copy of the source volume (vol0 on
systemA) to the destination volume (vol2 on systemB):
snapmirror initialize -S systemA:vol0 systemB:vol2
The following command creates a destination qtree (qtree4 on vol1 on systemB) and
transfers a complete copy of the source qtree (qtree4 on vol1 on systemA) to that
destination qtree:
snapmirror initialize -S systemA:/vol/vol1/qtree4 systemB:/vol/vol1/qtree4
The volume in which the destination qtree is created must be online and writable.
After using the snapmirror initialize command, the scheduled Snapshot copy replication
that you specified in Step 3 automatically updates the destination volume or qtree at
the specified times.
After you finish
If the SnapMirror source volume or qtree becomes unavailable, you can use the
snapmirror break command to make the destination volume or qtree writable. This
enables you to provide continued access to data for the clients who are no longer able
to access the unavailable source.
destination online in a read-only state. This version of the destination is the baseline for the first
incremental update.
While the initial data transfer is taking place, the destination is marked invalid in
the output of a vol status command. The volume becomes valid and goes online after the
initial transfer is complete.
Note: Any attempt to bring this volume online manually will only succeed after the initial transfer
is complete.
Note: SnapMirror automatically deletes old Snapshot copies that are no longer necessary for
updating data.
You can set up SnapMirror to use multiple paths at the outset. You can also convert a single path
SnapMirror relationship to use multiple paths.
Steps
1. Ensure that you have two valid paths by using the ping command from the source
system to each of the IP addresses on the destination system.
2. On the source system console, use the options snapmirror.access command to specify
the host names of systems that are allowed to copy data directly from the source
system.
Example
options snapmirror.access host=d_systemA
3. Edit the snapmirror.conf file on the destination system to add an entry that
defines the mode of the connection and what the two connections are. The format of the
entry is as follows:
name=mode(src_system-e0,dst_system-e0)(src_system-e1,dst_system-e1)
where mode is either multi or failover. See the na_snapmirror.conf(5) man page for
details.
4. Edit the /etc/snapmirror.conf file on the destination system to specify the volumes
and qtrees to be copied and the schedule (minute, hour, day_of_month, day_of_week,
sync, or semisync) on which the destination is updated. Use the connection name
specified in the previous step as the source system.
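Combining the connection definition (step 3) with the relationship entry (step 4), a destination-side /etc/snapmirror.conf might look like the following sketch; the connection name, host, interface, and volume names are hypothetical:

```
# Two network paths from s_systemA to d_systemA, used simultaneously (multi mode)
SM_paths=multi(s_systemA-e0,d_systemA-e0)(s_systemA-e1,d_systemA-e1)
# Relationship entry that uses the connection name in place of the source system
SM_paths:vol0 d_systemA:vol1 restart=always 15 * * *
```

With mode failover instead of multi, the second path is used only if the first becomes unavailable.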
If no arguments or options are given, SnapMirror displays a message that indicates
whether a transfer is in progress, how much of the data transfer has been completed,
the state of the destination, and the amount of time since the last Snapshot copy was
created and transferred successfully.
Ascertain the round-trip time between the source and the destination for a SnapMirror
relationship.
Determine the bandwidth available for the SnapMirror relationship.
The default TCP window size for a SnapMirror relationship is 1,994,752 bytes.
Adjustment of the TCP window size is applicable only for asynchronous SnapMirror
relationships.
For qtree SnapMirror relationships, TCP window sizes higher than the default value are not
supported.
Note: You should only adjust the TCP window size for a SnapMirror relationship if there are
The TCP window size specifies the amount of data that a source can send through a connection
before it requires an acknowledgement from the destination for the data received. A larger TCP
window size can increase SnapMirror throughput in certain scenarios. You can change
the TCP window size to optimize SnapMirror transfers for the network in use.
Note: When using higher TCP window sizes than the default, the system might not be able to
achieve the maximum concurrent replication operations specified for the system. This is due to
increased resource utilization by the higher TCP window sizes.
The maximum TCP window size that you can specify for a SnapMirror relationship depends
on the connection type, as given in the following table.

Connection type    Default TCP window size    Maximum TCP window size
Single path        1,994,752 bytes            7,340,032 bytes (7 MB)
Multipath          1,994,752 bytes            14,680,064 bytes (14 MB)
Note: To limit the network bandwidth used by a particular SnapMirror relationship, use
the kbs parameter for the relationship entry in the snapmirror.conf file.
2. Specify the required TCP window size by adding the following option to the
SnapMirror relationship entry in the snapmirror.conf file:
wsize=value
value is the required TCP window size (in bytes), as calculated in the preceding step.
Example
The following entry specifies a TCP window size of 3,250,000 bytes for the SnapMirror
relationship:
src_system:src_vol dst_system:dst_vol wsize=3250000 * * * *
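The window size suggested by steps 1 and 2 is essentially the bandwidth-delay product of the link, clamped between the default and the per-connection maximum from the table above. A minimal sketch of that calculation follows; the function and its names are illustrative, not part of Data ONTAP:

```python
# Rough sketch: choose a TCP window size as the bandwidth-delay product,
# clamped between the SnapMirror default and the per-connection maximum.
DEFAULT_WINDOW = 1_994_752                 # bytes
MAX_WINDOW = {"single": 7_340_032,         # 7 MB
              "multi": 14_680_064}         # 14 MB

def suggested_wsize(bandwidth_bps: float, rtt_s: float,
                    path: str = "single") -> int:
    """bandwidth (bits/s) x round-trip time (s) / 8 = bytes in flight."""
    bdp = int(bandwidth_bps * rtt_s / 8)
    # Never go below the default, never above the per-connection maximum.
    return max(DEFAULT_WINDOW, min(bdp, MAX_WINDOW[path]))

# Example: a 100 Mb/s link with a 200 ms round-trip time, single path.
print(suggested_wsize(100e6, 0.2))         # prints 2500000
```

A long, fast link like this yields a window in the low megabytes, which is why a wsize value such as the 3,250,000 bytes in the example entry above is plausible.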
Whether you are moving a volume SnapMirror source or qtree SnapMirror source to new systems or
newer drives, as long as there is a Snapshot copy in common on the source and destination, the
transition goes smoothly.
Volume SnapMirror transfers all of the Snapshot copies as part of the SnapMirror replication process.
For qtree SnapMirror, the source and destination have only one Snapshot copy in common. Different
qtree SnapMirror destinations have no common Snapshot copy, unless the Snapshot copy is
specifically replicated.
In a production environment, you should perform the process of moving SnapMirror relationships
from one volume or system to another only in a maintenance or out-of-service window. You should
also ensure that new data is not added to the original source during the move.
Ensure that the Data ONTAP release on the destination system is from a release family that is the
same as, or later than, the Data ONTAP release on the new source system.
About this task
The following terms are used in the task description for moving volume SnapMirror sources.
oldsource: The original system on which the source resides.
newsource: The system to which you are moving the source.
destination: The system to which the source is replicated.
oldsourcevol: The original source volume.
newsourcevol: The new source volume to which you are moving.
destinationvol: The volume to which the source is replicated.
Steps
1. Copy the original source to the new source using the following command:
newsource> snapmirror initialize -S oldsource:oldsourcevol newsource:newsourcevol
Note: This might take some time to finish.
4. Update newsource and destination based on oldsource using the following commands:
newsource> snapmirror update -S oldsource:oldsourcevol newsource:newsourcevol
destination> snapmirror update -S oldsource:oldsourcevol destination:destinationvol
Note: The common_Snapshot Snapshot copy is on all volumes because all Snapshot copies
are transferred by volume SnapMirror.
5. Quiesce and break the SnapMirror relationships by using the following commands:
destination> snapmirror quiesce destinationvol
destination> snapmirror break destinationvol
newsource> snapmirror quiesce newsourcevol
newsource> snapmirror break newsourcevol
6. Using an editor, update the /etc/snapmirror.conf file on the destination for the
new relationship by replacing the oldsource information with newsource information.
Before edit
oldsource:oldsourcevol destination:destinationvol restart=always 0 * * *
After edit
newsource:newsourcevol destination:destinationvol restart=always 0 * * *
Note: SnapMirror discards any Snapshot copies older than common_Snapshot: namely, the
ones used for the last SnapMirror update. This is expected and no data is lost if you
ensure that no new data was added to the original source volume during the move.
The new SnapMirror relationship automatically picks the newest Snapshot copy in common
to mirror. This is the common_Snapshot Snapshot copy.
8. Verify that the SnapMirror relationship is resynchronizing by using the following
command:
destination> snapmirror status
The process of moving the qtree SnapMirror source involves creating a Snapshot copy on the
original source and then replicating the Snapshot copy on the destinations, both the new source
and the existing destination. After this is done, the Snapshot copy is common on all volumes,
allowing for the SnapMirror relationship to be broken from the original source and established
between the new source and the existing destination.
The following terms are used in the task description for moving qtree SnapMirror
sources:
oldsource: The original system on which the source resides.
newsource: The system to which you are moving the source.
destination: The system to which the source is replicated.
oldsourcevol: The original source volume.
newsourcevol: The new source volume to which you are moving.
destinationvol: The volume to which the source is replicated.
Steps
1. Copy the original source to the new source by using the following command:
newsource> snapmirror initialize -S oldsource:/vol/oldsourcevol/qtree
newsource:/vol/newsourcevol/qtree
Note: This might take some time to finish.
2. Create a manual Snapshot copy on the oldsource system by using the following
command:
oldsource> snap create oldsourcevol common_Snapshot
4. Quiesce and break the SnapMirror relationship between oldsource and destination,
and oldsource and newsource, using the following commands:
destination> snapmirror quiesce /vol/destinationvol/qtree
destination> snapmirror break /vol/destinationvol/qtree
newsource> snapmirror quiesce /vol/newsourcevol/qtree
newsource> snapmirror break /vol/newsourcevol/qtree
5. Using an editor, update the /etc/snapmirror.conf file on the destination for the
new relationship by replacing the oldsource information with newsource information.
Before edit
oldsource:/vol/oldsourcevol/qtree destination:/vol/destinationvol/qtree restart=always 0 * * *
After edit
newsource:/vol/newsourcevol/qtree destination:/vol/destinationvol/qtree restart=always 0 * * *
6. Establish the new SnapMirror relationship using the following command on the
destination system:
snapmirror resync -S newsource:/vol/newsourcevol/qtree
destination:/vol/destinationvol/qtree
Note: SnapMirror discards any Snapshot copies older than the common Snapshot copy,
namely, the ones used for the latest SnapMirror update. This is expected and no data is lost if
you ensure that no new data was added to the original source volume during the move.
The new SnapMirror relationship automatically picks the newest common Snapshot copy for
replication. This is the common Snapshot copy.
7. Verify that the SnapMirror relationship is resynchronizing by using the following
command:
destination> snapmirror status
If you are migrating data within the same storage system and the source volume
contains LUNs, then you must have unmapped the source LUNs by using the lun unmap
command.
About this task
The migration must be run on two volumes which are currently the source volume and
destination volume in a SnapMirror relationship. When you start the migration process,
SnapMirror does the following:
Performs a SnapMirror incremental transfer to the destination volume.
Stops NFS and CIFS services on the entire system with the source volume.
Migrates NFS file handles to the destination volume.
Makes the source volume restricted.
Makes the destination volume read-write.
SnapMirror does not transfer IP addresses, license keys, or quota information. You
must remount the volume on the NFS clients unless one of the following is true:
The IP address of the source system is transferred to the destination system independently after
the migration.
The source and destination volumes reside on the same system, in which case, the IP address to
access either volume is the same.
SnapMirror does not migrate CIFS clients. You must reestablish CIFS client sessions after
migrating data to the destination volume.
Step
You can convert a SnapMirror destination, which is in a read-only status, to a
writable volume or qtree.
You might want to convert a destination to a writable volume or qtree to perform one of the following
tasks:
Data migration: Moving data from one volume or qtree (original source) to another
volume or qtree (present destination) and making the data on the destination
accessible and writable.
Disaster recovery: If your source becomes unavailable, your present destination can
substitute as the users' data retrieval and input source.
Application testing: Making your current destination volume or qtree writable to test
a new application on a mirrored replica of your current data rather than risk
corrupting the original data on the source volume or qtree.
Converting the destination to a writable volume or qtree enables you to use data on
the destination, especially when the original source is unavailable.
Quota restrictions
Quotas are always disabled on a SnapMirror volume destination, regardless of whether quotas are
enabled on the source volume. If you try to enable quotas on a volume destination, SnapMirror
displays an error message. Quotas are not disabled on SnapMirror destination qtrees.
If the source volume or qtree and the destination reside on different storage systems,
and you want the same quota restrictions to be applied after you make the destination
writable, the destination system must have an /etc/quotas file that includes all the
entries from the /etc/quotas file used by the source system.
If you use SnapMirror replication for data migration, you can copy the /etc/quotas
entries from the source system to the /etc/quotas file of the destination system
before you use the snapmirror break command to make the destination writable.
If you use SnapMirror replication for backup and potential disaster recovery, you must
keep a copy on the destination system of all /etc/quotas entries used by the source
system at all times. That way, you can apply the quota entries to the destination
volume or qtree if the source system becomes unavailable.
To make a destination qtree writable, enter the following commands on the destination
system:
snapmirror quiesce /vol/volume_name/qtree_name
snapmirror break /vol/volume_name/qtree_name
2. If you want to enable quotas on the former destination volume, carry out the following steps:
a. Edit the /etc/quotas file on the former destination system so that, after the
conversion, the former destination includes the same quota restrictions as the source
volume.
If the original source volume uses per-volume quotas, replace the original source
volume name with the former destination name in the quota entries.
b. Enter the following command to enable quotas on the former destination:
quota on volume_name
1. To increase the size of the SnapMirror source volume, enter the following command:
vol size vol_name size
vol_name is the name of the SnapMirror source volume.
size is the required size of the SnapMirror source volume.
Note: SnapMirror updates the size of the destination volume to match the source in the
next SnapMirror transfer.
2. On the destination system, enter the following command to check the size of the
destination volume and the file system:
vol status vol_name -b
vol_name is the name of the SnapMirror destination volume.
The size of the destination volume must have increased to the same size as that of the
source volume.
Before increasing the size of a source volume, you must compare the size of the source and
destination volumes for a specific volume SnapMirror relationship. If the destination volume is not
large enough to contain the larger source volume, you must manually resize the destination volume.
Note: In an active volume SnapMirror relationship, the size of the source and destination file
systems is identical. However, the size of the destination volume can be larger than or equal to the
size of the source volume. The size of the file systems must be identical to allow the source to be
restored from the destination.
Steps
1. On the source system, enter the following command to check whether the
fs_size_fixed option is off:
vol status vol_name -v
vol_name is the name of the source volume for SnapMirror.
Note: If the fs_size_fixed option is set to off, the size of the destination volume's
file system might be different from that of the source volume. To restore the source
volume from the SnapMirror destination, the size of the source and destination file
systems should be identical. When the status of a volume SnapMirror relationship is
Idle, and the size of the destination volume is increased, you can change the
fs_size_fixed option to on.
3. On the source system, enter the following command to find the size of the source
volume and the size of the file system:
vol status vol_name -b
vol_name is the name of the SnapMirror source volume.
7. To increase the size of the SnapMirror destination volume, enter the following
command:
vol add vol_name disks
vol_name is the name of the SnapMirror destination volume.
disks is the number of disks that you want to add to the traditional volume.
You can use the vol status command to confirm the increase in the volume size.
Note: You can increase the destination volume size to be equal to or greater than the
intended size of the source volume.
This option ensures that the sizes of the SnapMirror source and destination file
systems are identical.
Note: SnapMirror updates the size of the destination file system to match the source
in the next SnapMirror transfer.
Stabilizing (quiescing) SnapMirror destinations
You can block further data transfers to a SnapMirror destination by using the
snapmirror quiesce command.
Step
1. Enter the following command on the system on which you want to block transfers:
snapmirror quiesce {dest_volume | /vol/volume_name/qtree_name}
dest_volume is the name of the destination volume.
qtree_name is the name of a qtree in volume_name.
Example
systemA> snapmirror quiesce vol1
snapmirror quiesce: in progress.
snapmirror quiesce: vol1: successfully quiesced
Example
systemA> snapmirror quiesce vol2
snapmirror quiesce: in progress.
This can be a long-running operation. Use Control-C to interrupt.
......................................
snapmirror quiesce: vol2: successfully quiesced
SnapMirror waits for a transfer to finish and stops any further data transfers to vol2.
Example
systemA> snapmirror quiesce /vol/vol1/qtree1
Note: If you decide to abort a SnapMirror quiesce operation, press Ctrl-C or enter the
snapmirror resume command at any time.
A SnapMirror destination volume might have been deleted after the volume was quiesced.
If you want to create a SnapMirror destination volume with the same name as the
deleted volume, first use the snapmirror release command. This step ensures that the
SnapMirror relationship is set up properly.
1. Enter the following command on the system on which you want to resume transfers:
snapmirror resume {dest_volume | /vol/vol_name/qtree_name}
dest_volume is the name of the destination volume.
qtree_name is the name of a qtree in vol_name.
Example
systemA> snapmirror resume vol2
snapmirror resume: vol2: Successfully resumed
You should consider the following issues before aborting a SnapMirror transfer:
If you abort a copy operation, data transfer stops and SnapMirror is put in a restartable mode.
If you use the -h (hard abort) option with the snapmirror abort command, you cannot
restart the transfer.
Step
1. From either the source or the destination system, enter the following command:
snapmirror abort [-h] {[dest_system:]dest_volume |
[dest_system:]/vol/volume_name/qtree_name ...}
-h specifies a hard abort; the transfer cannot be restarted. SnapMirror stops the
transfer and clears the restartable transfer log. This option applies only to the
SnapMirror destination.
If you enter an invalid SnapMirror destination (one that is not displayed in the
output of the snapmirror status command), the command fails and displays an error
message.
Example
systemA> snapmirror abort vol1 systemB:vol2 systemC:/vol/vol3/qtree3
snapmirror abort: Aborting transfer to vol1 systemB:vol2 systemC:/vol/vol3/qtree3
SnapMirror aborts the transfer to vol1 on systemA, where the command was entered, and
aborts the transfer to vol2 on systemB and the transfer to qtree3 in vol3 on systemC.
Releasing a source from a destination volume or qtree allows the source to delete its
base Snapshot copy for the SnapMirror relationship. After breaking the relationship,
you need to take additional steps to scrub the destination. Unless these extra steps
are performed, the Snapshot copies associated with the broken relationship remain
stored on the destination system, and a snapmirror status command continues to list
the former destination object as a current destination object.
Steps
source_volume or qtree_path is the name of the source volume or the path to the qtree
that you want to release.
SnapMirror frees all resources on the source system that had been dedicated to the
SnapMirror relationship.
2. On the destination system, enter the following command to break the SnapMirror
relationship between the source and destination objects:
snapmirror break {vol_name | qtree_path}
vol_name is the name of the volume that you want to release from the relationship.
qtree_path is the path of the qtree that you want to release from the relationship.
In the detailed output that is displayed, note the Snapshot copy basename associated with the
SnapMirror relationship that you just broke.
4. On the destination system, use the following command to delete the Snapshot copy
set that you displayed in the previous step:
snap delete dest_vol snapshot_basename
5. Through the Admin host client, edit the /etc/snapmirror.conf file on the
destination system. Locate and delete the entry that specifies the SnapMirror
relationship you want to end.
1. Enter the following command on the system for which you want the information:
options snapmirror.log.enable
SnapMirror keeps the current log on the root volume of the system as
/etc/log/snapmirror.0.
A new log file is generated every week as /etc/log/snapmirror.0. Older log files are
renamed /etc/log/snapmirror.[1-5] and the oldest log file is deleted. You can read the
log files using a text editor.
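The weekly rotation described above behaves like standard numbered log rotation. The sketch below lists the steps of one rotation cycle; it is illustrative only, since Data ONTAP performs this internally:

```python
# Sketch of the weekly rotation scheme: snapmirror.0 is always the current
# log; older logs shift to snapmirror.1 .. snapmirror.5, and the oldest
# (snapmirror.5) is deleted.
def rotation_plan(base: str = "snapmirror", keep: int = 5) -> list:
    """Return the steps for one weekly rotation, oldest file first."""
    plan = [("delete", f"{base}.{keep}")]           # oldest log is deleted
    for n in range(keep - 1, -1, -1):               # .4 -> .5, ..., .0 -> .1
        plan.append(("rename", f"{base}.{n}", f"{base}.{n + 1}"))
    plan.append(("create", f"{base}.0"))            # start a new current log
    return plan

for step in rotation_plan():
    print(step)
```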
Step
1. Enter the following command on the system for which you want the log:
options snapmirror.log.enable on
Note: This setting is persistent across reboots.
Result
The option snapmirror.vfiler0.logging.enable is on by default in Data ONTAP 8.2.2. All
SnapMirror logging activity in the vfiler0 and non-default vFiler context is now saved
into a single SnapMirror log file, /vol/vol0/etc/log/snapmirror.
If you prefer the earlier behavior, you can turn the
snapmirror.vfiler0.logging.enable option off.
Step
1. Enter the following command on the system for which you want to disable SnapMirror
logging:
options snapmirror.log.enable off
1. In the console of either your source or destination system, enter the following
command:
snap list vol_name
Result
A list of all Snapshot copies stored on your system is displayed. SnapMirror Snapshot copies are
distinguished from system Snapshot copies by a more elaborate naming convention and the label
snapmirror in parentheses.
For qtree replication, SnapMirror creates Snapshot copies of one or more source qtrees on the
source volume that are copied to a qtree on the destination volume.
A qtree SnapMirror Snapshot copy name is in the following format:
dest_system(sysid)_name-src|dst.number
dest_system is the host name of the destination system.
sysid is the destination system ID number.
name is the name of the destination volume or qtree path.
src|dst identifies whether the Snapshot copy is on the source or the destination.
number is an arbitrary start point number for the Snapshot copy. Data ONTAP increments this
number for each transfer.
 0% ( 0%)   0% ( 0%)  Nov 17 10:50  systemB(0016782130)_vol2.1 (snapmirror)
 1% ( 0%)   0% ( 0%)  Nov 17 10:00  hourly.0
 1% ( 0%)   0% ( 0%)  Nov 17 00:00  nightly.0
 1% ( 0%)   0% ( 0%)  Nov 15 16:00  hourly.1
 1% ( 0%)   1% ( 0%)  Nov 15 15:00  hourly.2
 2% ( 0%)   1% ( 0%)  Nov 15 14:00  hourly.3
 2% ( 0%)   1% ( 0%)  Nov 15 13:00  hourly.4
 2% ( 0%)   1% ( 0%)  Nov 15 12:00  hourly.5
When it is time to update the destination, another Snapshot copy is created on systemA.
The snap list command on systemA generates the following display after the
systemB(0016782130)_vol2.2 Snapshot copy is created on systemA.
systemA> snap list vol1
working....
  %/used    %/total   date          name
 0% ( 0%)   0% ( 0%)  Nov 17 10:52  systemB(0016782130)_vol2.2 (snapmirror)
 0% ( 0%)   0% ( 0%)  Nov 17 10:51  systemB(0016782130)_vol2.1 (snapmirror)
 1% ( 0%)   0% ( 0%)  Nov 17 10:00  hourly.0
 1% ( 0%)   0% ( 0%)  Nov 17 00:00  nightly.0
 1% ( 0%)   0% ( 0%)  Nov 15 16:00  hourly.1
 1% ( 0%)   1% ( 0%)  Nov 15 15:00  hourly.2
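The Snapshot copy naming convention shown in these listings can be picked apart with a small helper. This is an illustrative sketch; the regular expression is an assumption based on the fields described above:

```python
import re

# dest_system(sysid)_name.number — e.g. systemB(0016782130)_vol2.2
SNAPMIRROR_SNAP = re.compile(
    r"^(?P<dest_system>[^(]+)"   # destination host name
    r"\((?P<sysid>\d+)\)"        # destination system ID number
    r"_(?P<name>.+)"             # destination volume name or qtree path
    r"\.(?P<number>\d+)$"        # incrementing start-point number
)

def parse_snapmirror_snapshot(snap_name):
    """Split a SnapMirror Snapshot copy name into its fields, or return None
    for non-SnapMirror copies such as hourly.0."""
    m = SNAPMIRROR_SNAP.match(snap_name)
    return m.groupdict() if m else None
```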
You can see a list of each SnapMirror Snapshot copy on the server, the qtrees it contains,
and the client sources of those qtrees and their timestamps by using the snap list -q
command.
You can use the snap list -o command to display the names, timestamps, and sources (if
they are copies) of the qtrees in a specified volume or at a path name.
When applied to the destination, the snapmirror resync command puts a volume or
qtree back into a SnapMirror relationship and resynchronizes its contents with the source without
repeating the initial transfer.
When applied to the source volume, the snapmirror resync command turns the source
volume into a copy of the original destination volume. In this way, the roles of source and
destination are reversed.
Note: Resynchronization is not possible if SnapMirror cannot find a common Snapshot copy on
the source and destination to use as the basis for resynchronization. SnapMirror generates a "No
common snapshot to use as the base for resynchronization" error message and
terminates the command. You must reinitialize the destination to establish the SnapMirror
relationship.
When you run the snapmirror resync command on the source, a reverse relationship from
the destination to the source is established. This resynchronizes all the updated content from the
destination to the source. However, you cannot resynchronize using the Snapshot copies taken on
the destination volume when the destination qtree was in a mirrored state.
You might want to resynchronize a source and a destination volume or qtree under the following
circumstances:
When you change the current source to a different volume or qtree.
When you make a destination volume writable for application testing and then want to make it a
SnapMirror destination again.
When you need to recover from a disaster that disabled the source.
When you want to reverse the functions of the source and the destination.
Note: When you perform resynchronization for the destination system, the contents of the
destination are overwritten by the contents of the source. You can list the Snapshot copies
on the source and the destination that can be used as the newest common Snapshot copy (base
Snapshot copy) so that you can resynchronize a specific SnapMirror relationship.
Data protection using SnapMirror | 187
-f forces the operation to proceed without prompting you for confirmation.
-k n sets the maximum transfer speed to n kilobytes per second. This option has the same
effect as the kbs argument in the /etc/snapmirror.conf file.
-S [source_system:]{source_volume | qtree_path} specifies the system and volume or
qtree you want to use as the source for resynchronization.
The source specified by the -S option must match a source entry in the
/etc/snapmirror.conf file. If entries exist but the source does not match, the operation displays
an error message and terminates. If there is no entry for the specified source, the command
runs.
Note: If the -S option is not set, the source must be specified in the
/etc/snapmirror.conf file. If it is not specified, the operation displays an error message and
terminates.
-c snapshot_name creates a Snapshot copy (with the name snapshot_name) of a qtree on
the destination after the resynchronization transfer completes (so that it does not compete with
any ongoing updates).
SnapMirror does not lock or delete this Snapshot copy.
Note: snapshot_name cannot be minutely.x, hourly.x, nightly.x, or weekly.x,
because these names are reserved for scheduled Snapshot copies.
If the -S option is not used, the source specified in the /etc/snapmirror.conf file is
considered to be the source. If the destination volume or qtree specified is not in a scheduled
relationship, then the -S option must be used to provide a source.
SnapMirror identifies the newest common Snapshot copy, which is to be used as the base for
resynchronization, and generates a list of Snapshot copies on the destination volume that meet the
following criteria:
The Snapshot copies are newer than the base Snapshot copy and are deleted.
The Snapshot copies are older than the base Snapshot copy and have already been deleted
from the source.
Note: For qtree resynchronization, only the common Snapshot copy is displayed.
This command offers the choice of either source or destination to serve as the source in the restarted
SnapMirror relationship. It finds the newest common Snapshot copy (NCS) shared by the two
volumes or qtrees, and removes all newer information on the system on which the command is run.
Note: The snapmirror resync command requires that the two volumes or qtrees have at least
one Snapshot copy in common. You can resynchronize a volume or qtree to any other volume or
qtree as long as both have at least one Snapshot copy in common.
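The newest-common-Snapshot (NCS) selection described above amounts to intersecting the two snapshot lists and taking the most recent member; a minimal sketch, assuming each Snapshot copy carries a comparable creation timestamp:

```python
def newest_common_snapshot(source_snaps, dest_snaps):
    """Given {snapshot_name: creation_time} maps for the two volumes, return
    the name of the newest Snapshot copy present on both, or None if they
    share no Snapshot copy (in which case resync is impossible and the
    destination must be reinitialized)."""
    common = set(source_snaps) & set(dest_snaps)
    if not common:
        return None
    return max(common, key=lambda name: source_snaps[name])
```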
Resynchronization causes the loss of all data written to the destination after the base Snapshot copy
was made. The snapmirror resync command informs you what data might be lost during the
resynchronization and requests permission to proceed. If you want to save the data on the destination,
you can stop the resynchronization, manually copy the desired data elsewhere, and reissue the
snapmirror resync command to the destination.
In the following example, the original source (the one disabled by the disaster) is systemA:/vol/volA
and the original destination is systemB:/vol/volB. You use a combination of snapmirror
break and snapmirror resync or snapmirror initialize commands to perform the
following tasks:
Temporarily make systemB:volB the source and systemA:volA the destination to restore
mirrored data back to systemA:volA and to update systemA:volA.
Restore systemA:volA and systemB:volB to their original roles as SnapMirror source
and SnapMirror destination volumes.
In this example, all data from the last scheduled SnapMirror Snapshot copy before the source was
disabled, and all the data written to systemB:volB after it was made writable, is preserved. Any
data written to systemA:volA between the last SnapMirror Snapshot copy and the time that
systemA:volA was disabled is not preserved.
Steps
1. After the source volume (in this case, systemA:volA) is disabled, use the snapmirror break
command to make the destination volume, systemB:volB, writable.
snapmirror break systemB:volB
In versions earlier than Data ONTAP 7.3, SnapMirror considers each LUN clone as a new LUN.
Therefore, during the initial transfer of the LUN clone, all data from the LUN clone and the original
Data ONTAP LUN is transferred to the secondary system.
For descriptions of data backup and restore on volumes containing Data ONTAP LUNs, see the Data
ONTAP SAN Administration Guide for 7-Mode.
The transfer of LUN clones using SnapMirror works the same way as the transfer of LUN clones
using SnapVault in the non-optimized mode.
Note: Qtree SnapMirror transfers LUN clones in the non-optimized mode only.
To recover data for a qtree with LUN clones, you can replicate the destination qtree to a new qtree.
Attention: For a qtree with LUN clones, ensure that the volume has enough free space to store the
LUN clones as complete LUNs before you initiate data recovery using qtree SnapMirror.
You can perform the following tasks as part of managing SnapMirror relationships:
Initializing SnapMirror destinations
Updating SnapMirror relationships
Quiescing SnapMirror relationships
Resuming quiesced SnapMirror relationships
Breaking SnapMirror relationships
Resynchronizing and reverse resynchronizing SnapMirror relationships
Aborting SnapMirror data transfers
Always create a clone from the most recent Snapshot copy in the SnapMirror destination, because
that copy is guaranteed to exist in the source volume.
If a FlexClone volume is created from a Snapshot copy in the destination volume that is not the
most recent copy, and that Snapshot copy no longer exists on the source volume, all
SnapMirror updates to the destination volume will fail until the clone is destroyed or split. This
happens because the SnapMirror update attempts to delete the Snapshot copy on the destination
system, which is locked because a FlexClone volume was created from it.
Qtree SnapMirror and FlexClone
Qtree SnapMirror does not maintain the same Snapshot copies of the volume on the source and
destination systems. Because of this characteristic, a FlexClone volume created from a Snapshot
copy on the qtree SnapMirror destination does not lock that Snapshot copy on the source volume.
Accordingly, deleting that Snapshot copy on the source volume has no impact on the replication
or the destination volume. Therefore, the advantage of qtree SnapMirror is that a FlexClone
volume can live for a long time on the SnapMirror destination system without space implications
on the source system.
If a Snapshot copy is not specified when creating a FlexClone volume on the qtree SnapMirror
destination volume, the vol clone command creates a new Snapshot copy on that volume.
If a FlexClone volume is created using the qtree SnapMirror baseline Snapshot copy, the qtree in
the FlexClone volume will be writable.
If a FlexClone volume is created on the qtree SnapMirror destination volume without specifying a
backing Snapshot copy for the clone creation, a separate SnapMirror relationship appears in the
snapmirror status command output.
Setting up a SnapMirror relationship between the two FlexClone volumes does not consume any
extra space on the parent aggregate for shared Snapshot copies. In this way, you save disk space and
network resources. If you delete any of the inherited Snapshot copies from the parent system or
transfer new data from the SnapMirror source clone to the SnapMirror destination clone, additional
disk space is consumed.
Step
1. Establish the SnapMirror relationship between the two FlexClone volumes by entering the
following command on the destination FlexClone volume:
snapmirror resync -S source:sourcevol destination:destinationvol
Note: The snapmirror resync command cannot establish the SnapMirror relationship if the
background scanners are still processing the parent destination volume.
Establish the SnapMirror relationship between two FlexClone volumes by entering the
following command on SystemB:
snapmirror resync -S SystemA:cloneA cloneB
If you are updating the /etc/snapmirror.conf file on SystemB, add the following entry to
the file:
SystemA:cloneA SystemB:cloneB
SnapMirror destination is also supported. The cloneB FlexClone volume is created from the
parent destination volume and the cloneA FlexClone volume is created from the parent
source volume.
systemB> vol rename volJobak volStatbak
volJobak renamed to volStatbak
you may need to update /etc/exports
systemB> snapmirror status volJobak
Snapmirror is on.
systemB> snapmirror status volStatbak
Snapmirror is on.
Source    Destination           State          Lag       Status
          systemB:volStatbak    Snapmirrored   00:03:22  Idle
If you change the volume name of a SnapMirror source or destination, you need to make the
following changes.
1. Update the snapmirror.conf file, if there is an old entry.
2. Use the snapmirror release command to update the old destination name; SnapMirror
releases the soft lock and the old Snapshot copy.
3. Use the snapmirror update command on the new volume name; the status registry is
updated with the new volume name.
4. Update the /etc/exports file.
Note: If a system is running at its limit of concurrent transfers, and you attempt to initiate more
transfers by using the snapmirror update command, the attempted transfer will fail.
If there is no Snapshot copy common to both the source and the destination, you need to use the
snapmirror initialize command over the network. Or, if the source and destination are
volumes, you can use the smtape backup command to store the source volume on tape and then
use the smtape restore command to restore the volume from the tape to the destination.
If you are responsible for helping maintain the health and performance of NetApp SnapMirror
within your storage and/or across your network, here are 5 potential performance killers (and
their cures!) which you should be aware of. They are ...
(1) Latency. (2) Packet Loss. (3) Limited Bandwidth. (4) Competition from Other Apps, and
(5) Competition from Other SnapMirror Transfers.
Understanding the perils and cure for each of these will help ensure that you can sleep better at
night knowing SnapMirror will have the performance it needs to meet your target Recovery
Point Objectives each and every time.
Performance Killer #1. Latency.
We'll start with an easy one: a performance killer that is relatively common and well-understood (both with respect to its effects and its cure). Latency.
When using any TCP-based replication application, an improperly-configured TCP window size
can literally kill end-to-end throughput. And the reduction in throughput is increasingly severe
as the latency between source and destination filers increases.
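The window size a link actually needs can be estimated with the classic bandwidth-delay product. A quick sketch, with hypothetical link figures:

```python
def required_window_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: the minimum TCP window needed to keep a link
    of the given bandwidth (bits/s) full at the given round-trip time."""
    return int(bandwidth_bps / 8 * rtt_seconds)
```

For example, a 100 Mb/s link with 50 ms of round-trip latency needs roughly 625 KB of window; a default 64 KB window would leave such a link mostly idle.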
Performance Killer #2. Packet Loss.
As a quick definition, a "packet loss event" occurs any time a TCP packet fails to arrive at its
destination. If SnapMirror is using WAN infrastructure such as MPLS or VPN over the Internet,
packet loss events are a common occurrence.
The performance challenge associated with these packet loss events is that TCP interprets them
as a sign of congestion. And, by design, TCP will then cut its rate of transmission by 1/2, then
slowly ramp up its transmission rate again until the next packet loss event. The result? Instead
of having a nice consistent rate of transmission that fully utilizes available bandwidth, you end
up with a "saw tooth" pattern for end-to-end throughput, where available WAN capacity can be
severely under-utilized. (This blog post provides a nice explanation of this behavior.)
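The throughput ceiling imposed by this loss-driven sawtooth is commonly approximated with the Mathis model, rate ≈ (MSS / RTT) · (1 / √p). A sketch with hypothetical numbers:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate):
    """Mathis et al. approximation of steady-state TCP throughput under
    random loss: rate <= (MSS / RTT) * (1 / sqrt(p)), returned in bits/s."""
    return (mss_bytes * 8 / rtt_seconds) * (1 / math.sqrt(loss_rate))
```

With a 1460-byte MSS, 50 ms RTT, and 0.1% loss, the ceiling works out to roughly 7.4 Mb/s, no matter how fat the pipe is.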
Performance Killer #4. Competition from Other Apps.
Often, however, this type of sharing does not equal caring when it comes to maintaining the
performance SLAs required by apps that may have previously had their own dedicated WAN
infrastructure (e.g., DR apps).
Network Quality of Service (QoS) mechanisms are the typical solution to this problem. Network
QoS policies related to enforcing minimum bandwidth and/or maximum latency guarantees can
be implemented in multiple places:
1. Within the WAN itself, such as that delivered via MPLS WAN infrastructure. Here, there
are typically a relatively small number of "classes" to which you can map a given
application's traffic (e.g., Default, Bulk, Preferred, Critical). Mapping is typically done
by tagging the traffic with predefined DSCP tags understood by the MPLS network.
2. The WAN router. Here there is generally more flexibility regarding the number of
traffic classes possible. Traffic is identified and characterized according to its
network attributes.
3. A WAN optimization device. The best network QoS solutions are capable of identifying
and characterizing traffic types based on their actual payload signatures. This is known
as Deep Packet Inspection (DPI) and carries a number of advantages. First, it eliminates
the need to know which network attributes (IP address, subnet, IP port, etc.)
identify a given application. Further, it allows deeper classification in cases where
multiple applications share the same network attributes. For example, many different
apps utilize HTTP and port 80.
Performance Killer #5. Competition from Other SnapMirror Transfers.
Sometimes we are our own worst enemy. Well, the same could be said for SnapMirror in certain
circumstances. Consider the following story from one of our customers, a legal firm based in
California.
"The NetApp filer in our headquarters data center hosts a number of applications. Email is
one of our most critical apps, and its replication cycle runs every half hour. Then we've got a
different set of volumes for user home directories. And then there's another set of volumes which
store data for our legal cases. We like to replicate the volumes which store case data every half
hour as well. But one of our challenges is that when lawyers are working a case, they can
unpredictably put a bunch of new data onto these volumes. This then floods the network with
new case data to be replicated, and in turn causes our email replication to get backed up,
sometimes by several hours. We would be fine to have the replication cycles for the case data
elongated during these spikes. But we don't want to compromise the RPO of the email volumes.
There's just not a good way to manage these spikes. We do use network QoS to dynamically
shape traffic for SnapMirror as a whole amongst a host of other consumers of the WAN (VoIP,
file sharing, web). But we can't take that to the next level to shape based on source volume as
well."
A solution to this problem recently emerged with the latest release of the Riverbed Optimization
System (RiOS), the software that powers Steelhead WAN optimization appliances. With RiOS
version 8.5, network QoS capabilities have been expanded for NetApp SnapMirror such that you
can actually provide fine-grained control of NetApp traffic on a per-volume (or qtree) basis, not just for
SnapMirror as a whole. There's a set of new admin panes designed with storage administrators in
mind, where they can define relative priority and minimum bandwidth guarantees for different
volumes, as appropriate for their business. And since Steelhead devices typically have
visibility into all the traffic traversing a WAN, they can dynamically expand and contract the
amount of bandwidth allocated to a given volume based on whatever else happens
to be going across the link at that time. There's no need to set "fixed" bandwidth throttles which
can end up being too low during off-hours and too high during peak production periods.
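The elastic behavior described, minimum guarantees plus proportional use of leftover capacity, can be modeled with a toy allocator. All names and numbers here are hypothetical, and this is not RiOS's actual algorithm:

```python
def allocate_bandwidth(link_bps, classes):
    """classes: {name: (min_guarantee_bps, weight)}. Each class first receives
    its minimum guarantee; whatever link capacity is left over is split among
    the classes in proportion to their weights."""
    guaranteed = sum(g for g, _ in classes.values())
    leftover = max(link_bps - guaranteed, 0)
    total_weight = sum(w for _, w in classes.values()) or 1
    return {name: g + leftover * w / total_weight
            for name, (g, w) in classes.items()}
```

With a 100-unit link, email guaranteed 30 at weight 2 and case data guaranteed 10 at weight 1, email ends up with 70 and case data with 30; shrink the link and only the elastic portion shrinks, never the guarantees.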
Here is one blogger's take on why this is so cool. :)
To Conclude...
I hope this post has helped highlight some of the more common pitfalls you can encounter when
maintaining and/or expanding your NetApp SnapMirror deployments. And, more importantly, I
hope it has provided you with an understanding of how to effectively combat each one.
Reply with your comments on ways you've been able to ensure consistent and fast performance
for SnapMirror in your environments!
(About the author: Joshua Dobies is Group Product Manager for Steelhead WAN Optimization
products at Riverbed Technology with a focus on Data Center-to-Data Center and DR solutions.)
snapmirror.allow and snapmirror.access
To set up a SnapMirror trust relationship between the source filer and the destination filer, you have to
allow the destination to pull from the source. In other words, the source filer has to allow the
destination filer to replicate the data from the entire volume/qtree. There are basically two ways
to do it:
snapmirror.access
snapmirror.allow
snapmirror.access is an option that lets us provide the list of the filers that have permission to
pull the data from the source filer. To print the current setting, just go with:
filerA> options snapmirror.access
snapmirror.access    host=filerB,filerC AND if=vif-10,vif-11
What does it mean? It means that filerB and filerC have access (as SnapMirror destinations) to
pull data from the SnapMirror source volumes/qtrees. The data can be accessed only through the
network interfaces vif-10 and vif-11 (again, it is just an example).
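The host=... AND if=... value shown above can be split into its parts with a small helper (an illustrative sketch, not an official grammar for the option):

```python
def parse_snapmirror_access(value):
    """Parse a snapmirror.access value such as
    'host=filerB,filerC AND if=vif-10,vif-11' into {key: [items]}."""
    parsed = {}
    for clause in value.split(" AND "):
        key, _, items = clause.partition("=")
        parsed[key.strip()] = [i.strip() for i in items.split(",") if i.strip()]
    return parsed
```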
You must ensure that the source and destination are running compatible versions of Data ONTAP
(for volume SnapMirror, the destination must run the same or a more recent version than the source).
@@@@@@@@@@@@
Source: srcfiler:test_src
Destination: destfiler:test_dst
Status: Broken-off
Progress: -
State: Snapmirrored
Lag: 00:04:20
Mirror Timestamp: Wed Jun 17 15:18:02 EDT 2009
Base Snapshot: srcfiler(0101168974)_test_dst.5 <--- this is the
baseline snapshot
Current Transfer Type: Scheduled
Current Transfer Error: None
Contents: Replica
Last Transfer Type: Scheduled
Last Transfer Size: 76 KB
Last Transfer Duration: 00:00:04
Last Transfer From: srcfiler:test_src
3. Delete the baseline snapshot from the source storage system's volume:
srcfiler> snap delete test_src srcfiler(0101168974)_test_dst.5
-OR-
Run the snapmirror release source_vol destfiler:dest_vol command on the
source. This will delete the baseline snapshot on the source volume as well.
4. Delete the relationship entry by editing the entry from the snapmirror.conf file on the
destination storage system, or delete the schedule from OnCommand System Manager.
After a minute or two, run the snapmirror status command. The relationship should
no longer be present.
destfiler> snapmirror status test_dst
Snapmirror is on.
Often, the baseline snapshot has to be manually deleted from the destination volume as
well.
Occasionally, a stale, orphaned entry remains in snapmirror status on the source
storage system:
Source:                  sourcefiler:volume
Destination:             destinationfiler:volume
Status:                  Transferring
Progress:                0 KB
State:                   Source
Lag:                     -
Mirror Timestamp:        -
Base Snapshot:           -
Current Transfer Type:   -
Current Transfer Error:  -
Contents:                -
Last Transfer Type:      -
Last Transfer Size:      -
Last Transfer Duration:  -
Last Transfer From:      -
2. Delete the snapmirror relationship by using the following command on the destination
cluster: snapmirror delete -destination-path <Destination_Path>
e.g.:
vs2.example.com::> snapmirror delete -destination-path vs2.example.com:dept_eng_mirror1
3. To remove source information for the SnapMirror relationship and also delete the base
snapshot, use the following command on the source cluster: snapmirror release
-destination-path <Destination_Path>
eg:
vs1.example.com::> snapmirror release -destination-path
vs2.example.com:dept_eng_mirror1
4. Run snapmirror show to confirm that the relationship doesn't exist anymore:
cluster2::> snapmirror show
If the volume has been removed from the SnapMirror destination without following this process,
the solution is to issue the snapmirror release command from the source storage system.
Another workaround is to re-create the destination volume on the destination storage system. Note:
The re-created volume need not be the same size as the earlier one. Once it is re-created with
the same name as before, run the snapmirror release command from the source storage
system. Otherwise, when the release command is run from the source storage system, an error
stating that the path does not exist is encountered.
If you receive an unknown status after running snapmirror status, you can follow
KB 1012157, "How to get rid of an unknown entry in SnapMirror status?", to clean up the stale
entries.
@@@@@@@@@@@@@@@@@@@@@@@@@
1. The syntax of the snapmirror.conf file on the destination is incorrect.
2. The ICS connection required to enable compression cannot be established.
This could be due to something along the path between source
and destination filtering port 10565. Using a VPN tunnel and performing
network address translation can also contribute to such an issue.
3. There is an issue with name resolution between the source and destination.
4. Compression is being enabled on a SnapMirror relationship that does not support it.
Procedure
Scenario 1
The syntax on the destination system should be similar to the following:
connection_name=multi(src_system,dst_system)
connection_name:src_vol dst_system:dst_vol compression=enable * * * *
Note: Create a connection name for ICS and make sure that the name of this connection does not
match either the source or destination system names.
Scenario 2
If it is suspected that something is filtering traffic between the source and destination, contact
your network administrators and have them examine any firewalls, VPN concentrators, etc. for
any lost traffic between source and destination. Network address translation can affect the
establishment of the multipath connection as well. Often, the quickest and easiest test to
perform is to gather matching (i.e., simultaneous) packet traces from both the source and destination
systems:
a. Start the trace on both source and destination systems via the pktt command:
pktt start all -d /etc/log
d. Retrieve the .trc file from the storage systems. This can be done via CIFS and NFS.
Note: Using the pktt syntax above puts the packet trace file(s) in the /etc/log directory. Make
sure there is enough room in the volume to accommodate a large trace. If there is not,
specify another directory with the -d switch, or filter based on source and destination IP
addresses. For more information on using pktt, refer to the man pages.
If the ICS connection fails, notifications similar to the following are seen in the messages log file:
Source:
[NetApp_Source: snapmirror.src.multipath.connErr:error]: SnapMirror is unable
to set up a multipath/failover connection from SRCvol to
Netapp_Destination:DRvol, SnapMirror is resorting to a single TCP connection.
Please ensure ports 10565 and 10566 are open in the firewall configuration.
Destination:
[Netapp_Destination: snapmirror.dst.multipath.connErr:error]: SnapMirror is
unable to setup a multipath connection to system, resorting to the standard
single TCP connection.
SnapMirror transfers can work while getting the multipath errors, but compression will not.
Scenario 3
If there is no entry in the /etc/hosts file, update it with the appropriate information. Check the
hosts file by running the following command:
rdfile /etc/hosts
Note: If network address translation is being used between the source and destination, make
sure that the hosts file is updated to reflect the appropriate IP/name mappings on each
end.
Scenario 4
SnapMirror compression is supported on storage systems with Data ONTAP versions 7.3.2 and
later. It is currently only supported with asynchronous volume SnapMirror.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Symptoms
Note: You must be logged in to view the content of this article.
Destination request while SnapMirror or SnapVault is off on the source
Destination request while SnapMirror or SnapVault is not licensed on the source
Destination request with incorrect version
Request from X denied, not listed in /etc/snapmirror.allow or snapvault.access
Source volume requested offline or nonexistent volume
Cause
--
Solution
Common problems and errors on the source storage system
The Data ONTAP version is different between the source and destination storage
systems.
The destination of a VSM relationship must run a version of Data ONTAP that is
equal to or more recent than the source.
(Table: volume SnapMirror source-to-destination compatibility by Data ONTAP version; replication possible: Yes/No.)
Unknown system
The destination is attempting to find the source system but is unable to determine the
system based on the name. Check the /etc/hosts file to make sure it is listed there.
The OSSV agent has a built-in NDMP server, which listens on TCP port
10000 by default. Some other applications, like Reliaty and SyncSort BEX, have
built-in NDMP servers as well, so they also listen on port 10000. If an OSSV agent cannot
bind to port 10000, it fails to start up and the "cannot connect to source system" error
is logged on the secondary. To remedy the problem, reconfigure the OSSV agent with
the Configurator GUI to listen on some port other than 10000, and stop and start the
service using the Service tab. Alternatively, you can configure the other application to
listen on another port.
NB: any application using these NDMP services must be told about the changed port
number if you do this.
On UNIX systems, you can run netstat -a | grep 10000 to find out whether
any application is listening on port 10000.
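The same port-conflict check can be scripted: if a TCP listener cannot be bound on the port, something else (an NDMP server, for instance) already owns it. A minimal sketch (a hypothetical helper, not part of OSSV):

```python
import socket

def is_port_free(port, host="127.0.0.1"):
    """Return True if a TCP listener can be bound on the port, i.e. no other
    server already owns it; binding fails with OSError (EADDRINUSE) otherwise."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        probe.close()
```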
Also, if the OSSV primary does not have a static IP address but is served by
DHCP, do not set any entry for this client in the secondary's /etc/hosts file,
and specify the primary hostname instead of its IP address on the secondary using
options snapvault.access.
Ensure that ports 10000 and 10566 are allowed for WAN access through the firewall.
running. Such processes can take a while. You have to wait for those processes to complete
before an update request can be handled by the destination. If no SnapMirror or
SnapVault background processes are running, check /etc/messages to see which operations
were running on the system at the time the error was logged. Collecting and analyzing a
perfstat could also help in identifying the source of the problem. In most cases, rebooting the
affected system will clear out the hanging or looping process; however, it may not prevent
the issue from recurring. In that case, contact NetApp Technical Support.
SnapVault update failed because the destination path is not an initialized replica qtree
The above explanation applies to SnapVault as well, and you will have to resynchronize the
affected relationship with the snapvault start -r command from the destination.
However, if the problem is caused by the snapvault stop command (which cleans
up the qtree and the relationship), you will have to re-baseline the relationship.
Information regarding re-baselining the relationship can be found in the Data Protection
and Backup Guide for your specific version of Data ONTAP.
The volume options create_ucode and convert_ucode are enabled on all source
and destination volumes.
The volume language is identical on the source and destination volume with a
character set that uses UTF-8.
A coalescing cycle: this process runs automatically after each transfer update in
order to check the in-core state of the qtree before exporting the file system.
A large number of qtrees were transferring and were aborted.
The inode map file is quite large (use the df -i command on the
destination to see how many inodes are used per volume).
A coalescing cycle applies to all qtrees in a volume. Thus, if one qtree update has just
completed, the coalescing cycle will run against all SnapVaulted qtrees in that
destination volume.
The transfer size is significantly greater than the actual size of the file system data
being transferred
When a QSM or SnapVault update is transferring, the snapmirror status -l or
snapvault status -l output shows how many KB have been transferred so far.
This value may be greater than the expected delta (changes expected), and this is due to
known metadata transfer overhead.
SnapVault and qtree SnapMirror sources determine which differences to send to the
destination by comparing corresponding blocks in the previous and new base snapshots. The
two noteworthy places where this happens are in determining which inodes have changed,
by comparing blocks of the inode file, and in determining which entries in a directory have
changed, by comparing directory data blocks.
All changed data is rounded up to 4 KB. Thus, a 1-byte file is much more
expensive than a 0-byte file.
Any time you create, delete, or rename a file, that changes a directory.
However, once the update has completed, a df for the destination volume should
match the "expected" changes size.
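The 4 KB rounding described above means the transfer cost of a change is counted in blocks, not bytes; a sketch (metadata overhead not modeled):

```python
BLOCK = 4096  # qtree SnapMirror/SnapVault round changed data up to 4 KB blocks

def transfer_cost_bytes(changed_file_sizes):
    """Estimate bytes put on the wire for a set of changed files: every
    non-empty file costs at least one full 4 KB block."""
    return sum(-(-size // BLOCK) * BLOCK for size in changed_file_sizes)
```

This is why a 1-byte file costs a full 4096 bytes to transfer while a 0-byte file costs nothing, and why the reported transfer size can exceed the expected delta.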
Related Link:
Data ONTAP 8.2 Data Protection Online Backup and Recovery Guide For 7-Mode
2015803: SnapVault start or update results in 'the qtree is not the source for the
replication destination'
3014313: What tools are available to analyze SnapMirror and SnapVault Information?
2012828: Message appears when using Protection Manager, SnapVault and NDMP
Preferred Interface: Error Failed to create relationship
Disclaimer
NetApp provides no representations or warranties regarding the accuracy, reliability, or
serviceability of any information or recommendations provided in this publication, or with
respect to any results that may be obtained by the use of the information or observance of any
recommendations provided herein. The information in this document is distributed AS IS, and
the use of this information or the implementation of any recommendations or techniques herein
is a customer's responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. This document and the information contained
herein may be used solely in connection with the NetApp products discussed in this document.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@
The error message is entirely useless; had a reasonable amount of information been provided
by NetApp, this would not have taken 2 days of investigation!
Delete a few snapshots and try again, it should be fine.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
It's simply because of the /etc/hosts file. We think that snapmirror is running fine and
it should work fine. It will work fine in case of SRM failover, but while doing failback it
actually runs snapmirror resync; in this case our source becomes the destination and the
destination becomes the source, so our /etc/hosts file should be updated accordingly,
meaning it should have the name and IP of both storage filers on both ends. Hope it
will help.
@@@@@@@@@@@@@@@@@@@@@@@
snapmirror.src.sockErr
3) I solved it: I changed options snapmirror.access to legacy and that solved the problem.
The hosts file was correct and the destination filers were defined in snapmirror.allow.
Soran
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
snapmirror error?
I have a client who has been using snapmirror for a long time. He increased the space in the
source volume without increasing the destination volume, and the snapmirror status became
"Unknown". He then deleted the volume from the destination and created a new volume the
same as the source. But when he did a snapmirror initialize he got the following error.
snapmirror status
Snapmirror is on.
Source               Destination   State          Lag         Status
R_mtvfresh_archvol                 Snapmirrored   8226:41:16  Idle
Semi-Sync Snapmirror
I have configured Semi-Sync SnapMirror between two vfilers located in different datacenters.
The initial sync was successful, but after that it keeps throwing this error on the destination.
Yes.. Async works between the source and destination.
Okie... It seems that SnapMirror Sync (& Semi-Sync) is not supported on a vfiler or on a volume
where de-duplication is enabled. SnapMirror Sync (& Semi-Sync) works only on physical
storage (both source & destination).
@@@@@@@@@@@@@@@@@@@@@@@@@@@
Unable to delete SnapMirror relationship
Hi,
Our NetApp FAS2050 Filer running OnTAP Version 7.3.2P4 has the following entry in
FilerView/SnapMirror/Manage:
Source: Filer1:dr_luns
Destination: Filer1:SM_dr_luns
Status: idle
State: source
Lag: 4765:27:46
The destination volume 'SM_dr_luns' is not listed in FilerView/Volumes/Manage or in a
command line session, therefore we are unable to quiesce or break the SnapMirror
relationship, as these commands return the error message 'destination is offline, is
restricted, or does not exist'.
We inherited this configuration from an IT employee who is no longer with our company, and
have no idea why this SnapMirror was originally configured or how the destination volume
could have been deleted without the SnapMirror relationship being removed.
Is there any way of cleaning up this mess?
Hi,
I have to set up a snapmirror from one FAS2020A to a FAS2020.
I have created the relationship between the two FAS systems, but when I try to initialize the
snapmirror process, I get an error:
SAN1-XXXX:vol1 SRV-BACKUP01:vol1 Abort (transfer from source not possible; snapmirror
may be misconfigured, the source volume may be busy or unavailable)
My volume on the destination is restricted ...
My source is authorized to connect to my destination ...
So can you help me understand what the problem is?
A)
A couple of things you should check:
1. The Data ONTAP version should be the same for the source and destination systems.
2. The source and target volumes should be of the same size. (The target can also be up to
10% larger than the source.)
Typically, the error "idle with restart checkpoint" will occur when there is an issue with
your network connectivity.
It can also occur when snapmirror replication is in the replicating state and you modify or
delete other snapshots in the same volume (ones that are not tagged as "snapmirror,busy").
If you are starting a fresh snapmirror initialization, the destination volume option unicode
should be set to on.
When your snapmirror relationship is stopped with
Uninitialized
546:41:39
you have to execute the below command on the destination filer to continue the replication
from the same point where it stopped.
Note: Don't execute the below command if your replication is in a snapmirrored state.
NetappFilerDst> snapmirror initialize -S Source:volumename Dstvolume
If your snapmirror is stopped with the error "idle with restart checkpoint (at 15GB)",
you have to execute the below command on the destination filer to continue the replication
from the same point where it stopped.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Qtree SnapMirror update states that file system is full
Symptoms
Qtree SnapMirror update states that file system is full
Snapmirror fails with error: [snapmirror.dst.waflErr:error]: SnapMirror destination
transfer from filer:/vol/srcvol/srcqtree to /vol/dstvol/dstqtree : qtree snapmirror
destination write failed: No space left on device.
Cause of this problem
For qtree snapmirror (QSM), it is possible that the destination filer's volume containing the
destination qtree for a snapmirror relationship will require more space than the volume on the
source filer. This occurs for two reasons:
1. During a qtree snapmirror update, changes must be replicated to the
destination before data can be removed. Thus, at least 5% free space should
be available per QSM relationship to allow for the temporary space needed
during the transfer.
2. Volume-level snapshots independent of the QSM base snapshot (such as the
nightly snapshots) can retain data in the QSM qtrees. Since the volume
snapshots are unique to the destination filer, they may contain data that was
already deleted from the source filer.
If the destination qtree has enough space to hold the data in the source qtree, but it does not have
enough space to contain the source qtree plus the snapshot delta, the QSM update will fail with
the following error:
[snapmirror.dst.waflErr:error]: SnapMirror destination transfer
from srcfiler:/vol/srcvol/srcqtree to /vol/dstvol/dstqtree :
qtree snapmirror destination write failed: No space left on
device.
Solution
Increase the size of the destination volume so that it can hold the sum of the following:
The data in the source qtree
The snapshot delta retained by volume-level snapshots on the destination
5% free space per QSM relationship
The following scenario is an example of how the destination volume for a QSM relationship can
become full even though it is the same size as the source filer's volume. A QSM relationship is
set up from source srcfiler:/vol/srcvol/srcqtree to destination dstfiler:/vol/dstvol/dstqtree. The
srcvol on srcfiler is 100GB in size and contains a qtree using 20 GB. A "df -g" on this filer
shows:
srcfiler> df -g
Filesystem              total   used   avail  capacity
/vol/srcvol/             80GB   20GB    60GB  25%
/vol/srcvol/.snapshot    20GB    0GB    20GB   0%
Volume snapshots are disabled on the source volume /vol/srcvol. The dstvol on dstfiler is also
100GB in size and contains the QSM destination qtree, which holds the same 20GB of data as
the source qtree (srcfiler:/vol/srcvol/srcqtree). A "df -g" on dstfiler shows:
dstfiler> df -g
Filesystem total used avail capacity
/vol/dstvol/ 80GB 20GB 60GB 25%
/vol/dstvol/.snapshot 20GB 0GB 20GB 0%
Volume snapshots are enabled on the destination volume
/vol/dstvol:
dstfiler> snap sched
Volume dstvol: 0 2 6@8,12,16,20
The nightly snapshot has been taken on the destination volume. This snapshot contains the
20GB of data in /vol/dstvol/dstqtree that was replicated over by QSM.
dstfiler> snap list dstvol
Volume dstvol
working...
  %/used     %/total    date          name
---------- ---------- ------------ --------
  0%
A QSM update occurs, and the changes are replicated to the destination. QSM sends delete
information for the files which have been deleted, and complete data for the newly created files.
Thus, the data transferred is equal to 20 GB of new data + (4KB * number of deleted files).
Once the transfer completes, the destination filer now has the following space used:
dstfiler> df -g
Filesystem total used avail capacity
/vol/dstvol/ 80GB 30GB 50GB 38%
/vol/dstvol/.snapshot 20GB 10GB 20GB 50%
Notice that there is space used in the destination filer's snap reserve. This space is held in the
nightly snapshot:
dstfiler> snap list dstvol
Volume dstvol
working...
  %/used     %/total     date          name
---------- ---------- ------------ --------
 25% ( 25%)  10% ( 10%)  Aug 20 16:00  nightly.0
Thus, because the destination volume has snapshots enabled, it can use more space than the
source volume. Therefore, it is important to consider the snapshot retention period and snapshot
delta when sizing the destination volume for QSM relationships.
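The sizing arithmetic implied by this scenario can be sketched as follows. This is a simplified model using the 5% free-space figure from the cause description above; the helper name is made up:

```python
def qsm_destination_size(source_qtree_gb, snapshot_delta_gb, free_fraction=0.05):
    """Minimum destination volume size (GB): source qtree data plus the
    delta held in volume Snapshot copies, grossed up so that the 5%
    free space needed during a transfer is still available."""
    data = source_qtree_gb + snapshot_delta_gb
    return data / (1 - free_fraction)

# The scenario above: 20 GB of qtree data plus a 20 GB delta held in
# nightly.0 needs roughly 42 GB of destination volume, not 40 GB.
print(round(qsm_destination_size(20, 20), 1))  # 42.1
```

The point of the gross-up is that the temporary space used during a transfer must exist on top of the data and the snapshot delta, not inside them.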
Last Updated: 24 AUG 2006
Environment
Data ONTAP
All NetApp filers
NearStore
SnapMirror
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
NetApp Snap Mirror
Well, every NetApp engineer will be aware of SnapMirror; it's a common and important
feature of NetApp, so today I thought of writing something about SnapMirror. Maybe
my blog on SnapMirror can help you understand it more easily.
Why we need SnapMirror.
SnapMirror is the replication feature of NetApp. It is a fast and flexible enterprise solution
for replicating your critical and very precious data over local area, wide area, and Fibre
Channel networks to a destination/different location. It is a very good solution for disaster
recovery and even a good solution for online data migration without any additional
overhead.
SnapMirror has three modes.
Async: Replicates snapshot copies from a source volume or qtree to a destination volume
or qtree. Incremental updates are based on schedules or are performed manually using the
snapmirror update command. It works at both the volume level and the qtree level.
Sync: Replicates writes from a source volume to a secondary volume at the same time they
are written to the source volume. SnapMirror Sync is used in environments that have zero
tolerance for data loss.
Semi-sync: Sits between the Async and Sync modes, with less impact on performance. You
can configure a SnapMirror Semi-Sync replication to lag behind the source volume by a
user-defined number of write operations or milliseconds.
Volume snapmirror enables block-for-block replication. The entire volume, including its
qtrees and all the associated snapshot copies, is replicated to the destination volume. The
source volume is online/writable and the destination volume is online/read-only; when
the relationship is broken, the destination volume becomes writable.
Initial Transfer and Replication.
To initialize a snapmirror relation, you first have to restrict the destination volume in which
the replica will reside. During the baseline transfer, the source system takes a snapshot
copy of the volume. All data blocks referenced by this snapshot copy, including volume
metadata such as language translation settings, as well as all snapshot copies of the volume
are transferred and written to the destination volume.
After the initialization completes, the source and destination file systems have one snapshot
copy in common. Updates occur from this point and are based on the schedule specified in a
flat-text configuration file known as the snapmirror.conf file, or by using the snapmirror
update command.
To identify new and changed blocks, the block map in the new snapshot copy is compared
to the block map of the baseline snapshot copy. Only the blocks that are new or have
changed since the last successful replication are sent to the destination. Once the transfer
has completed, the new snapshot copy becomes the baseline snapshot copy and the old one
is deleted.
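The block-map comparison described above can be sketched like this. It is a simplified model (Python dicts standing in for the real block maps), not the actual on-disk algorithm:

```python
def changed_blocks(baseline, new):
    """Return block numbers whose contents are new or changed since the
    baseline Snapshot copy (dicts of block number -> block data stand in
    for the real block maps)."""
    return sorted(b for b, data in new.items() if baseline.get(b) != data)

baseline = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
new      = {0: b"aaaa", 1: b"BBBB", 2: b"cccc", 3: b"dddd"}

# Only block 1 (rewritten) and block 3 (newly allocated) need to be sent.
print(changed_blocks(baseline, new))  # [1, 3]
```

The key property is that unchanged blocks (0 and 2 here) are never re-sent, which is what keeps incremental updates small relative to the baseline.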
Requirements and Limitations
The destination's Data ONTAP version must be equal to or more recent than the source's. In
addition, the source and the destination must be on the same Data ONTAP release.
Volume SnapMirror replication can only occur between volumes of the same type: either
both traditional volumes or both flexible volumes.
The destination volume's capacity must be equal to or greater than the size of the source.
Administrators can thin provision the destination so that it appears to be equal to or greater
than the size of the source volume.
Quotas cannot be enabled on the destination volume.
It is recommended that you allow a range of TCP ports from 10565 to 10569.
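Since the TCP port range 10565-10569 is called out above, a quick reachability check can save troubleshooting time later. This is a generic sketch, not a NetApp tool, and the filer hostname is a placeholder:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the recommended SnapMirror port range on a (hypothetical)
# destination filer: 10565-10569 inclusive.
for port in range(10565, 10570):
    print(port, port_reachable("destination-filer", port))
```

If a port in this range is blocked by a firewall between the filers, transfers can fail or hang even though basic ping works.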
Qtree SnapMirror
Qtree SnapMirror is logical replication. All the files and directories in the source file system
are created in the target destination qtree.
Qtree SnapMirror replication occurs between qtrees regardless of the type of the volume
(traditional or flexible). Qtree replication can even occur between different releases of Data
ONTAP.
The source volume and qtree are online/writable in qtree replication, and the destination
volume is also online/writable (in qtree replication).
NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the
destination volume be equal to or greater than the size of the source qtree.
For the initial baseline transfer you do not need to create the destination qtree; it gets
created automatically upon the first replication.
Requirements and limitations
Supports Async mode only.
The destination volume must contain 5% more free space than the source qtree, and the
destination qtree cannot be /etc.
Qtree SnapMirror performance is impacted by deep directory structures and large numbers
(tens of millions) of small files being replicated.
Configuration process of snapmirror
1. Install the snapMirror license
For ex: license add <code>
2. On the source, specify the host name or IP address of the snapMirror destination systems
you wish to authorize to replicate this source system.
For Ex: options snapmirror.access host=dst_hostname1,dst_hostname2
3. For each source volume and qtree to replicate, perform an initial baseline transfer. For
volume snapmirror, restrict the destination volume.
For Ex: vol restrict dst_volumename
Then initialize the volume snapmirror baseline, using the following syntax on the
destination:
For Ex: snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree snapmirror baseline transfer, use the following syntax on the destination:
snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree
dst_hostname:/vol/dst_vol/dst_qtree
4. Once the initial transfer completes, set the snapmirror mode of replication by creating
the /etc/snapmirror.conf file in the destination's root volume.
Snapmirror.conf
The snapmirror.conf configuration file entries define the relationship between the source
and the destination, the mode of replication, and the arguments that control SnapMirror
when replicating data.
Entries in the snapmirror.conf file look like this:
For ex: Fas1:vol1 Fas2:vol1 - 0 23 * 1,3,5
Fas1:vol1: source storage system hostname and path
Fas2:vol1: destination storage system hostname and path
-: Arguments: the arguments field lets you define the transfer speed and restart mode; a
dash indicates that the default mode is selected
Schedules
0: update at minute 0 of the hour
23: update at 11 PM (hour 23)
*: update on all applicable days of the month
1,3,5: update on Monday, Wednesday, and Friday
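To make the field layout concrete, here is a small sketch that splits a full entry (source, destination, the arguments dash, and the four cron-style schedule fields) into named fields. The parser is illustrative only; real snapmirror.conf lines support more syntax, such as comments and sync schedules:

```python
def parse_snapmirror_conf(line):
    """Split one snapmirror.conf entry into its fields: source,
    destination, arguments, then the four cron-style schedule fields
    (minute, hour, day-of-month, day-of-week)."""
    src, dst, args, minute, hour, dom, dow = line.split()
    return {"source": src, "destination": dst, "arguments": args,
            "minute": minute, "hour": hour,
            "day_of_month": dom, "day_of_week": dow}

entry = parse_snapmirror_conf("Fas1:vol1 Fas2:vol1 - 0 23 * 1,3,5")
print(entry["hour"], entry["day_of_week"])  # 23 1,3,5
```

So this example entry means: update at 23:00 on Mondays, Wednesdays, and Fridays, with default transfer arguments.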
You can monitor transfers by running the snapmirror status command. This command can
be run on the source as well as on the destination, and it comes with two options, -l and -q.
-l: displays the long format of the output.
-q: displays which volumes or qtrees are quiesced or quiescing.
You can list all the snapshot copies of a particular volume with the snap list volumename
command. Snapmirror snapshot copies are distinguished from system snapshot copies by a
more elaborate naming convention.
The snap list command displays the keyword snapmirror next to the relevant snapshot
copy.
Log files
Snapmirror logs record whether the transfer finished successfully or failed. If there is a
problem with the updates, it is useful to look at the log file to see what has happened since
the last successful update. The log includes the start and end of each transfer, along with the
amount of data transferred.
For ex: options snapmirror.log.enable (on/off); by default it is on.
Log files are stored in the source and the destination storage system root volume, in the
/etc/logs/snapmirror directory.
This guides you quickly through the Snapmirror setup and commands.
1) Enable Snapmirror on source and destination filer
source-filer> options snapmirror.enable
snapmirror.enable
on
source-filer>
source-filer> options snapmirror.access
snapmirror.access
legacy
source-filer>
2) Snapmirror Access
Make sure the destination filer has snapmirror access to the source filer. The destination
filer's name or IP address should be in the source's /etc/snapmirror.allow. Use wrfile to add
entries to /etc/snapmirror.allow.
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
3) Initializing a Snapmirror relation
Volume snapmirror: Create a destination volume on the destination NetApp filer, of the
same size as the source volume or greater. For volume snapmirror, the destination volume
should be in restricted mode. For example, let us consider snapmirroring a 100G volume -
we create the destination volume and make it restricted.
destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy
is referred to as the baseline Snapshot copy. After performing an initial transfer of all data
in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have
changed since the last successful replication. When SnapMirror performs an update transfer,
it creates another new Snapshot copy and compares the changed blocks. These changed
blocks are sent as part of the update transfer.
Snapmirror is always destination filer driven. So the snapmirror initialize has to be done on
destination filer. The below command starts the baseline transfer.
destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>
Qtree Snapmirror : For qtree snapmirror, you should not create the destination qtree. The
snapmirror command automatically creates the destination qtree. So just volume creation
of required size is good enough.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes
that have changed, and then scanning the changed inodes of the qtree of interest for changed
data blocks. The SnapMirror software then transfers only the new or changed data blocks
from the Snapshot copy that is associated with the designated qtree. On the destination
volume, a new Snapshot copy is then created that contains a complete point-in-time copy of
the entire destination volume, but that is associated specifically with the particular qtree that
has been replicated.
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree
destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
4) Monitoring the status : Snapmirror data transfer status can be monitored either from
source or destination filer. Use "snapmirror status" to check the status.
destination-filer> snapmirror status
Snapmirror is on.
Source                         Destination                          State          Lag  Status
source-filer:demo_source       destination-filer:demo_destination   Uninitialized  -    Transferring (1690 MB done)
source-filer:/vol/demo1/qtree  destination-filer:/vol/demo1/qtree   Uninitialized  -    Transferring (32 MB done)
destination-filer>
5) Snapmirror schedule: This is the schedule used by the destination filer for updating
the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The
schedule field can either contain the word sync to specify synchronous mirroring or a
cron-style specification of when to update the mirror. The cron-style schedule contains
four space-separated fields.
If you want to sync the data on a scheduled frequency, you can set that in the destination
filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a
synchronous snapmirror schedule in /etc/snapmirror.conf by adding sync instead of the
cron-style frequency.
destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs every day at 9:00 pm
destination-filer>
6) Other Snapmirror commands
SnapMirror does provide multipath support. More than one physical path between a source
and a destination system might be desired for a mirror relationship. Multipath support
allows SnapMirror traffic to be load balanced between these paths and provides for failover
in the event of a network outage.
Some Important Points to be known about SnapMirror
Clustered failover interaction. The SnapMirror product complements NetApp clustered
failover (CF) technology by providing an additional level of recoverability. If a catastrophe
disables access to a clustered pair of storage systems, one or more SnapMirror volumes can
immediately be accessed in read-only mode while recovery takes place. If read-write access
is required, the mirrored volume can be converted to a writable volume while the recovery
takes place. If SnapMirror is actively updating data when a takeover or giveback operation
is instigated, the update aborts. Following completion of the takeover or giveback operation,
SnapMirror continues as before. No specific additional steps are required for the
implementation of SnapMirror in a clustered failover environment.
Adding disks to SnapMirror environments. When adding disks to volumes in a
SnapMirror environment, always complete the addition of disks to the destination storage
system or volume before attempting to add disks to the source volume.
Note: The df command does not reflect the disk or disks added to the SnapMirror volume
until after the first SnapMirror update following the disk additions.
Logging. The SnapMirror log file (located in /etc/logs/snapmirror.log) records the start and
end of an update as well as other significant SnapMirror events. It records whether the
transfer finished successfully or whether it failed for some reason. If there is a problem with
updates, it is often useful to look at the log file to see what happened since the last
successful update. Because the log file is kept on both the source and destination storage
systems, quite often one system may log the failure while the other partner knows only that
there was a failure. For this reason, you should look at both the source and the destination
log file to get the most information about a failure. The log file contains the start and end
time of each transfer, along with the amount of data transferred. It can be useful to look
back and see the amount of data needed to make the update and the amount of time the
updates take.
Note: Time vs. data sent is not an accurate measure of network bandwidth, because
the transfer is not constantly sending data.
Destination volume. For SnapMirror volume replication, you must create a restricted
volume to be used as the destination volume. SnapMirror does not automatically create a
volume.
Destination volume type. The mirrored volume must not be the root volume.
Data change rate. Using the snap delta command, you can display the rate of change
stored between two Snapshot copies as well as the rate of change between a Snapshot copy
and the active file system. Data ONTAP displays the rates of change in two tables. The first
table displays rates of change between successive Snapshot copies. The second table
displays a summary of the rate of change between the oldest Snapshot copy and the active
file system.
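The rate-of-change idea behind snap delta can be sketched as simple arithmetic; the numbers below are hypothetical, not output of the command:

```python
from datetime import datetime

def change_rate_kb_per_hour(kb_changed, t_older, t_newer):
    """Rate of change between two Snapshot copies, in KB per hour."""
    hours = (t_newer - t_older).total_seconds() / 3600
    return kb_changed / hours

# Hypothetical numbers: 200 MB changed between snapshots taken 24h apart.
rate = change_rate_kb_per_hour(204800,
                               datetime(2006, 8, 19, 16, 0),
                               datetime(2006, 8, 20, 16, 0))
print(round(rate))  # 8533
```

This kind of figure is what you would use to estimate how much data each scheduled SnapMirror update will need to move.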
Failed updates. If a transfer fails for any reason, SnapMirror attempts a retransfer
immediately, not waiting for the next scheduled mirror time. These retransfer attempts
continue until they are successful, until the appropriate entry in the /etc/snapmirror.conf
file is commented out, or until SnapMirror is turned off. Some events that can cause failed
transfers include:
Loss of network connectivity
Source storage system is unavailable
Source volume is offline
SnapMirror timeouts. There are three situations that can cause a SnapMirror timeout:
Write socket timeout. If the TCP buffers are full and the writing application cannot hand
off data to TCP within 10 minutes, a write socket timeout occurs. Following the timeout,
SnapMirror resumes at the next scheduled update.
Read socket timeout. If the TCP socket that is receiving data has not received any data
from the application within 30 minutes, it generates a timeout. Following the timeout,
SnapMirror resumes at the next scheduled update. By providing a larger timeout value for
the read socket timeout, you can be assured that SnapMirror will not time out while waiting
for the source filer to create Snapshot copies, even when dealing with extremely large
volumes. Socket timeout values are not tunable in the Data ONTAP and SnapMirror
environment.
Sync timeouts. These timeouts occur in synchronous deployments only. If an event such as
a network outage occurs and no ACK is received from the destination system, the
synchronous deployment reverts to asynchronous mode.
Open Files
If SnapMirror is in the middle of a transfer and encounters an incomplete file (a file that an
FTP server is still transferring into that volume or qtree), it transfers the partial file to the
destination. Snapshot copies behave in the same way: a Snapshot copy of the source would
show the transferring file, and the destination would show the partial file.
A workaround for this situation is to copy the file to the source under a different name;
when the file is complete on the source, rename it to the correct name. This way the partial
file has an incorrect name, and the complete file has the correct name.
@@@@@@@@@@@@@@@@@@@@@@@@
snapmirror
Hope someone can answer this easy question, since I am new to this.
I want to copy a volume to a new aggregate and am thinking of using snapmirror. I know I
have to run snapmirror initialize -S first.
What do I do after the completion?
Is it just snapmirror break <destination vol> at the destination filer?
Or do I have to run update prior to break?
A) Running update before break will keep the target volume identical to the source volume.
The destination volume will be usable after break whether you update or not.
@@@@@@@@@@@@@@@@@@@@@@@@@@
snapmirror
Filer2> snapmirror initialize -S Filer1:vol34 Filer2:vol25
Fri Sep 28 07:28:13 GMT [Filer2: replication.dst.err:error]: SnapMirror: destination transfer
from Filer1:vol34 to vol25 : could not find source filer.
Transfer aborted: could not find source filer.
Filer2>
Can someone help me with the above? I have entries in both filers, but ping is not working.
Filer1> rdfile /etc/snapmirror.allow
Filer2
Filer1> ping Filer2
ping: unknown host: Filer2
Filer2> rdfile /etc/snapmirror.allow
Filer1
Filer2> ping Filer1
ping: unknown host: Filer1
A) Have you got relevant records for both filers on your DNS servers? E.g. can you resolve
Filer1 & Filer2 (or ping) from any host sitting on the same subnet?
Also how about DNS on the filer itself? What's the output of:
Filer1> dns info
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
State       Lag        Status
Broken-off  606:00:59  Idle
but the volume "archive" on the destination storage filer02 has already been deleted..........
So could you tell me how to delete this broken-off snapmirror relationship?
waiting online...............
A) filer01 likely has a snapshot for the mirror... if you run "snap list archive" on
filer01, you will see the snapshot you can delete which will clear this. If not, it can
be a snapmirror.conf entry or a registry state entry (not likely) which would require
support to walk you through fixing... but in this case almost positive it is a snapshot
on filer01 with the snapmirror status.
B) Hi Scott,
thanks for your reply. Yes, there is indeed a snapshot containing the keyword "snapmirror",
as below, but I can't delete it because the same source "filer01:archive" is replicated to
another destination volume.
filer01> snap list archive
Volume archive
working...
  %/used     %/total    date          name
---------- ---------- ------------ --------
  0% ( 0%)   0% ( 0%)  Jul 04 23:06  hotnew
  0% ( 0%)   0% ( 0%)  Jul 04 22:30  filer02(0135033979)_archive_mirror.16276 (snapmirror)
2) run snapmirror update
3) resize the destination volume
4) run vol status -b on the destination volume to confirm that volume size and
filesystem size are the same
PS: With Ontap 8.2, all this will be done automagically.
C) Can you also further explain the System Manager behavior? There is a resync
option on the source and destination, and a reverse resync on the source. Am I
understanding correctly that if you run resync either on the source filer or on the
destination filer, it will resync from the source to the destination, and only if you
choose reverse resync will it actually resync from the destination to the source?
@@@
Snapmirror
I have a FAS3210 storage system and 16 volumes in sync with my DR storage. When I add a
17th volume I cannot sync it; it comes back with the message "Too Many Active Transfers at
Once." Has anyone run into this problem? Is there a limit on how many volumes can be
synchronized?
A) Yes, 16 synchronous SnapMirror relationships is the absolute maximum for FAS3210. May
be, you could use standard async SnapMirror for some volumes, here upper limit is larger.
FAS2520       50  50  16  16   64   64   64   64  16
FAS3220       50  50  16  16   64   64   64   64  16
FAS6250      150  15  32  32  128  128  128  128  32  0
FAS8080(max) 150  30  32  32  512  512  512  512 128  0