Issue 02
Date 2015-12-10
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
This document describes the implementation principles and application scenarios of the
remote replication feature. Also, it explains how to configure and manage remote replications.
Intended Audience
This document is intended for:
l Technical support engineers
l Maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 03 (2016-04-30)
This issue is the third official release, which incorporates the following changes:
Optimized the descriptions in section Availability.
Optimized the descriptions in section Restrictions.
Optimized the description of configuration operations in Configuration Example.
Added FAQs for remote replication licenses.
Issue 02 (2015-12-10)
This issue is the second official release. The updates are as follows:
Description about principles is optimized.
1.2 Availability is revised.
Issue 01 (2015-08-30)
This issue is the first official release.
Contents
2 Configuration Example
2.1 Central Backup and Disaster Recovery
2.1.1 Requirement Analysis
2.1.2 Configuration Planning
2.1.3 Configuration Operations
2.1.4 Configuration Verification
3 Disaster Recovery
4 Configuration and Management
4.1 Viewing System Information
4.2 Setting Device Information
4.3 Configuring Remote Replication
4.3.1 Configuration Process
4.3.2 Preparing for the Configuration
4.3.3 Checking the License File
4.3.4 Managing a Route
4.3.5 Adding a Remote Device
4.3.6 Creating a Remote Replication
4.3.6.1 Creating a Synchronous Remote Replication
4.3.6.2 Creating an Asynchronous Remote Replication
4.3.7 Creating a Consistency Group
5 FAQs
5.1 What are the differences between synchronous remote replication and asynchronous remote replication?
5.2 How do I use the secondary LUN in a remote replication?
5.3 Why can't information about the primary device and the secondary device in a remote replication be updated in real time?
5.4 What is the relationship among a remote replication, clone, and consistency group?
5.5 What is the principle of setting bandwidth for remote replication and LUN copy among storage arrays?
5.6 Precautions for Creating a Remote Link
5.7 Functions of Reserved Space During Asynchronous Remote Replication
5.8 Impact of Modifying Consistency Group Properties
5.9 When performing a remote replication task, is an initial synchronization required to synchronize data from the primary storage system to the secondary storage system?
5.10 Do I need to stop the services on the primary storage system before performing the initial synchronization?
5.11 Impact of the Secondary Storage System Failure on Remote Replication
5.12 Do File Remote Replication and Block Remote Replication Use the Same License?
A Using the External Storage System to Perform Initial Synchronization for a Remote Replication
A.1 Introduction
A.2 Configuration Operation (Using the LUN Copy Feature of the External Storage System)
A.3 Configuration Operation (Using the LUN Copy Feature of the Production and DR Centers)
C Glossary
D Acronyms and Abbreviations
1 Feature Description
The remote replication feature provides flexible and powerful data replication functions
including remote data backup and recovery, continuous support for service data, and disaster
recovery.
1.1 Overview
This section describes the background, definition, purposes, and benefits of the remote
replication feature.
1.2 Availability
This section describes the availability of the remote replication feature in terms of license
requirement and applicable versions.
1.3 Working Principle
This section describes how the remote replication feature replicates data, switches services,
and recovers data, and introduces the function of a consistency group.
1.4 Restrictions
This section describes the impact of the remote replication feature on system performance,
interaction between the remote replication feature and other features, and restrictions on the
remote replication feature.
1.5 Application Scenarios
The remote replication feature is used for data backup and DR. The typical application
scenarios include central backup and DR, and 3DC.
1.1 Overview
This section describes the background, definition, purposes, and benefits of the remote
replication feature.
Background
As the digitalization drive advances in various industries, data has become critical to the
operation of enterprises, and customers impose increasingly demanding requirements on the
stability of storage systems. Although some storage devices offer extremely high stability,
they cannot prevent natural disasters from causing irrecoverable damage to production
systems. To ensure the consistency, recoverability, and high reliability of data access,
remote disaster recovery solutions have emerged. The remote replication technology is one
of the key technologies used by remote disaster recovery solutions. HyperReplication is a
remote replication feature developed by Huawei.
Definition
As a core technology for disaster recovery and backup, the remote replication technology can
implement remote data synchronization and disaster recovery.
Purposes and Benefits
Table 1-1 lists the purposes and benefits of the remote replication feature.
Remote backup and recovery: To recover service data using backup data in the remote
storage system after the service data becomes unusable. This feature prevents damage
caused by data loss in the case that data at the primary site becomes unusable.
Continuous support for services: To quickly switch service data from the primary site to the
secondary site, protecting service continuity. This feature prevents damage caused by a
service interruption upon a failure at the primary site.
Disaster recovery: To recover data at the primary site using backup data at the secondary
site in the case of a disaster. This feature prevents the damage caused by service data loss or
long recovery duration.
Note 1: The primary site is a data center that includes the primary storage system,
application servers, and links.
Note 2: The secondary site is a backup center that includes the secondary storage system,
application servers, and links.
Note 3: Unless otherwise specified, hosts mentioned in this document refer to application
servers.
1.2 Availability
This section describes the availability of the remote replication feature in terms of license
requirement and applicable versions.
License Requirement
Remote replication is a value-added feature that requires a software license in both the
primary storage system and the secondary storage system.
Product Version
(The table listing the products and versions in which the remote replication feature is
available is not reproduced here.)
Phases in a full remote replication process (the figure, which shows the pair and the remote
replication links between the primary LUN and the secondary LUN in each phase, is not
reproduced here; its annotations are summarized below):
2. Replicating data: Data is replicated during synchronization.
3. Switching services: The data status and the secondary LUN's accessible property show
whether a primary/secondary switchover can be performed. A new pair is generated after a
primary/secondary switchover.
4. Recovering data: Data recovery is implemented using synchronization.
5. Restoring pairs: The original pair is restored after another primary/secondary switchover.
A pair refers to the relationship between a primary logical unit number (LUN) and a
secondary LUN of a remote replication session. In a pair, data can be replicated only from the
primary LUN to the secondary LUN. People can speak over two phones only after a
connection is set up between the phones by dialup. Similarly, data can be replicated from the
primary LUN to the secondary LUN only after a pair is set up between the LUNs. The
primary and secondary LUNs of a pair must come from different storage systems.
A pair may be in different running statuses at different phases of a remote replication session.
By viewing the running status of a pair, you can determine whether an operation needs to be
performed for that pair and, if so, which operation. After you perform an operation, you can
view the pair running status to determine whether the operation succeeded. Table 1-4
describes the pair running statuses involved in a remote replication
session.
Normal: Data synchronization between the primary LUN and the secondary LUN is
complete.
Split: Data replication between the primary LUN and the secondary LUN is suspended. The
pair running status changes to Split after the administrator manually splits the primary and
secondary LUNs from each other.
Interrupted: The pair running status changes to Interrupted after the pair between the
primary and secondary LUNs is interrupted because the links used by the remote replication
are interrupted or either of the two LUNs fails.
Invalid: If the original basic pair properties of the primary or secondary LUN are changed
after the pair is interrupted (for example, the secondary LUN configured at the secondary
site is deleted), the configuration becomes inconsistent between the primary and secondary
sites and the pair running status changes to Invalid.
Synchronizing: Data is being copied from the primary LUN to the secondary LUN. When a
pair is in this state, data on the secondary LUN is unusable.
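These running statuses can be viewed as the states of a simple state machine. The following Python sketch is purely illustrative: the status names come from the list above, but the mapping of statuses to sensible operations is a simplified reading of this section, not a rule set taken from the product.

from enum import Enum

class PairStatus(Enum):
    NORMAL = "Normal"                  # synchronization between the LUNs is complete
    SPLIT = "Split"                    # replication suspended by the administrator
    INTERRUPTED = "Interrupted"        # replication links or a LUN failed
    INVALID = "Invalid"                # pair configuration became inconsistent
    SYNCHRONIZING = "Synchronizing"    # data is being copied to the secondary LUN

# Simplified, illustrative view of which manual operations make sense in each status.
ALLOWED_OPERATIONS = {
    PairStatus.NORMAL: {"split", "primary_secondary_switchover"},
    PairStatus.SPLIT: {"synchronize"},
    PairStatus.INTERRUPTED: {"synchronize"},     # after the links or LUNs recover
    PairStatus.INVALID: set(),                   # the pair has to be re-created
    PairStatus.SYNCHRONIZING: {"split"},
}

def can_perform(status: PairStatus, operation: str) -> bool:
    # Return True if the operation is meaningful for a pair in the given status.
    return operation in ALLOWED_OPERATIONS[status]

print(can_perform(PairStatus.SPLIT, "synchronize"))   # True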
Consistency Group
A consistency group is a collection of pairs that have a service relationship with each other.
For example, the primary storage system has three primary LUNs, which respectively store
service data, log, and change tracking information of a database. If data on any of the three
LUNs becomes invalid, all data on the three LUNs becomes unusable. For these LUNs, you
can create a consistency group, and add the remote replication sessions of the LUNs to the
consistency group.
Synchronization
Synchronization refers to a process during which data is copied from the primary LUN to the
secondary LUN of a pair. Synchronization can be performed for a single remote replication
session or multiple remote replication sessions in a consistency group at one time.
Splitting
Splitting refers to the action of stopping data replication from the primary LUN to the
secondary LUN. This operation can be performed only by the administrator. Splitting can be
performed for a single remote replication session or multiple remote replication sessions in a
consistency group at one time. After the splitting, the pair relationship between the primary
LUN and the secondary LUN still exists and the access permission of hosts for the primary
and secondary LUNs remains unchanged.
At times, for example, when the bandwidth is insufficient to support critical services, you
may not want to copy data from the primary LUN to the secondary LUN. In such
cases, you can split the secondary LUN from the primary LUN to suspend data
synchronization.
Users can effectively control a remote replication process by performing synchronization and
splitting.
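To make the relationship between pairs, synchronization, splitting, and consistency groups concrete, here is a minimal Python sketch. The class and method names are invented for illustration; in practice these operations are performed through DeviceManager or the CLI, not through such an API.

class ReplicationPair:
    # Illustrative model of a remote replication pair (primary LUN -> secondary LUN).
    def __init__(self, primary_lun: str, secondary_lun: str):
        self.primary_lun = primary_lun
        self.secondary_lun = secondary_lun
        self.replicating = False

    def synchronize(self):
        # Copy data from the primary LUN to the secondary LUN.
        self.replicating = True

    def split(self):
        # Stop replication; the pair relationship itself is kept.
        self.replicating = False


class ConsistencyGroup:
    # Pairs whose LUNs have a service relationship (e.g. data, log, change tracking).
    def __init__(self, pairs):
        self.pairs = pairs

    def synchronize(self):
        # All member pairs are synchronized together so that their secondary LUNs
        # always represent the same point in time.
        for pair in self.pairs:
            pair.synchronize()

    def split(self):
        for pair in self.pairs:
            pair.split()


# Example: a database whose data, log, and change tracking live on three LUNs.
group = ConsistencyGroup([
    ReplicationPair("LUN01_data", "LUN01_backup"),
    ReplicationPair("LUN02_log", "LUN02_backup"),
    ReplicationPair("LUN03_tracking", "LUN03_backup"),
])
group.split()  # suspend replication for all three pairs at once
print(all(not pair.replicating for pair in group.pairs))  # True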
Primary/Secondary Switchover
A primary/secondary switchover is a process during which the primary and secondary LUNs
in a pair exchange their roles. A primary/secondary switchover changes the original
relationship between the LUNs in a remote replication session (a new pair is formed). You can
perform a primary/secondary switchover for a single remote replication session or multiple
remote replication sessions in a consistency group at one time. Usually, a primary/secondary
switchover needs to be performed in the following scenarios:
l The remote replication links are reestablished and data is synchronized between the
primary and secondary sites after the primary site recovers from the disaster. In such a
case, the administrator needs to perform a primary/secondary switchover for the remote
replication session at the primary or secondary site.
l When a device in the primary storage system needs to be replaced, the administrator
performs primary/secondary switchover for the remote replication session at the primary
or secondary site.
After a primary/secondary switchover, services are running on the secondary storage system.
Therefore, it is required that data on the secondary LUN is available before a primary/
secondary switchover is initiated. You can determine whether data on the secondary LUN is
available by viewing the data status of the secondary LUN.
Data Status
The remote replication feature identifies the data status of a pair based on the data difference
between the primary LUN and the secondary LUN. When a disaster occurs, the remote
replication feature determines whether to allow a primary/secondary switchover for a pair
based on the data status of the pair. There are two pair data states: Consistent and
Inconsistent.
l When a fault occurs on the primary LUN while the remote replication links are up, the
administrator needs to split the pair before setting the secondary LUN to writable on
the primary or secondary storage system.
Link Compression
Link compression is an online compression technology. In asynchronous remote replication,
data is first compressed on the primary device before being transferred and then decompressed
on the secondary device, reducing the bandwidth required for data transfer.
Link compression used by the OceanStor 5300 V3/5500 V3/5600 V3/5800 V3/6800 V3
storage system has the following highlights:
l Online data compression
Data to be transferred is compressed in real time.
l Intelligent compression
The system identifies whether data can be compressed in advance, minimizing
unnecessary compression and improving transfer efficiency.
l High reliability and security
Lossless compression technology is used to ensure data security. Multiple check methods
are used to ensure data reliability. After receiving data, the secondary device verifies data
correctness and checks data consistency after the data is decompressed.
l User unawareness
Link compression does not affect services running on the host and is transparent to users.
l Compatibility with full backup and incremental backup
Link compression compresses all data that is transferred over the network regardless of
upper-layer services.
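As an illustration of the compression highlights above (lossless compression, verification on the secondary device, and skipping data that does not compress well), the following Python sketch uses the standard zlib module and a CRC32 checksum. It only mirrors the general idea; it does not represent the compression algorithm or checks actually used by the storage system.

import zlib

def compress_for_transfer(payload: bytes):
    # Primary side: losslessly compress the payload and attach a checksum.
    # If compression does not help (incompressible data), send the payload as is,
    # which is the idea behind "intelligent compression".
    compressed = zlib.compress(payload)
    if len(compressed) >= len(payload):
        return payload, False, zlib.crc32(payload)
    return compressed, True, zlib.crc32(payload)

def receive_and_verify(data: bytes, is_compressed: bool, checksum: int) -> bytes:
    # Secondary side: decompress if needed and confirm the data arrived intact.
    payload = zlib.decompress(data) if is_compressed else data
    if zlib.crc32(payload) != checksum:
        raise ValueError("data corrupted in transit")
    return payload

block = b"example replication data " * 100
data, flag, crc = compress_for_transfer(block)
assert receive_and_verify(data, flag, crc) == block   # lossless round trip
print(len(block), "bytes of host data ->", len(data), "bytes on the link")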
After the initial synchronization is complete, the two LUNs have consistent data. Every time a host writes data to the storage system after
the synchronization, the data is copied from the primary LUN to the secondary LUN in real
time. Figure 1-2 shows how a synchronous replication processes a write (the data block being
written is N).
Figure 1-2 (not reproduced here) shows the write process in synchronous remote replication,
involving the host, the RM, the LOG, the DCL, and the primary and secondary caches and
LUNs in the primary and secondary storage systems. Its legend notes that data block N is
received from the host, that the difference is recorded in the LOG, that data block N is
written to the primary and secondary caches, and that after the write results are determined
the log is saved to the DCL upon any write failure or deleted if all writes are successful.
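The essential property of the synchronous mode shown in the figure is that a host write is acknowledged only after both storage systems have processed the data, and any failure is recorded in the DCL so that the difference can be replicated later. The following sketch illustrates only that idea; the function names and the in-memory structures are invented for the example and do not correspond to the product's internal interfaces.

pending_differences = set()  # stands in for the DCL (data change log)

def write_to_cache(cache: dict, block_id: int, data: bytes) -> bool:
    # Write a block to an in-memory "cache"; a real system could return False on failure.
    cache[block_id] = data
    return True

def synchronous_write(block_id: int, data: bytes,
                      primary_cache: dict, secondary_cache: dict) -> bool:
    # Acknowledge the host only after both the primary and secondary writes are attempted.
    ok_primary = write_to_cache(primary_cache, block_id, data)
    ok_secondary = write_to_cache(secondary_cache, block_id, data)

    if ok_primary and ok_secondary:
        pending_differences.discard(block_id)   # delete the log if all writes succeed
        return True                             # acknowledge the host
    pending_differences.add(block_id)           # save the log to the DCL upon any write failure
    return ok_primary                           # assumption: the host result follows the primary write

primary, secondary = {}, {}
print(synchronous_write(7, b"N", primary, secondary))  # True; no pending difference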
The figure showing the write and replication process in asynchronous remote replication
(not reproduced here) involves the host, the RM, the DCL, the snapshots of the primary and
secondary LUNs, and the primary and secondary caches and LUNs connected by remote
replication links. Its legend notes that data block N is written by the host and that data is
replicated from the primary cache and LUN to the secondary cache.
NOTE
Snapshots for the primary and secondary LUNs are generated and activated when synchronization starts
for the first time. The functions of the snapshots are as follows:
l The snapshot generated for the primary LUN ensures that data read from the primary LUN during
data synchronization is always consistent and allows concurrent implementation of data
synchronization and data write to the primary LUN.
l The snapshot for the secondary LUN stores backup data for the data on the secondary LUN before
synchronization so that data on the secondary LUN can still be usable even when an exception
occurs during synchronization.
The reason for stopping the snapshots: The snapshot function is used only during a data synchronization.
After the data synchronization is complete, the snapshot function is stopped to reduce the system
overhead and improve performance.
1. The host sends a write I/O request to the RM, and the RM directly writes data to the
primary cache.
2. The primary cache returns the write I/O result to the RM, and the RM further returns the
result to the host.
3. After data in the primary cache is written to the LUN, the RM records data differences
between the primary LUN and the secondary LUN in the DCL.
4. When a replication period starts, the snapshots of the primary and secondary LUNs are
activated.
5. Data is replicated from the primary cache and LUN to the secondary cache.
Asynchronous remote replication employs multi-time-point caching technology to
replicate data from cache to the DR site. The working principle is as follows:
a. When a replication period starts, new time segments (T2 and P2) are respectively
generated in the caches of primary and secondary LUNs.
b. New data from the host is written into time segment T2 in the cache of the primary
LUN.
c. The primary LUN returns a message indicating that the write I/O operation is
complete.
d. Data in time segment T1 is replicated to time segment P2.
e. Data in the caches of the primary and secondary LUNs is written to LUNs.
6. The RM deletes the differences recorded in the DCL after replicating the data.
7. After the remote replication, the snapshots for the primary and secondary LUNs are
stopped.
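The multi-time-point caching described in step 5 can be summarized as follows: each replication period freezes the previous set of host writes and ships it to the secondary system while new writes accumulate in a fresh cache segment. The following Python sketch is a conceptual illustration of one such period; the names T1, T2, and P2 follow the step descriptions above, and the lists merely stand in for cache contents.

def replication_period(primary_segments: list, secondary_segments: list,
                       new_host_writes: list) -> None:
    # One asynchronous replication period, following steps a to e above.
    # a. New time segments (T2 on the primary, P2 on the secondary) are generated.
    t2, p2 = [], []
    primary_segments.append(t2)
    secondary_segments.append(p2)

    # b./c. New host writes go into T2 and are acknowledged immediately;
    # the host does not wait for the remote copy.
    t2.extend(new_host_writes)

    # d. Data frozen in the previous segment T1 is replicated into P2.
    t1 = primary_segments[-2]
    p2.extend(t1)

    # e. Both systems then destage their cached segments to their LUNs
    # (omitted here; the lists simply stand in for cache contents).

primary, secondary = [["block-A"]], [[]]   # T1 already holds block-A
replication_period(primary, secondary, ["block-B", "block-C"])
print(secondary[-1])  # ['block-A']: the secondary now has the previous period's data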
The remote replication mode should be chosen based on users' requirements for the RPO and
RTO.
Figure 1-4 Service switchover implemented using the remote replication feature
(The figure is not reproduced here. It shows the production host at the primary site and the
standby host at the secondary site, with annotation 3 indicating that the secondary site takes
over services.)
NOTE
After a primary/secondary switchover is implemented, the primary and secondary sites each record
changed data to avoid a full synchronization when the secondary site is later switched back to the
primary one.
Figure 1-5 Process of recovering data at the primary site after a disaster
(The figure is not reproduced here. It shows the primary and secondary sites.)
NOTE
In an asynchronous remote replication scenario, the storage system performs data synchronization
multiple times until the data difference between the primary and secondary LUN is comparatively small.
Then, the storage system stops services and performs the last synchronization. This prevents data loss
and minimizes the downtime.
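The note above describes an iterative approach: large differences are copied while services keep running, and only the last, small difference is copied during a brief service stop. The following sketch, with made-up numbers and a deliberately simplified model, shows why this keeps the downtime short (it assumes the copy rate exceeds the host write rate, otherwise the difference would never shrink).

def recover_primary_site(difference_gb: float,
                         copy_rate_gb_per_min: float,
                         write_rate_gb_per_min: float,
                         small_enough_gb: float = 1.0) -> float:
    # Return the service downtime (minutes) needed for the final synchronization.
    # Repeated online synchronizations: each pass copies the current difference,
    # but new writes keep arriving, so a residual difference remains.
    while difference_gb > small_enough_gb:
        minutes = difference_gb / copy_rate_gb_per_min
        difference_gb = write_rate_gb_per_min * minutes   # what accumulated meanwhile
    # Services are stopped only for the last, small synchronization.
    return difference_gb / copy_rate_gb_per_min

# Example: 100 GB behind, copying at 10 GB/min, host writing 1 GB/min.
print(round(recover_primary_site(100, 10, 1), 2), "minutes of downtime")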
The following compares a system running a consistency group with a system not running a
consistency group in terms of data unavailability to describe the functions of a consistency group.
Users can perform synchronization, splitting, and primary/secondary switchovers for a single
remote replication session or perform these operations for multiple remote replication sessions
by creating a consistency group. The following are notes for creating a consistency group:
l Remote replication sessions can be added to a consistency group only on the primary
storage system. In addition, secondary LUNs of all member remote replication sessions
must be in the same remote storage system.
l LUNs of different member remote replication sessions in a consistency group can belong
to different working controllers.
l Remote replication sessions in one consistency group must work in the same remote
replication mode.
l When a remote replication is added to a consistency group, its configuration properties
(such as the synchronization rate and recovery policy) must be the same as those of the
consistency group.
The following compares remote replication sessions in a consistency group with sessions
without using a consistency group in terms of synchronization result to prove that a
consistency group is effective in preventing data unavailability.
Phases in a process of data becoming invalid (the figure, which shows remote replication
sessions 01, 02, and 03 between primary LUN01/LUN02/LUN03 and secondary
LUN01/LUN02/LUN03, is not reproduced here; its annotations are summarized below):
1. Before data replication: No consistency group is created. Data, log, and change tracking
information of a database are respectively stored on primary LUN01, primary LUN02, and
primary LUN03. No consistency group is created on the primary storage system.
3. Data recovery: Data becomes invalid. After the remote replication session failure, the
secondary storage system recovers data in the primary storage system, but data in the
primary storage system is still unusable because the three primary LUNs store data at
different points in time.
1.4 Restrictions
This section describes the impact of the remote replication feature on system performance,
interaction between the remote replication feature and other features, and restrictions on the
remote replication feature.
l Asynchronous remote replication: The secondary site synchronizes data with the primary
site periodically. In this case, host performance is not affected. However, in the event of
a disaster, the amount of lost data is mainly subject to the synchronization period. (A
longer synchronization period leads to more lost data. A shorter synchronization period
leads to less lost data.) I/O requests delivered by a host are processed only at the primary
site. When the user-defined synchronization period is reached, a data synchronization is
automatically initiated between the primary site and the secondary site. Therefore, after
completing the I/O processing, the primary site immediately returns a response
indicating that the I/O processing is completed to the host.
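In other words, for asynchronous remote replication the worst-case data loss is roughly bounded by the data written during one synchronization period. A back-of-the-envelope estimate with assumed numbers:

def worst_case_data_loss_mb(sync_period_min: float, avg_write_rate_mb_per_min: float) -> float:
    # Rough upper bound: everything written since the last completed synchronization.
    return sync_period_min * avg_write_rate_mb_per_min

# Example: a 30-minute period with an average write rate of 5 MB/min can lose
# up to about 150 MB if a disaster hits just before the next period starts.
print(worst_case_data_loss_mb(30, 5))  # 150.0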
Table 1-5 Interaction relationship between the remote replication feature and other features
(where the former is configured earlier than the latter)
NOTE
l HyperMetro and asynchronous remote replication are not mutually exclusive. HyperMetro and
synchronous remote replication are mutually exclusive.
l The following table indicates whether the primary or secondary LUN in a remote replication can be
configured as a LUN in the first column.
l In the following table, a check mark (√) indicates that the primary or secondary LUN in a remote
replication can be configured as the corresponding LUN in the first column, a dash (-) indicates that
the primary or secondary LUN in a remote replication is unlikely to be configured as the
corresponding LUN in the first column, and a cross (×) indicates that the primary or secondary LUN
in a remote replication cannot be configured as the corresponding LUN in the first column.
(The compatibility table is not reproduced here. Its rows cover the following LUN roles:
local LUN in HyperMetro, remote LUN in HyperMetro, snapshot LUN (a), secondary LUN
in a remote replication, source LUN in SmartMigration, target LUN in SmartMigration,
mirror LUN in HyperMirror, mirror copy in HyperMirror (b), and eDevLUN hosted by
SmartVirtualization (c).)
a: A snapshot LUN is a data copy automatically generated by the storage system after a
snapshot is created. Therefore, the scenario where the primary or secondary LUN in a
remote replication is configured as a snapshot LUN does not exist.
b: A mirror copy is a data copy automatically generated by the storage system when a
HyperMirror task is created. Therefore, the scenario where the primary or secondary LUN
in a remote replication is configured as a mirror copy in a HyperMirror task does not exist.
c: SmartVirtualization only hosts external LUNs from a third-party storage system as an
eDevLUN. Therefore, the scenario where the primary or secondary LUN in a remote
replication is hosted as an eDevLUN does not exist.
d: In the SmartMulti-Tenant feature, a vStore is configured and then LUNs are added the
vStore. Therefore, the scenario where another feature is configured earlier than
SmartMulti-Tenant for a LUN does not exist.
Table 1-6 Interaction relationship between the remote replication feature and other features
(where the former is configured later than the latter)
NOTE
l HyperMetro and asynchronous remote replication are not mutually exclusive. HyperMetro and
synchronous remote replication are mutually exclusive.
l The following table indicates whether a LUN in the first column can be configured as the primary or
secondary LUN in a remote replication.
l In the following table, a check mark (√) indicates that the corresponding LUN in the first column
can be configured as the primary or secondary LUN in a remote replication, and a cross (×) indicates
that the corresponding LUN in the first column cannot be configured as the primary or secondary
LUN in a remote replication.
(The compatibility table is not reproduced here. Its rows cover the following LUN roles:
remote LUN in HyperMetro, snapshot LUN, secondary LUN in a remote replication, source
LUN in SmartMigration, target LUN in SmartMigration, mirror LUN in HyperMirror, mirror
copy in HyperMirror, and eDevLUN hosted by SmartVirtualization.)
NOTICE
If you select an existing LUN to be the secondary LUN or target LUN of a value-added
feature, data on the existing LUN may be overwritten. Therefore, perform this operation with
caution.
Application Restriction
l The remote replication feature is not available if network address translation (NAT)
devices exist on the local area network (LAN) where the storage system resides.
l The remote replication feature of this series does not support direct data replication with
a storage device from another vendor.
l Because of the LUN identification mechanism used by hosts' operating systems and
volume managers, do not map the primary and secondary LUNs of a remote replication
pair both to a host. For example, in Oracle ASM, hosts distinguish between LUNs by
ASM disk headers and may wrongly identify the primary and secondary LUNs as the
same LUN, causing data corruption on the two LUNs.
l During the creation of a remote replication, a LUN mapped to a host cannot be added as
the secondary LUN.
l Table 1-7 lists the maximum distance between the primary site and secondary site
supported by the remote replication feature.
l The creation of a remote replication requires links to meet the following requirements (a
worked check follows the note below):
Synchronous remote replication: Link bandwidth x Bandwidth utilization rate >=
Peak service write bandwidth x Bandwidth utilization rate >= 64 Mbit/s
Asynchronous remote replication: Link bandwidth x Bandwidth utilization rate >=
Average service write bandwidth x Bandwidth utilization rate >= 1 Mbit/s
As for the asynchronous remote replication on an iSCSI network, you are advised to use a device such
as a security gateway to encrypt links when adding the iSCSI links to ensure data transfer security
among storage systems and prevent information leakage.
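The link requirements above can be checked with simple arithmetic. The sketch below applies one reading of the inequalities (the usable portion of the link must cover the service write bandwidth and the stated minimum); the 80% utilization figure and the traffic numbers are example assumptions only.

def link_is_sufficient(link_bandwidth_mbps: float,
                       utilization_rate: float,
                       service_write_bandwidth_mbps: float,
                       minimum_mbps: float) -> bool:
    # The usable portion of the link must cover the service write bandwidth and the
    # stated minimum (64 Mbit/s for synchronous, 1 Mbit/s for asynchronous replication).
    usable = link_bandwidth_mbps * utilization_rate
    return usable >= service_write_bandwidth_mbps and usable >= minimum_mbps

# Example with assumed numbers: a 100 Mbit/s link at 80% utilization,
# 50 Mbit/s of peak writes, synchronous replication (64 Mbit/s minimum).
print(link_is_sufficient(100, 0.8, 50, 64))  # True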
In a specific application scenario, determine the replication mode based on the distance and
bandwidth between sites. Table 1-8 describes the typical application scenarios of the remote
replication feature.
(The figure for this scenario is not reproduced here. It shows service sites 01 to n connected
to a central backup site: remote replication session 01 replicates primary LUN 01 at service
site 01 to secondary LUN 01 synchronously, while remote replication sessions 02 to n
replicate primary LUNs 02 to n at the other service sites to secondary LUNs 02 to n
asynchronously; snapshots of the secondary LUNs at the central backup site are mapped to a
host.)
In the example shown in the previous figure, the remote replication feature can do the
following things:
l Collects data from n (a number) service sites to the central backup site, takes a snapshot
of a secondary LUN, and maps the secondary LUN to a host. The snapshot can be used
for data analysis and data mining.
l Lets the central backup site take over services when any service site (01 to n)
encounters a disaster and switch services back to the service site when the site recovers.
l Implements real-time backup (using the synchronous remote replication mode) from
service site 01 to the central backup site, which is 500 m away from service site 01.
Performs replication tasks (using the asynchronous remote replication mode) from
service site 02 to the central backup site, which is 500 km away from service site 02.
3DC
3DC indicates that three DCs are deployed at two places in the DR solution. Three DCs
consist of the production center, intra-city DR center, and remote DR center. The three DCs
can be cascaded and interconnected in parallel mode based on different networks formed by
storage systems in the three DCs.
l Cascaded 3DC
The production center (A), intra-city DR center (B), and remote DR center (C) are
cascaded in A-B-C mode.
l Parallel-connected 3DC
The intra-city DR center (B) and remote DR center (C) concurrently serve as the DR
centers for the production center (A). The three DCs are interconnected in A-B and A-C
mode.
Figure 1-9 shows the cascaded 3DC in synchronous + asynchronous mode.
(The figure is not reproduced here. It shows site A, the production center, connected to site
B, the intra-city DR center, through synchronous remote replication, and site B connected to
site C, the remote DR center, through asynchronous remote replication; the legend
distinguishes optical fiber cables and network cables.)
The cascaded 3DC in synchronous + asynchronous mode meets the following service
requirements:
l Real-time backup and remote backup of data achieve robust reliability of critical service
data and ensure that critical service data has two copies.
l The cascaded 3DC has minor impact on the performance of the storage system in the
production center and does not affect host services.
l When the production center (A) encounters a disaster, the intra-city DR center (B) takes
over services immediately and recovers data. If a subsequent disaster damages the
intra-city DR center (B), data in the remote DR center (C) is still available.
(The figure is not reproduced here. It shows site A, the production center, connected to site
B, the intra-city DR center, and site B connected to site C, the remote DR center, both
through asynchronous remote replication; the legend distinguishes optical fiber cables and
network cables.)
The cascaded 3DC in asynchronous + asynchronous mode meets the following service
requirements:
l Real-time backup and remote backup of data achieve robust reliability of critical service
data and ensure that critical service data has two copies.
l The cascaded 3DC has minor impact on the performance of the storage system in the
production center and does not affect host services.
l When the production center (A) encounters a disaster, the intra-city DR center (B) takes
over services and recovers data. If a subsequent disaster damages the intra-city DR
center (B), data in the remote DR center (C) is still available.
(The figure is not reproduced here. It shows the parallel-connected 3DC: site A, the
production center, is connected to site B, the intra-city DR center, through synchronous
remote replication and to site C, the remote DR center, through asynchronous remote
replication; the legend distinguishes optical fiber cables and network cables.)
(The figure is not reproduced here. It shows the parallel-connected 3DC: site A, the
production center, is connected to both site B, the intra-city DR center, and site C, the
remote DR center, through asynchronous remote replication; the legend distinguishes optical
fiber cables and network cables.)
The difference between 3DC in synchronous + asynchronous and asynchronous + asynchronous modes
lies in the data synchronization mode between the production center (A) and the intra-city DR center
(B). In synchronous + asynchronous mode, a success message is returned only after data is written to
both the production center (A) and the intra-city DR center (B). In asynchronous + asynchronous
mode, a success message is returned as soon as data is written to the production center (A).
For details, see OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3 Storage System V300R003
3DC Configuration.
Figure 1-13 illustrates how to use BCManager to deploy remote replication. Deploy the
BCManager host in the DR center and install BCManager Server on the host. Install
BCManager Agent on service hosts and DR hosts in the production center.
(The figure is not reproduced here. It shows BCManager Agent installed on the production
host and on the DR host, BCManager Server installed on the BCManager host in the DR
center, and remote replication between the production center and the DR center.)
Figure 1-14 and Figure 1-15 illustrate how to use BCManager to deploy 3DC. Deploy the
BCManager host in DR centers 1 and 2 and install BCManager Server on the hosts. Install
BCManager Agent on service hosts and DR hosts in the production center.
(The figures are not reproduced here. They show BCManager Agent installed on the
production host and on the DR hosts, and BCManager Server installed on the BCManager
hosts in DR centers 1 and 2. The production center is connected to one DR center through
synchronous or asynchronous remote replication and to the other DR center through
asynchronous remote replication.)
NOTE
For details about how to install, deploy, and configure BCManager, see the OceanStor BCManager
V100R005C00 Product Documentation.
2 Configuration Example
The remote replication feature is usually used for remote disaster recovery. The feature can be
configured flexibly based on customer requirements. This chapter provides an example to
explain how to configure the remote replication feature in a typical remote disaster recovery
scenario. The content is divided into requirement analysis, configuration planning,
configuration procedure, and configuration verification.
2.1 Central Backup and Disaster Recovery
This example explains how to configure the remote replication feature to implement central
backup and disaster recovery.
Requirement Source
The carrier has a service site at place A (the service site is called site A later in this
document). At site A, there is an application server of which the most important application is
a user data management application. Data generated by the application server is stored in an
OceanStor storage system. The data and bandwidth status of site A is as follows:
l The daily data change amount is between 100 MB to 200 MB.
l The total data amount does not exceed 500 GB.
l The current network bandwidth between the application server and the storage system is
2 Mbit/s.
Since the existing network is fully functional and the application is running properly, the carrier prefers to
keep the existing network. Figure 2-1 shows a diagram of the existing network.
(Figure 2-1 is not reproduced here. It shows the existing network at site A, including the
switch and the storage system's management network port.)
Table 2-1 and Table 2-2 describe the LUN mapping and IP address allocation on existing
devices at site A.
Place A suffers natural disasters from time to time. The carrier wants to protect service data at
site A against natural disasters and to quickly recover the service data when a disaster
happens. The specific requirements are as follows:
l Data backup and disaster recovery
Data at site A is automatically backed up to a backup site without interrupting services to
minimize data loss upon a disaster. The backup data can be used to quickly recover
services at site A after the disaster.
l Quick service recovery
Once a disaster happens, site B takes over services with an RTO no longer than 15
minutes.
l Central data management
After a new service site (site C) or even more service sites are set up, data from all
service sites are backed up to the backup site for central management and reduced
deployment and maintenance costs.
l Data analysis and mining
Service data can be exported for analysis and mining to support policy-making.
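Given the figures above (a daily change of 100 MB to 200 MB, a total volume under 500 GB, and a 2 Mbit/s link), a quick feasibility estimate shows why asynchronous replication is reasonable for the daily changes but not for the first full copy. The calculation below uses the upper bounds of those figures and assumes the inter-site link offers roughly the same 2 Mbit/s at 80% utilization; both assumptions are for illustration only.

def hours_to_replicate(data_mb: float, link_mbit_per_s: float, utilization: float = 0.8) -> float:
    # Time to push a given amount of data over the link (rough estimate).
    usable_mbit_per_s = link_mbit_per_s * utilization
    seconds = (data_mb * 8) / usable_mbit_per_s
    return seconds / 3600

# Daily change (upper bound 200 MB) over an assumed 2 Mbit/s inter-site link:
print(round(hours_to_replicate(200, 2), 2), "hours per day for the daily changes")
# Initial full copy of up to 500 GB over the same link:
print(round(hours_to_replicate(500 * 1024, 2), 1), "hours for an initial full copy")

An initial full copy of this size over such a link would take weeks, which is one reason the appendix describes using an external storage system to perform the initial synchronization for a remote replication.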
Data transfer amount: Inter-city data transfer is subject to the bandwidth. The amount of
lost data decreases as the amount of data transferred within a specified period of time
increases.
Host performance: Data backup affects services on the host. Services on the host can run
properly if the impact of backup on the services is slight.
Service continuity: The disaster recovery site must be able to quickly take over services at
the service site through a service switchover to protect service continuity.
Quick service takeover: When a disaster happens and breaks the remote replication pairs
between the service site and the backup site, the backup site quickly takes over services of
the service site.
l Data availability
The carrier needs to analyze the backup data and requires that data at site A be quickly
recovered after a disaster. Therefore, data at the backup site must be available anytime.
l Service continuity
The backup site must be able to quickly take over services when a disaster causes a
failure of site A, minimizing the loss. The synchronization interval is set to 10 minutes to
satisfy the RPO requirement.
l Others
Besides backup and disaster recovery, the carrier hopes that backup data can be managed
centrally and that the latest data at service sites is available anytime so that services can be
analyzed conveniently. Therefore, data at service sites needs to be centrally backed up to
one site. In this way, copies of all data can be quickly obtained when necessary.
Based on the preceding analysis, details of the solution are as follows:
l To choose the central backup and disaster solution and build a central backup site at
place B (equipment at the backup site includes an OceanStor storage system, an
application server, a maintenance terminal, and an iSCSI switch)
l To use asynchronous remote replication for backup and disaster recovery
l To set up an iSCSI connection between site A and B
Network Planning
Figure 2-2 shows a diagram of the planned network. As shown in the diagram, central backup
and disaster recovery is implemented without changing the carrier's existing network.
(Figure 2-2 is not reproduced here. It shows the planned network, with the sites connected
through switches, including service site C.)
To achieve data backup to the central backup site, an iSCSI connection between site A and
site B must be set up first, and then a logical connection needs to be established by adding
routes and a target. Before setting up the connections, a detailed networking plan must be
drawn.
Table 2-5 describes the IP address plan for site B.
Service Planning
According to the requirement analysis conclusion, a central backup and disaster recovery
solution can meet the carrier's requirements. Figure 2-3 shows the configuration roadmap of
the solution.
Table 2-6 lists the configuration operations and planning items involved in the configuration
example.
NOTE
l Keep the default values of the parameters that are not mentioned in this table.
l If the version of the local device is different from that of the remote device, you need to add a
remote device on the newer version device. Otherwise the operation may fail.
Prerequisites
l The connection between the storage system and application server at each site is normal.
l The iSCSI connection between the primary storage system at site A and the secondary
storage system at site B is normal.
Procedure
Step 1 Check the license file.
Remote replication is a value-added feature. Before configuring this feature, you must make
sure that the license file of the storage system grants the permission for the remote replication
feature.
On the navigation bar of the DeviceManager of each storage system, click Settings.
Click License Management. On the Active License tab page, check information about the
license file for remote replication. If the license file does not grant the permission for remote
replication, search for License in Help. Then follow the displayed instructions to import and
activate the license file.
Step 2 Create a secondary LUN in the secondary storage system at site B.
1. Create a disk domain in the secondary storage system at site B.
In the navigation tree, click Data Protection. Click Port, and click the host port
whose IP address you want to change. In the dialog box that is displayed, click
Properties. On the Properties of the Port dialog box, configure the IP address of the port.
2. Add routes for iSCSI host ports of both the primary and secondary storage systems.
Perform this step when the host port on the primary storage system and that on the
secondary storage system use IP addresses in different network segments. If the host
ports use IP addresses in the same network segment, skip this step. The following uses
the primary storage system at site A as an example to describe how to add a route for a
host port. The procedure at site B is the same.
After modifying the host port IP address, select the host port for which routes need to be
added in the device view. Then click Route Management, and choose Add in the displayed
dialog box to add route information.
3. Add a remote device in either the primary storage system at site A or the secondary
storage system at site B.
Add a remote device at one side to establish a usable link between the two storage systems.
The following uses the primary storage system at site A as an example to describe how
to add a remote device. The procedure for the secondary storage system at site B is the
same.
On the navigation bar of the DeviceManager, click Data Protection, and click
Remote Device. Choose Add Remote Device to add a remote device following the Add
Remote Device Wizard dialog box.
NOTE
If remote devices are OceanStor 18000 series V1R1C20, T series V2R2C20, or OceanStor
V3 series V3R1C10 or later, you need to use the machine-machine user. The default
username and password of the machine-machine user are mm_user and mm_user@storage
respectively.
NOTE
You are advised to change your login password periodically in the future by executing
the change mm_user password command. This reduces the password leakage risks.
For details about how to use the command, see OceanStor 5300 V3&5500 V3&5600
V3&5800 V3&6800 V3 Storage System V300R003 Command Reference.
If remote devices are OceanStor 18000 series V1R1C10, T series V2R2C10, or OceanStor
V3 series V3R1C00 or earlier, you need to use the password of the super administrator or
administrator account on the older version remote device. The username and password of the
super administrator or administrator account are consistent with those on the remote
device.
4. Add an iSCSI link.
Only one link is selected between the storage systems during remote device adding. To
ensure link redundancy, you must manually add links after the remote device is added.
Select the added remote device and click Add Link. Set Link Type to iSCSI and then
add desired links.
On the navigation bar of the DeviceManager, click Data Protection, and click Remote
Replication. Choose Create to create RP_001 following the Create Remote Replication
Wizard dialog box.
----End
Prerequisites
l The connection between the storage system and application server at site A and that at
site B are normal. The primary and secondary LUNs have been mapped to hosts.
l Initial synchronization of the asynchronous remote replication session is complete.
Procedure
Step 1 Map the primary and secondary LUNs to the hosts.
1. Log in to DeviceManager, and choose Provisioning > LUN > LUN Group. Click
Create to create LUN groups. Then add the primary and secondary LUNs to the corresponding
LUN groups.
2. Choose Provisioning > Mapping View, and choose the mapping view to which the host
group containing the application servers belongs. Then add the newly created LUN
group to this mapping view.
Step 2 Check whether data replication succeeded.
1. Log in to application server 1 at site A and write data block X to the primary LUN.
2. Wait 10 to 20 minutes. Then log in to the standby server at the secondary site to check
whether data block X is on the secondary LUN.
If yes, data replication succeeded.
Step 3 Check whether the secondary site can take over services.
1. Log in to the DeviceManager at site B, click Data Protection on the navigation bar,
and click Remote Replication. Click the remote replication, and choose More > Switch
Over Primary/Secondary to switch the secondary LUN to the primary LUN.
2. Log in to the standby server at site B, copy data block X to another disk and write data
block Y to the new primary LUN (previous secondary LUN) to check whether the LUN
is accessible.
If yes, the secondary site can take over services.
If no, the secondary site cannot take over services.
Step 4 Check whether data can be recovered.
1. Log in to the DeviceManager at site B, click Data Protection on the navigation bar,
and click Remote Replication. Click the remote replication, and choose More >
Synchronize to synchronize data manually.
2. After the synchronization is complete, log in to application server 1 at site A to check
whether data block Y is on the secondary LUN (after the primary/secondary switchover).
If yes, data can be recovered.
If no, data cannot be recovered.
----End
3 Disaster Recovery
If the production center malfunctions, use the DR center to take over services from the
production center. After the production center is recovered, switch the services back to the
production center.
If a disaster occurs in the production center, perform the following operations for the DR
center to take over services:
1. Disable secondary resource protection on the DR storage system.
2. Map the secondary LUN to the standby host.
3. Run services on the secondary host in the DR center.
After the production center is recovered, perform the following operations to switch the
services back to the production center:
1. Perform a primary/secondary switchover for the remote replication on the production
storage system and start data synchronization on the DR storage system.
2. After data synchronization is complete, stop host services in the DR center and delete the
mapping between the secondary host and the storage system.
3. Perform a primary/secondary switchover for the remote replication on the production
storage system.
4. After the switchover, set the secondary LUN on the storage system in the DR center
read-only on the production storage system.
5. Run services on the production host.
Perform the following operations to reconstruct the production center and switch back
services:
1. Perform a primary/secondary switchover for the remote replication on the production
storage system and start data synchronization on the DR storage system.
2. After data synchronization is complete, stop host services in the DR center and delete the
mapping between the secondary host and the storage system.
3. Manually start incremental synchronization to copy data from the DR storage system to
the production storage system and ensure data consistency between the storage systems.
4. After data replication is complete, perform a primary/secondary switchover on the
production storage system.
5. After the switchover, set the secondary LUN on the storage system in the DR center
read-only on the production storage system.
Procedure
Step 1 Log in to DeviceManager.
Step 3 In the Basic Information area, check the system's basic information. Table 4-1 describes
related parameters.
Step 4 In the Alarms area, check the system's alarm information. Table 4-2 describes related
parameters.
Step 5 In the Disk Information area, view the used disk quantity and total disk quantity.
Step 6 In the System Capacity Trend area, view the system capacity trend of the storage system.
Table 4-3 describes related parameters.
Step 7 In the Performance area, view the bandwidth and total IOPS of the storage system.
Table 4-4 describes related parameters.
----End
Procedure
Step 1 Log in to DeviceManager.
Multi-mode optical fibers support only a short transmission distance. Therefore, remote
replication that uses Fibre Channel ports is suitable for short-distance backup and DR.
l iSCSI host port
Typically, iSCSI host ports are inferior to Fibre Channel ports in terms of transmission
rate and latency. However, iSCSI host ports carry data by using the IP protocol.
Therefore, data can be backed up to the secondary storage system through the Internet or
dedicated cables, achieving remote backup and DR across cities, provinces, and even
countries.
Understanding the advantages and disadvantages of data transmission based on Fibre Channel
ports and those of data transmission based on iSCSI host ports helps you select host ports that
best meet service needs. After you select a host port type, the corresponding link type is also
selected by default. In an actual networking scenario, you must also consider factors such as
the bandwidth, latency, network quality, and security to ensure link efficiency and stability.
Procedure
Step 1 Log in to DeviceManager.
----End
Follow-up Procedure
If no license is available, buy and import a license file.
Context
The Ethernet port has been assigned an IP address.
NOTE
For redundant links, you must configure IP addresses and routes for multiple Ethernet ports.
Procedure
Step 1 Log in to DeviceManager.
NOTICE
The default IP addresses of the internal heartbeat on a dual-controller storage system are
127.127.127.10 and 127.127.127.11, and the default IP addresses of the internal
heartbeat on a four-controller storage system are 127.127.127.10, 127.127.127.11,
127.127.127.12, and 127.127.127.13. Therefore, the IP address of the router cannot fall
within the 127.127.127.XXX segment. Besides, the IP address of the gateway cannot be
127.127.127.10, 127.127.127.11, 127.127.127.12, or 127.127.127.13. Otherwise, routing
will fail. (Internal heartbeat links are established between controllers for these controllers
to detect each other's working status. You do not need to separately connect cables. In
addition, internal heartbeat IP addresses have been assigned before delivery, and you
cannot change these IP addresses).
Step 5 Click OK. The route information is added to the route list.
The security alert dialog box is displayed.
Step 6 Confirm the information in the dialog box and select I have read and understood the
consequences associated with performing this operation.
----End
Prerequisites
A local storage device is normally connected to the remote storage device.
Context
The link between a local storage device and a remote storage device can be a Fibre Channel
link or an iSCSI link.
l In scenarios where a firewall is deployed, ports 36061 and 3260 must be enabled for adding a
remote device.
l On a Fibre Channel network, clear zone configurations of ports connecting to the storage system
from the switch before adding the remote device. You are advised to allocate only two ports to one
zone to prevent excessive remote links caused by too many ports in one zone.
l A maximum of eight links are available between a controller and the remote device, and a
maximum of two links can be used for adding a remote device.
Procedure
Step 1 Log in to DeviceManager.
NOTICE
l If Running Status of an added link is Invalid, delete and add the link again.
l To ensure data transfer security among storage systems, you are advised to use a device
such as a security gateway to encrypt links to prevent information leakage.
l If the version of the local device is different from that of the remote device, you need to
add a remote device on the newer version device. Otherwise the operation may fail.
l Replication
Adding an iSCSI link
i. Set Link Type to iSCSI.
ii. Set parameters for the iSCSI link. Table 4-5 lists related parameters.
l Heterogeneity
a. Set parameters for the iSCSI link. Table 4-7 lists related parameters.
Step 6 Click Finish. The operation result is displayed. Perform the following operations based on the
number of links you want to add:
l To add only one link, click Close.
l To add another link for link redundancy, select Continue adding links and click Close.
The Add Link dialog box is displayed. Perform Step 7 as prompted.
NOTE
After the links are added, you can select the remote device from the remote device list. On the
navigation bar, click Add Link or right-click the device and choose Add Link to add other links.
----End
Prerequisites
l The remote replication license is valid.
l The local and remote storage devices must support remote replication.
l The remote storage device has been added, and the secondary resource has been created on
the remote device.
l The capacities of the primary and secondary resources are the same.
l The status of the primary and secondary resources is healthy.
Precautions
Do not create remote replications on any of the following LUNs:
Procedure
Step 1 Log in to DeviceManager.
Step 4 Set the replication mode of the remote replication to Synchronous and click Next.
Step 7 Optional: Add the remote replication pair to a newly created consistency group.
1. Select Create a consistency group and add pairs to it.
2. Set Name and Description of the consistency group.
3. Click Next.
----End
Prerequisites
l The remote replication license is valid.
l The local and remote storage devices must support remote replication.
l The remote storage device has been added, and the secondary LUN has been created on the remote
device.
l The primary and secondary LUNs have the same capacity.
l The primary and secondary LUNs are in the healthy state.
Precautions
Do not create remote replications on any of the following LUNs:
Procedure
Step 1 Log in to DeviceManager.
Step 5 Select the primary resource and secondary resource to form a remote replication pair.
1. In Select Primary Resource Type, select the desired resource type.
2. In the Primary Resource area, select the primary resource.
3. In the Secondary Resource area, select a remote device to which the primary resource is
copied from the Remote Device drop-down list.
4. In the Secondary Resource area, select the secondary resource.
5. Click Add to Pairs. After the pair is added successfully, the specified pair will be
displayed in Pairs.
NOTE
When you set Select Primary Resource Type to File system, if the capacities of the primary and
secondary resources are inconsistent, the following risks may arise:
1. When the capacity of the primary resource is larger than that of the secondary resource, remote
replication I/Os may become abnormal, remote replication pairs may be interrupted unexpectedly,
and data at the primary and secondary ends may become inconsistent.
2. When the capacity of the primary resource is smaller than that of the secondary resource, the
capacity at the secondary end is wasted. To optimize capacity utilization, ensure that the capacities
of primary and secondary resources are the same.
Remote Device Compression Status in the Pairs list indicates whether the compression algorithm or
protocol of the local and remote storage devices matches. If it matches, the status is displayed as
Valid, indicating that data compression transmission is available. Otherwise, the status is displayed as
Invalid, indicating that data compression transmission is unavailable.
6. Click Next.
The Set Properties page is displayed.
Step 6 Set properties of the remote replication pair.
1. On the Set Properties page, set the remote replication parameters based on your service
requirements.
Table 4-8 describes parameters of remote replication.
2. Click Next.
The Add Pairs to Remote Replication Consistency Group page is displayed.
Step 7 Optional: Add the remote replication pair to a newly created consistency group (applicable
only when the resource type is LUN).
1. Select Create a consistency group and add pairs to it.
2. Set the Name and Description of consistency group.
3. Click Next.
The Summary page is displayed.
----End
Prerequisites
l The Role of the remote replication pair to be added into the consistency group must be
Primary.
l The remote replication pair to be added into the consistency group must be from the
same storage device.
l The Replication Mode of the remote replication pair to be added into the consistency
group must be the same as the Replication Mode of the consistency group.
Precautions
l If the remote replication pair is not in split status, the system will split the remote
replication pair first and then add it into the consistency group.
l The consistency group does not support the remote replication pair whose resource type
is file system.
l After the remote replication pair has been added into the consistency group, the remote
replication pair works according to the replication properties of the consistency group
(Recovery Policy, Speed, Data Transmission, Synchronization Method, and Interval)
rather than its own properties.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 4 Configure basic information about the consistency group. Table 4-9 describes the related
parameters.
Step 5 Select the remote replication pair that you want to add to the consistency group.
1. Click Next, and in Owning Remote Device of Secondary Resource of Pair, select the
remote device.
NOTE
The remote device is the storage device to which the secondary resource of the remote replication pair
to be added into the consistency group belongs.
2. From the Available Remote Replications Pairs list, select one or multiple remote replication pairs.
----End
Context
l On the remote replication management page, you can click Refresh to refresh remote
replication pair information.
l On the remote replication management page, you can click Keyword and enter
keywords to search for the desired remote replication pair information.
l On the remote replication management page, you can click and select the pair
information items you want to view.
Procedure
Step 1 Log in to DeviceManager.
Step 3 View the information of remote replication pairs. Table 4-10 describes related parameters.
Step 4 Click a remote replication pair. Under the pair list, information about the remote replication
pair is displayed graphically. Table 4-11 lists the parameters.
----End
Precautions
The properties of the remote replication pair that has been added into the consistency group
cannot be modified.
NOTE
After the remote replication pair has been added into the consistency group, the remote replication pair
works according to the replication properties of the consistency group (Recovery Policy, Data
Transmission, Speed, Synchronization Method, and Interval) rather than its own properties.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Select the remote replication pair you want to modify and click Properties.
The Properties of Pair dialog box is displayed.
Step 4 Modify properties of the remote replication pair. Table 4-12 describes related parameters.
l Select Data Protection > Remote Device, and click Modify to view the Compression Status of the remote device.
Enable the I/O timeout threshold for the secondary LUN: If data is written into the primary LUN but
fails to be written into the secondary LUN when the timeout period you set has elapsed, the remote
replication pair enters the Interrupted status. [Default value] Disabled
NOTE: This parameter is available only when Replication Mode is set to Synchronous.
Timeout Threshold (second): The timeout for data written into the secondary LUN. [Value range] 10
to 30; [Default value] 30
NOTE: This parameter is available only when Enable the I/O timeout threshold for the secondary
LUN is selected.
----End
Prerequisites
l The remote replication pair for which you want to perform data synchronization is not
added to a consistency group.
l Secondary resource protection has been enabled.
l The Pair Running Status of the remote replication pair determines whether data
synchronization can be performed for the remote replication pair, as shown in Table
4-13.
Normal
Split
Interrupted
To be recovery
Synchronizing N/A
Invalid
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Right-click the remote replication you want to synchronize and choose Synchronize.
The security alert dialog box is displayed.
----End
Prerequisites
l The remote replication pair that you want to split is not added to a consistency group.
l Based on Pair Running Status of the remote replication pair, determine whether the
remote replication pair can be split, as shown in Table 4-14.
Normal
Split
Interrupted
To be recovery
Synchronizing
Invalid
: Able to operate.
: Unable to operate.
Precautions
If the remote replication pair has not completed the initial synchronization, splitting the
remote replication pair will make the data on the secondary resource unavailable. Perform this
operation with caution.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Right-click the remote replication you want to split and choose Split.
The security alert dialog box is displayed.
Step 4 Confirm the remote replication splitting.
1. Confirm the information in the dialog box. Select I have read and understood the
consequences associated with performing this operation. and click OK.
The Success dialog box is displayed, indicating that the operation succeeded.
2. Click OK.
NOTE
Once the pair is split, the data on the secondary resource rolls back to the point at which the latest pair
synchronization was completed.
----End
Prerequisites
l The remote replication pair for which you want to initiate a primary/secondary
switchover is not added to a consistency group.
l Based on Pair Running Status and Secondary Resource Data Status of the remote
replication pair, determine whether a primary/secondary switchover can be performed for
the remote replication pair, as shown in Table 4-15.
Split Consistent
Split Inconsistent
Interrupted Consistent
Interrupted Inconsistent
To be recovery Consistent
To be recovery Inconsistent
Synchronizing Inconsistent
Synchronizing Consistent
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Perform a primary/secondary switchover for a remote replication pair. Right-click the remote
replication pair you want to perform the switchover and choose Primary/Secondary
Switchover.
The security alert dialog box is displayed.
----End
Prerequisites
Based on Pair Running Status and Secondary Resource Data Status of the remote
replication pair, determine whether protection for the secondary resource can be enabled
for the remote replication pair, as shown in Table 4-16.
Table 4-16 Status requirements for enabling protection for the secondary resource
Split Consistent
: Able to operate.
: Unable to operate.
Context
l If the resource type is LUN, you cannot write any data to the secondary resource after
protection for the secondary resource is enabled.
l If the resource type is file system, you cannot read data from or write data to the secondary
resource after protection for the secondary resource is enabled.
Procedure
Step 1 Log in to DeviceManager.
----End
Prerequisites
Based on Pair Running Status and Secondary Resource Data Status of the remote
replication pair, determine whether protection for the secondary resource can be disabled
for the remote replication pair, as shown in Table 4-17.
Table 4-17 Status requirements for disabling protection for secondary resource
Pair Running Status    Secondary Resource Data Status    Disable protection for secondary resource
Split Consistent
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
----End
Prerequisites
l The connection between the local storage system and remote storage system is normal.
l Remote replication pair is created.
Procedure
Step 1 Log in to DeviceManager.
----End
Prerequisites
l The remote replication pair to be deleted is not added to a consistency group.
l Based on Pair Running Status of the remote replication pair, determine whether the
remote replication pair can be deleted, as shown in Table 4-18.
Normal
Split
Interrupted
To be recovery
Synchronizing
Invalid
: Able to operate.
: Unable to operate.
Context
l Deleting a remote replication pair essentially deletes the remote replication
configuration information of the primary and secondary resources, thereby canceling the
replication relationship between them. The configuration information is stored on both the
local and remote storage devices.
l Remote replication configuration is automatically generated in the primary and
secondary resources when the remote replication pair is created.
Procedure
Step 1 Log in to DeviceManager.
Step 3 In the remote replication list, select a remote replication pair that can be deleted.
1. Click Delete.
The security alert dialog box is displayed.
2. In the security alert dialog box, select Only delete the configuration information
about the local device if the local device is disconnected from the remote device and
Forcibly ensure data consistency for the secondary resource based on your needs.
Table 4-19 describes the parameters.
Forcibly ensure data consistency for the secondary resource: If the secondary resource data is
inconsistent and the initial synchronization has been completed, the secondary resource performs a
snapshot rollback to ensure data consistency. If the initial synchronization has not been completed,
the secondary resource data may be inconsistent. You are advised to manually clear the data on the
secondary resource before using it.
----End
Context
l On the consistency group management page, you can click Refresh to refresh
consistency group information.
l On the consistency group management page, you can click Keyword and enter
keywords to search for the desired consistency group information.
l On the consistency group management page, you can click and select the information
items you want to view.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 3 View basic information about existing consistency groups. Table 4-20 describes the related
parameters.
Step 4 Click the desired consistency group and view its remote replication information. Table 4-21
describes the related parameters.
----End
Caution
The properties of the pairs in a consistency group are not changed after the properties of the
consistency group are modified; however, the pairs work according to the properties of the
consistency group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 3 Select the consistency group whose properties you want to modify and click Properties.
The Properties of Consistency Group dialog box is displayed.
Step 4 Modify properties of the consistency group. Table 4-22 describes the related parameters.
----End
Prerequisites
l This operation can be performed only on the primary device of a consistency group.
l Secondary resource protection has been enabled.
l Based on Running Status of a consistency group, determine whether data
synchronization can be performed for the consistency group, as shown in Table 4-23.
Normal
Split
Interrupted(Linked down)
Interrupted(Linked up)
To be recovery
Invalid
Synchronizing
: Able to operate.
: Unable to operate.
Precautions
If the storage system undergoes a controller restoration while synchronous consistency groups are
synchronizing, some remote replications may fail to synchronize. In this case, the consistency groups
cannot guarantee data consistency.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
----End
Precautions
l Based on Running Status of a consistency group, determine whether remote
replications in the consistency group can be split, as shown in Table 4-24.
Table 4-24 Status requirements for splitting remote replications in a consistency group
Running Status of a Consistency Splitting Remote Replications in a
Group Consistency Group
Normal
Split
Interrupted
To be recovery
Invalid
Synchronizing
: Able to operate.
: Unable to operate.
l If the storage system undergoes a controller restoration while synchronous consistency groups are
being split, some remote replications may fail to split. In this case, the consistency groups cannot
guarantee data consistency.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
----End
Prerequisites
Based on Running Status of a consistency group and Secondary Resource Data Status of
the remote replications added to the consistency group, determine whether a primary/
secondary switchover can be performed for the consistency group, as shown in Table 4-25.
Split Consistent
Split Inconsistent
Interrupted Consistent
Interrupted Inconsistent
To be recovery Consistent
To be recovery Inconsistent
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 3 Right-click the desired consistency group and choose Primary/Secondary Switchover.
The security alert dialog box is displayed.
Step 4 Confirm the primary/secondary switchover operation.
1. Carefully read the contents of the dialog box, and click OK for confirmation.
The Success dialog box is displayed indicating that the operation is successful.
2. Click OK to complete the primary/secondary switchover.
----End
Prerequisites
Based on Running Status of a consistency group and Secondary Resource Data Status of
the remote replications added to the consistency group, determine whether protection for the
secondary resource can be enabled for the consistency group, as shown in Table 4-26.
Table 4-26 Status requirements for enabling protection for the secondary resource
Split Consistent
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
----End
Prerequisites
Based on Running Status of a consistency group and Secondary Resource Data Status of
the remote replications added to the consistency group, determine whether protection for the
secondary resource can be disabled for the consistency group, as shown in Table 4-27.
Table 4-27 Status requirements for disabling protection for the secondary resource
Split Consistent
: Able to operate.
: Unable to operate.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
----End
Prerequisites
l The Replication Mode of the remote replication pair to be added into the consistency
group must be the same as the Replication Mode of the consistency group.
l A remote replication pair can be added only when the Role of the consistency group is
Primary.
l The Role of the remote replication pair to be added into the consistency group must be
Primary.
l The number of remote replication pairs in the consistency group has not reached the
maximum allowed by the specifications.
Precautions
l If the consistency group is in split status, when a remote replication pair is added, the
system will split the remote replication pair first and then add it into the consistency
group.
l If the consistency group is not in split status, when a remote replication pair is added, the
system will split the consistency group first, split the remote replication pair and add it
into the group, and then synchronize the consistency group.
l After the remote replication pair has been added into the consistency group, the remote
replication pair works according to the replication properties of the consistency group
(Recovery Policy, Speed, Data Transmission, Synchronization Method, and Interval)
rather than its own properties.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 3 Right-click a consistency group to which you want to add a remote replication pair and
choose Add Remote Replication Pair.
The Add Remote Replication Pairs dialog box is displayed.
Step 4 In Owning Remote Device of Secondary Resource of Pair, select the remote device.
NOTE
The remote device is the storage device to which the secondary resource of the remote replication pair to be
added into the consistency group belongs.
Step 5 From the Available Remote Replications Pairs list, select one or multiple remote replication pairs.
1. Click OK.
The Execution Result message box is displayed, indicating that the operation
succeeded.
2. Click Close to complete adding remote replication pair.
----End
Prerequisites
l The consistency group is split.
l A remote replication pair can be removed only when the Role of the consistency group is
Primary.
Precautions
l When a remote replication pair cannot be removed due to a link failure, delete the
consistency group to which the remote replication belongs.
l Once a remote replication pair is removed from the consistency group, the remote
replication pair will work according to its own replication properties.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
Step 3 Right-click a consistency group from which you want to remove a remote replication pair and
choose Remove Remote Replication Pair.
The Remove Remote Replication Pairs dialog box is displayed.
Step 4 From the Available Remote Replications Pairs list, select one or multiple remote replication pairs.
----End
Prerequisites
Based on Running Status of the consistency group, determine whether the group can be
deleted, as shown in Table 4-28.
Normal
Split
Interrupted
To be recovery
Synchronizing
Invalid
: Able to operate.
: Unable to operate.
Precautions
l Deleting a consistency group essentially deletes the consistency group configuration
information stored on both the local and remote storage devices.
l If the link has failed, you must forcibly delete the consistency group from both the local and
remote devices.
l After a consistency group is deleted, you can no longer centrally manage the remote
replication pairs that belonged to the group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Consistency Group.
NOTE
This option is applicable to scenarios where the physical link between the local storage system and
remote storage system is disconnected. In such scenarios, this option is mandatory. Otherwise, the
pair cannot be deleted.
If you want to delete the configuration information of the consistency group from the remote storage
device, you must perform this operation on the remote storage system.
If the physical link is normal, do not select this option.
2. Carefully read the contents of the dialog box. Then click the check box next to the
statement I have read and understood the consequences associated with performing
this operation. and click OK to confirm the information.
The Success dialog box is displayed indicating that the operation is successful.
3. Click OK to complete deleting the consistency group.
----End
5 FAQs
This section provides answers to frequently asked questions (FAQs). If you encounter
problems during remote replication configuration or maintenance, you can refer to this
section.
5.1 What are the differences between synchronous remote replication and asynchronous
remote replication
5.2 How do I use the secondary LUN in a remote replication
5.3 Why information about the primary device and the secondary device in a remote
replication cannot be updated in real time
5.4 What is the relationship among a remote replication, clone, and consistency group
5.5 What is the principle of setting bandwidth for remote replication and LUN copy among
storage arrays
5.6 Precautions for Creating a Remote Link
5.7 Functions of Reserved Space During Asynchronous Remote Replication
5.8 Impact of Modifying Consistency Group Properties
5.9 When performing a remote replication task, is an initial synchronization required to
synchronize data from the primary storage system to the secondary storage system
5.10 Do I need to stop the services on the primary storage system before performing the initial
synchronization
5.11 Impact of the Secondary Storage System Failure on Remote Replication
5.12 Do File Remote Replication and Block Remote Replication Use the Same License
Answer
Both synchronous remote replication and asynchronous remote replication can be used to
back up data remotely, implementing disaster recovery for a storage system. However,
synchronous remote replication and asynchronous remote replication are implemented in
different modes and applicable to different service scenarios.
l Implementation mode
Synchronous remote replication: When data is being written to the primary LUN, a
write request is also sent to the secondary LUN. After both primary LUN and
secondary LUN return a write success response, the storage system returns a write
success response to the host. This ensures real-time data synchronization between
the primary LUN and the secondary LUN.
Asynchronous remote replication: When writing data to the primary LUN, the
primary site records the changed data. After the primary LUN returns a write
success response, the primary site returns a write success response to the host.
Then, data synchronization is performed manually or automatically based on user-
defined trigger conditions to ensure data consistency between the primary LUN and
the secondary LUN.
l Application scenario
Synchronous remote replication requires high bandwidth and low latency.
Therefore, synchronous remote replication typically applies to disaster recovery
scenarios where the primary device and the secondary device are adjacent to each
other, for example, data backup and disaster recovery within the same city.
Asynchronous remote replication does not require high bandwidth and low latency.
Therefore, asynchronous remote replication applies to disaster recovery scenarios
where the primary device and the secondary device are far from each other or
network bandwidth is limited.
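The following Python sketch is an illustrative model of the acknowledgement behavior described above, not the storage system's actual implementation; the class and function names are hypothetical.
# Minimal model of the acknowledgement difference; not the real implementation.
class Lun:
    def __init__(self, name):
        self.name = name
        self.blocks = []

    def write(self, data):
        self.blocks.append(data)
        return True

def synchronous_write(primary, secondary, data):
    # The host receives success only after both LUNs acknowledge the write.
    return primary.write(data) and secondary.write(data)

def asynchronous_write(primary, change_log, data):
    # The host receives success after the primary write; the change is recorded
    # and copied to the secondary LUN later, on a trigger or schedule.
    ok = primary.write(data)
    change_log.append(data)
    return ok

primary, secondary, change_log = Lun("primary"), Lun("secondary"), []
print(synchronous_write(primary, secondary, "block-1"))    # True
print(asynchronous_write(primary, change_log, "block-2"))  # True
for data in change_log:            # later synchronization to the secondary LUN
    secondary.write(data)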
Answer
The primary LUN and the secondary LUN in a remote replication form a pair, and the
secondary LUN stores a data duplicate for the primary LUN.
l In normal cases, the secondary LUN cannot be accessed by a host. To enable a host to
access data on the secondary LUN, you can create a snapshot for the secondary LUN and
map the snapshot to the host.
l When the primary LUN fails and causes service interruption, you must perform a
primary/secondary switchover to change the secondary LUN into the primary LUN, and
then map the new primary LUN to the host.
Answer
The symptoms and reasons are as follows:
l Symptom 1: After data synchronization between the primary LUN and the secondary
LUN is initiated, the secondary device does not display the synchronization progress in
real time.
To display the synchronization progress in real time during data synchronization, the
secondary device must send a query task to the primary device. This affects system
performance and the number of tasks.
l Symptom 2: After the name of the primary device is changed, the name in the remote
replication is not updated in real time.
The name of the primary device in a remote replication is obtained by invoking the
interface of the system management module rather than using the remote replication
module. Therefore, after the device name is changed, the remote replication cannot
detect the change. In this case, you can refresh the information manually or log in to the
DeviceManager again. Then, the name of the primary device will be updated.
To obtain the latest information about the primary device and the secondary device in a timely
manner, you can log in to the DeviceManager and click Refresh on the remote replication
management page.
Answer
The relationships among remote replication, clone, and consistency group are described as
follows:
l Remote replication and consistency group can be implemented only between separate
storage arrays, whereas clone can be implemented only within a single storage array.
l A remote replication involves two types of objects: pairs and remote replication
consistency groups. A clone involves only one type of object, namely, pairs.
l A consistency group manages remote replications in it in a unified manner. When a
consistency group implements a synchronization, split, or primary/secondary switchover,
the operation applies to all of the remote replications in the consistency group at the
same time, ensuring data consistency among multiple LUNs.
Answer
The bandwidth refers to the amount of data that is transferred on a channel in a certain period
of time. Its common unit is bit per second (bps). The bandwidth of a computer network refers
to the maximum data rate in the network, that is, the number of bits transferred per second.
When setting the bandwidth, note the following:
l The bandwidth should be set to meet the requirements of actual services. You can
customize a proper bandwidth as required.
l The bandwidth should be set to a value no larger than the actual bandwidth of the customer's network.
If the workload in the system is heavy, the bandwidth for remote replication and LUN copy
among storage arrays can be correspondingly reduced to guarantee the normal running of
ongoing services.
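As a rough, illustrative example (the figures below are hypothetical and not recommendations from this document), the required bandwidth can be estimated from the amount of changed data and the available replication window:
# Hypothetical example values; replace them with figures from the actual service.
changed_data_per_day_gb = 200       # daily amount of changed data to replicate
replication_window_hours = 8        # time available to replicate that data

required_mbit_per_s = (changed_data_per_day_gb * 1024 * 8) / (
    replication_window_hours * 3600
)
print(f"Required bandwidth: about {required_mbit_per_s:.0f} Mbit/s")
# The configured bandwidth should still be no larger than what the
# customer's network can actually provide, as noted above.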
Answer
l If a Fibre Channel network is used for creating a remote link, set Link Type to FC. If an
IP network is used, set Link Type to iSCSI.
l If two storage systems of different versions are used for creating a remote link, initiate
link creation from the storage system of the later version. Otherwise, link creation fails.
Answer
Reserved space stores snapshots generated during asynchronous remote replication. Storage
space is reserved because:
l Copy-on-write (COW) for I/Os newly written to the host during remote replication
occupies the storage pool space of the primary storage array. In addition, COW for data
synchronization I/Os of the secondary storage array occupies the storage pool space of
the secondary storage array.
l If the storage pool cannot provide sufficient space, COW for host I/Os or data
synchronization I/Os of the secondary storage array fails due to insufficient space. As a
result, the remote replication session enters the split state. You are advised to reserve
30% of LUN space in an asynchronous remote replication for copy-on-write.
When configuring asynchronous remote replication, reserve certain space in the storage pool
of the secondary storage array to ensure that sufficient space is provided for COW for host
I/Os or data synchronization I/Os of the secondary storage array during remote replication.
l The size of each data block written by the host must be considered during reserved space planning.
If the COW granularity for host I/Os is 64 KB and the size of each data block written by the host is
smaller than 64 KB, the reserved space for COW must be larger than the amount of data written by
the host during the copy period.
l If an incremental synchronization is implemented after the remote replication session is interrupted
due to manual splitting or external faults, the space required for COW on the secondary storage
array equals the amount of service data delivered by the primary storage array during the
synchronization, or the LUN capacity at most.
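The following Python sketch illustrates the sizing guidance above. The 30% ratio and the 64 KB COW granularity come from this section, while the LUN size and write volumes are assumptions for illustration only.
# Hypothetical example values for illustration only.
lun_capacity_gb = 2000              # capacity of the replicated LUN
reserved_ratio = 0.30               # about 30% of LUN space advised for COW

reserved_space_gb = lun_capacity_gb * reserved_ratio
print(f"Suggested reserved space: {reserved_space_gb:.0f} GB")

# If the host writes blocks smaller than the 64 KB COW granularity, each small
# write can consume a full 64 KB grain, so COW can use more space than the
# amount of data actually written during the copy period.
host_writes_gb = 100                # data written by the host during the copy period
average_write_kb = 8                # assumed typical host block size
cow_granularity_kb = 64
worst_case_cow_gb = host_writes_gb * (cow_granularity_kb / average_write_kb)
print(f"Worst-case COW consumption: about {worst_case_cow_gb:.0f} GB")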
Answer
After the properties of a consistency group are modified, the properties of the pairs in the
consistency group remain unchanged; however, the working properties are the properties of the
consistency group. For example, if a member whose rate is Medium is added to a consistency group
and the rate of the consistency group is changed to High, the displayed rate of the remote
replication remains Medium but the working rate is High. After the member exits the
consistency group, its working rate is Medium again.
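The following Python sketch models this displayed-versus-working behavior in a simplified way; the property names and structures are illustrative only and do not reflect the storage system's internal implementation.
# Illustrative model only; property names and structures are hypothetical.
pair_properties = {"Speed": "Medium"}
group_properties = {"Speed": "High"}

def working_properties(pair, group=None):
    # Group properties override the pair's own properties while the pair
    # belongs to a consistency group; otherwise the pair's own properties apply.
    return {**pair, **group} if group else dict(pair)

print(working_properties(pair_properties, group_properties))  # {'Speed': 'High'}
print(working_properties(pair_properties))                    # {'Speed': 'Medium'}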
Answer
Initial synchronization is recommended to make the data on the primary system consistent
with that on the secondary system. However, if you are absolutely sure that the two sets of
data are exactly the same, you can skip this operation.
Answer
No, you do not need to stop the services, because initial synchronization can be performed
when the primary storage system is processing services.
Answer
If the secondary storage system is powered off or faulty, services running on the primary
storage system are not affected, but the remote replication relationship is interrupted. The
primary storage system records, in the data change log (DCL) bitmap, the changes made after
the remote replication relationship is interrupted. After the secondary storage system recovers, the
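The following Python sketch is a simplified, illustrative model of how a DCL bitmap enables a single incremental synchronization after the secondary storage system recovers; block granularity, names, and structures are assumptions for illustration only.
# Simplified, illustrative model of DCL-based incremental resynchronization.
class DclBitmap:
    def __init__(self, block_count):
        self.changed = [False] * block_count     # one flag per data block

    def mark(self, block_id):
        self.changed[block_id] = True            # primary records a changed block

    def resync(self, primary_blocks, secondary_blocks):
        # Copy only the blocks changed since the interruption, then clear the flags.
        for block_id, dirty in enumerate(self.changed):
            if dirty:
                secondary_blocks[block_id] = primary_blocks[block_id]
                self.changed[block_id] = False

primary = ["a", "b", "c", "d"]
secondary = ["a", "b", "c", "d"]
dcl = DclBitmap(len(primary))

# While the secondary system is down, writes continue on the primary and are tracked.
primary[1], primary[3] = "B", "D"
dcl.mark(1)
dcl.mark(3)

# After the secondary system recovers, one synchronization restores the relationship.
dcl.resync(primary, secondary)
print(secondary)   # ['a', 'B', 'c', 'D']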
Answer
File remote replication and block remote replication use the same license. After purchasing
and importing a remote replication license file, you can create both file and block remote
replications.
You can perform initial synchronization for a remote replication using the external storage
system.
NOTICE
This section is intended for Huawei technical engineers. Use this method only under the
guidance of Huawei technical engineers. If you want to use this method to perform initial
synchronization, contact Huawei technical support engineers for help.
A.1 Introduction
After a remote replication is established between the production center and the DR center,
initial synchronization is automatically started to replicate data from the production center to
the DR center.
A.2 Configuration Operation (Using the LUN Copy Feature of the External Storage System)
You can create a LUN copy on the external storage system to implement initial
synchronization for a remote replication.
A.3 Configuration Operation (Using the LUN Copy Feature of the Production and DR
Centers)
You can create a LUN copy on the storage system of the production center and DR center to
implement initial synchronization for a remote replication.
A.1 Introduction
After a remote replication is established between the production center and the DR center,
initial synchronization is automatically started to replicate data from the production center to
the DR center.
In actual application scenarios, the amount of data in the production center is large (up to a
terabyte level), but the data transmission bandwidth is limited between the DR center and the
production center. As a result, it takes a long time for initial synchronization to complete.
Performing initial synchronization using the external storage system can greatly reduce the
initial synchronization time.
Figure A-1 shows the process of using the external storage system to perform initial
synchronization for a remote replication.
Figure A-1 The process of using the external storage system to perform initial
synchronization
(The figure shows the data being copied from the production storage system to the external storage
system, the external storage system being physically transported to the DR center, the data being
copied to the DR storage system, and the remote replication between the production and DR storage
systems then taking over.)
Methods for performing initial synchronization for a remote replication using the external
storage system vary with the provider of the LUN copy feature:
l Method 1: Use the snapshot feature of the production center and the LUN copy feature
of the production and DR centers and create LUN copies in the production center and the
DR center respectively.
l Method 2: Use the snapshot feature of the production center and the LUN copy feature
of the external storage system and create LUN copies on the external storage system.
Prerequisites
l The network between the production center and the DR center has been established, the remote
storage systems have been configured, and the primary and secondary LUNs have been created.
l A LUN has been created on the external storage system. This LUN has the same capacity
as the primary and secondary LUNs of the remote replication.
l The external storage system is in the production center and connected to the storage
system of the production center. The remote storage system has been configured in the
production center.
l The external storage system is a Huawei product and supports the LUN copy feature.
l The production center supports the snapshot feature.
Procedure
Step 1 Configure the remote replication from the production center to the DR center.
1. Log in to the storage system of the production center and create a remote replication. The
primary LUN of the remote replication is the LUN of the production center and the
secondary LUN is the LUN of the DR center.
The remote replication is created from the production center to the DR center and its
Pair Health Status is Normal.
NOTE
When creating remote replication, select The primary and secondary LUNs have consistent
data and data synchronization is not required.
2. Split the remote replication from the production center to the DR center.
The remote replication is split from the production center to the DR center and its Pair
Running Status is Split.
Step 2 Replicate data from the production center to the external storage system.
1. Create and activate a snapshot for the primary LUN of the production center.
The snapshot is created and activated.
2. Log in to the external storage system and create a full LUN copy. The source LUN of the
LUN copy is the snapshot for the storage system of the production center and the target
LUN is the LUN of the external storage system.
The full LUN copy is created and its Health Status is Normal.
3. Start the full LUN copy.
The full LUN copy is started and its Running Status is Copying. After the full LUN
copy is complete, its Running Status becomes Completed.
4. Delete the full LUN copy.
The full LUN copy is deleted.
5. Log in to the storage system of the production center and deactivate and delete the
snapshot.
The snapshot is deactivated and deleted.
Step 3 Move the external storage system to the DR center.
Step 4 Recover data from the external storage system to the storage system of the DR center.
1. Cancel the write protection for the secondary LUN of the remote replication from the
production center to the DR center.
2. Set the property of the secondary LUN (LUN of the DR center) to write through.
NOTE
In developer mode, run the create lun_copy remote name=test local_role=source lun_id=##
remote_device_id=## remote_lun_id=## is_target_lun_check=no command.
4. Delete the full LUN copy.
Step 5 Start to reclaim differences in the data change log (DCL) for the secondary LUN of the DR
center.
NOTE
Step 6 Wait 5 minutes and then cancel the reclamation. If you fail to cancel the reclamation, retry 5
minutes later.
NOTE
Step 7 Configure the write protection for the secondary LUN of the remote replication from the
production center to the DR center.
Step 8 Set the property of the secondary LUN (LUN of the DR center) to write back.
NOTE
Step 9 Start the remote replication from the production center to the DR center.
----End
Prerequisites
l The network between the production center and the DR center has been established, the remote
storage systems have been configured, and the primary and secondary LUNs have been created.
l A LUN has been created on the external storage system. This LUN has the same capacity
as the primary and secondary LUNs of the remote replication.
l The external storage system is in the production center and connected to the storage
system of the production center. The remote storage system has been configured in the
production center.
l The external storage system is a Huawei product.
l The production center supports the snapshot feature and both the production center and
the DR center support the LUN copy feature.
Procedure
Step 1 Configure the remote replication from the production center to the DR center.
1. Log in to the storage system of the production center and create a remote replication. The
primary LUN of the remote replication is the LUN of the production center and the
secondary LUN is the LUN of the DR center.
The remote replication is created from the production center to the DR center and its
Pair Health Status is Normal.
NOTE
When creating remote replication, select The primary and secondary LUNs have consistent
data and data synchronization is not required.
2. Split the remote replication from the production center to the DR center.
The remote replication is split from the production center to the DR center and its Pair
Running Status is Split.
Step 2 Replicate data from the production center to the external storage system.
1. Create and activate a snapshot for the primary LUN of the production center.
The snapshot is created and activated.
2. Log in to the external storage system and create a full LUN copy. The source LUN of the
LUN copy is the snapshot for the storage system of the production center and the target
LUN is the LUN of the external storage system.
The full LUN copy is created and its Health Status is Normal.
3. Start the full LUN copy.
The full LUN copy is started and its Running Status is Copying. After the full LUN
copy is complete, its Running Status becomes Completed.
4. Delete the full LUN copy.
The full LUN copy is deleted.
5. Log in to the storage system of the production center and deactivate and delete the
snapshot.
The snapshot is deactivated and deleted.
Step 4 Recover data from the external storage system to the storage system of the DR center.
1. Log in to the storage system of the DR center and create a full LUN copy. The source
LUN of the LUN copy is the LUN of the external storage system and the target LUN is
the secondary LUN of the remote replication. Then start the LUN copy.
The full LUN copy is created and started. After the full LUN copy is complete, its
Running Status becomes Completed.
NOTE
Create this LUN copy in developer mode and use command parameter is_target_lun_check=no.
Ensure that the remote replication secondary LUN is in the Read-only state during LUN copy and
LUN copy configuration.
2. Delete the full LUN copy.
Step 5 Start the remote replication from the production center to the DR center.
----End
C Glossary
A
AC power module The module that converts the external AC power supply into the power supply
for internal use.
Application server A service processing node (a computer device) in the network. Application
programs of data services run on the application server.
Asynchronous remote replication A kind of remote replication. When the data on the primary site is
updated, the data does not need to be synchronously updated on the mirroring site to finish the
update. In this way, performance is not reduced due to data mirroring.
B
Backup A periodic operation performed on the data stored in the database for
the purposes of database recovery in case that the database is faulty.
The backup also refers to data synchronization between active and
standby boards.
Bandwidth A range of transmission frequencies that a transmission line or channel can carry in a
network. In fact, it is the difference between the highest and lowest frequencies that the
transmission line or channel can carry. The greater the bandwidth, the faster the data transfer rate.
Baud rate The number of times per second the signal can change on a
transmission line. Commonly, the transmission line uses only two
signal states, making the baud rate equal to the number of bits per
second that can be transferred. The underlying transmission technique
may use some of the bandwidth, so it may not be the case that user
data transfers at the line's specified bit rate.
Bit error A transmission error in which a received bit differs from the bit that was sent.
Bit error rate The percentage of bits that have errors relative to the total number of
bits received in a transmission.
Bonding Bonding binds multiple independent physical network ports into a logical port, which
ensures the high availability of server network connections and improves network performance.
Boundary scan A test methodology that uses shift registers in the output connections
of integrated circuits. One IC often is connected to the next. A data
pattern is passed through the chain and the observed returned data
stream affected by the circuit conditions gives an indication of any
faults present. The system is defined under IEEE standard 1149.1 and
is also often known as JTAG (Joint Test Action Group).
Browser/Server An architecture that defines the roles of browser and server, where the
browser is the service request party and the server is the service
provider.
C
Cache hit ratio The ratio of directly accessed I/O from Cache to all the I/O operation
during the read operation.
Cache prefetch strategy According to the operation in which data has been read or is being read,
the required data is read from a disk into the cache in advance.
Captive Screw After the screw is loosened, screw caps and bolts are not disconnected
from the main body.
Cascading Connect the storage system to more disk enclosures through
connection cables, thus expanding the capacity of the storage system.
CHAP A method to periodically verify the identity of the peer using a 3-way
handshake. During the establishment of a link, the authenticator sends
a "challenge" message to the peer. The peer responds with a value
calculated using a "one-way hash" function. The authenticator checks
the response against its own calculation of the expected hash value. If
the values match, the authentication is acknowledged. CHAP
provides protection against playback attack.
Clone A snapshot technology. The source data is completely copied to generate a data duplicate;
therefore, the duplicate requires the same amount of storage space as the source data. In the
VIS system, it is also called third-mirror break-off snapshot.
Cluster A mechanism adopted to improve system performance. Several devices of the same type
form a cluster. Externally, a cluster appears like a single device; internally, the nodes of the
cluster share the load.
Coffer A technology for ensuring data security and integrity in a storage
system. It is used to store the mission-critical data of the system.
Coffer disk Disks that build up the coffer.
Command device A special LUN through which the host can send inband commands to
storage devices.
Constant prefetch A cache prefetch strategy. The size of the data to be prefetched is the
size as set. This strategy applies to the applications that require
reading data of a fixed size in a certain order. An example is the
streaming media demanded by multiple subscribers who use the same
bit rate.
Controller The core module that processes services in a storage system. It
contains physical components such as system-level CPUs and
memory.
Controller enclosure An enclosure that accommodates controllers and provides storage services. It is
the core component of a storage system and generally consists of components such as controllers,
power supplies, and fans.
Copyback The process of copying the data from the hot spare disk back to the
previous disk when the faulty member disk is restored or replaced by
a new one.
Copying A state of pair. The state indicates that the source LUN data is being
synchronized to the target LUN.
D
Data compress A data reduction method. Data compression involves encoding
information using fewer bits than the original representation.
Data deduplication A specialized data compression technique for eliminating coarse-grained
redundant data, typically to improve storage utilization. In the deduplication process, duplicate
data is deleted, leaving only one copy of the data to be stored, along with references to the unique
copy of data. Deduplication is able to reduce the required storage capacity since only the unique
data is stored.
Data flow A process that involves processing the data extracted from the source
system, such as filtering, integration, calculation, and summary,
finding and solving data inconsistency, and deleting invalid data so
that the processed data meets the requirements of the destination
system for the input data.
Data migration It is the process to cleanse and transform history data, and then load
them to the new system.
Data source A system, database, or file that can make BOs persistent. A data
source can be a database instance or a database user.
Data switch A data switch used for interconnections between controllers.
Dirty data The data that is stored temporarily on cache and has not been written
onto disks.
E
eDevLUN (external device LUN) Logic space created by third-party storage systems.
Engine Two controllers in one enclosure are called Engine.
Expansion Connecting a storage system to more disk enclosures through
connection cables, thus expanding the capacity of the storage system.
Expander module A component used for expanding.
F
Failover The automatic substitution of a functionally equivalent system
component for a failed one. The term failover is most often applied to
intelligent controllers connected to the same storage devices and host
computers. If one of the controllers fails, failover occurs, and the
survivor takes over its I/O load.
Field replaceable unit A unit that can function as a circuit board, part, or component of an
electronic device. It can be quickly and easily removed from a personal computer or other
electronic devices. If an FRU becomes faulty, users can replace it with a new one instead of
sending the entire product or system for maintenance.
G
Gateway The entrance node on another network.
Global system for mobile communications The second-generation mobile networking standard
defined by the European Telecommunications Standards Institute (ETSI). It is aimed at designing a
standard for global mobile phone networks. The standard allows a subscriber to use a phone
globally. GSM consists of three main parts: mobile switching subsystem (MSS), base station
subsystem (BSS), and mobile station (MS).
H
Hard disk A non-volatile storage device that stores digitally encoded data on
rapidly rotating platters with magnetic surfaces. Hard disks generally
offer more storage and quicker access to data than floppy disks do.
Hard disk tray The tray that bears the hard disk.
Hard quota The value that limits the space used in quota configuration. If the used space reaches
the hard quota, further file operations are not allowed.
Heartbeat Heartbeats are the packets, requiring no acknowledgement,
transmitted between two devices. The device can judge the validity
status of the peer device. Heartbeat supports node communication,
fault diagnosis, and event triggering.
Hit ratio The ratio of directly accessed I/Os from cache to all I/Os.
Hot swap In a running system, inserting or removing a blade does not affect
normal running of the system.
I
I/O Data movement process between memory and peripheral devices in
the computer system. I/O is a collective name, indicating the
operations reading data into the memory and writing data to other
places from computer memory.
L
Load balance A method of adjusting the system, application components and data to
averagely distribute the applied I/O or computing requests for
physical resources of the system.
Logical unit The entity is located inside the SCSI object, and can execute I/O
commands. After a SCSI I/O command is sent to an object, the logic
unit inside the object executes this command. Usually, each SCSI
physical disk has one logic unit. A tape drive and array controller may
have multiple logic units, which process different I/O commands.
Each logic unit inside an array controller corresponds to a virtual
disk.
Logical unit The SCSI identifier of the internal logical unit of a target.
number
LUN format The process of writing 0 bits in the data area on the logical drive and
generating related parity bits so that the logical drive can be in the
ready state.
LUN mapping The storage system maps LUNs to application servers so that the application servers
can access the storage resources.
LUN migration A method for migrating the data in a LUN between different physical storage
spaces while ensuring data integrity and uninterrupted host services.
LUN copy The function of copying the original LUN data to one or multiple
target LUNs.
M
Maintenance terminal The computer that is connected through a serial port or management network
port and maintains the storage system.
Management network An entity that provides a means to transmit and process the information
related to network management.
Management network port The network port on the controller enclosure that is connected to the
maintenance terminal. It is provided for the remote maintenance terminal.
N
Node A managed device in the network. For a device with a single frame,
one node stands for one device. For a device with multiple frames,
one node stands for one frame of the device.
O
Out-of-band management A management mode used during out-of-band networking. In the out-of-
band management mode, the management and control information of the network and the bearer
service information of the user network are transmitted through different logical channels.
Original LUN The LUN where the original data of the copied LUNs is located.
Owning controller For a LUN, the user configures the owning controller, that is, assigns the created
LUN to a certain controller. When the owning controller of the LUN is invalid, another controller
manages the LUN automatically. When the owning controller of the LUN is restored, the original
controller manages the LUN again.
P
Power failure protection When an external power failure occurs, the AC PEM depends on the
battery for power supply, which ensures the integrity of the dirty data in the cache.
Pre-copy When the system monitors that a member disk in a RAID group is to
fail, the system copies the data on the disk to a hot spare disk in
advance. This technology is called pre-copy.
Primary storage controller The controller that plays a leading role in controlling the management.
It can perform relevant management operations on the controller enclosure.
Primary/Secondary switchover A procedure during which the two controllers of the storage system
change their master/slave states.
Prior controller For the application server LUN, prior controller means that the
working controller is the owner controller of the corresponding array
LUN.
Q
Quota tree A first-level directory of file system that can be managed with quota.
R
RAID level The application of different redundant types to a logical drive. A
RAID level improves the fault tolerance or performance of the logical
drive but reduces the available capacity of the logical drive. You must
specify a RAID level for each logical drive.
Reconstruction A function of the storage system. It refers to the process of restoring
the data saved in the faulty member disk in a storage pool.
Redundancy The scheme to add more than one channels, elements or parts that
have the same functions with the counterparts in the system or device
at a critical place. When a fault occurs, the system or device can work
well, and the reliability is then improved.
Remote replication A core technology for disaster recovery and a foundation that implements
remote data synchronization and disaster recovery. This technology remotely maintains a set of
data mirrors through the remote data connection function of the storage devices that are separated
in different places. Even when a disaster occurs, the data backup on the remote storage device is
not affected. Remote replication can be divided into synchronous remote replication and
asynchronous remote replication by whether the host that requires mirrors needs the confirmation
information of the remote replication site.
Reverse synchronizing The process of restoring data from the redundancy machine (RM) when the
services of the production machine (PM) are recovering.
Route The path that network traffic takes from its source to its destination.
In a TCP/IP network, each IP packet is routed independently. Routes
can change dynamically.
S
Script A collection of data statements used to perform an operation.
Secondary controller (1) A controller that backs up service and management data of the primary
controller in a clustered system. When the primary controller fails, the secondary controller is
upgraded to the primary controller and takes over the management and services of the controller
enclosure. (2) A controller that backs up the management data of the primary controller in a
block-level array. When the primary controller fails, the secondary controller is upgraded to the
primary controller and takes over the management of the system.
Serial port An input/output location (channel) that sends and receives data to and
from a computer's CPU or a communications device one bit at a time.
Serial ports are used for serial data communication and as interfaces
with some peripheral devices, such as mice and printers.
Service data Data which is saved in data disk. Service data is the data source for
computing check data in parity disk.
Service network port The network port that is used to store services.
SFP optical transceiver A component that can make data conversion between optical signals and
electrical signals and that can receive and transfer data.
Simple network management protocol A network management protocol of TCP/IP. It enables remote
users to view and modify the management information of a network element. This protocol ensures
the transmission of management information between any two points. The polling mechanism is
adopted to provide basic function sets. According to SNMP, agents, which can be hardware as well
as software, can monitor the activities of various devices on the network and report these activities
to the network console workstation. Control information about each device is maintained by a
management information block.
Single point failure A type of failure. Data transmission over a network is
stopped and cannot be recovered automatically if a single point failure occurs.
The point can be an interface, a board, a device, or a link.
Small computer system interface A set of standards for physically connecting
and transferring data between computers and peripheral devices. SCSI is most
commonly used for hard disks and tape drives, but it can connect a wide range
of other devices, including scanners and optical drives.
SmartTenancy A feature of Huawei storage systems. With SmartTenancy, multiple
virtual storage systems can be created in one physical storage system, which
allows tenants to share the same storage hardware resources without affecting
each other's data security and privacy. This feature achieves more flexible,
easy-to-manage, and low-cost shared storage in a multi-protocol unified storage
architecture.
Snapshot A data backup technology through which a fully usable copy of a data
object can be quickly generated. The duplicate contains the image of
the data object at a point in time.
Snapshot copy A copy of the snapshot LUN, which is also a snapshot LUN.
Soft quota The space usage threshold in a quota configuration that triggers an
alarm. When the used space reaches this value, an alarm is triggered; when the
used space drops back below this value, the alarm is cleared.
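For illustration only, the following minimal Python sketch (hypothetical names,
not part of the storage system software) shows the alarm behavior just
described: an alarm is raised when used space reaches the soft quota and is
cleared when usage drops back below it.

    class SoftQuotaMonitor:
        """Hypothetical sketch of soft-quota alarm behavior; not product code."""
        def __init__(self, soft_quota_bytes):
            self.soft_quota = soft_quota_bytes
            self.alarm_active = False

        def update(self, used_bytes):
            # Raise the alarm when usage reaches the soft quota; clear it when usage drops below.
            if used_bytes >= self.soft_quota and not self.alarm_active:
                self.alarm_active = True
                print("ALARM: used space reached the soft quota")
            elif used_bytes < self.soft_quota and self.alarm_active:
                self.alarm_active = False
                print("CLEARED: used space dropped below the soft quota")

    monitor = SoftQuotaMonitor(soft_quota_bytes=80 * 2**30)  # assumed 80 GiB soft quota
    monitor.update(85 * 2**30)  # triggers the alarm
    monitor.update(70 * 2**30)  # clears the alarm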
Storage system An integrated system. It consists of the following parts: controller,
storage array, host bus adapter, physical connection between storage
units, and all control software.
Storage unit An abstract definition of the backup storage media for storing
backup data. A storage unit is associated with actual storage media and is used
to back up data.
Streaming media A transmission mode in which data is consumed (read, listened to, or
watched) while it is being sent.
Stripe The set of strips at corresponding locations of each member extent of
a disk array which uses striped data mapping. The strips in a stripe are
associated with each other in a way (e.g., relative extent block
addresses) that allows membership in the stripe to be quickly and
uniquely determined by a computational algorithm. Parity RAID uses
stripes to map virtual disk block addresses to member extent block
addresses.
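As a rough illustration of that computational mapping (simplified, with assumed
parameters; real arrays add parity rotation and metadata), a virtual block
address can be converted to a member disk and a strip-relative address with
simple arithmetic:

    def map_block(virtual_block, strip_size_blocks, member_count):
        # Which stripe the block falls in, and where inside that stripe.
        stripe_index, offset_in_stripe = divmod(virtual_block, strip_size_blocks * member_count)
        # Which member disk holds the strip, and the offset inside the strip.
        member_index, offset_in_strip = divmod(offset_in_stripe, strip_size_blocks)
        return member_index, stripe_index * strip_size_blocks + offset_in_strip

    # Example: 128-block strips across 4 member disks.
    print(map_block(virtual_block=1000, strip_size_blocks=128, member_count=4))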
Subnet One of the smaller networks that form a larger network according to a
rule, for example, according to different districts. Subnetting facilitates the
management of the large network.
Subnet mask The technique used by the IP protocol to determine which network
segment packets are destined for. The subnet mask is a binary pattern
that is stored in the device and is matched with the IP address.
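A minimal sketch of that matching, using Python's standard ipaddress module
with example addresses only: the mask selects the network portion of an
address, and two addresses belong to the same segment when the masked values
are equal.

    import ipaddress

    # Bitwise view of the match: network address = IP address AND subnet mask.
    addr = int(ipaddress.IPv4Address("192.168.10.37"))
    mask = int(ipaddress.IPv4Address("255.255.255.0"))
    print(ipaddress.IPv4Address(addr & mask))             # 192.168.10.0

    # The same check expressed with the standard library's network objects.
    net = ipaddress.IPv4Network("192.168.10.0/24")
    print(ipaddress.IPv4Address("192.168.10.50") in net)  # True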
Synchronous remote replication A kind of remote replication. When the data on
the primary site is updated, the data must be synchronously updated on the
mirroring site before the update is complete. In this way, the data stored on
the primary and mirroring sites is kept synchronized.
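The ordering constraint can be sketched as follows (hypothetical, greatly
simplified interfaces; not the storage system's API): the host's write is
reported complete only after both the primary and the mirroring site hold the
data.

    class Site:
        """Stand-in for a storage site; writes are kept in a dictionary."""
        def __init__(self, name):
            self.name = name
            self.blocks = {}

        def write(self, lun_id, offset, data):
            self.blocks[(lun_id, offset)] = data  # simulated persistence

    def synchronous_write(primary, mirror, lun_id, offset, data):
        primary.write(lun_id, offset, data)  # update the primary site first
        mirror.write(lun_id, offset, data)   # then update the mirroring site
        return "ack"                         # only now is completion reported to the host

    print(synchronous_write(Site("primary"), Site("mirror"), lun_id=0, offset=4096, data=b"x"))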
T
Target A system component that can receive SCSI I/O operation commands.
Target LUN The target for LUN copy.
Tenant A property of SmartTenancy that represents a virtual storage system
within a physical one. A tenant's private and independent logical resources
mainly include disk domain space, LUNs, file systems, and ports. Tenants
receive complete storage services while remaining isolated from other tenants
in terms of resources and networks, which avoids interference.
Thin provisioning A mechanism that offers on-demand allocation of storage space.
Thin LUN A logical disk that can be accessed by hosts. A thin LUN dynamically
allocates storage resources from the thin pool according to users' actual
capacity requirements.
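A greatly simplified sketch of on-demand allocation (hypothetical; real systems
allocate space in larger grains and track much more state): pool space is
consumed only when a logical block of the thin LUN is first written.

    class ThinLUN:
        def __init__(self, pool_free_blocks):
            self.pool_free = pool_free_blocks  # shared thin-pool capacity, in blocks
            self.mapping = {}                  # logical block -> allocated physical block

        def write(self, logical_block, data):
            if logical_block not in self.mapping:
                if self.pool_free == 0:
                    raise RuntimeError("thin pool exhausted")
                self.pool_free -= 1                 # allocate on first write only
                self.mapping[logical_block] = data  # stand-in for a physical block
            else:
                self.mapping[logical_block] = data  # overwrite reuses the existing allocation

    lun = ThinLUN(pool_free_blocks=100)
    lun.write(12345, b"payload")
    print(len(lun.mapping), lun.pool_free)  # 1 mapped block, 99 free blocks left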
Timing Snapshot The periodic creation of snapshots to continuously protect data.
Topology The logical layout in which computer systems and network components
are interconnected with one another. The topology describes the interconnection
relationships among the components from the viewpoint of communication, but it
does not specify the exact locations of the components or the interconnection
mode.
Trap A type of SNMP message that indicates the occurrence of an event. This
type of message is transmitted to the receiver through UDP. The transmission
process is not completely reliable.
U
User datagram protocol A TCP/IP standard protocol that allows an application
program on one device to send a datagram to an application program on another.
User Datagram Protocol (UDP) uses IP to deliver datagrams. UDP provides
application programs with an unreliable, connectionless packet delivery
service: UDP messages may be lost, duplicated, delayed, or delivered out of
order, and the destination device does not confirm whether a data packet has
been received.
User interface The space in which users interact with a machine.
V
Variable prefetch A cache prefetch strategy. The size of the data to be
prefetched is the prefetch multiple multiplied by the length of a read command.
This strategy applies to applications that read data of variable sizes in a
certain order, or to situations where multiple subscribers read data
concurrently but no fixed prefetch size can be set because the amount of
pre-read data cannot be predicted. An example is streaming media demanded by
multiple subscribers who use different bit rates.
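The rule reduces to a one-line calculation; the numbers below are examples
only, not recommended settings:

    def variable_prefetch_size(read_length_kb, prefetch_multiple):
        # Prefetch size = prefetch multiple x length of the read command.
        return read_length_kb * prefetch_multiple

    print(variable_prefetch_size(read_length_kb=64, prefetch_multiple=4))   # 256 KB prefetched
    print(variable_prefetch_size(read_length_kb=256, prefetch_multiple=4))  # 1024 KB prefetched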
vStore A property of SmartTenancy. In Huawei SmartTenancy, a tenant is
called a vStore, which represents a virtual storage system.
W
Working controller The controller used by the array LUN to read and write a disk.
Write back A caching technology in which the completion of a write request is
signaled as soon as the data is in cache, and actual writing to non-
volatile media occurs at a later time. Write back includes an inherent
risk that an application will take some action predicated on the write
completion signal, and a system failure before the data is written to
non-volatile media will cause media contents to be inconsistent with
that subsequent action. For this reason, good write back
implementations include mechanisms to preserve cache contents
across system failures (including power failures) and to flush the
cache at system restart time.
Write through A caching technology in which the completion of a write request is
not signaled until data is safely stored on non-volatile media. Write
performance with the write through technology is approximately that
of a non-cached system, but if the data written is also held in cache,
subsequent read performance may be dramatically improved.
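The difference between the two policies can be sketched as follows
(hypothetical, greatly simplified; not product code): write back acknowledges
as soon as the data is in cache and flushes to media later, while write through
acknowledges only after the data is on the media.

    class Cache:
        def __init__(self):
            self.cache = {}
            self.media = {}     # stand-in for non-volatile media
            self.dirty = set()  # cache entries not yet on media

        def write_back(self, key, data):
            self.cache[key] = data
            self.dirty.add(key)     # will reach media only at flush time
            return "ack"            # acknowledged before the data is on media

        def write_through(self, key, data):
            self.cache[key] = data
            self.media[key] = data  # written to media immediately
            return "ack"            # acknowledged only after the data is on media

        def flush(self):
            for key in list(self.dirty):
                self.media[key] = self.cache[key]
                self.dirty.discard(key)

    c = Cache()
    c.write_back("lba42", b"data")     # fast acknowledgement; "lba42" stays dirty until flush()
    c.flush()
    c.write_through("lba43", b"data")  # acknowledged only once the media copy exists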
Zone In Fibre Channel, a collection of N_Ports or NL_Ports that can communicate
with each other over the Fibre Channel network. Only the N_Ports or NL_Ports in
the same zone can communicate with each other through the network.
C
CHAP Challenge Handshake Authentication Protocol
CLI Command Line Interface
F
FC Fibre Channel
G
GUI Graphical User Interface
H
HBA Host Bus Adapter
I
IP Internet Protocol
iSCSI Internet Small Computer Systems Interface
L
LUN Logical Unit Number
R
RAID Redundant Array of Independent Disks
S
SAS Serial Attached SCSI
SSD Solid State Drive
U
UI User Interface
UTC Coordinated Universal Time