
Enable the TDEV for RecoverPoint

Production TDEVs stay only in their normal production SG, but they are enabled for RecoverPoint (they get exposed to RecoverPoint by the mirrored masking view).

The HQ-RPA_CLUSTER_SG contains only journals or local replicas.

Create the target TDEV in Columbia. Add it to the DC2-RPA-CLUSTER_SG.


Enable the target TDEV for RecoverPoint too.
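
For reference, the SG change for the target TDEV can also be scripted with SYMCLI instead of SMC. This is only a sketch: the SID (1234) and device ID (0ABC) are placeholders, and exact symaccess syntax may vary by Solutions Enabler version.

    # Add the new target TDEV (placeholder device 0ABC) to the RPA cluster SG in Columbia
    symaccess -sid 1234 -type storage -name DC2-RPA-CLUSTER_SG add devs 0ABC

    # Confirm the SG contents afterward
    symaccess -sid 1234 -type storage -name DC2-RPA-CLUSTER_SG show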

Unrelated note: you can look at a TDEV and go to "Flags" to see "Device User Pinned". If the device is pinned, FAST can't move it to another pool (a SYMCLI check is sketched after these notes).
DO NOT check "Add as clean". That option tells RecoverPoint the target already has the same data as the source, so it would skip the initial synchronization.
When adding new replication sets, RecoverPoint will briefly show errors; these are just initial cosmetic errors and will go away in a few minutes.
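
As a quick alternative to the SMC Flags view, the pinned state mentioned above can also be checked and changed from SYMCLI. A sketch only, with placeholder SID and device ID; output formatting and options may vary by Solutions Enabler version.

    # Show device details; look for the "Device User Pinned" flag in the output
    symdev -sid 1234 show 0ABC

    # Pin or unpin a device for FAST VP (pinned devices are not moved to another pool)
    symdev -sid 1234 pin 0ABC
    symdev -sid 1234 unpin 0ABC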

RPA Mirror Masking Views:


All RPAs must be able to access the same production FA port as each production TDEV. This is accomplished by creating a unique "RPA Mirror" Masking View with Symmetrix Mgmt Console (SMC) for each production Host Masking View (MV). This unique RP-Mirror_MV will be structured the same as the Host MV with the exception of the Initiator Group (IG), which will be replaced with the RPA_Cluster_IG.
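
A sketch of what one such RP-Mirror MV could look like in SYMCLI, reusing the host view's SG and PG but substituting the RPA cluster IG. All object names and the SID below are placeholders; verify the symaccess syntax against your Solutions Enabler version.

    # Host MV (placeholders): HOST_IG + HOST_PG + HOST_SG
    # The RP-Mirror view reuses the same SG and PG but swaps in the RPA cluster initiator group
    symaccess -sid 1234 create view -name HOST_RP-Mirror_MV -ig RPA_Cluster_IG -pg HOST_PG -sg HOST_SG

    # Verify the new view
    symaccess -sid 1234 show view HOST_RP-Mirror_MV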

Fixing the RecoverPoint cosmetic error (working with Dave Thibodeau on 12/7/2012 to finalize VMAX / RPA configs):
This happens because the other MVs have different PGs than the RPA PG. EMC should eventually fix this in a future release.
To get rid of the "cosmetic" error about the two TDEVs that are in both the SRM and DMZ clusters, make a new "fake" MV for the DMZ cluster, even though those devices are already covered by the SRM MV.
Then, for the two TDEVs in the DMZ cluster that are not in RecoverPoint, enable them for RPA but don't actually do anything with them.

Go to the TDEV, right click - Replication - RecoverPoint - Enable for RecoverPoint


(It doesn't hurt anything to enable a TDEV; it just makes the device available for RecoverPoint to select, but we will never select these.)
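
Tagging a device for RecoverPoint can reportedly also be done through symconfigure; this is an assumption from memory rather than something confirmed in these notes, so verify the attribute keyword and exact syntax against your Solutions Enabler documentation before using it. The SID and device ID are placeholders.

    # Assumed syntax: tag a device for RecoverPoint (verify the RCVRPNT_TAG keyword for your SE version)
    symconfigure -sid 1234 -cmd "set dev 0ABC attribute=RCVRPNT_TAG;" commit

    # Assumed syntax: remove the RecoverPoint tag again
    symconfigure -sid 1234 -cmd "set dev 0ABC attribute=NO_RCVRPNT_TAG;" commit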

Distributed Consistency Groups (on 12/7/2012):


Can have a max of 8 CGs with distributed groups.
Note: for any two CGs with parallel bookmarks (HQ-DBPRD & HQ-DBPRD-os-data), keep them on the same primary RPAs and use the same configuration on the distributed consistency groups.
Example: HQ-APPRD2
Currently the primary RPA is RPA 2.
Go to Policy - Advanced.
Checkmark "Distribute Group".

From field experience, we know you want to distribute across either one RPA or four RPAs. You can do two or three, but you aren't getting much of a benefit until the fourth RPA, which has a huge impact.
Enabling distribution will cause a brief pause and a "short sweep".
On the next consistency group, make the primary RPA a different one than the first.

Parallel Bookmarks (in case we use Replication Manager in the future):

Sometimes people will checkmark "Set snapshot consolidation policy", but by default snapshots consolidate after a month anyway. If you don't checkmark the box here, there is still a default setting on the consistency group that will apply. We did not checkmark this box.
Now to review, the next icon (Group Sets) shows the frequency is 30 minutes, and we never want to go lower than that.

Dave says we need Replication Manager to quiesce the database. We may not be getting any benefit from the parallel bookmarks without RM quiescing the database every 30 minutes.

For GLMES: since it already has 45% journals but still only 13 hours of protection, we could consider a snapshot consolidation policy.
Go to the remote replica - Policy - Protection - Enable RecoverPoint Snapshot Consolidation.
Start out by not being too aggressive: consolidate snapshots that ......no, this won't work.....we're writing a ton of data, and we don't have a long protection window anyway.

REMOVING / DELETING TDEVS THAT WERE IN RECOVERPOINT:


1. Highlight the TDEVs - right click - Remove from RecoverPoint.
2. Storage Group maintenance - remove them from their server SG - be sure to check the UNMAP box.
3. Unbind the TDEVs from the storage pool.
4. Now they will show up in the UNBOUND devices. If a device is a meta, dissolve the meta.
5. Finally, now that the meta is dissolved, you can delete the individual TDEVs.
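
Steps 2 through 5 can also be done with SYMCLI. A sketch only, with placeholder SID, SG, pool, and device names; meta and unmap behavior can differ between Enginuity / Solutions Enabler versions, so verify before running.

    # 2. Remove the devices from their server SG and unmap them
    symaccess -sid 1234 -type storage -name SERVER_SG remove devs 0ABC -unmap

    # 3. Unbind the TDEV from its thin pool
    symconfigure -sid 1234 -cmd "unbind tdev 0ABC from pool ThinPool1;" commit

    # 4. If the device is a meta, dissolve it
    symconfigure -sid 1234 -cmd "dissolve meta dev 0ABC;" commit

    # 5. Delete the now-individual device
    symconfigure -sid 1234 -cmd "delete dev 0ABC;" commit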
