
HP-UX MC/ServiceGuard storage migration from NetApp 3240 to 3270

Old enclosure to new storage; the old controller is abandoned.
from:
http://h30499.www3.hp.com/t5/System-Administration/Data-Migrationfrom-old-EMC-Storage-to-new-EMC-Storage/td-p/4563344#.UdWMufnI3nI
Step 1:
Back up the root VG (vg00):
#make_tape_recovery -Av
Take a backup of the /etc/lvmtab file and of its strings output:
#cp -p /etc/lvmtab /etc/lvmtab_bak
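For example, to keep the strings output for later comparison (the output file path is only an illustration):
#strings /etc/lvmtab > /tmp/lvmtab_strings.before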
Step 2:
Take a backup of the cluster configuration files under /etc/cmcluster/.
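For example, a simple copy (the target directory is only an illustration):
#mkdir -p /tmp/cmcluster_bak
#cp -Rp /etc/cmcluster/* /tmp/cmcluster_bak/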
Step 3:
Save the output of ioscan -fnC disk for later comparison.
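For example (the output file path is only an illustration):
#ioscan -fnC disk > /tmp/ioscan_disk.before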
Step 4:
Take a backup of all VG information and the VGID of each disk by executing the command below for every disk in the VG.
#xd -j8200 -N16 -tu <disk device filename in the vg>
For example
#xd -j8200 -N16 -tu /dev/rdsk/c4t0d1
Step 5:
Assign LUNs from the new storage to both nodes.
Step 6:
Check that the new LUNs are detected on both nodes by comparing against the ioscan output you saved earlier.
#ioscan -fnC disk
#insf
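One way to compare (the file names are just the ones suggested in Step 3):
#ioscan -fnC disk > /tmp/ioscan_disk.after
#diff /tmp/ioscan_disk.before /tmp/ioscan_disk.after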
Step 7:
#pvcreate <raw device file name>

For ex:

#pvcreate /dev/rdsk/c0t6d1
Step 8:
Check whether any LVs are striped before adding the disk to the VG; the steps below work only for non-striped LVs (a quick check is sketched after the vgextend example).
Allocate the disk to the VG using vgextend.
#vgextend vg01 <new lun device file>
For ex: #vgextend vg01 /dev/dsk/c0t6d1
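To check for striping (the LV name is only an example); striped LVs show a non-zero Stripes value in lvdisplay:
#lvdisplay /dev/vg01/lvol1 | grep -i stripe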
Step 9:

Mirror each LV in the VG onto the new disk.

#lvextend -m 1 <lvname> <new disk device>


for ex:
#lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c0t6d1
Repeat the above step for all remaining LVs (a loop sketch follows).
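If the VG contains many LVs, a small loop can help; this is only a sketch assuming every LV in vg01 is named lvol* and should be mirrored onto the same new disk:
#for lv in /dev/vg01/lvol*; do lvextend -m 1 $lv /dev/dsk/c0t6d1; done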
Step 10:
Check the mirror status of all LVs:
#lvdisplay <lvname>
You will get output like the one below; the Mirror copies field should be 1 and the LV Status must be syncd. If it is not, run the lvsync command (example after the output).
# lvdisplay /dev/vg00/lvol3
--- Logical volumes ---
LV Name             /dev/vg00/lvol3
VG Name             /dev/vg00
LV Permission       read/write
LV Status           available/syncd
Mirror copies       1
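If the LV Status shows stale rather than syncd, resynchronize the mirrors; a minimal example (the LV name is illustrative):
#lvsync /dev/vg01/lvol1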

Step 11:
Once you have confirmed the LVs are mirrored, remove the old disk from the VG.
#lvreduce -m 0 <lvname> <old disk device>
Do this for all remaining LVs.
For ex:
#lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c3t6d0
#vgreduce /dev/vg01 /dev/dsk/c3t6d0 [where /dev/dsk/c3t6d0 is the old disk]
#rmsf -a /dev/dsk/c3t6d0
**Note: Remember to reduce the disk from the VG before running rmsf.

Remove a specific device file:


# rmsf /dev/dsk/c0t3d0
Remove all the device files associated with a device:
# rmsf -a /dev/dsk/c0t3d0
Or specify the device's hardware path:
# rmsf -H 0/0/1/0/0.3.0

Step 12:
Now unpresent the old LUNs from EVA Command View.


Step 13:
Now follow the steps below:
Node1:
node1# vgexport -p -v -s -m /tmp/vg01.map vg01
-p: preview only, do not actually remove the VG
-s: write the VGID read from the disks into the map file, so the disks can be found on the other node
-v: verbose
-m: map file
Now copy this map file to node2:
node1# scp /tmp/vg01.map node2:/tmp


On node2, remove the existing configuration of vg01 and import it with the new map file. Note down the minor number of vg01 before removing it.
Node2:
node2# ll /dev/vg01/group
node2# vgexport -v /dev/vg01
node2# mkdir /dev/vg01
node2# mknod /dev/vg01/group c 64 0x010000
node2# vgimport -v -s -m /tmp/vg01.map vg01
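As a quick sanity check of the import (the VG name is from the example above), confirm the VG now appears in /etc/lvmtab on node2 and that the group file still shows the minor number you noted:
node2# strings /etc/lvmtab
node2# ll /dev/vg01/group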
Step 14:
If your cluster contains a lock disk, you need to change its device file. To check whether your cluster contains a lock disk, run the command below.
#cmgetconf (for more details check the man page)
Create an ASCII file. To generate the cluster ASCII configuration file for clusterA and store it in /tmp/clusterA.ascii, do the following:
For ex: #cmgetconf -c clusterA /tmp/clusterA.ascii

Step 15:
Edit clusterA.ascii, change the cluster lock disk entries, and save it (the relevant parameters are sketched below).
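For reference, the lock disk is set by the FIRST_CLUSTER_LOCK_VG parameter and a FIRST_CLUSTER_LOCK_PV entry under each node. This is only a sketch; the VG name, node names, and device files must match your own new lock LUN, and the device file can differ between nodes:
FIRST_CLUSTER_LOCK_VG    /dev/vglock
NODE_NAME                scp1
  FIRST_CLUSTER_LOCK_PV  /dev/dsk/c0t6d1
NODE_NAME                scp2
  FIRST_CLUSTER_LOCK_PV  /dev/dsk/c0t6d1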
Step 16:
To verify the cluster configuration and package files, do the following:
#cmcheckconf -v -C clusterA.ascii [-P pkg1.config -P pkg2.config]
#cmapplyconf -v -C clusterA.ascii
Step 17:
Start the cluster and packages:
# cmruncl -f -v -n scp1 -n scp2
# cmrunnode
# cmrunpkg -n scp1 -v cdrpkg
# cmmodpkg -v -e scppkg
# cmviewcl -v
# cmmodpkg -v -e -n scp1 scppkg

Total Procedure Summary:

Normally a storage migration comprises these steps.
1. Assign LUNs from the new storage to both nodes.
2. Add the new LUNs to the respective volume groups with vgextend.
3. Use mirroring or pvmove to copy the data from the old storage LUNs to the new LUNs. Prefer mirroring; pvmove is always a little risky (a pvmove sketch follows this list).
4. Remove the mirror copies from the old storage LUNs.
5. vgreduce the old storage LUNs from the respective volume groups.
6. Remove the device files and unpresent the LUNs from the old storage.
7. Take the map file and import it on the other nodes.
8. Reinitialize the cluster lock disk.
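If you choose pvmove instead of mirroring (item 3), the basic form moves all extents from the old PV to the new one; a minimal sketch using the device files from the examples above:
#pvmove /dev/dsk/c3t6d0 /dev/dsk/c0t6d1
pvmove should not be interrupted while it is running, which is one reason mirroring is usually preferred here.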

http://viktorbalogh.net/blog/hp-ux/hp-ux_sysadm/migrating-a-serviceguard-cluster-to-another-storage
http://h30499.www3.hp.com/t5/LVM-and-VxVM/HP-UX-Cluster-Migrationfrom-one-SAN-to-another/td-p/4525391#.UdWRNvnI3nI
