
VxVM General Troubleshooting

1. How to fix a volume that has a plex in DISABLED/RECOVER state?


One of the plexes is in DISABLED/RECOVER and the other is ENABLED/ACTIVE.
v  testvol       ENABLED  ACTIVE   32768000   SELECT  gen
pl testvol-01    testvol     ENABLED  ACTIVE   32768000   CONCAT  -  RW
sd c92t52d1-01   testvol-01  c92t52d1 0          32768000  0  Disk_17  ENA
pl testvol-02    testvol     DISABLED RECOVER  32768000   CONCAT  -  RW
sd c92t55d1-57   testvol-02  c92t55d1 1707622400 32768000  0  Disk_12  ENA
Force the plex into OFFLINE state:
# vxmend -g testdg -o force off testvol-02

(This will take the Plex to DISABLED OFFLINE state)

v  testvol       ENABLED  ACTIVE   32768000   SELECT  gen
pl testvol-01    testvol     ENABLED  ACTIVE   32768000   CONCAT  -  RW
sd c92t52d1-01   testvol-01  c92t52d1 0          32768000  0  Disk_17  ENA
pl testvol-02    testvol     DISABLED OFFLINE  32768000   CONCAT  -  RW
sd c92t55d1-57   testvol-02  c92t55d1 1707622400 32768000  0  Disk_12  ENA
Place into STALE state:
# vxmend -g testdg on testvol-02

(This will change the plex to DISABLED STALE state)

v  testvol       ENABLED  ACTIVE   32768000   SELECT  gen
pl testvol-01    testvol     ENABLED  ACTIVE   32768000   CONCAT  -  RW
sd c92t52d1-01   testvol-01  c92t52d1 0          32768000  0  Disk_17  ENA
pl testvol-02    testvol     DISABLED STALE    32768000   CONCAT  -  RW
sd c92t55d1-57   testvol-02  c92t55d1 1707622400 32768000  0  Disk_12  ENA
If there are other ACTIVE or CLEAN plexes in the volume, reattach the STALE plex to the volume.
If the volume is already ENABLED, resynchronisation of the plex starts immediately; note that the
command waits until the synchronisation completes.
# vxplex -g testdg att testvol testvol-02
# vxprint testvol
v  testvol       gen         ENABLED  32768000  -  ACTIVE
pl testvol-01    testvol     ENABLED  32768000  -  ACTIVE
sd c92t52d1-01   testvol-01  ENABLED  32768000  0  -
pl testvol-02    testvol     ENABLED  32768000  -  ACTIVE
sd c92t55d1-57   testvol-02  ENABLED  32768000  0  -

If there are no other ACTIVE or CLEAN plexes in the volume, make the plex CLEAN

# vxmend -g testdg fix clean testvol-02 (DISABLED/CLEAN)


If the volume is not ENABLED, use the vxvol command to start it, and perform any resynchronisation
of the plexes in the background:
# vxvol -g testdg -o bg start testvol
(If the data in the plex was corrupted, and the volume has no ACTIVE or CLEAN redundant plexes
from which its contents can be resynchronized, it must be restored from a backup or from a
snapshot image)

2. When vxdisk list shows dgdisabled and other disks in the same diskgroup are not imported
Unmount the file systems within this DG and then deport and import the disk group. If this doesn't
work, the only option is to reboot the system.
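A minimal sketch of that first attempt, assuming a single file system on a hypothetical mount point
/testfs (with a matching /etc/vfstab entry) and the dg_test1 diskgroup used below; the worked
example that follows shows a case where this was not enough and a reboot was needed:
# umount /testfs
# vxdg deport dg_test1
# vxdg import dg_test1
# vxvol -g dg_test1 startall
# mount /testfs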
# vxdisk -o alldgs list
DEVICE       TYPE          DISK        GROUP        STATUS
fabric_6     auto:sliced   c90t53d3    dg_test1     online dgdisabled
fabric_7     auto:sliced   -           (dg_test1)   online
# vxdg deport dg_test1
# vxdisk -o alldgs list
fabric_6     auto          -           -            error
fabric_7     auto:sliced   -           (dg_test1)   online

# vxdg import dg_test1


VxVM vxdg ERROR V-5-1-10978 Disk group dg_test1: import failed:
No valid disk found containing disk group
# reboot -- -r
3. vxdisk -o alldgs list shows all the disks, but the diskgroup can't be imported; the import fails
with the error "No valid disk found containing disk group".
Possible causes and fixes:
1. Check the UDID of the disk as seen by Veritas (vxdisk list fabric_0 | grep udid) and compare it with the
actual UDID on the array. If they differ, reboot the system to pick up the new disks.

2. Check the number of enabled configuration copies on each disk in the diskgroup. If no disk has config
state=enabled, the diskgroup does not have a valid configuration to import. Set nconfig=all on the
diskgroup (see the sketch after this list).
3. Try importing by clearing the lock
# vxdg -C -f import testdg
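For cause 2, the number of configuration copies is controlled by the diskgroup's nconfig attribute. The
command below is an assumption about the syntax (verify it against the vxedit man page for your VxVM
version); testdg is a placeholder diskgroup name:
# vxedit -g testdg set nconfig=all testdg (assumed syntax for enabling config copies on all disks)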
4. How to disable VxVM at boot and start it manually?
1. Boot into single user mode
2. Edit /etc/system. Comment out the vx parameters as follows:
*rootdev:/pseudo/vxio@0:0
*set vxio:vol_rootdev_is_volume=1
3. cd /etc/vx/reconfig.d/state.d/; rm *; touch install-db
(This should remove root-done; and prevent vxvm from starting)
4. cp -p /etc/vfstab /etc/vfstab.vxvm; cp -p /etc/vfstab.prevm /etc/vfstab
(back up the current vfstab under a name of your choice, then restore the original pre-VxVM vfstab)
5. init 6
6. After the system is up, start the Volume Manager service manually as follows
# vxiod set 10
# ps -ef |grep vxconfigd. If vxconfigd is not running, then run "/usr/sbin/vxconfigd -m disable"
# vxdctl mode. Should see it is in disabled mode.
# vxdctl init
# vxdctl enable
5. How to recreate diskgroup info?
1. # vxprint -mpvsh -g DISKGROUP >DISKGROUP.out
2. Destroy the diskgroup
3. Create the diskgroup with the same disk names
4. Edit DISKGROUP.out and change the disknames manually if needed
5. # vxmake -g DGNAME -d DISKGROUP.out (to rebuild the config in one go)
6. All the volumes should now be defined and in DISABLED/EMPTY state; plex should be in
DISABLED/EMPTY state; subdisk should be in ENABLED/ACTIVE state
7. Init and start the volume as below:
# vxvol -g dg_dodgeprd4 init active dodgeprd4_data_1
This command will init the volume to active (start the plexes and volumes)
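A consolidated sketch of the sequence above, using hypothetical names (diskgroup testdg, volume
testvol, disks disk01/disk02 on c1t1d0/c1t2d0 that are already initialized for VxVM); adapt the names
to your configuration:
# vxprint -g testdg -mpvsh > /var/tmp/testdg.out
# vxdg destroy testdg
# vxdg init testdg disk01=c1t1d0 disk02=c1t2d0
# vxmake -g testdg -d /var/tmp/testdg.out
# vxvol -g testdg init active testvol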

6. How to recover from splitbrain error while trying to import?


Depending on the configuration, one, several, or all disks in a diskgroup store the diskgroup configuration.
When different configurations are found among these disks, a split-brain situation occurs while importing it.
Try the following steps:
- Decide which disk has the valid config. If you can't decide now, you can decide after running vxsplitlines
with different disk IDs
- Run vxsplitlines -g DG to find out the problem
- Run vxdisk list on good disk and note down its Disk ID
- Run vxsplitlines -g DG -c DISKID to get the exact mismatch
- Import the diskgroup with
# vxdg -o overridessb -o selectcp=DISKID import DG

7. How to get a volume working if it is in DETACHED DETACH state?


The volume went into this state because underlying plexes went offline, putting the volume into
maintenance mode (no I/Os). This gives you a chance to enable individual plexes and figure out
which plex is clean. If you know for sure which plex is clean, you can recover using vxvol start.
# vxvol -g testdg start testvol
Different scenarios where volumes were in different states before they were recovered using vxvol
start:
Scenario 1
# vxprint testvol
v  testvol       fsgen       DETACHED  1048444928  -  DETACH
pl testvol-01    testvol     ENABLED   1048444928  -  ACTIVE
sd c92t72d1-01   testvol-01  ENABLED   1048444928  0  -
pl testvol-02    testvol     DISABLED  1048444928  -  IOFAIL
sd c90t70d1-01   testvol-02  ENABLED   1048444928  0  RELOCATE  -

# vxplex -g testdg dis testvol-02


# vxvol -g testdg start testvol
# vxprint testvol
v  testvol       fsgen       ENABLED   1048444928  -  ACTIVE
pl testvol-01    testvol     ENABLED   1048444928  -  ACTIVE
sd c92t72d1-01   testvol-01  ENABLED   1048444928  0  -
Now attach the plex back to the volume. It should start synchronising again.
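For example, with the same names as above (vxplex att takes the volume name first, then the plex):
# vxplex -g testdg att testvol testvol-02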

Scenario 2
# vxprint testvol
v  testvol       gen         DETACHED  409600  -  DETACH
pl testvol-01    testvol     DISABLED  409600  -  RECOVER
sd c92t58d1-25   testvol-01  ENABLED   409600  0  -
pl testvol-02    testvol     ENABLED   409600  -  ACTIVE
sd c92t52d1-66   testvol-02  ENABLED   409600  0  -
# vxvol -g testdg start testvol
# vxprint testvol
v  testvol       gen         ENABLED   409600  -  ACTIVE
pl testvol-01    testvol     ENABLED   409600  -  ACTIVE
sd c92t58d1-25   testvol-01  ENABLED   409600  0  -
pl testvol-02    testvol     ENABLED   409600  -  ACTIVE
sd c92t52d1-66   testvol-02  ENABLED   409600  0  -

Scenario 3
# vxprint testvol_3
TY NAME          ASSOC        KSTATE    LENGTH  PLOFFS  STATE  TUTIL0  PUTIL0
v  testvol_3     gen          DETACHED  409600  -  DETACH
pl testvol_3-01  testvol_3    DISABLED  409600  -  IOFAIL  -
sd c92t58d1-27   testvol_3-01 ENABLED   409600  0  -
pl testvol_3-02  testvol_3    ENABLED   409600  -  ACTIVE  -
sd c92t52d1-68   testvol_3-02 ENABLED   409600  0  -
# vxvol -g testdg start testvol_3
# vxprint testvol_3
v  testvol_3     gen          ENABLED   409600  -  ACTIVE
pl testvol_3-01  testvol_3    ENABLED   409600  -  ACTIVE  -
sd c92t58d1-27   testvol_3-01 ENABLED   409600  0  -
pl testvol_3-02  testvol_3    ENABLED   409600  -  ACTIVE  -
sd c92t52d1-68   testvol_3-02 ENABLED   409600  0  -

8. A volume has two plexes, one in RECOVER state and the other in STALE state. How do
you recover?
Plex P1 being in RECOVER state indicates it was in the ACTIVE state prior to the failure.
Plex P2 being in STALE state indicates it was not participating in I/O and has stale data.
Run following commands:
# vxmend fix stale P1
# vxmend fix stale P2
# vxmend fix clean P1
# vxrecover -s V1
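After vxrecover starts the volume, resynchronisation of P2 runs as a VxVM task; a hedged suggestion
is to watch its progress from another session with:
# vxtask list (lists the running resynchronisation tasks)
# vxtask monitor <task id> (shows live progress for one task)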

9. A volume is DISABLED and not startable, there are no CLEAN plexes, and the good plex is not
known. How do you recover?
* Take all but one plex offline and set that plex to CLEAN
* Run vxrecover -s
* Verify data on the volume
* Run vxvol stop
* Repeat this for all plexes until you identify the plex with good data (a command sketch of one iteration follows the list).
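A sketch of one iteration, assuming a hypothetical volume testvol in diskgroup testdg with plexes
testvol-01 and testvol-02:
# vxmend -g testdg off testvol-02 (take all but one plex offline)
# vxmend -g testdg fix clean testvol-01 (mark the remaining plex CLEAN)
# vxrecover -g testdg -s testvol (start the volume from that plex)
(verify the data, for example by running fsck and mounting read-only)
# vxvol -g testdg stop testvol (stop the volume again if the data is bad)
# vxmend -g testdg on testvol-02 (bring the other plex back for the next iteration)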
10. How to remove disabled paths from Veritas?
Run vxdctl enable to make sure Veritas has released its grip on the device.
# vxdctl enable
Make sure the device is offlined from Solaris's view.
# luxadm -e offline /dev/rdsk/c2t5006048452A83978d206s2
Clear out the device from Solaris's view.
# devfsadm -Cv
11. How to run vxconfigd in debug mode?
# vxconfigd -k -m enable -x debug_level
(debug_level ranges from 0, no debugging, to 9, the most verbose; a combined example follows the option list)
-x log log all console output to the /var/vxvm/vxconfigd.log file
-x logfile=name use the specified log file instead
-x syslog Direct all console output through the syslog interface
-x timestamp Attach a date and time-of-day timestamp to all messages
-x tracefile=name log all possible tracing information in the given file
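For example, a hypothetical invocation combining several of these options (debug level 9, timestamped
console output logged to the default log file); treat the exact combination as an assumption and verify it
against the vxconfigd man page:
# vxconfigd -k -m enable -x 9 -x log -x timestamp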
12. /var/vxvm/tempdb directory is missing, misnamed, or corrupted
It stores configuration information about imported disk groups. The contents are recreated after a
reboot.
Causes: Directory is missing, misnamed, or corrupted
Solution:
To remove and recreate this directory:
# vxconfigd -k -x cleartempdir

13. To change a disk to sliced format?
EVA80003_10 auto:none online invalid
# /etc/vx/bin/vxdisksetup -i EVA80003_10 format=sliced
EVA80003_10 auto:sliced online
14. To start volume without recovery?
# vxrecover -sn
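For example, restricted to a single hypothetical volume and diskgroup:
# vxrecover -g testdg -sn testvol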
15. To remove "online failing" from vxdisk list output for a good disk
# vxedit -g dgname set failing=off diskname
16. Disk group cannot be imported: Serial Split Brain detected.
The disk group is not imported automatically. The error is:
vxvm:vxconfigd: [ID 457036 daemon.notice] V-5-1-9576 Split Brain. da id is 0.2, while dm id is 0.1 for dm A0D5
May 10 16:34:33 tncdx15 vxvm:vxconfigd: [ID 220643 daemon.error] V-5-1-569 Disk group datadg, Disk c3t21d0s2: Cannot auto-import group
Causes: A disk was being replaced in the array.
A serial split brain condition arises when the "SSB_ID" parameter stored in the private region of every
disk in a diskgroup doesn't match. This could happen if a disk was taken out of the diskgroup (because
of a failure, or to transfer some data to another host).
Solution
First, run "vxsplitlines" on the disk group. This reports which disks are affected by the serial split brain.
# vxsplitlines -g datadg
VxVM vxsplitlines NOTICE V-5-2-2708 There are 1 pools.
The Following are the disks in each pool. Each disk in the same pool
has config copies that are similar.
VxVM vxsplitlines INFO V-5-2-2707 Pool 0.
c3t0d0s2 A0D1
To see the configuration copy from this disk issue
/etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/dmp/c3t0d0s2
To import the diskgroup with config copy from this
disk use the following command
/usr/sbin/vxdg -o selectcp=1141218744.29.tncdx15 import datadg
The following are the disks whose ssb ids don't match in this config
copy
A0D3
A0D5
The output above indicates that disks A0D3 and A0D5 are affected by the split brain. To verify this, run the
following command:
# vxsplitlines -g <disk group> -c <disk name>
For example:
# vxsplitlines -g datadg -c c3t0d0s2
VxVM vxsplitlines INFO V-5-2-2701 DANAME(DMNAME)    || Actual SSB || Expected SSB
VxVM vxsplitlines INFO V-5-2-2700 c3t0d0s2( A0D1 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t1d0s2( A0D2 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t2d0s2( A0D3 )  || 0.2 || 0.1 ssb ids don't match
VxVM vxsplitlines INFO V-5-2-2700 c3t3d0s2( A0D4 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t4d0s2( A0D5 )  || 0.2 || 0.1 ssb ids don't match
VxVM vxsplitlines INFO V-5-2-2700 c3t5d0s2( A0D6 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t6d0s2( A0D7 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t7d0s2( A0D8 )  || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t9d0s2( A0D10 ) || 0.1 || 0.1 ssb ids match
VxVM vxsplitlines INFO V-5-2-2700 c3t16d0s2( A0D12 )|| 0.1 || 0.1 ssb ids match
From the output above it can be seen that A0D3 and A0D5 have a different ssb_id.
The ssb_id can also be verified by running "vxdisk list" on the disk.
# vxdisk list c3t0d0s2
devicetag: c3t0d0
type: auto
hostid: tncdx15
disk: name= id=1141218744.29.tncdx15
group: name=datadg id=1141218774.31.tncdx15
info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags: online ready private autoconfig autoimport
pubpaths: block=/dev/vx/dmp/c3t0d0s2 char=/dev/vx/rdmp/c3t0d0s2
version: 3.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=2 offset=2304 len=71124864 disk_offset=0
private: slice=2 offset=256 len=2048 disk_offset=0
update: time=1178813226 seqno=0.123415
ssb: actual_seqno=0.1
Compare "vxdisk list" outputs of various disks in the diskgroup. It is quite possible that some of the
disks might have similar ssb_id, but it is not necessary that those disks have latest configuration
copy.
To figure out which disk has latest configuration copy, run following command on multiple disks in a
diskgroup.
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c3t0d0s2 >dump_c3t0d0s2
(Check for private slice for proper dumpconfig output)
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c3t2d0s2 >dump_c3t2d0s2
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c3t3d0s2 >dump_c3t3d0s2
From the various dumpconfig outputs, make a note of the following information:
dump_c3t0d0s2
update_tid = 0.1027
config_tid = 0.1355
ssb_id =0.1

dump_c3t2d0s2
update_tid=0.1027
config_tid =0.1357
ssb_id =0.2

dump_c3t3d0s2
update_tid = 0.1027
config_tid = 0.1355
ssb_id=0.1
Here it becomes a bit confusing: dump_c3t2d0s2 has the latest config_tid (0.1357), but at the same
time it has ssb_id 0.2, which doesn't match the expected ssb_id of 0.1.
To clear up the confusion, construct a vxprint-style output from the "vxprivutil dumpconfig" output:

# cat dump_c3t2d0s2 | vxprint -D - -ht

Disk group: datadg


DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT          APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN    STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY   DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST  REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         NVOLUME      KSTATE   STATE
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH    READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH    LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH    [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH    [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH    [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
dg datadg       default      default  55000    1141218774.31.tncdx15

dm A0D1
dm A0D2
dm A0D3
dm A0D4
dm A0D5
dm A0D6
dm A0D7
dm A0D8
dm A0D9
dm A0D10
dm A0D11
dm A0D12
dm A0D13
dm A0D14        SPARE
dm A0D15        SPARE
dm A0D16        REMOVED
dm A0D17
dm A0D18
dm A0D19
dm A0D20
dm A0D21
dm A0D22
v  db001_v       DISABLED ACTIVE   25165824 SELECT fsgen
pl db001_v-01    db001_v       DISABLED RECOVER  25165824 CONCAT RW
sd A0D1-01       db001_v-01    A0D1     0        20971520 0        DIS
sd A0D1-08       db001_v-01    A0D1     20974688 4194304  20971520 DIS
pl db001_v-02    db001_v       DISABLED RECOVER  25165824 CONCAT RW
sd A0D16-01      db001_v-02    A0D16    0        25165824 0        DIS
pl db001_v-03    db001_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-02       db001_v-03    A0D1     20971520 528      LOG      DIS

v  db002_v       DISABLED ACTIVE   6291456  SELECT fsgen
pl db002_v-01    db002_v       DISABLED RECOVER  6291456  STRIPE 2/128 RW
sd A0D2-01       db002_v-01    A0D2     0        3145728  0/0      RLOC
sd A0D3-01       db002_v-01    A0D3     0        3145728  1/0      DIS
pl db002_v-02    db002_v       DISABLED ACTIVE   6291456  STRIPE 2/128 RW
sd A0D17-01      db002_v-02    A0D17    0        3145728  0/0      DIS
sd A0D18-01      db002_v-02    A0D18    0        3145728  1/0      DIS
pl db002_v-03    db002_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-03       db002_v-03    A0D1     20972048 528      LOG      DIS

v  db003_v       DISABLED ACTIVE   8388608  SELECT fsgen
pl db003_v-01    db003_v       DISABLED RECOVER  8388608  CONCAT RW
sd A0D4-01       db003_v-01    A0D4     0        8388608  0        DIS
pl db003_v-02    db003_v       DISABLED RECOVER  8388608  CONCAT RW
sd A0D19-UR-001  db003_v-02    A0D19    0        8388608  0        RLOC
pl db003_v-03    db003_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-04       db003_v-03    A0D1     20972576 528      LOG      DIS

v  db004_v       DISABLED ACTIVE   6291456  SELECT fsgen
pl db004_v-01    db004_v       DISABLED ACTIVE   6291456  CONCAT RW
sd A0D5-01       db004_v-01    A0D5     0        6291456  0        DIS
pl db004_v-02    db004_v       DISABLED ACTIVE   6291456  CONCAT RW
sd A0D20-01      db004_v-02    A0D20    0        6291456  0        DIS
pl db004_v-03    db004_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-05       db004_v-03    A0D1     20973104 528      LOG      DIS

v  db005_v       DISABLED ACTIVE   12582912 SELECT fsgen
pl db005_v-01    db005_v       DISABLED ACTIVE   12582912 CONCAT RW
sd A0D6-01       db005_v-01    A0D6     0        12582912 0        DIS
pl db005_v-02    db005_v       DISABLED ACTIVE   12582912 CONCAT RW
sd A0D21-01      db005_v-02    A0D21    0        12582912 0        DIS
pl db005_v-03    db005_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-06       db005_v-03    A0D1     20973632 528      LOG      DIS

v  db006_v       DISABLED ACTIVE   10485760 SELECT fsgen
pl db006_v-01    db006_v       DISABLED ACTIVE   10485760 CONCAT RW
sd A0D7-01       db006_v-01    A0D7     0        10485760 0        DIS
pl db006_v-02    db006_v       DISABLED ACTIVE   10485760 CONCAT RW
sd A0D22-01      db006_v-02    A0D22    0        10485760 0        DIS
pl db006_v-03    db006_v       DISABLED RECOVER  LOGONLY  CONCAT RW
sd A0D1-07       db006_v-03    A0D1     20974160 528      LOG      DIS

v  repository    DISABLED ACTIVE   20971520 SELECT fsgen
pl repository-01 repository    DISABLED ACTIVE   20971776 STRIPE 6/128 RW
sd A0D8-01       repository-01 A0D8     0        3495296  0/0      DIS
sd A0D9-01       repository-01 A0D9     0        3495296  1/0      DIS
sd A0D10-01      repository-01 A0D10    0        3495296  2/0      DIS
sd A0D11-01      repository-01 A0D11    0        3495296  3/0      DIS
sd A0D12-01      repository-01 A0D12    0        3495296  4/0      DIS
sd A0D13-01      repository-01 A0D13    0        3495296  5/0      DIS
#

Check with the customer whether the generated output looks correct. If it does, you can import the
diskgroup with the configuration present on this disk.
# vxdg -o selectcp=<disk id> import <diskgroup>
For example:
# /usr/sbin/vxdg -o selectcp=1141219312.37.tncdx15 import datadg
(Please note it is quite possible that the diskgroup won't import here. If it fails, add the -Cf options to
vxdg:
# /usr/sbin/vxdg -Cf -o selectcp=1141219312.37.tncdx15 import datadg)
Confirm that the disk group is imported.
# vxdisk list
Start the volume:
# vxvol -g <diskgroup> start <volume name>
(If plexes are in RECOVER state, you need to follow the plex recovery procedure.)
Mount the volume:
# mount -F <fs type> /dev/vx/dsk/dg/vol-name /mount-point
(You may need to run fsck here.)
When the diskgroup is imported, the ssb_id parameter on all the disks is reset to 0.0.
Volume layout
A question that often arises is: what is the layout of a given volume, just by looking at the vxprint output?
Output similar to the examples below should tell you what it is.
layout=concat-mirror
The size of the volume is equal to the size of the main plex; the size of the main plex is equal to that of the
subvolume; all the plexes (mirrors) under the subvolume are the same size, equal to the size of the
subvolume. One main volume with one main plex; the main plex with one subvolume; the subvolume with a
number of plexes (equal to the number of mirrors).

# vxassist -g testdg make testvol 5m layout=concat-mirror,nolog nmirror=3


v  testvol      fsgen        ENABLED ACTIVE 10240 SELECT
pl testvol-04   testvol      ENABLED ACTIVE 10240 CONCAT RW
sv testvol-S01  testvol-04   testvol-L01 1  10240 0    3/3    ENA

v  testvol-L01  fsgen        ENABLED ACTIVE 10240 SELECT
pl testvol-P01  testvol-L01  ENABLED ACTIVE 10240 CONCAT RW
sd disk01-04    testvol-P01  disk01 16240   10240 0    c1t5d2 ENA
pl testvol-P02  testvol-L01  ENABLED ACTIVE 10240 CONCAT RW
sd disk07-05    testvol-P02  disk07 16240   10240 0    c1t4d2 ENA
pl testvol-P03  testvol-L01  ENABLED ACTIVE 10240 CONCAT RW
sd disk06-06    testvol-P03  disk06 16800   10240 0    c1t3d0 ENA
layout=stripe-mirror (striped-mirror, RAID 1+0)
The size of the volume is equal to the size of the main plex; the size of the main plex is equal to the total of
the subvolumes; the plexes (mirrors) under each subvolume are the same size, equal to the size of that
subvolume. One main volume with one main plex; the main plex with more than one subvolume (equal to
the number of columns); each subvolume with more than one plex (equal to the number of mirrors).
# vxassist -g testdg make testvol 5m layout=stripe-mirror,nolog stripeunit=128 ncolumns=3
v  testvol      fsgen        ENABLED ACTIVE 10240 SELECT testvol-03
pl testvol-03   testvol      ENABLED ACTIVE 10368 STRIPE 3/128 RW
sv testvol-S01  testvol-03   testvol-L01 1  3456  0/0  2/2    ENA
sv testvol-S02  testvol-03   testvol-L02 1  3456  1/0  2/2    ENA
sv testvol-S03  testvol-03   testvol-L03 1  3456  2/0  2/2    ENA
v  testvol-L01  fsgen        ENABLED ACTIVE 3456  SELECT
pl testvol-P01  testvol-L01  ENABLED ACTIVE 3456  CONCAT RW
sd disk01-04    testvol-P01  disk01 14560   3456  0    c1t5d2 ENA
pl testvol-P02  testvol-L01  ENABLED ACTIVE 3456  CONCAT RW
sd disk02-04    testvol-P02  disk02 14560   3456  0    c1t2d0 ENA

v  testvol-L02  fsgen        ENABLED ACTIVE 3456  SELECT
pl testvol-P03  testvol-L02  ENABLED ACTIVE 3456  CONCAT RW
sd disk07-05    testvol-P03  disk07 14560   3456  0    c1t4d2 ENA
pl testvol-P04  testvol-L02  ENABLED ACTIVE 3456  CONCAT RW
sd disk04-04    testvol-P04  disk04 14560   3456  0    c1t3d1 ENA

v  testvol-L03  fsgen        ENABLED ACTIVE 3456  SELECT
pl testvol-P05  testvol-L03  ENABLED ACTIVE 3456  CONCAT RW
sd disk06-06    testvol-P05  disk06 15120   3456  0    c1t3d0 ENA
pl testvol-P06  testvol-L03  ENABLED ACTIVE 3456  CONCAT RW
sd disk05-04    testvol-P06  disk05 14560   3456  0    c1t5d0 ENA

layout=mirror-stripe (mirrored-stripe, RAID 0+1)
The size of the volume is equal to the size of the main plexes, and all the plexes are the same size; the size
of each main plex is equal to the total of its individual subdisks. One main volume with multiple plexes
(equal to the number of mirrors); each plex with as many subdisks as needed to make up the required plex
size.

# vxassist -g testdg make testvol 5m layout=striped,nolog nmirror=2 stripeunit=128 ncolumn=3


v  testvol      fsgen        ENABLED ACTIVE 10240 SELECT
pl testvol-01   testvol      ENABLED ACTIVE 11856 STRIPE 3/128 RW
sd disk01-01    testvol-01   disk01 10640   3920  0/0  c1t5d2 ENA
sd disk07-03    testvol-01   disk07 10640   3920  1/0  c1t4d2 ENA
sd disk06-01    testvol-01   disk06 11200   3920  2/0  c1t3d0 ENA
pl testvol-02   testvol      ENABLED ACTIVE 11856 STRIPE 3/128 RW
sd disk02-02    testvol-02   disk02 10640   3920  0/0  c1t2d0 ENA
sd disk04-02    testvol-02   disk04 10640   3920  1/0  c1t3d1 ENA
sd disk05-02    testvol-02   disk05 10640   3920  2/0  c1t5d0 ENA
