
bash-3.00# vxdisk list

DEVICE       TYPE         DISK   GROUP   STATUS
disk_0       auto:none    -      -       online invalid
st2540-0_0   auto         -      -       error
st2540-0_1   auto:sliced  -      -       online
st2540-0_2   auto:sliced  -      -       online
st2540-0_3   auto:sliced  -      -       online
st2540-0_4   auto:sliced  -      -       online
st2540-0_5   auto:sliced  -      -       online
st2540-0_6   auto:none    -      -       nolabel
st2540-0_7   auto:none    -      -       nolabel
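
In this listing, "online invalid" typically means the disk carries a label but no VxVM private region, "nolabel" means the LUN has no Solaris label yet, and "error" means VxVM cannot currently read the device. To inspect a single device in more detail (a standard VxVM command; the exact output varies by release), one can run, for example:

bash-3.00# vxdisk list st2540-0_6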

bash-3.00# vxdmpadm getsubpaths all

NAME                     STATE[A]    PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME  CTLR  ATTRS
c1t0d0s2                 ENABLED(A)  -             disk_0       disk        c1    -
c2t202300A0B85BBDE1d0s2  ENABLED(A)  PRIMARY       st2540-0_0   st2540-0    c2    -
c2t202300A0B85BBDE1d1s2  ENABLED(A)  PRIMARY       st2540-0_1   st2540-0    c2    -
c2t202300A0B85BBDE1d2s2  ENABLED(A)  PRIMARY       st2540-0_2   st2540-0    c2    -
c2t202300A0B85BBDE1d3s2  ENABLED(A)  PRIMARY       st2540-0_3   st2540-0    c2    -
c2t202300A0B85BBDE1d4s2  ENABLED(A)  PRIMARY       st2540-0_4   st2540-0    c2    -
c2t202300A0B85BBDE1d5s2  ENABLED(A)  PRIMARY       st2540-0_5   st2540-0    c2    -
c2t202300A0B85BBDE1d6s2  ENABLED(A)  PRIMARY       st2540-0_6   st2540-0    c2    -
c2t202300A0B85BBDE1d7s2  ENABLED(A)  PRIMARY       st2540-0_7   st2540-0    c2    -
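
Each LUN above is visible through a single primary path on controller c2; a fully multipathed setup would also show SECONDARY paths. For an enclosure-level summary, a standard DMP command such as the following can be used (output format varies by release):

bash-3.00# vxdmpadm listenclosure all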

Labeling the new disks assigned to the server

The two new LUNs report nolabel above, so they need a Solaris label before VxVM can initialize them. Piping echo into format lists the disks and flags any that are configured but not yet labeled:


bash-3.00# echo|format

c2t202300A0B85BBDE1d6: configured with capacity of 96.00MB


c2t202300A0B85BBDE1d7: configured with capacity of 96.00MB
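
echo|format only prints the disk menu and exits; the "configured with capacity" lines flag LUNs that format can see but that may still need a label written. One common way to write the label non-interactively is to feed format's label subcommand to a specific disk via -d (an idiom, not taken from this transcript; check format(1M) on your release before relying on it):

bash-3.00# printf "label\ny\nquit\n" | format -d c2t202300A0B85BBDE1d6
bash-3.00# printf "label\ny\nquit\n" | format -d c2t202300A0B85BBDE1d7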

Rescan the disks


bash-3.00# vxdctl enable

bash-3.00# vxdisk list


DEVICE       TYPE         DISK   GROUP   STATUS
disk_0       auto:none    -      -       online invalid
st2540-0_0   auto         -      -       error
st2540-0_1   auto:sliced  -      -       online
st2540-0_2   auto:sliced  -      -       online
st2540-0_3   auto:sliced  -      -       online
st2540-0_4   auto:sliced  -      -       online
st2540-0_5   auto:sliced  -      -       online
st2540-0_6   auto:none    -      -       online invalid
st2540-0_7   auto:none    -      -       online invalid
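
The two new LUNs have moved from "nolabel" to "online invalid": they are labeled now but still lack a VxVM private region, which the next step creates. If a freshly mapped LUN does not appear even after vxdctl enable, a device-level rescan usually helps (standard Solaris and VxVM commands):

bash-3.00# devfsadm
bash-3.00# vxdisk scandisks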

Initialize the disks for use with VxVM


bash-3.00# vxdisksetup -i st2540-0_6
bash-3.00# vxdisksetup -i st2540-0_7
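
To confirm the initialization took effect, list the two disks again; the STATUS column should now read "online" instead of "online invalid" (the TYPE column depends on the format vxdisksetup chose on this release):

bash-3.00# vxdisk list | grep "st2540-0_[67]"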

Verify SCSI-3 persistent reservation compatibility

Check whether the coordinator disks st2540-0_6 and st2540-0_7 are ready for use.

bash-3.00# which vxfentsthdw


/opt/VRTS/bin/vxfentsthdw

bash-3.00# vxfentsthdw

VERITAS vxfentsthdw version 5.1 Solaris


The utility vxfentsthdw works on the two nodes of the cluster. The utility verifies that
the shared storage one intends to use is configured to support I/O fencing. It issues a
series of vxfenadm commands to setup SCSI-3 registrations on the disk, verifies the
registrations on the disk, and removes the registrations from the disk.
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to continue : [y/n] (default: n) y


The logfile generated for vxfentsthdw is
/var/VRTSvcs/log/vxfen/vxfentsthdw.log.23896

Enter the first node of the cluster: vcsnode1


Enter the second node of the cluster: vcsnode2
Enter the disk name to be checked for SCSI-3 PGR on node vcsnode1 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure it's the same disk as seen by nodes vcsnode1 and vcsnode2
/dev/vx/rdmp/st2540-0_6
Enter the disk name to be checked for SCSI-3 PGR on node vcsnode2 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure it's the same disk as seen by nodes vcsnode1 and vcsnode2
/dev/vx/rdmp/st2540-0_6
**************************************************************************

Testing vcsnode1 /dev/vx/rdmp/st2540-0_6 vcsnode2 /dev/vx/rdmp/st2540-0_6


Evaluate the disk before testing ........................................ No Pre-existing keys
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode2 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 .. Passed
Unregister keys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ...... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 .. Passed
Unregister keys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode2 ...... Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
Read from disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 ................. Passed
Write to disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ................ Passed
Read from disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 ................. Passed
Write to disk /dev/vx/rdmp/st2540-0_6 from node vcsnode2 ................ Passed
Reserve disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ................. Passed
Verify reservation for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .... Passed
Read from disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 ................. Passed
Read from disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 ................. Passed
Write to disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ................ Passed
Expect no writes for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 ...... Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode2 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 .. Passed
Write to disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ................ Passed
Write to disk /dev/vx/rdmp/st2540-0_6 from node vcsnode2 ................ Passed
Preempt and abort key KeyA using key KeyB on node vcsnode2 .............. Passed
Test to see if I/O on node vcsnode1 terminated .......................... Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
Preempt key KeyC using key KeyB on node vcsnode2 ........................ Passed
Test to see if I/O on node vcsnode1 terminated .......................... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 .. Passed
Verify reservation for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .... Passed
Verify reservation for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode2 .... Passed
Remove key KeyB on node vcsnode2 ........................................ Passed
Check to verify there are no keys from node vcsnode1 .................... Passed
Check to verify there are no keys from node vcsnode2 .................... Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/st2540-0_6 from node vcsnode1 ... Passed
Verify registrations for disk /dev/vx/rdmp/st2540-0_6 on node vcsnode1 .. Passed
Clear PGR on node vcsnode1 .............................................. Passed
Check to verify there are no keys from node vcsnode1 .................... Passed
ALL tests on the disk /dev/vx/rdmp/st2540-0_6 have PASSED.
The disk is now ready to be configured for I/O Fencing on node vcsnode1.

ALL tests on the disk /dev/vx/rdmp/st2540-0_6 have PASSED.
The disk is now ready to be configured for I/O Fencing on node vcsnode2.

Removing test keys and temporary files, if any...

Note: Test the other coordinator disk, st2540-0_7, in the same way before using it.
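
For example, the same interactive run can simply be repeated with the second disk entered at both prompts (a sketch with intermediate output elided, not a verbatim capture):

bash-3.00# vxfentsthdw
...
/dev/vx/rdmp/st2540-0_7   <- disk to check on vcsnode1
/dev/vx/rdmp/st2540-0_7   <- disk to check on vcsnode2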

bash-3.00# pwd
/software/dvd1-sol_sparc

Set up fencing

Note: Make sure the CP server is accessible from the cluster nodes without a password.
Here, /.rhosts on the CP server has been configured with a '+' entry.
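
A minimal sketch of that setup follows; a lone '+' in /.rhosts trusts every host, which is convenient in a lab but far too permissive for production (prefer explicit host entries or secure mode there):

cpserver# cat /.rhosts
+
bash-3.00# rsh cpserver uname -n   # should print the CP server's hostname with no password prompt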

# installvcs -fencing

Veritas Cluster Server 5.1 Install Program

Copyright (c) 2009 Symantec Corporation. All rights reserved. Symantec, the
Symantec Logo are trademarks or registered trademarks of Symantec Corporation or
its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.

The Licensed Software and Documentation are deemed to be "commercial computer


software" and "commercial computer software documentation" as defined in FAR
Sections 12.212 and DFARS Section 227.7202.
Logs are being written to /var/tmp/installvcs-202311212354PDj while installvcs is in
progress.
Checking communication on vcsnode1 .......................... Done
Checking release compatibility on vcsnode1 .................. Done
Checking VCS installation on vcsnode1 ........................ 5.1

Cluster information verification:


Cluster Name: campus
Cluster ID Number: 1234
Systems: vcsnode1 vcsnode2
Would you like to configure I/O fencing on the cluster? [y,n,q] y
Checking communication on vcsnode1 .......................... Done
Checking release compatibility on vcsnode1 .................. Done
Checking VCS installation on vcsnode1 ........................ 5.1
Checking communication on vcsnode2 .......................... Done
Checking release compatibility on vcsnode2 .................. Done
Checking VCS installation on vcsnode2 ........................ 5.1

Fencing configuration
1) Configure CP client based fencing
2) Configure disk based fencing
3) Configure fencing in disabled mode

Select the fencing mechanism to be configured in this Application Cluster: [1-3,q] 1


Since you have selected to configure CP clients, you would be asked to give details
about CP Servers/Disks to be used as co-ordination points.

Press [Enter] to continue:


Enter the total number of co-ordination points including both CP servers and disks:
[b] (3)
Enter the total number of disks among these: [b] (0) 2

You are now going to be asked for the Virtual IP addresses/hostnames of the CP
Servers.
Note that the installer assumes these values to be identical as viewed from all the
client cluster nodes.

Press [Enter] to continue:


Enter the Virtual IP address/fully qualified host name for the Co-ordination Point
Server #1: [b] 192.168.100.50

Enter the port in the range [49152, 65535] which the Co-ordination Point Server
192.168.100.50 would be listening on or simply accept the default port
suggested: [b] (14250) 14250

Enter fencing mechanism for the disk(s) (raw/dmp): [b,q,?] raw


Select disk number 1 for co-ordination point
1) st2540-0_6
2) st2540-0_7

Enter a valid disk which is available from all the cluster nodes for co-ordination point
[1-2,q] 2

It is strongly recommended to run the 'VxFen Test Hardware' utility located at


'/opt/VRTSvcs/vxfen/bin/vxfentsthdw' in another window before continuing.
The utility verifies if the shared storage you intend to use is configured to support I/O
fencing.
Use the disk you just selected for this verification. Come back here after you have
completed the above step to continue with the configuration.

Press [Enter] to continue:


As per the 'vxfentsthdw' run you performed, do you want to continue with this disk?
[y,n,q] (y)
Select disk number 2 for co-ordination point
1) st2540-0_6

Enter a valid disk which is available from all the cluster nodes for co-ordination point
[1-1,q] 1

It is strongly recommended to run the 'VxFen Test Hardware' utility located at


'/opt/VRTSvcs/vxfen/bin/vxfentsthdw' in another window before continuing.
The utility verifies if the shared storage you intend to use is configured to support I/O
fencing. Use the disk you just selected for this verification. Come back here after you
have completed the above step to continue with the configuration.

Press [Enter] to continue:


As per the 'vxfentsthdw' run you performed, do you want to continue with this disk?
[y,n,q] (y) y

Press [Enter] to continue:


Enter the disk group name for coordinating disk(s): [b] (vxfencoorddg)

CPS based fencing configuration: Coordination points verification

Total number of coordination points being used: 3


CP Server (Port):
1. 192.168.100.50 (14250)
SCSI-3 disks:
1. st2540-0_7
2. st2540-0_6
Disk Group name for the disks in customized fencing: vxfencoorddg
Disk mechanism used for customized fencing: raw

Is this information correct? [y,n,q] (y)


Initializing disk st2540-0_7 on vcsnode1 .................... Done
Initializing disk st2540-0_6 on vcsnode1 .................... Done
Initializing disk group vxfencoorddg on vcsnode1 ............ Done
Deporting the disk group vxfencoorddg on vcsnode1 ........... Done

Press [Enter] to continue:

While it is recommended to have secure communication configured between CP


Servers and CP client cluster, the client cluster must be in the same mode (secure or
non-secure) as the CP servers are.

Since the CP servers are configured in non-secure mode, the installer will not try to
configure the client cluster as a secure cluster.

Press [Enter] to continue:

CPS based fencing configuration: Client cluster verification


CPS Admin utility location: /opt/VRTScps/bin/cpsadm
Cluster ID: 1234
Cluster Name: campus
UUID for the above cluster: {ed3a93a6-1dd1-11b2-a161-00144fd20ea2}
Is this information correct? [y,n,q] (y) y

Updating client cluster information on CP Server 192.168.100.50


Adding the client cluster to the CP Server 192.168.100.50 ... Done
Registering client node vcsnode1 with CP Server 192.168.100.50 ... Done
Adding CPClient user for communicating to CP Server 192.168.100.50 ... Done
Adding cluster campus to the CPClient user on CP Server 192.168.100.50 ... Done
Registering client node vcsnode2 with CP Server 192.168.100.50 ... Done
Adding CPClient user for communicating to CP Server 192.168.100.50 ... Done
Adding cluster campus to the CPClient user on CP Server 192.168.100.50 ... Done

Updating /etc/vxfenmode file on vcsnode1 .................... Done


Updating /etc/vxfenmode file on vcsnode2 .................... Done
Do you want to configure CP Agent on the client cluster? [y,n,q] (y) y

Enter a non-existing name for the service group for CP Agent: [b] (vxfen)
Adding CP Agent via vcsnode1 ................................ Done
Stopping VCS on vcsnode1.................................... Done
Stopping Fencing on vcsnode1................................ Done
Stopping VCS on vcsnode2 .................................... Done
Stopping Fencing on vcsnode2 ................................ Done
Starting Fencing on vcsnode1 ................................ Done
Starting Fencing on vcsnode2................................ Done
Updating main.cf with fencing ............................... Done
Starting VCS on vcsnode1 .................................... Done
Starting VCS on vcsnode2.................................... Done
I/O Fencing configuration ................................... Done
I/O Fencing configuration completed successfully
installvcs log files, summary file, and response file are saved at:

/opt/VRTS/install/logs/installvcs-202311212354PDj
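
Once the installer finishes, the fencing setup can be verified from either node. The sketch below shows the general shape of a customized/CPS /etc/vxfenmode file (an illustration of the 5.1 format, not a verbatim copy from this cluster) and two standard status commands:

bash-3.00# cat /etc/vxfenmode
vxfen_mode=customized
vxfen_mechanism=cps
scsi3_disk_policy=raw
security=0
cps1=[192.168.100.50]:14250
vxfendg=vxfencoorddg

bash-3.00# vxfenadm -d     # reports the fencing mode and per-node membership
bash-3.00# gabconfig -a    # port b membership confirms the fencing driver is running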
