
How to Build a Secure Cluster

Using Trusted Extensions with Oracle Solaris Cluster 4.1

by Subarna Ganguly

This article discusses how to enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster.

Published February 2013 (updated March 2013)

About Trusted Extensions

Support for Trusted Extensions on Oracle Solaris Cluster 4.1 extends the Trusted Extensions concept of security containers in Oracle Solaris 11 (also known as non-global zones) to zone clusters. These special zone clusters, or Trusted Zone Clusters, are cluster-wide security containers. Oracle Solaris Trusted Extensions confine applications and data to a specific security label within a non-global zone. To provide high availability, Oracle Solaris Cluster extends that feature to a clustered set of systems in the form of labeled zone clusters. The zones (or nodes) in these zone clusters are a brand of their own and are known as labeled-branded zones. Oracle Solaris Cluster 4.1 supports both Shared-IP and Exclusive-IP types of labeled-branded zone clusters.

Configuration Assumptions

To enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster using the procedure provided in this article, you must have the following:


- A two-node cluster must already be installed and configured with Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1. For instructions about installing a two-node cluster, see "How to Install and Configure a Two-Node Cluster." For more details, see the Oracle Solaris Cluster Software Installation Guide.
- All repositories that are needed for Oracle Solaris and Oracle Solaris Cluster must be configured on the cluster nodes.
- The cluster hardware must be a supported configuration for the Oracle Solaris Cluster 4.1 software.
- Each node must have two spare network interfaces or virtual interfaces that are used as private interconnects (also known as transports) and at least one other network interface or virtual interface that is connected to the public network subnet. These interfaces are used by the zone cluster. A quick way to check the available interfaces and shared storage is shown after this list.
- Shared disk storage must be connected to the two nodes.

Figure 1 illustrates the configuration discussed in this article.

Note: Although not required, it is recommended that you have console access to the nodes during installation, configuration, and administration.
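For example, the following commands can confirm those prerequisites. This is only a quick sanity check, not part of the original procedure; the interface and DID device names used in this article (net0, net1, net5, d1, d6, d7) will differ on your hardware.

# dladm show-phys        # list physical network interfaces and their state
# cldevice list -v       # list DID devices and the nodes that can access them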

Figure 1

Nomenclature

- Global cluster name: test
- Global cluster node names: ptest1 and ptest2
- Global cluster private interconnects: vnic11 (on net1) and vnic55 (on net5) on each node
- Global cluster public subnet: 10.134.98.0
- Global cluster public interface: net0
- Zone cluster name: TX-zc-xip
- Zone cluster node names: vztest1d and vztest2d
- Zone cluster private interconnects: vnic1 (on net1) and vnic5 (on net5) on each node
- Zone cluster public subnet: 10.134.99.0
- Zone cluster public interface: net3

Prerequisites

To create a cluster-wide security container, in other words a labeled-branded zone cluster, ensure that you meet the following prerequisites:

1. Ensure that the cluster nodes are configured and healthy.

The command in Listing 1 displays the nodes, quorum status, transport information, and other data that reflect the health of the cluster. Per the configuration shown in Figure 1, the node names are ptest1 and ptest2.

# cluster show

=== Cluster Nodes ===

--- Node Status ---

Node Name                Status
---------                ------
ptest1                   Online
ptest2                   Online

=== Cluster Transport Paths ===

Endpoint1           Endpoint2           Status
---------           ---------           ------
ptest1:vnic55       ptest2:vnic55       Path online
ptest1:vnic11       ptest2:vnic11       Path online

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed   Present   Possible
------   -------   --------
2        3         3

--- Quorum Votes by Node (current status) ---

Node Name     Present   Possible   Status
---------     -------   --------   ------
ptest1        1         1          Online
ptest2        1         1          Online

--- Quorum Votes by Device (current status) ---

Device Name   Present   Possible   Status
-----------   -------   --------   ------
d1            1         1          Online

Listing 1

2. Ensure that the appropriate Oracle Solaris and Oracle Solaris Cluster versions are installed on each node.

# more /etc/release
                            Oracle Solaris 11.1 SPARC
  Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
                           Assembled 19 September 2012

# more /etc/cluster/release
                Oracle Solaris Cluster 4.1 0.18.2 for Solaris 11 sparc
  Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

3. Ensure that the correct Oracle Solaris and Oracle Solaris Cluster publishers are set.

You can verify how the publishers are set by using the command shown in the following example.

# pkg publisher
PUBLISHER        TYPE     STATUS   P   LOCATION
solaris          origin   online   F   http://solaris-server.xyz.com/solaris11/dev/
ha-cluster       origin   online   F   http://cluster-server.xyz.com:1234/

4. Ensure that the name-service switch does not use NIS and that the NIS services are disabled (offline). If a Trusted Extensions LDAP server is available, add ldap after files. If there are no Trusted Extensions LDAP servers on the network, set the switch properties shown in Listing 2. (A sketch of adding ldap is shown after Listing 2.)

# svcs -a | grep nis
disabled   11:36:39   svc:/network/nis/domain:default
disabled   11:37:11   svc:/network/nis/client:default

# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
config/netmask astring "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
config/host astring "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/automount
config/automount astring "files"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
# nscfg import -f name-service/switch

Listing 2
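If your site does have a Trusted Extensions LDAP server, the following sketch shows one way to append ldap to the relevant switch properties. This example makes assumptions about your site and is not part of the original procedure; adjust the property values to match your configuration.

# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/host = astring: '"cluster files ldap"'
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: '"cluster files ldap"'
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
# nscfg import -f name-service/switch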
5. Ensure that the /etc/hosts file has the names and addresses of all hosts that the cluster nodes are going to access, including the following:

- Package publishers
- Default routers
- Any required NFS or application servers

6. Add the cluster private host names to the /etc/hosts file.

a. First, check the private host name for each cluster node, as shown in Listing 3 and Listing 4.

On the cluster node ptest1, the node ID is 1. Therefore, the private host name is clusternode1-priv and the IP address is 172.16.2.1.

# more /etc/cluster/nodeid
1
# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.214/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.219/8
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.218/24
vnic11/?                static   ok      172.16.0.65/26
vnic55/?                static   ok      172.16.0.129/26
clprivnet0/?            static   ok      172.16.2.1/24
lo0/v6                  static   ok      ::1/128

Listing 3

On the cluster node ptest2, the node ID is 2. Therefore, the private host name is clusternode2-priv and the IP address is 172.16.2.2.

# more /etc/cluster/nodeid
2
# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.215/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.221/24
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.222/8
vnic11/?                static   ok      172.16.0.66/26
vnic55/?                static   ok      172.16.0.130/26
clprivnet0/?            static   ok      172.16.2.2/24
lo0/v6                  static   ok      ::1/128

Listing 4

b. Then add the following lines to the /etc/hosts file of each node.

# vi /etc/hosts

172.16.2.1   clusternode1-priv
172.16.2.2   clusternode2-priv
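As an optional check (not part of the original steps), you can confirm on each node that the new private host names resolve; the host names are those added above.

# getent hosts clusternode1-priv    # should return 172.16.2.1
# getent hosts clusternode2-priv    # should return 172.16.2.2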


Installing and Enabling Trusted Extensions

1. On each cluster node, install the Trusted Extensions package.

# pkg install system/trusted/trusted-extensions


2. Verify the installation of the Trusted Extensions package, as shown in Listing 5.

# pkg info trusted-extensions
          Name: system/trusted/trusted-extensions
       Summary: Trusted Extensions
      Category: Desktop (GNOME)/Trusted Extensions
         State: Installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.0.0.0.1.0
Packaging Date: Wed Oct 12 14:36:05 2011
          Size: 5.45 kB
          FMRI: pkg://solaris/system/trusted/trusted-extensions@0.5.11,5.11-0.175.0.0.0.1.0:20111012T143605Z
Listing 5

3. To enable access to untrusted systems, such as default routers or NFS servers, allow connections between the cluster nodes and untrusted hosts. Make a copy of the /etc/pam.d/other file before making any changes.

# cp /etc/pam.d/other /etc/pam.d/other.orig
Modify the following entries in the /etc/pam.d/other file.

- pam_roles: Allows remote login by roles
- pam_tsol_account: Allows unlabeled hosts to contact Trusted Extensions systems

# pfedit /etc/pam.d/other
...
account requisite       pam_roles.so.1 allow_remote
...
account required        pam_tsol_account.so.1 allow_unlabeled


4. Enable Trusted Extensions on each node and reboot.

# svcadm enable -s labeld
# svcs -a | grep labeld
online   11:53:49   svc:/system/labeld:default
# init 6

5. Change the hostmodel property to weak on each node, as shown in Listing 6.

When Trusted Extensions is enabled, the hostmodel property for both IPv4 and IPv6 is set to strong.

# ipadm show-prop | more
...
ipv4    hostmodel   rw   strong   strong   weak   strong,src-prio,weak
ipv6    hostmodel   rw   strong   strong   weak   strong,src-prio,weak
...
# ipadm set-prop -p hostmodel=weak ipv4
# ipadm set-prop -p hostmodel=weak ipv6
Listing 6

6. Add the external hosts that the cluster nodes require as admin_low host types, as shown in Listing 7.

These external hosts, such as package publishers, default routers, and NFS or application servers, are used by the cluster nodes to configure zone clusters and to provide network connectivity. Access to these hosts is required even though they do not run Trusted Extensions. On each node, type the following commands. The example in Listing 7 shows the command for the default router.

# netstat -rn

Routing Table: IPv4
  Destination           Gateway            Flags  Ref    Use       Interface
--------------------  -----------------    -----  -----  --------  ----------
default               10.134.98.1          UG     5      2810      sc_ipmp0
10.134.98.0           10.134.98.214        U      7      97        sc_ipmp0
127.0.0.1             127.0.0.1            UH     2      2058      lo0
172.16.0.64           172.16.0.65          U      3      26390     vnic11
172.16.0.128          172.16.0.129         U      3      25821     vnic55
172.16.2.0            172.16.2.1           U      3      173       clprivnet0

Routing Table: IPv6
  Destination/Mask            Gateway            Flags  Ref  Use      If
--------------------------  -----------------    -----  ---  -------  -----
::1                         ::1                  UH     2    0        lo0

# tncfg -t admin_low
tncfg:admin_low> add host=10.134.98.1
tncfg:admin_low> info
    name=admin_low
    host_type=unlabeled
    doi=1
    def_label=ADMIN_LOW
    min_label=ADMIN_LOW
    max_label=ADMIN_HIGH
    host=10.134.98.1/32
tncfg:admin_low> exit
Listing 7
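Any other required external hosts (package publishers, NFS or application servers) can be added to the admin_low template in the same way. The address below is a hypothetical placeholder shown only to illustrate the pattern; substitute your own server addresses.

# tncfg -t admin_low add host=10.134.98.50    # hypothetical package publisher or NFS server address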

7. Add the IP addresses of the cluster transport interfaces (adapters) and of the private host names to the cipso host template.

On ptest1, type the command shown in Listing 8:

# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.214/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.219/8
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.218/24
vnic11/?                static   ok      172.16.0.65/26
vnic55/?                static   ok      172.16.0.129/26
clprivnet0/?            static   ok      172.16.2.1/24
lo0/v6                  static   ok      ::1/128

Listing 8

On ptest2, type the command shown in Listing 9:

# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.215/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.221/24
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.222/8
vnic11/?                static   ok      172.16.0.66/26
vnic55/?                static   ok      172.16.0.130/26
clprivnet0/?            static   ok      172.16.2.2/24
lo0/v6                  static   ok      ::1/128

Listing 9

The output in Listing 8 and Listing 9 shows that the following addresses are configured on the transport endpoints:

172.16.0.65, 172.16.0.129, 172.16.0.66, 172.16.0.130


The following are the private host names hosted on the clprivnet0 interfaces: 172.16.2.1 and 172.16.2.2.

On each node, type the following commands:

# tncfg -t cipso
tncfg:cipso> add host=172.16.2.1
tncfg:cipso> add host=172.16.2.2
tncfg:cipso> add host=172.16.0.65
tncfg:cipso> add host=172.16.0.66
tncfg:cipso> add host=172.16.0.129
tncfg:cipso> add host=172.16.0.130
tncfg:cipso> exit
The entries above are stored in the /etc/security/tsol/tnrhdb file.

Creating and Configuring a Trusted Zone Cluster

To create a Trusted Zone Cluster (a labeled-branded zone cluster), use the Zone Cluster wizard of the clsetup utility. The utility is menu-driven and self-explanatory. If you do not want to use the Zone Cluster wizard of the clsetup utility, follow the steps below.

1. On each node, create a VNIC on each physical interface on which the private interconnects of the global cluster are created. These VNICs are used as the transport interfaces of the Exclusive-IP (XIP, henceforth) zone cluster.

# dladm create-vnic -l net1 vnic1
# dladm create-vnic -l net5 vnic5

net1 and net5 are the physical interfaces on which the transport links of the global cluster are created.

2. On one of the nodes, configure the zone cluster, as shown in Listing 10.

# clzc configure TX-zc-xip
TX-zc-xip: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:TX-zc-xip> create
clzc:TX-zc-xip> set zonepath=/zones/TX-zc-xip
clzc:TX-zc-xip> set brand=labeled
clzc:TX-zc-xip> set enable_priv_net=true
clzc:TX-zc-xip> set ip-type=exclusive
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest1
clzc:TX-zc-xip:node> set hostname=vztest1d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest2
clzc:TX-zc-xip:node> set hostname=vztest2d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> verify
clzc:TX-zc-xip> commit
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: did_update called
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: new cluster TX-zc-xip added
Listing 10

3. On each node, type the following commands:

# tncfg -z TX-zc-xip
tncfg:TX-zc-xip> set label=PUBLIC
tncfg:TX-zc-xip> exit

4. From one of the nodes, install the zone cluster TX-zc-xip. You can do this from the node ptest1.

# clzc install TX-zc-xip

5. On each node, run the txzonemgr utility. Ensure that the DISPLAY environment variable is set.

For example:

# DISPLAY=scc60:2
# export DISPLAY
# txzonemgr

Select the global zone. Then select the option to configure a per-zone name service.

6. To perform the sysid configuration on an Exclusive-IP labeled-branded zone cluster, perform the following steps for one zone cluster node at a time:

a. Boot the zone cluster node.

# zoneadm -z TX-zc-xip boot


b. Unconfigure the Oracle Solaris instance and reboot the zone.

# zlogin TX-zc-xip
# sysconfig unconfigure
# reboot


The zlogin session terminates during the reboot. Issue the zlogin command and progress through the interactive screens.

# zlogin -C TX-zc-xip

c. Open console connections to the zone cluster nodes. Open a new terminal window for each node and follow the interactive sysconfig screens to set up the host name, IP address, LDAP server (if applicable), DNS, and locale. Ensure that you do not enable NIS. When finished, exit the zone console.

d. From the global zone, halt the zone cluster node.

# zoneadm -z TX-zc-xip halt


e. Repeat the preceding steps for the other zone cluster node.

7. From one of the nodes, boot the zone cluster.

# clzc boot TX-zc-xip
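Optionally (this check is not part of the original steps), verify that both zone cluster nodes come up before continuing; the zone cluster name is the one configured above.

# clzc status TX-zc-xip    # both vztest1d and vztest2d should be reported as online and running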

8. Log in to the zone cluster nodes and set the root password.

# zlogin TX-zc-xip
# passwd
# exit


Setting Up IPMP

1. Create IPMP groups on both zone cluster nodes.

On zone cluster node vztest1d, create an IPMP group for the public interface.

# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.192/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g

On zone cluster node vztest2d, create an IPMP group for the public interface.

# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.195/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g

On vztest1d, type the following command:

# ipmpstat -g
GROUP         GROUPNAME     STATE   FDT   INTERFACES
XIPZCipmp0    XIPZCipmp0    ok      --    net3

On vztest2d, type the following command:

# ipmpstat -g
GROUP         GROUPNAME     STATE   FDT   INTERFACES
XIPZCipmp0    XIPZCipmp0    ok      --    net3

2. Add the IP addresses of the transport interfaces and the public-network IP addresses of each zone cluster node to the cipso template.

On vztest1d, run the following:

# ipadm show-addr
ADDROBJ          TYPE     STATE   ADDR
lo0/v4           static   ok      127.0.0.1/8
XIPZCipmp0/v4    static   ok      10.134.99.192/24
vnic1/?          static   ok      172.16.4.1/26
vnic5/?          static   ok      172.16.4.65/26
clprivnet1/?     static   ok      172.16.3.193/26
lo0/v6           static   ok      ::1/128

On vztest2d, run the following:

# ipadm show-addr
ADDROBJ          TYPE     STATE   ADDR
lo0/v4           static   ok      127.0.0.1/8
XIPZCipmp0/v4    static   ok      10.134.99.195/24
vnic1/?          static   ok      172.16.4.2/26
vnic5/?          static   ok      172.16.4.66/26
clprivnet1/?     static   ok      172.16.3.194/26
lo0/v6           static   ok      ::1/128

Log in to the global zone nodes. Add the following addresses to the cipso template using the tncfg command; an example session is shown after the list.

10.134.99.192
10.134.99.195
172.16.4.1
172.16.4.65
172.16.3.193
172.16.4.2
172.16.4.66
172.16.3.194
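For example, on each global cluster node the session might look like the following. This is a sketch that reuses the tncfg pattern shown earlier with the addresses listed above.

# tncfg -t cipso
tncfg:cipso> add host=10.134.99.192
tncfg:cipso> add host=10.134.99.195
tncfg:cipso> add host=172.16.4.1
tncfg:cipso> add host=172.16.4.65
tncfg:cipso> add host=172.16.3.193
tncfg:cipso> add host=172.16.4.2
tncfg:cipso> add host=172.16.4.66
tncfg:cipso> add host=172.16.3.194
tncfg:cipso> exit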


In addition, you can add other external hosts to the cipso template. These external hosts must be trusted and must be hosts that the Trusted Zone Cluster nodes contact or communicate with. For two-way communication, add the public interfaces of the zone cluster nodes to the cipso templates of those external hosts. For example:

# tncfg -t cipso
tncfg:cipso> add host=10.134.99.192

The zone cluster is now ready to be configured for a failover application.

Configuring a Failover Application

The procedure for configuring a failover application is similar to that for configuring a regular zone cluster. Note that pxfs file systems cannot be mounted inside a labeled-branded zone cluster in read-write mode.

The following process discusses how to create a failover resource group in the Trusted Zone Cluster with an IP address resource and a storage resource. It uses an example and makes the following assumptions:


Solaris Volume Manager is used to create a file system for the storage resource. On each global cluster node (ptest1 and ptest2), a non-shared disk slice must be selected to create the local metadb. In this example, each node has the rpool on the local disk c3t0d0 and another local disk, c3t1d0, on which a slice s4 of size 1 GB is reserved. The metadb is created on that slice.

1. Create the metadb on the slice.

# metadb -a -c 3 -f c3t1d0s4
2. Create a device group (testdg) and a file system (/testfs) in the global zone. On one of the nodes, ptest1, select the DID disks that are going to be added to the device group.

This example uses the following disks:

- /dev/did/rdsk/d6
- /dev/did/rdsk/d7

# metaset -s testdg -a -h ptest1 ptest2
# metaset -s testdg -a -m ptest1 ptest2
# metaset -s testdg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7
# metainit -s testdg d0 1 1 /dev/did/rdsk/d6s0
# metainit -s testdg d1 1 1 /dev/did/rdsk/d7s0
# metainit -s testdg d10 -m d0
# metattach -s testdg d10 d1
# newfs /dev/md/testdg/rdsk/d10
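As an optional check (not part of the original procedure), you can verify the mirror and the resulting device group before continuing; the set name testdg is the one created above.

# metastat -s testdg           # d10 should be a mirror with submirrors d0 and d1
# cldevicegroup status testdg  # the testdg device group should be online on one node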

3. Select an IP address to use as a logical host name.

This example uses test-5, which is available for use in the zone cluster.

4. Add the selected IP address and the newly created file system to the Trusted Zone Cluster, as shown in Listing 11.

# clzc configure TX-zc-xip
clzc:TX-zc-xip> add net
clzc:TX-zc-xip:net> set address=test-5
clzc:TX-zc-xip:net> verify
clzc:TX-zc-xip:net> end
clzc:TX-zc-xip> commit
clzc:TX-zc-xip> add fs
clzc:TX-zc-xip:fs> set dir=/testfs
clzc:TX-zc-xip:fs> set raw=/dev/md/testdg/rdsk/d10
clzc:TX-zc-xip:fs> set special=/dev/md/testdg/dsk/d10
clzc:TX-zc-xip:fs> set options=rw,logging
clzc:TX-zc-xip:fs> set type=ufs
clzc:TX-zc-xip:fs> info
fs:
    dir: /testfs
    special: /dev/md/testdg/dsk/d10
    raw: /dev/md/testdg/rdsk/d10
    type: ufs
    options: [rw,logging]
    cluster-control: true
clzc:TX-zc-xip:fs> verify
clzc:TX-zc-xip:fs> end
clzc:TX-zc-xip> commit
clzc:TX-zc-xip> exit

Listing 11

5. Log in to the zone cluster nodes and create the file system mount points.

# zlogin TX-zc-xip
# mkdir /testfs
# reboot

6. Create a resource group (testrg) with the logical host name resource (test-5) and the storage resource (has-res), as shown in Listing 12.

From one of the nodes, log in to the zone cluster.

# zlogin TX-zc-xip
# cd /usr/cluster/bin
# ./clrt register SUNW.HAStoragePlus
# ./clrg create testrg
# ./clrslh create -g testrg -h test-5 test-5
# ./clrs create -g testrg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/testfs has-res
# ./clrg manage testrg
# ./clrg online testrg
# ./clrg status

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
testrg        vztest1d     No           Online
              vztest2d     No           Offline

# ./clrs status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
has-res          vztest1d     Online     Online
                 vztest2d     Offline    Offline

test-5           vztest1d     Online     Online - LogicalHostname online.
                 vztest2d     Offline    Offline

Listing 12

7. Switch the resource group over to the other node and view the result, as shown in Listing 13.

# ./clrg switch -n vztest2d testrg
# ./clrs status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
has-res          vztest1d     Offline    Offline
                 vztest2d     Online     Online

test-5           vztest1d     Offline    Offline
                 vztest2d     Online     Online - LogicalHostname online.

Listing 13

To the above resource group, which contains a network resource and a storage resource, you can add an application resource intended for use in a trusted environment. A sketch of one way to do this is shown below.
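For example, a generic data service (SUNW.gds) resource could wrap the application's start, stop, and probe scripts. The script paths, the port, and the resource name app-res below are hypothetical placeholders, not part of the original article; substitute the values for your own application.

# zlogin TX-zc-xip
# cd /usr/cluster/bin
# ./clrt register SUNW.gds
# ./clrs create -g testrg -t SUNW.gds \
    -p Start_command="/opt/myapp/bin/start" \
    -p Stop_command="/opt/myapp/bin/stop" \
    -p Probe_command="/opt/myapp/bin/probe" \
    -p Port_list="8080/tcp" \
    -p Network_resources_used=test-5 \
    -p Resource_dependencies=has-res \
    app-res
# ./clrs enable app-res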

See Also

Refer to the following links for more information:

- Download Oracle Solaris Cluster
- Access the Oracle Solaris Cluster documentation
- See all Oracle Solaris Cluster technical resources

Also see the following resources:

- Trusted Extensions Configuration and Administration
- Trusted Extensions User's Guide
- Official Oracle Solaris blog

About the Author

Subarna Ganguly has worked at Sun and Oracle for over 12 years, first in the Education and Training group, primarily training customers and internal engineers on Oracle Solaris networking and Oracle Solaris Cluster products, and then as a quality engineer for the Oracle Solaris Cluster product.

Revision 1.4, 03/11/2013
