
RH436 Clustering and Storage Mgmt

RH436 6.2 en-1-20120720


Errata and Commentary
Final: Submitted to Curriculum
Page 1
Updated 08/20/2013
Page / Type: Student Guide
Unit 1
p12
Connecting to Your Virtual Machines
A console connection (e.g., virt-manager, virt-viewer or virsh console) is required to view boot sequence
messages during a cluster node reboot. SSH terminal windows are only available when the network is up
and configured, but additionally allow window row, column and font resizing, and text cut-and-paste
between cluster nodes. For automatic text-entry replication on multiple systems (i.e., for tasks to be typed
identically on each node), the KDE Konsole terminal and the EPEL clusterSSH application have this feature.
Unit 1
p15 (277)
Setting up your environment, Step 11.
The Note about configuring an .ssh/config is useful and timesaving. It allows desktopX's student user to
SSH as root to each cluster node. This is the only access that is required to perform the course exercises.
Add StrictHostKeyChecking no to each host entry to avoid having to clear keys when nodes are rebuilt.
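A minimal ~/.ssh/config sketch of what that Note describes. The host names and the optional known-hosts line are illustrative assumptions, not taken from the course materials:

```
# ~/.ssh/config on desktopX (illustrative; adjust host names to the classroom)
Host node1 node2 node3 node4
    User root
    StrictHostKeyChecking no
    # UserKnownHostsFile /dev/null   # optional: never record keys for rebuilt nodes
```

With an entry like this, `ssh node1` connects as root without a host-key prompt after each rebuild.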
Unit 1
p15 (277)
Step 11.
For completeness, desktopX's student user should be able to SSH as student to each node without being
queried for a password. To do so, correct the SELinux context on student's ~/.ssh/authorized_keys.
Repeat from the student home directory on all nodes. The correct SELinux context is ssh_home_t.
[student@nodeY ~]$ restorecon .ssh/authorized_keys
Unit 1
p15 (276,277)
Step 11.
The instructions state, "Once the script is complete, boot the newly created cluster nodes." This is
unnecessary; the lab-build-cluster script automatically started the newly created nodes. Do not perform the
multiple virsh start nodeX commands shown on p277.
Unit 1
p15 (276,277)
At any time during the course, systems can be rebuilt to restart exercise practice from scratch. Choose the
following steps depending on how much of the infrastructure needs to be rebuilt.
(optional) Reinstall the full desktop using PXE boot; select GLS Workstation from the menu.
(optional) Prepare desktopX using "Setting up your Environment", p14 (275). Start with
this exercise to rebuild the master clusterbase image when necessary.
Rebuild the virtual machine snapshots from the master image:
(with luci) [root@desktopX ~]# lab-build-cluster -1234
(w/o luci)  [root@desktopX ~]# lab-build-cluster -123
Delete the stale keys for replaced nodes from the known_hosts file (with vi, or ssh-keygen -R nodeN):
[student@desktopX ~]$ vi .ssh/known_hosts
Prepare the iSCSI targets on node4:
[root@node4 ~]# lab-setup-targets
Prepare the iSCSI devices and multipathing on every other node:
[root@node{123} ~]# lab-setup-iscsi -12m
Continue cluster setup using "Create a New Three Node Cluster", p93 (298).
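The per-node preparation steps above repeat the same command on node1 through node3. A simple shell loop can template that; this sketch only echoes the command it would run (replace echo with ssh root@${N} to actually execute; node names and the lab-setup-iscsi invocation are taken from the steps above):

```shell
# Print the per-node command instead of running it remotely.
for N in node1 node2 node3; do
    echo "${N}: lab-setup-iscsi -12m"
done
```

The same pattern appears later in the GlusterFS exercises (gluster peer probe in a for loop).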
Unit 2
p29 (281)
Step 4.
Confirm that the server-side configuration is correct and publishing the correctly configured LUNs:
[root@node4 ~]# tgt-admin -s
Legend
Information: extended commentary or notice.
Important: significant commentary or warning.
Edit: typographic corrections, for improved clarity or accuracy.
Error: the exercise or associated software does not perform as documented.
Research: a reference or a reminder for additional research.
No Machine: relevant when running in the Virtual Training environment.
New: added or modified since the previously published errata.
Unit 2
p33 (281)
Correcting a misnamed iSCSI initiator requires stopping both the iscsi and iscsid services so that the
new name is recognized. The iscsid service need only be stopped; it will restart itself when required:
[root@nodeY ~]# service iscsi stop
[root@nodeY ~]# service iscsid stop
[root@nodeY ~]# service iscsi start
Unit 3
p53 (287)
Step 1.
The results for previous device events are seen with udevadm info --export-db. This database can be
queried to correlate the kernel device path to the device file name. For a device removal event, udev
queries for the name of the device file to be deleted. The purpose of this query is to research the unique
device identification manually assigned in the tgtd configuration and use it in a custom udev rule.
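A sketch of what such a custom rule might look like. The rules-file name, the matched serial value, and the script path are hypothetical; only the general shape (match a remove event on a block device by its unique identification, then run a handler) follows the exercise:

```
# /etc/udev/rules.d/99-iscsi-cleanup.rules  (file name and values illustrative)
# On removal of the block device whose SCSI serial matches the identifier
# assigned in the tgtd configuration, run a cleanup script with the kernel name.
ACTION=="remove", SUBSYSTEM=="block", ENV{ID_SCSI_SERIAL}=="disk1", \
    RUN+="/usr/local/sbin/iscsi-cleanup.sh %k"
```

The ENV{...} key to match can be confirmed against the udevadm info --export-db output described above.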
Unit 4
p65 (289)
Step 4.
Timeout setting descriptions are in the well-commented /etc/iscsi/iscsid.conf file (no man page). Setting
2-second timeouts promotes rapid failure detection during class, but is inappropriately short for production.
For information on the relationships between different timeout settings and configuring to include SCSI
hardware failover, see RHEL 6 Storage Administration Guide 24.16.2, iSCSI Settings With dm-multipath.
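For reference, the classroom-style 2-second values correspond to iscsid.conf parameters like these (a fragment, assuming the stock RHEL 6 parameter names; the production default for replacement_timeout is much longer, 120 seconds):

```
# /etc/iscsi/iscsid.conf -- aggressive classroom values; too short for production
node.session.timeo.replacement_timeout = 2
node.conn[0].timeo.noop_out_interval = 2
node.conn[0].timeo.noop_out_timeout = 2
```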
Unit 4
p65 (289)
Step 5.
Partitions used to build a multipath device must be the same on all nodes for problem-free configuration.
For example, fdisk -cul /dev/sdX must show the same disks (e.g., sda must be sda on all nodes, not sda on
one node and sdc on another). Be sure to perform iSCSI discovery and target login in the same order on each node.
Unit 4
p68 (291)
Step 5.
Be patient. The touch command may appear to hang, and multipath -ll may take time to display.
Unit 5
p89 (295)
The "Create New Cluster" web page has a typo:
Replace "Reboot Rodes Before Joining Cluster"
with "Reboot Nodes Before Joining Cluster"
Unit 5
p89 (295)
Using luci to install packages, reboot nodes, and configure the initial cluster nodes all at once can result in
unfinished nodes that are not cluster members if the systems reboot during the process. To avoid this
problem, leave "Reboot Nodes Before Joining Cluster" unchecked. It will be necessary to reboot all of the
KVM nodes after the cluster is built (or you will experience erratic cluster node behavior later).
Unit 5
p89 (295)
If the "Reboot Nodes Before Joining Cluster" checkbox is checked during installation, using luci to install
packages, reboot nodes, and configure the initial cluster nodes frequently results in unfinished nodes that
are not cluster members. Commonly, node1 is a cluster member while node2 is not. If this has occurred,
finish the install with these tasks; use the working cluster member as the node from which to copy:
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
Then, in the luci web interface, select Manage Clusters, click clusterX to bring up the Nodes
screen, select the checkbox next to the failed node, and press Join Cluster. Wait patiently for the new node
to join and display. Refresh the tab if necessary.
Unit 5
p89 (295)
The previous unfinished cluster installation problem has left incorrect chkconfig settings on the two nodes.
When both nodes are joined and the cluster is stable with services running, fix the chkconfig settings:
[root@node1 ~]# yum install -y ccs
[root@node1 ~]# ccs -h node1 --startall
[root@node1 ~]# chkconfig --list
[root@node2 ~]# chkconfig --list
Unit 5
p89 (295)
A successful installation procedure results in these service settings on all nodes:
clvmd        on
cman         on    (may also start fenced, qdiskd, groupd, dlm_controld, gfs_controld)
corosync     off   (OK to be off; started by cman when cman starts)
gfs2         on
messagebus   on    (also started by ricci)
modclusterd  off   (OK to be off; started on demand by ricci/oddjob)
oddjob       off   (OK to be off; started by ricci when ricci starts)
rgmanager    on
ricci        on
saslauthd    off   (also started by ricci)
Unit 5
p89 (295)
Certain commands, introduced in later units, help troubleshoot unfinished configurations. After ensuring
that the ccs package is installed on all nodes (including node4), these are useful, each for a specific purpose:
To only copy the latest configuration file to all nodes (where ricci must be running), with no further action:
[root@node4 ~]# cman_tool version -r
To copy the configuration file to other nodes when all nodes are already cluster members:
[root@node4 ~]# ccs -h node1 --sync
To copy the configuration file to other nodes and load the new configuration now; use --check to see if
activation succeeded or is needed:
[root@node4 ~]# ccs -h node1 --sync --activate
[root@node4 ~]# ccs -h node1 --check
To start cluster services after manually confirming that cluster.conf is on all nodes and is the same version.
Do not combine this option with --sync --activate on the same command line invocation:
[root@node4 ~]# ccs -h node1 --startall
To stop all cluster services before attempting to re-distribute the configuration, use:
[root@node4 ~]# ccs -h node1 --stopall
Unit 5
p93 (297)
Correction in the Before you begin... steps, third bullet.
Replace [root@nodeY ~]# lab-setup-iscsi -m
with [root@nodeY ~]# lab-setup-iscsi -12m
Unit 5
p93 (297)
Using luci to install packages, reboot nodes, configure and start the initial cluster nodes frequently results in
unfinished nodes that are not cluster members. Commonly, node1 is a cluster member while the other nodes
are not. Finish the install with these tasks; use the working cluster member as the node from which to copy:
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
[root@node1 ~]# scp /etc/cluster/cluster.conf node3:/etc/cluster/
Then, in the luci web interface, select Manage Clusters, click clusterX to bring up the Nodes
screen, select the checkbox next to the two failed nodes, and press Join Cluster. Wait patiently for the new
nodes to join and display. Refresh the tab if necessary.
Unit 5
p93 (297)
The previous unfinished cluster installation problem has left incorrect chkconfig settings on all nodes.
When the three nodes are joined and the cluster is stable with services running, fix the chkconfig settings:
[root@node1 ~]# yum install -y ccs
[root@node1 ~]# ccs -h node1 --startall
As with the previous installation exercise, check that the correct services are set to start at boot, on all nodes:
[root@nodeY ~]# chkconfig --list
Finally, since a luci-based install does not install ccs, also install ccs on the remaining two nodes:
[root@node2 ~]# yum install -y ccs
[root@node3 ~]# yum install -y ccs
Unit 6
p111 (312)
Step 4.
In Red Hat Cluster architecture version 2, ccs_tool queried and modified configuration through the ccsd
configuration daemon. That ccsd functionality has now been integrated into ricci. The ccs_tool utility is
no longer used for editing or syncing configuration, but can still view configured node parameters.
Note that viewing the cluster.conf configuration using ccs_tool is different from querying cman's in-
memory view of the cluster using, for example, cman_tool status.
Unit 7
p (314)
Step 9. Typesetting correction in the XML example...
Replace name="cluster<replaceable>X</replaceable>"
with name="clusterX"
Unit 7
p127 (317,318)
Step 3.
When using fence_virtd -c to enter fencing configuration, be sure that entered fields do not contain extra
characters or white space. Check fence_virt.conf to look for mistakes (e.g., backend = "libvirt " with a trailing space).
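For comparison, a clean fence_virt.conf fragment might look like the following (the module_path value is an assumption for a 64-bit RHEL 6 host; the point is that values carry no stray whitespace):

```
# /etc/fence_virt.conf (fragment; module_path illustrative)
fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
}
```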
Unit 8
pp144-145 (311-315)
SELinux AVC denial caused by lack of root_t context on the apache cached runtime configuration file
/apache/apache:httpd/httpd.conf. Repetitive issue in the classroom, but not able to consistently reproduce.
Fix by creating a semanage database entry or restorecond configuration. Symptoms to recognize: rg_test
succeeds but clusvcadm -e on the same service fails, and only when SELinux is set to Enforcing. See the
AVC error in /var/log/audit/audit.log.
Unit 8
p141 (312)
Step 6.
This syntax is unnecessary on RHEL6, as stated, but it also only works prior to RHEL6, not here.
(Caution: on RHEL6, the dot-ending syntax works for the chcon command, but not restorecon.)
Replace [root@node1 ~]# restorecon -Rv /var/www/html/.
with [root@node1 ~]# restorecon -Rv /var/www/html/
When restorecon successfully relabels, the command will output the resulting change in file context.
Check that when the filesystem is mounted, /var/www/html has the SELinux type httpd_sys_content_t.
Unit 8
p147
To increase the level of troubleshooting detail, configure logging in /etc/cluster/cluster.conf.
When a cluster is stable and the reasons for extra troubleshooting have been resolved, debug logging can be
removed to eliminate the disk activity. These are syntax examples only; luci is the easiest way to configure this.
<cluster name="alpha" config_version="1">
  <logging debug="on"/>
  <logging>
    <logging_daemon name="corosync" debug="on"/>
    <logging_daemon name="corosync" subsys="CMAN" debug="on"/>
    <logging_daemon name="qdiskd" debug="on"/>
    ...
  </logging>
  ...
Unit 8
p147
Daemon names include corosync, qdiskd, groupd, fenced, dlm_controld, gfs_controld, rgmanager.
Corosync settings apply to all corosync subsystems by default, as in the first example above, but
subsystems can also be configured individually, as in the second example. Subsystems include CLM, CPG,
MAIN, SERV, CMAN, TOTEM, QUORUM, CONFDB, CKPT, EVT. Further logging attributes (e.g.,
syslog interaction) may be set at the global, daemon and subsystem level. See cluster.conf(5).
Corosync subsystems:
CLM     Cluster Membership       tracking membership configuration
CPG     Closed Process Group     select message subscription
MAIN    Main function            corosync initialization, address configuration
SERV    Service Handler          service handling routines
CMAN    Cluster Manager          cluster manager application
TOTEM   Totem Stack              detection of node join/leave
QUORUM  Quorum                   membership vote counting
CONFDB  Configuration Database   object database to configure services
CKPT    Checkpointing            memory image checkpointing
EVT     Eventing                 message eventing service
Unit 8
p148 (316)
Step 6.
When using clusvcadm -F to enable a service with failover domain rules, the command output displays the
node on which clusvcadm was run, not the correct node on which the service was started. This appears to
be only a clusvcadm display error, since the service did start on the correct requested node.
Unit 8
p149 (317)
SELinux AVC denial caused by lack of root_t context on the samba cached runtime configuration file
/samba/samba:samba/smb.conf. Repetitive issue in the classroom, but not able to consistently reproduce.
Fix by creating a semanage database entry or restorecond configuration. Symptoms to recognize: rg_test
succeeds but clusvcadm -e on the same service fails, and only when SELinux is set to Enforcing. See the
AVC error in /var/log/audit/audit.log.
Unit 8
p149 (318)
Add a CIFS Service to a Cluster, Before you begin...
CIFS Share table section.
This section describes the share configuration in /etc/samba/public.conf, not a cluster resource.
Unit 8
p149 (319)
Step 7.
Unlike the exercise with restorecon, the dot-ending notation for chcon is correct and changes the filesystem
root, not the mountpoint directory itself. To see this behavior, replace step 7 with the following:
[root@node1 ~]# ls -lZd /cifs-export     # original mountpoint context
[root@node1 ~]# mount /dev/mapper/clusterstoragep2 /cifs-export
[root@node1 ~]# ls -lZd /cifs-export     # context after mount
[root@node1 ~]# chcon -R -t samba_share_t /cifs-export/.
[root@node1 ~]# ls -lZd /cifs-export     # context after correction
[root@node1 ~]# echo "Now with more content!" >> /cifs-export/test
[root@node1 ~]# umount /cifs-export
[root@node1 ~]# ls -lZd /cifs-export     # original mountpoint context again
Unit 8
p149 (319)
Step 8. Caution! This step is commonly mistyped.
The '-O' is how wget creates an output file. If redirection (e.g., '>') is used instead, the wget screen output
will overwrite the file, resulting in a corrupted public.conf and a publicshare service that will not start.
Unit 8
p149 (317)
p149 (319)
Step 9.
Do not enter the literal text "Multipathed iSCSI Storage"; enter the actual multipathed iSCSI storage partition.
If previous exercises have been followed faithfully, the device to be entered here was created in the most
recent fdisk task (p319, step 4) as /dev/mapper/clusterstoragep2.
Unit 8
p149 (319)
Step 11.
NT_STATUS_CONNECTION_REFUSED
Unable to connect to the share. It does not exist, or the service is not bound to the correct port or interfaces.
Unit 8
p149 (319)
Step 11.
NT_STATUS_LOGON_FAILURE
The password requested is the user's samba password, not their UNIX password. But since we have not
created any samba users, pressing <Enter> switches to the guest account, which has no password.
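The fall-through-to-guest behavior depends on the smb.conf guest settings. A fragment like the following (parameter values are illustrative, not quoted from the course's public.conf) is what makes an empty password land on the guest account:

```
# /etc/samba/public.conf (fragment; values illustrative)
[global]
        map to guest = Bad User    # failed logins fall through to the guest account
        guest account = nobody

[public]
        guest ok = yes
```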
Unit 11
p186 (331)
Step 7.
If the quorum disk is removed using ccs, clustat continues to display the qdisk object until the cluster is
restarted. This is correct and is a reminder that restarting after critical configuration changes is important.
Unit 11
p211 (336)
Using HA-LVM
Configuring with HA-LVM is frequently problematic because it is easy to err in placing resources in the
proper order as the resource group is built. Pay special attention to the exercise instructions.
Unit 11
p215 (339)
Step 3.
The locking type may become reset to "1" by clvmd if cluster communication problems occur, such as split
brain or node failure. Also, look for DLM errors in /var/log/messages. After troubleshooting and fixing the
problem, manually reset the locking type to "3". To initially ensure a locking type "3" configuration, each
cluster node must be installed with Enabled Shared Storage Support checked Yes, the "Resilient Storage"
lvm2-cluster package installed, and the lock type set by lvmconf or manually in /etc/lvm/lvm.conf.
This behavior is controlled by the parameter fallback_to_local_locking = 1 in /etc/lvm/lvm.conf.
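The relevant lvm.conf settings look roughly like this (a fragment; lvmconf --enable-cluster is the supported way to set the locking type, so treat this as a sketch of the result rather than something to type in):

```
# /etc/lvm/lvm.conf (fragment) -- what "lvmconf --enable-cluster" arranges
global {
    locking_type = 3               # cluster-wide locking through clvmd/DLM
    fallback_to_local_locking = 1  # permits the fallback behavior described above
}
```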
Unit 11
p216 (342)
Step 12.
Replace [student@desktopX ~]$ smbclient //cifs.public.clusterX.example.com/publicshare
with [student@desktopX ~]$ smbclient //cifs.public.clusterX.example.com/public
Unit 13
p238 (356)
Step 11.
Yes, this is the correct syntax. There is no -f because the output is piped to STDOUT using the single dash.
Unit 14
pp (359-365)
The X/Y usage issue (detailed next in this errata) only covers the exercise steps' commands. Also correct the
X and Y usage in the example command output displays and instruction text throughout pp359-365.
Unit 14
p253 (361)
Configuring a Replicated Volume, Step 5.
Use Y for the node name and X for the cluster number, opposite to how the solution is written. This lab
consistently misleads students who have been using X for the cluster number and Y for the node all week.
[root@node1 ~]# mkdir /nY_export1
[root@node1 ~]# echo "/dev/vgsrv/brick1 /nY_export1 xfs defaults 0 1" >> /etc/fstab
Unit 14
p253 (361)
Step 6. Same X/Y issue.
[root@node1 ~]# for HOST in node{2..4}.private.clusterX.example.com; do
> gluster peer probe ${HOST}
> done
Unit 14
p253 (361)
Step 8. Same X/Y issue. Also add line continuation characters in the book.
[root@node1 ~]# gluster volume create newvol replica 2 \
> node1.private.clusterX.example.com:/n1_export1 \
> node2.private.clusterX.example.com:/n2_export1
Unit 14
p256 (362)
Step 3. Same X/Y issue.
[root@desktopX ~]# mount -t glusterfs node1.private.clusterX.example.com:/newvol /newvol
Unit 14
p256 (362)
Step 7. Typographic error in mountpoint name.
Replace [root@node1]# ls -lh /export1/bigfile
with [root@node1]# ls -lh /n1_export1/bigfile
Unit 14
p259 (363)
Expand a Volume, Step 2. Same X/Y issue. Also add line continuation characters in the book.
[root@node1 ~]# gluster volume add-brick newvol \
> node3.private.clusterX.example.com:/n3_export1 \
> node4.private.clusterX.example.com:/n4_export1
Unit 14
p259 (364)
Step 3. Typographic error.
Replace Number of Bricks: 2 x 2 = 2
with Number of Bricks: 2 x 2 = 4
end of Student Guide
INSTRUCTOR CONFIDENTIAL
Page / Type: Instructor Guide
p13  Step 11.
The instructions state, "Once the script is complete, boot the newly created cluster nodes." This is
unnecessary; the lab-build-cluster script automatically started the newly created nodes.
p44  Definition of $tempnode: The name of a created temporary device node to provide access to the device
from an external program before the real node is created. udev(7).
p54  Step 2. Ambiguous terminology.
"Explain the priority order of those sections: general defaults are overridden by specific devices settings,
which can be further overridden by specific multipath settings (multipaths > devices > defaults)."
p77  Step 4.
Replace: Click on the Submit button
with: Click on the Create Cluster button
p81  Correction in the Before you begin... steps, third bullet.
Replace [root@nodeY ~]# lab-setup-iscsi -m
with [root@nodeY ~]# lab-setup-iscsi -12m
p91  Figure 6.1.1 is misnumbered and should be called Figure 6.1.2.
p113 Replace Interface [none]: <Enter>
with Interface [none]: private
p131 Step 6. Same error as in student guide p141 (312).
Replace [root@node1 ~]# restorecon -Rv /var/www/html/.
with [root@node1 ~]# restorecon -Rv /var/www/html/
Check that when the filesystem is mounted, /var/www/html has the SELinux type httpd_sys_content_t.
p133 As discussed above for the student guide and in the Note on this page:
SELinux AVC denial caused by lack of root_t context on the apache cached runtime configuration file
/apache/apache:httpd/httpd.conf. Repetitive issue in the classroom, but not able to consistently reproduce.
Students end up with default_t on the runtime directory (but not by performing a restorecon there).
Symptoms to recognize: rg_test succeeds but clusvcadm -e on the same service fails, and only when
SELinux is set to Enforcing. See the AVC error in /var/log/audit/audit.log. Fix by creating a semanage
database entry and restorecond configuration on each node.
[root@nodeY ~]# semanage fcontext -a -t root_t '/apache(/.*)?'
[root@nodeY ~]# restorecon -Rv /apache
[root@nodeY ~]# echo '/apache' >> /etc/selinux/restorecond.conf
[root@nodeY ~]# service restorecond restart
p141 As discussed above for the student guide:
SELinux AVC denial caused by lack of root_t context on the samba cached runtime configuration file
/samba/samba:samba/smb.conf. Repetitive issue in the classroom, but not able to consistently reproduce.
Students end up with default_t on the runtime directory (but not by performing a restorecon there).
Symptoms to recognize: rg_test succeeds but clusvcadm -e on the same service fails, and only when
SELinux is set to Enforcing. See the AVC error in /var/log/audit/audit.log. Fix by creating a semanage
database entry and restorecond configuration on each node.
[root@nodeY ~]# semanage fcontext -a -t root_t '/samba(/.*)?'
[root@nodeY ~]# restorecon -Rv /samba
[root@nodeY ~]# echo '/samba' >> /etc/selinux/restorecond.conf
[root@nodeY ~]# service restorecond restart
p192
CLVM Overview                 LVM              HA-LVM          Clustered LVM
uses clvmd                    no               yes             yes
'locking_type' setting        1                3               3
VG 'clustered' setting        no               yes             yes
LV 'activation' setting       yes              no              yes
available nodes at one time   1 (non-cluster)  1               all
for use with filesystems      ext3,ext4,xfs    ext3,ext4,xfs   gfs2
p192 Step 2.
"herp-a-derp" is commonly a vulgar slang term. Can this be replaced with more professional text?
p193 Step 5.
The locking_type may need to be set, even if originally set by luci, since clvmd may have caused the
locking_type to fall back to 1 in certain circumstances. If so, run the lvmconf command on every node.
Replace [root@node1 ~]# lvmconf --enable-cluster
with [root@nodeY ~]# lvmconf --enable-cluster
p193 Steps 5 and 6. Typesetting correction.
Replace [root@nodeY ~#] ...
with [root@nodeY ~]# ...
p253 Similar to Instructor Guide p13, it is unnecessary to manually start the nodes; lab-build-cluster already has.
Replace [root@desktopX ~]# for N in node{1..4}; do virsh start ${N}; done
with [root@desktopX ~]# virsh list
pp259-264 The X/Y usage issue (detailed next in this errata) only covers the exercise steps' commands. Also correct
the X and Y usage in the example command output displays and instruction text throughout pp259-264.
p259 Configuring a Replicated Volume, Step 5.
Use Y for the node name and X for the cluster number, opposite to how the solution is written. This lab
consistently misleads students who have been using X for the cluster number and Y for the node all week.
[root@node1 ~]# mkdir /nY_export1
[root@node1 ~]# echo "/dev/vgsrv/brick1 /nY_export1 xfs defaults 0 1" >> /etc/fstab
p259 Step 6. Same X/Y issue.
[root@node1 ~]# for HOST in node{2..4}.private.clusterX.example.com; do
> gluster peer probe ${HOST}
> done
p261 Step 8. Same X/Y issue. Also add line continuation characters in the book.
[root@node1 ~]# gluster volume create newvol replica 2 \
> node1.private.clusterX.example.com:/n1_export1 \
> node2.private.clusterX.example.com:/n2_export1
p264 Step 3. Same X/Y issue.
[root@desktopX ~]# mount -t glusterfs node1.private.clusterX.example.com:/newvol /newvol
p264 Step 7. Typographic error in mountpoint name.
Replace [root@node1]# ls -lh /export1/bigfile
with [root@node1]# ls -lh /n1_export1/bigfile
p267 Expand a Volume, Step 2. Same X/Y issue. Also add line continuation characters in the book.
[root@node1 ~]# gluster volume add-brick newvol \
> node3.private.clusterX.example.com:/n3_export1 \
> node4.private.clusterX.example.com:/n4_export1
end of Instructor Guide
Course Area / Type: Course Commentary
Setup  Important! Confirm the instructor system time after installation. If the instructor did not use UTC for a
previous class, then setting UTC during installation will cause the resulting RHEL-displayed time to be
wrong. If the time or timezone is incorrect, perform these steps prior to kickstarting student systems:
[root@server1 ~]# service ntpd stop
[root@server1 ~]# date MMDDhhmm
[root@server1 ~]# vi /etc/sysconfig/clock     (to set timezone, if necessary)
[root@server1 ~]# date                        (to confirm time and timezone)
[root@server1 ~]# hwclock --utc --systohc
[root@server1 ~]# service ntpd start
Setup  The local timezone is entered interactively during the RHCI/GLS instructor system install, resulting in
student kickstart files configured for that timezone. However, installing the RH436 course RPM creates
workstation-default.cfg-rh436, which remains configured for US/Eastern. Therefore, the PXE option
"install GLS workstation" creates desktopX systems in the US/Eastern timezone. Modify the kickstart file
(using the /kickstart/workstation.cfg symbolic link) prior to kickstarting student systems:
timezone Asia/Singapore --utc
#timezone US/Eastern --utc
Setup  The student virtuals are built from clusterbase.cfg, which also defaults to US/Eastern. Change this during
course setup to match the instructor system's timezone. Otherwise, students must tediously use system-
config-date to change the timezone on each VM at first boot after each lab-build-cluster. It is far easier
to change the kickstart file once, then have students run lab-build-cluster -m again if necessary.
timezone Asia/Singapore --utc
#timezone US/Eastern --utc
Setup  The rh436-config RPM is missing reverse lookup zones for the public, private, storage1, and storage2
networks. In classrooms with internet access, to avoid reverse-lookup timeouts (e.g., SSH), create empty
lookup zones in named.conf for the 172.16/16, 172.17/16, 172.18/16, and 172.19/16 networks. Add an
appropriate block for each of the reverse address networks (16.172, 17.172, 18.172, 19.172), for example:
zone "16.172.in-addr.arpa" {
        type master;
        file "named.empty";
        forwarders {};
};
Create /var/named/chroot/var/named/named.empty, owner root:named, permissions 0640, containing:
$TTL 3H
@ IN SOA @ rname.invalid. (
        0       ; serial
        1D      ; refresh
        1H      ; retry
        1W      ; expire
        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
Student Guide
Unit 15
The Comprehensive Review unit test is the same in the lecture as in the solutions. That is to say, there
is no solution. There should be a complete step-by-step as for every other exercise. Even an exercise
designed to give practice before, possibly, an exam or production implementation should allow a student to
view the best-practice or recommended procedure. There are a number of RHCA courses where the final,
extra, challenge labs are left unfinished in the solutions. This comes across in the classroom as incomplete.
It also may leave an instructor needing to scramble to prove that a challenge lab actually works.
Wander: "The comprehensive review does not have a solution by design. Years of experience have taught us
that a large percentage of students will blindly copy solutions if they are available. This lab does not have
solutions, thus forcing them to think for themselves. To verify their work they [use] lab-grade-cluster. As
an instructor it is your job to make students feel good about this."
Unit 11 Volume groups (not logical volumes) may still be flagged as not clustered when using clvmd, when a
local volume is needed on one node and not the rest of the cluster. However, caveats exist and are critical.
Specifically, LVM will only mark volume groups as clustered when all of the PVs in the volume group
are available on all hosts in the cluster and clvmd is running on all hosts. Unmarking such a clustered VG
can and will lead to data corruption. If an unclustered VG gets marked as clustered by accident, it is
because it was on a shared disk, available to all your nodes, by accident. To unmark a VG as clustered for
later disk reuse, it is critical to create a new PV to eliminate lingering artifacts. You have been warned.
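For reference, the clustered flag can be inspected and cleared as follows (a sketch in the document's transcript style, not part of the original exercise; the VG and device names are placeholders, and vgchange -cn must only be used with the caveats above in mind):

```
[root@node1 ~]# vgs -o vg_name,vg_attr       # a 'c' in the attr column means clustered
[root@node1 ~]# vgchange -cn vgname          # clear the clustered flag (dangerous; see above)
[root@node1 ~]# pvcreate /dev/sdX1           # preferred for disk reuse: re-create the PV
```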
Unit 11 Quick listing of steps for resizing an iSCSI target while it remains in (quiescent) use. For reference only;
a common question in class. Resizing an iSCSI target is covered in detail in the Storage Administration Guide.
Before beginning, if extra space is required: use fdisk to construct a new partition or full disk, pvcreate to
prepare /dev/partition for LVM, and vgextend to add /dev/partition to the volume group (e.g., vgsrv).
[root@node4]# lvextend -L +<size> /dev/vgsrv/storage        # on target server
[root@node4]# service tgtd force-reload                     # on target server
[root@nodeY]# iscsiadm -m node -T <targetname> -R           # run once; sees both portals
[root@nodeY]# lsblk                  # raw devices are larger, but not yet the multipath device
[root@nodeY]# multipathd -k "resize map clusterstorage"     # mpath name, not dev
[root@node1]# pvresize /dev/mapper/clusterstorage           # one node only, if clvm
[root@node1]# pvdisplay              # physical volume now shows new size
[root@node1]# vgdisplay              # volume group already includes new size
Unit 14 A student may choose to switch their existing apache or samba resource group to use the glusterfs
filesystem, requiring SELinux policy changes, since the chcon/restorecon methods will not be allowed.
Create and load a policy module to allow apache, for example, to access a mounted fusefs_t filesystem.
(The rules were determined with grep http /var/log/audit/audit.log | audit2allow.) audit2allow -M
generates and packages the module, which semodule then installs:
[root@node1 ~]# grep http /var/log/audit/audit.log | audit2allow -M usehttp
[root@node1 ~]# cat usehttp.te          # the allow rules of interest:
allow httpd_t fusefs_t:dir { getattr search };
allow httpd_t fusefs_t:file { read getattr open };
[root@node1 ~]# semodule -i usehttp.pp
end of Course Commentary