
5.0. Heartbeat HA Configuration

[Diagram: heartbeat configuration, Node1 and Node2 connected by serial and crossover links]

The heartbeat solution is not needed for domain logons; however, in mission-critical
environments it provides failover if a node becomes unavailable. Heartbeat signals travel
over a serial cable and a crossover network cable connected directly between the two
servers. The cluster shares a virtual IP address, and we connect to this virtual IP
when accessing a Samba share.

There are two main versions of heartbeat: version 1.2.3 is limited to a two-node
cluster, while version 2 can span many machines and can become quite complex. Heartbeat
version 2 is, however, backwards compatible with version 1.2.3 configuration files when
the "crm no" option is set in the ha.cf configuration file.

You must never mix different versions of heartbeat in a cluster; all nodes must run the
same version. Mixing versions will create instability and may lead to random reboots.

If you want to be completely safe, I highly recommend using version 1.2.3 for this exercise.

If you are looking for proven stability, version 1.2.3 has been used with DRBD for a long
time; it is often used in hospitals to store MRI scans and other data that must be readily
accessible. This combination is currently limited to a two-node cluster.

For further documentation on heartbeat, see the Linux-HA project website (linux-ha.org).


5.1. Requirements


Step1.

Get a serial cable and connect it to each node's COM1 port. Execute the following; you may
see a lot of garbage on the screen.

[root@node1 ~]# cat </dev/ttyS0

Step2.

You may have to repeat the command below a couple of times in rapid succession to see the
output on node1.

[root@node2 ~]# echo hello >/dev/ttyS0
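The dedicated crossover network link can be checked the same way. The ha.cf later in this
chapter uses eth1 for that link; the 192.168.1.x address below is a hypothetical example,
substitute whatever address node2's eth1 interface actually carries.

```shell
# Hypothetical check of the back-to-back crossover link on eth1.
# Replace 192.168.1.2 with the real address of node2's eth1 interface.
[root@node1 ~]# ping -c 3 192.168.1.2
```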



5.2. Installation

Heartbeat can now be installed with yum; this will download version 2.

Repeat this process on node2, your backup domain controller, so that both nodes run
identical versions of heartbeat.
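If you take the yum route for version 2, the install looks like the sketch below. The
package name heartbeat matches the era's repositories, but which repository provides it
depends on your distribution setup, so treat this as an assumption rather than an exact
transcript.

```shell
# On node1 and node2 alike; yum resolves the heartbeat-pils and
# heartbeat-stonith dependencies automatically:
[root@node1 ~]# yum install heartbeat
[root@node2 ~]# yum install heartbeat
```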

Install heartbeat on both nodes

[root@node1 programs]# cd heartbeat-1.2.3/
[root@node1 heartbeat-1.2.3]# ls
heartbeat-1.2.3-2.rh.9.i386.rpm
heartbeat-ldirectord-1.2.3-2.rh.9.i386.rpm
heartbeat-pils-1.2.3-2.rh.9.i386.rpm
heartbeat-stonith-1.2.3-2.rh.9.i386.rpm
[root@node1 heartbeat-1.2.3]# rpm -Uvh heartbeat-1.2.3-2.rh.9.i386.rpm \
heartbeat-ldirectord-1.2.3-2.rh.9.i386.rpm \
heartbeat-pils-1.2.3-2.rh.9.i386.rpm \
heartbeat-stonith-1.2.3-2.rh.9.i386.rpm

5.3. Configuration

Heartbeat running as version 1.2.3 is very easy to configure and manage. The newer
version 2 can support multiple nodes and uses XML-style configuration files. If you are
using version 2, I recommend running with the "crm no" option, which provides backwards
compatibility with version 1.2.3.

Just remember to always run the same version of heartbeat on both nodes.


5.3.1. ha.cf

Step1.

On node1, log in with the root account; the ha.cf file needs to be the same on both nodes.

Note:

The option "crm no" in ha.cf instructs heartbeat version 2 to behave as version 1.2.3;
this means it is limited to a two-node cluster.

If you choose to run version 1.2.3 itself, you will need to comment out or delete the
"crm no" line in ha.cf.

[root@node1]# cd /etc/ha.d
[root@node1]# vi ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1

keepalive 1
deadtime 5
warntime 3
initdead 20
serial /dev/ttyS0
bcast eth1
auto_failback yes
node node1.differentialdesign.org
node node2.differentialdesign.org
crm no # comment out if using version 1.2.3

Step2.
Copy the ha.cf to node2 so they both have the same configuration file.

[root@node1]# scp /etc/ha.d/ha.cf root@node2:/etc/ha.d/



5.3.2. haresources

The haresources file is called when heartbeat starts. Throughout this document we have
used /data as our mount point for the RAID-1 replication over the LAN.

We use node1, which is the master server, and 192.168.0.4, the cluster's virtual IP
address, which will be displayed as eth0:0 on the primary node.

You will see drbddisk Filesystem::/dev/drbd0::/data::ext3; /dev/drbd0 is our DRBD device.

We have chosen to mount our DRBD file system at /data. This is our replication mount
point, which we configured in our Samba and smbldap-tools configuration.

You can easily make additional services highly available by adding the appropriate
init-script name to the haresources file, as done below with the DNS service named.

Step1.

[root@node1]# vi haresources
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes

node1 192.168.0.4 drbddisk Filesystem::/dev/drbd0::/data::ext3 named

Step2.

Copy the haresources file across to node2 so they are both identical.

[root@node1]# scp /etc/ha.d/haresources root@node2:/etc/ha.d/
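The haresources line is positional: the first field is the preferred node, the second is
the cluster virtual IP, and every remaining field is a resource started left to right
(and stopped in reverse on failback). As a quick illustration, plain shell word-splitting
(nothing heartbeat-specific) shows how the line from Step1 breaks into fields:

```shell
# Split the haresources line from above into its positional fields.
line="node1 192.168.0.4 drbddisk Filesystem::/dev/drbd0::/data::ext3 named"
set -- $line                # word-split into $1, $2, ...
echo "preferred node: $1"   # node1
echo "virtual IP:     $2"   # 192.168.0.4
shift 2
echo "resources:      $*"   # drbddisk Filesystem::/dev/drbd0::/data::ext3 named
```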


5.3.3. authkeys

The method below (crc) provides no real security or authentication, so we recommend
against it in general. However, because heartbeat in our setup communicates over private
links (the serial and crossover cables), this additional security is not strictly needed.
Step1.

[root@node1]# vi authkeys
## /etc/ha.d/authkeys
auth 1
1 crc

The preferred method is to use sha1 hashing to authenticate the nodes and their packets,
as below.

## /etc/ha.d/authkeys
auth 1
1 sha1 HeartbeatPassword
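Rather than typing a password by hand, you can generate a random shared secret. This
snippet is an assumption, not from the original text: it prints a ready-made authkeys
file, which you would redirect into /etc/ha.d/authkeys on both nodes.

```shell
# Build a 32-hex-character random secret and print an authkeys file
# using it; redirect the output to /etc/ha.d/authkeys on both nodes.
secret=$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')
printf 'auth 1\n1 sha1 %s\n' "$secret"
```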

Step2.

Give the authkeys file correct permissions.

[root@node1]# chmod 600 /etc/ha.d/authkeys

Step3.

Copy the authkeys file to node2 so they can authenticate with each other.

[root@node1]# scp /etc/ha.d/authkeys root@node2:/etc/ha.d/

Step4.

Log in to node2, your backup domain controller, and use exactly the same heartbeat
configuration files as on the primary domain controller.
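With all three files in place on both nodes, heartbeat can be started and the virtual IP
checked. The init-script name and the eth0:0 alias follow the configuration above, but
the exact service name can vary by distribution, so treat this as a sketch:

```shell
# Start heartbeat on each node:
[root@node1 ~]# service heartbeat start
[root@node2 ~]# service heartbeat start

# On the active node, the cluster virtual IP 192.168.0.4 should
# appear as the alias eth0:0:
[root@node1 ~]# ifconfig eth0:0

# A simple failover test: stop heartbeat on node1 and the virtual IP
# should move to node2 within deadtime (5 seconds in our ha.cf):
[root@node1 ~]# service heartbeat stop
[root@node2 ~]# ifconfig eth0:0
```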
