Heartbeat HA Configuration
[Diagram: Heartbeat Configuration, showing Node1 and Node2]
The heartbeat solution is not needed for domain logons; however, in mission-critical environments it provides failover if a node becomes unavailable. Heartbeat exchanges keepalive messages over a serial link and a crossover network cable connected directly between the two servers. A virtual IP address is shared by the cluster, and we connect to this virtual IP address when accessing a Samba share.
There are two main versions of heartbeat: version 1.2.3 is limited to a two-node cluster, while version 2 can span many machines and can become quite complex. Heartbeat version 2 is, however, backwards compatible with version 1.2.3 configuration files when the “crm no” option is set in the ha.cf configuration file.
You must never mix different versions of heartbeat in a cluster; all nodes must run the same version. Mixing versions creates instability and may lead to random rebooting.
If you want to be completely safe, I highly recommend using version 1.2.3 for this exercise. If you are looking for proven stability, version 1.2.3 has been used with DRBD for a long time; it is often used in hospitals to store MRI and other data that needs to be readily accessible. Currently this combination is limited to a two-node cluster.
5.1. Requirements
Step1.
Get a serial cable and connect it to each node's COM1 port. Execute the following; you may
see a lot of garbage on the screen.
Step2.
You may have to repeat the command a couple of times in rapid succession to see the output
on node1.
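The commands for this test are not shown above; a common way to check a null-modem serial link, assuming the cable is attached to /dev/ttyS0 on both machines, is to read from the port on one node while writing to it from the other:

```shell
# Step1, on node1: listen on the serial port (press Ctrl+C to stop).
cat < /dev/ttyS0

# Step2, on node2: send a test string down the cable.
echo "hello from node2" > /dev/ttyS0
```

If the cable and port are good, the string (possibly mixed with garbage characters) appears on node1's terminal.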
5.2. Installation
Repeat this process on node2, your backup domain controller, so that both nodes run
identical versions of heartbeat.
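The installation commands themselves are not shown above; on a Red Hat style system (matching the prompts used throughout this document) the package can typically be installed from the distribution repositories, though the package name and tool may differ on your distribution:

```shell
# Install heartbeat from the configured repositories (run on both nodes).
yum install heartbeat

# Enable heartbeat at boot once the configuration is complete.
chkconfig heartbeat on
```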
5.3. Configuration
Heartbeat running as version 1.2.3 is very easy to configure and manage. The newer version 2
is able to support multiple nodes and uses XML-style configuration files. If you are using
version 2, I recommend running with the “crm no” option, which provides backwards
compatibility with version 1.2.3.
Just remember to always run the same version of heartbeat on both nodes.
5.3.1. ha.cf
Step1.
On node1, log in with the root account; the ha.cf file needs to be the same on both nodes.
Note:
The option “crm no” in ha.cf tells heartbeat version 2 to behave as version 1.2.3; this
means it is limited to a two-node cluster.
If you choose to run version 1.2.3, you will need to comment out or delete the “crm no” line
in ha.cf.
[root@node1]# cd /etc/ha.d
[root@node1]# vi ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
keepalive 1 # seconds between heartbeat packets
deadtime 5 # seconds of silence before a node is declared dead
warntime 3 # seconds before a late-heartbeat warning is issued
initdead 20 # extra grace period at boot while the network starts
serial /dev/ttyS0 # heartbeat over the serial cable
bcast eth1 # heartbeat broadcast over the crossover cable
auto_failback yes # resources move back to node1 when it recovers
node node1.differentialdesign.org
node node2.differentialdesign.org
crm no # comment out if using version 1.2.3
Step2.
Copy the ha.cf to node2 so they both have the same configuration file.
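One way to copy the file, assuming root SSH access between the nodes (the hostname is taken from the ha.cf above), is:

```shell
# Copy ha.cf from node1 to node2 over SSH.
scp /etc/ha.d/ha.cf root@node2.differentialdesign.org:/etc/ha.d/ha.cf
```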
5.3.2. haresources
The haresources file is read when heartbeat starts. Throughout this document we have
used /data as our mount point for RAID-1 replication over the LAN.
We use node1, which is the master server, and 192.168.0.4, which is the cluster's virtual IP
address; it will be displayed as eth0:0 on the primary node.
You can easily make services highly available by adding the appropriate init script name to
the haresources file, as shown below with the DNS service named.
Step1.
[root@node1]# vi haresources
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes
## Format: preferred-node virtual-IP resource [resource ...]
## The DRBD resource name (r0), device and filesystem type below are
## examples; substitute the values from your own DRBD configuration.
node1.differentialdesign.org 192.168.0.4 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 smb named
Step2.
Copy the haresources file across to node2 so they are both identical.
5.3.3. authkeys
The authkeys file defines how the heartbeat nodes authenticate each other's packets.
Step1.
[root@node1]# vi authkeys
## /etc/ha.d/authkeys
auth 1
1 crc
The preferred method is to use SHA1 hashing to authenticate the nodes and their packets, as
below; note that the keyword in authkeys is sha1, not sha.
## /etc/ha.d/authkeys
auth 1
1 sha1 HeartbeatPassword
Step2.
The authkeys file must be readable by root only, or heartbeat will refuse to start.
[root@node1]# chmod 600 /etc/ha.d/authkeys
Step3.
Copy the authkeys file to node2 so they can authenticate with each other.
Step4.
Log in to node2, your backup domain controller, and use exactly the same heartbeat
configuration files as on the primary domain controller.
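With identical configuration files in place on both nodes, heartbeat can be started and the virtual IP checked. The service and interface names below match the Red Hat style prompts used in this document and may differ on your distribution:

```shell
# Start heartbeat (run on both node1 and node2).
service heartbeat start

# On the primary node, the cluster's virtual IP from haresources
# (192.168.0.4) should appear on the eth0:0 alias interface.
ifconfig eth0:0
```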