
Norgegatan 2, SE-164 32 Kista, Tel: +46 (0)8 58 83 01 00, Fax: +46 (0)8 23 02 31, www.op5.com

High availability firewall


Creating a high availability firewall cluster with automated fail-over, state-synchronization and LSB

by Jonathan Petersson <jpetersson@op5.com>, Copyright (C) op5 AB 2010


Last updated on Friday, September 17, 2010

Table of Contents
Introduction
Software
Disclaimers
Server installation
Upgrading to Squeeze
Software installation
Network
Layout
Configuration
IP-forwarding
Heartbeat
CRM
Custom OCF resources
conntrackd
iptables
Shared IPs
Conntrackd
Iptables
LSB-services
OpenVPN
IPsec
Final configuration
Monitoring
NRPE

Introduction
The purpose of this document is to outline how to set up two firewalls in a redundant setup running in active/passive mode utilizing open-source software in conjunction with custom heartbeat modules developed by op5 AB.

Software
The installation has been made in a virtual environment using the following software:
Oracle VirtualBox 3.2.8 r64453 (www.virtualbox.org)
Debian Squeeze/sid with Linux 2.6.32-5 i586 (www.debian.org)

To create our cluster we'll be using:
Heartbeat 3.0.3-2 (www.linux-ha.org)
Pacemaker 1.0.9.1 (www.clusterlabs.org)
Conntrack-tools 0.9.14-2 (conntrack-tools.netfilter.org)

In addition the following services will be used:
OpenVPN 2.1.0-3 (www.openvpn.org)
Strongswan 4.3.2-1.3 (www.strongswan.org)

Disclaimers
Please be aware that op5 AB does not accept responsibility for any information, code or guidelines provided in this document. Please refer to each respective project for support and documentation. Note that the majority of the configuration examples have been taken from the first node in the cluster; you will need to modify most IP and DNS parameters to make them work in your environment.

To utilize the information in this document you're expected to have two Debian Squeeze servers pre-installed with the software mentioned above and its respective dependencies. Installation of the server software will not be covered in detail, and configuration of the VPN services is left out entirely, as there are no special hooks into heartbeat to run these.

Server installation
We won't cover the details of installing the server, but here are some general pointers. The Debian project has compiled a well-written manual of the Debian installation, available at http://www.debian.org/releases/stable/installmanual. Unless you're planning to run special software on the server, keep the installation as small as possible, as we want to minimize the number of additional services running on the firewalls. As far as the installation of a high availability cluster goes, it makes no difference which architecture you use, as long as it's supported by the Squeeze release. Given that the current stable version of Debian is version 5 (Lenny), we'll briefly cover how to upgrade to version 6 (Squeeze), as this is required to get the newer packages needed for this guide. If you want to stay on the stable version of Debian you'll have to compile the packages manually.

Upgrading to Squeeze
Once you've installed your two servers you need to make a few changes to the apt repository sources. Start off by deleting or moving /etc/apt/sources.list, then recreate it with the following content.
deb http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free

Once finished, run aptitude update to refresh the local package database. Then run aptitude safe-upgrade -y and answer any questions the system asks. When this is done you'll have two Squeeze servers ready for configuration.
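Before running aptitude it is worth sanity-checking the new sources file. The sketch below works on a temp copy so it can be dry-run anywhere; on the servers, point the grep at /etc/apt/sources.list instead.

```shell
# Write the four Squeeze entries to a temp file and count them;
# '^deb' matches both the deb and deb-src lines.
f=$(mktemp)
cat > "$f" <<'EOF'
deb http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free
EOF
grep -c '^deb' "$f"   # prints 4: two binary and two source entries
rm -f "$f"
```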

Software installation
To make things easy we'll install all the necessary software directly from the Squeeze repository using apt-get.

deb-squeeze:~# apt-get install conntrackd iptables-persistent heartbeat pacemaker -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  cluster-agents cluster-glue fancontrol gawk libcluster-glue libcorosync4
  libesmtp5 libglib2.0-0 libglib2.0-data libheartbeat2 libltdl7 libnet1
  libnetfilter-conntrack3 libnspr4-0d libnss3-1d libopenhpi2 libopenipmi0
  libperl5.10 libsensors4 libsnmp-base libsnmp15 libtimedate-perl
  libxml2-utils libxslt1.1 lm-sensors openhpid psmisc shared-mime-info
Suggested packages:
  snmp-mibs-downloader sensord read-edid i2c-tools
The following NEW packages will be installed:
  cluster-agents cluster-glue conntrackd fancontrol gawk heartbeat
  iptables-persistent libcluster-glue libcorosync4 libesmtp5 libglib2.0-0
  libglib2.0-data libheartbeat2 libltdl7 libnet1 libnetfilter-conntrack3
  libnspr4-0d libnss3-1d libopenhpi2 libopenipmi0 libperl5.10 libsensors4
  libsnmp-base libsnmp15 libtimedate-perl libxml2-utils libxslt1.1
  lm-sensors openhpid pacemaker psmisc shared-mime-info
0 upgraded, 32 newly installed, 0 to remove and 6 not upgraded.
Need to get 13.0MB of archives.
After this operation, 36.8MB of additional disk space will be used.

In addition you want to have ntpd installed and configured; time drift will cause your cluster to fail, rendering it unusable. For details on how to configure OpenVPN and StrongSwan please refer to their respective websites.
OpenVPN: http://openvpn.net/index.php/open-source/documentation.html
StrongSwan: http://wiki.strongswan.org/projects/strongswan/wiki/UserDocumentation

Network
Layout
In this example our firewalls are equipped with 4 network interfaces, divided as follows:
eth0: WAN/External interface
eth1: LAN/Internal interface
eth2: Synchronization interface
eth3: Management interface

Configuration
For the configuration we'll utilize the standard Debian network configuration present in /etc/network/interfaces.
deb-fw1
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 130.131.132.131
    netmask 255.255.255.128

auto eth1

deb-fw2
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 130.131.132.132
    netmask 255.255.255.128

auto eth1

iface eth1 inet static
    address 120.121.122.2
    netmask 255.255.255.128

auto eth2
iface eth2 inet static
    address 130.131.132.253
    netmask 255.255.255.252

auto eth3
iface eth3 inet dhcp

iface eth1 inet static
    address 120.121.122.3
    netmask 255.255.255.128

auto eth2
iface eth2 inet static
    address 130.131.132.254
    netmask 255.255.255.252

auto eth3
iface eth3 inet dhcp

As described above, each interface has its own purpose; you may want to divide them differently, or add redundancy with bonding or bridging. In our example we've a cross-over link for synchronization on eth2. This is not strictly necessary, but it is highly recommended, as it prevents the synchronization traffic from being seen by other hosts on the network, which could put you at risk. Once you've configured the synchronization interface, verify that the two nodes can communicate.
deb-fw1:~# ping -c1 130.131.132.254
PING 130.131.132.254 (130.131.132.254) 56(84) bytes of data.
64 bytes from 130.131.132.254: icmp_req=1 ttl=64 time=1.48 ms

--- 130.131.132.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.485/1.485/1.485/0.000 ms

IP-forwarding
To allow network traffic to flow through the server, IP forwarding must be enabled. To enable it on startup, set it to 1 in /etc/sysctl.conf.

net.ipv4.ip_forward=1
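The sysctl.conf entry only takes effect at boot (or when sysctl -p is run); the live value is exposed under /proc, so you can enable and verify forwarding without a reboot. Reading /proc needs no privileges; flipping the value does.

```shell
# Live kernel value: 1 = forwarding enabled, 0 = disabled.
cat /proc/sys/net/ipv4/ip_forward
# To enable immediately (as root): sysctl -w net.ipv4.ip_forward=1
```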

Firewall
To ensure that the firewalls are properly locked down we'll utilize the iptables script below, which sets up standard lockdown rules and opens up for the necessary services such as heartbeat, conntrackd, OpenVPN and IPsec. You'll need to modify it for the services required in your environment. The script automatically collects the IP addresses of the node it is run on and sets the peer IPs in local variables where applicable. In addition, custom rules for hosts behind the firewall can be configured in separate files in the hosts folder. The script requires open communication between the two nodes, so you must create SSH keys without passphrases to be used for the file exchange.


deb-fw1:~# ssh-keygen -t dsa -N "" -f /root/.ssh/id_dsa && cat .ssh/id_dsa.pub >> .ssh/authorized_keys && scp -r .ssh 130.131.132.254:~/
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
82:e8:dc:92:1f:5a:0e:19:dc:f6:fc:16:60:9c:1c:25 root@deb-fw1.example.com
The key's randomart image is:
+--[ DSA 1024]----+
|  E..            |
|   ..            |
|  o o            |
| . o .*          |
|  + +...S        |
| o * o ..        |
|  B + o .        |
|   B . ..        |
|  . o ..         |
+-----------------+
root@130.131.132.254's password:
id_dsa               100%  668     0.7KB/s   00:00
authorized_keys      100% 1432     1.4KB/s   00:00
id_dsa.pub           100%  610     0.6KB/s   00:00

Make sure that you've each node's respective SSH host key present in known_hosts, otherwise the script will fail upon synchronization. The script directory structure is as follows:
/root/scripts/iptables
    iptables.sh
    hosts
        ns.op5.se
Install iptables-persistent to load the rules on boot.
deb-fw1:~# apt-get install iptables-persistent deb-fw1:~# update-rc.d iptables-persistent defaults

The script is later called by heartbeat upon node fail-over. Upon updates it also triggers the slave to update itself based on the data available on the master.
deb-fw1:~# cat /root/scripts/iptables/iptables.sh
#!/bin/bash
#
# iptables rule synchronization between two firewall nodes
#
# Copyright (c) op5 AB, Jonathan Petersson <jpetersson@op5.com>
# All Rights Reserved.
#
# This software has only been tested on Debian Lenny, modifications
# may be needed for other distributions and operating systems.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like. Any license provided herein, whether implied or
# otherwise, applies only to this software file. Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#

ETH0IP=`ip addr show dev eth0 | grep inet | head -n 1 | awk '{print $2}' | cut -d "/" -f1`
ETH1IP=`ip addr show dev eth1 | grep inet | head -n 1 | awk '{print $2}' | cut -d "/" -f1`
ETH2IP=`ip addr show dev eth2 | grep inet | head -n 1 | awk '{print $2}' | cut -d "/" -f1`

if [ $ETH0IP == "130.131.132.131" ]; then
    ETH0PEER="130.131.132.132"
else
    ETH0PEER="130.131.132.131"
fi
if [ $ETH1IP == "120.121.122.2" ]; then
    ETH1PEER="120.121.122.3"
else
    ETH1PEER="120.121.122.2"
fi
if [ $ETH2IP == "130.131.132.253" ]; then
    ETH2PEER="130.131.132.254"
else
    ETH2PEER="130.131.132.253"
fi

master() {
    IPTABLES="/sbin/iptables"

    $IPTABLES --flush
    $IPTABLES -t nat --flush
    $IPTABLES -P INPUT DROP
    $IPTABLES -P FORWARD DROP
    $IPTABLES -P OUTPUT ACCEPT

    ETH0SHARED='130.131.132.130'
    ETH1SHARED='120.121.122.1'

    INTNETS='
    10.0.123.0/24
    172.27.76.0/24
    172.27.86.0/24
    192.168.1.0/24
    '
    EXTNETS='
    120.121.122.0/24
    130.131.132.128/25
    '

    # Global input rules
    $IPTABLES -I INPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -m state --state NEW -j ACCEPT
    $IPTABLES -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPTABLES -I INPUT -p icmp --icmp-type echo-request -j ACCEPT

    # Local services
    $IPTABLES -I INPUT -p tcp --dport ssh -m state --state NEW -j ACCEPT
    $IPTABLES -I INPUT -p udp --dport snmp -d $ETH1IP -m state --state NEW -s 120.121.122.27 -j ACCEPT
    $IPTABLES -I INPUT -p tcp --dport 5666 -d $ETH1IP -m state --state NEW -s 120.121.122.27 -j ACCEPT

    # Shared services
    $IPTABLES -I INPUT -p tcp -d $ETH0SHARED --dport https -m state --state NEW -j ACCEPT

    ### Sync services
    # heartbeat
    $IPTABLES -I INPUT -p udp -s $ETH2PEER -d 224.0.10.100 --dport 694 -j ACCEPT
    # conntrackd
    $IPTABLES -I INPUT -p udp -s $ETH2PEER -d 225.0.0.50 --dport 3780 -j ACCEPT

    ### FORWARD rules
    $IPTABLES -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    # icmp
    $IPTABLES -I FORWARD -p icmp --icmp-type time-exceeded -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type echo-reply -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type destination-unreachable -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type echo-request -j ACCEPT

    # Permit traffic between subnets
    for NET1 in $INTNETS; do
        for NET2 in $INTNETS; do
            if [ $NET1 != $NET2 ]; then
                $IPTABLES -I FORWARD -s $NET1 -d $NET2 -m state --state NEW -j ACCEPT
            fi
        done
        for NET3 in $EXTNETS; do
            $IPTABLES -I FORWARD -s $NET1 -d $NET3 -m state --state NEW -j ACCEPT
        done
    done

    ### NAT rules
    for NET in $INTNETS; do
        $IPTABLES -t nat -A POSTROUTING -o eth1 -s $NET -j SNAT --to-source $ETH0SHARED
    done

    for dr in `ls /root/scripts/iptables/hosts/ | grep -v '~'`; do
        source /root/scripts/iptables/hosts/$dr
    done

    ssh $ETH2PEER 'bash /root/scripts/iptables/iptables.sh slave'
    writeRules
    exit 0
}

slave() {
    diff <(iptables-save | grep -v -E \[[0-9]+:[0-9]+\] | egrep -v "(#)|($ETH0IP)|($ETH1IP)|($ETH2IP)|($ETH0PEER)|($ETH1PEER)|($ETH2PEER)|(\*)" | sort) \
         <(ssh $ETH2PEER 'cat /etc/iptables/rules' | grep -v -E \[[0-9]+:[0-9]+\] | egrep -v "(#)|($ETH0IP)|($ETH1IP)|($ETH2IP)|($ETH0PEER)|($ETH1PEER)|($ETH2PEER)|(\*)" | sort)
    if [ $? -eq 1 ]; then
        scp -r $ETH2PEER:/root/scripts/iptables /root/scripts/
        master
    fi
    writeRules
    exit 0
}

writeRules() {
    diff <(iptables-save | grep -v -E \[[0-9]+:[0-9]+\] | grep -v "#") <(cat /etc/iptables/rules | grep -v -E \[[0-9]+:[0-9]+\] | grep -v "#")
    if [ $? -eq 1 ]; then
        rm /etc/iptables/rules
        iptables-save > /etc/iptables/rules
    fi
}

usage() {
    echo "$0 {master|slave}"
}

case "$1" in
    master) master;;
    slave) slave;;
    *)
        usage
        exit 1
        ;;
esac
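The synchronization in slave() and writeRules() hinges on diff's exit status over process substitution: 0 when the two normalized rule listings match, 1 when they differ. A stand-alone illustration of the mechanism with dummy data instead of iptables-save output (run it with bash, as process substitution is a bash feature):

```shell
# Same rules in different order compare equal once sorted:
diff <(printf 'rule-a\nrule-b\n' | sort) <(printf 'rule-b\nrule-a\n' | sort) > /dev/null && echo in-sync
# Differing rules make diff exit 1, which triggers the resync branch:
diff <(printf 'rule-a\n') <(printf 'rule-c\n') > /dev/null || echo out-of-sync
```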

To set rules for a specific host create a file with the applicable rules in the hosts folder.
deb-fw1:~# cat /root/scripts/iptables/hosts/ns.example.com
$IPTABLES -I FORWARD -p tcp --dport 53 -d 120.121.122.2 -m state --state NEW -j ACCEPT
$IPTABLES -I FORWARD -p udp --dport 53 -d 120.121.122.2 -m state --state NEW -j ACCEPT
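The master() function sources every file in the hosts folder except editor backups (trailing ~), so each file simply contributes extra $IPTABLES lines. A safe stand-alone sketch of that pick-up logic, with $IPTABLES stubbed out via echo and a throw-away temp directory instead of /root/scripts/iptables:

```shell
# Build a fake hosts/ directory with one rule file and one backup file.
dir=$(mktemp -d)
mkdir "$dir/hosts"
echo '$IPTABLES -I FORWARD -p tcp --dport 53 -j ACCEPT' > "$dir/hosts/ns.example.com"
echo 'this backup file is skipped' > "$dir/hosts/ns.example.com~"
IPTABLES="echo /sbin/iptables"   # stub: print instead of loading rules
for dr in `ls "$dir/hosts/" | grep -v '~'`; do
    source "$dir/hosts/$dr"
done
# → /sbin/iptables -I FORWARD -p tcp --dport 53 -j ACCEPT
rm -rf "$dir"
```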

This is the final ruleset pulled from the first node in the cluster. The slave's should look identical with the exception of its peer IPs.
deb-fw1:~# cat /etc/iptables/rules
# Generated by iptables-save v1.4.8 on Thu Sep 9 14:18:26 2010
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [106:17718]
-A INPUT -s 130.131.132.254/32 -d 225.0.0.50/32 -p udp -m udp --dport 3780 -j ACCEPT
-A INPUT -s 130.131.132.254/32 -d 224.0.10.100/32 -p udp -m udp --dport 694 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 5002 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 5001 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 1194 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -s 120.121.122.27/32 -d 120.121.122.2/32 -p tcp -m tcp --dport 5666 -m state --state NEW -j ACCEPT
-A INPUT -s 120.121.122.27/32 -d 120.121.122.2/32 -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -m state --state NEW -j ACCEPT
-A FORWARD -d 120.121.122.2/32 -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A FORWARD -d 120.121.122.2/32 -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
COMMIT
# Completed on Thu Sep 9 14:18:26 2010
# Generated by iptables-save v1.4.8 on Thu Sep 9 14:18:26 2010
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -s 10.0.123.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 172.27.76.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 172.27.86.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 192.168.1.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
COMMIT
# Completed on Thu Sep 9 14:18:26 2010

Conntrack
The conntrack configuration is a plain copy of the FTFW example packaged with conntrackd, modified with the appropriate peer addresses. Notice that you'll need to update these to the IPs you're planning to use. Make sure that you populate Address Ignore with the fixed IPs of the firewalls, as there is no use in synchronizing state for those addresses. It is essential that you do not put the shared IPs in the ignore section, as any traffic NATed over them would lose its state upon fail-over. If you're running OpenVPN or a similar service you want to add the tunnel IP to this section as well. Notice that the UDP and ICMP support is relatively new and may be unstable; remove support for these if you notice any issues.
deb-fw1:~# cat /etc/conntrackd/conntrackd.conf
Sync {
    Mode FTFW {
    }
    Multicast {
        IPv4_address 225.0.0.50
        Group 3780
        IPv4_interface 130.131.132.253
        Interface eth2
        SndSocketBuffer 1249280
        RcvSocketBuffer 1249280
        Checksum on
    }
}
General {
    Nice -20
    HashSize 32768
    HashLimit 131072
    LogFile on
    LockFile /var/lock/conntrack.lock
    UNIX {
        Path /var/run/conntrackd.ctl
        Backlog 20
    }
    NetlinkBufferSize 2097152
    NetlinkBufferSizeMaxGrowth 8388608
    Filter From Userspace {
        Protocol Accept {
            TCP
            UDP
            ICMP
        }
        Address Ignore {
            IPv4_address 127.0.0.1 # loopback
            IPv4_address 130.131.132.132 # WAN Interface
            IPv4_address 120.121.122.3 # LAN interface
            IPv4_address 130.131.132.254 # Sync Interface
        }
    }
}

Heartbeat
Heartbeat is configured to use multicast on the synchronization interface to communicate with its neighbor node. Heartbeat relies on the node name being resolvable. It is therefore recommended to set static entries in /etc/hosts, in case the DNS server for some reason becomes unavailable; this will also decrease lookup time.
deb-fw1:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 deb-fw1.example.com deb-fw1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

130.131.132.253 deb-fw1.example.com deb-fw1
130.131.132.254 deb-fw2.example.com deb-fw2

Heartbeat will communicate using multicast over UDP on port 694 with destination 224.0.10.100, which we've opened up for in the firewall. If you've multiple clusters within the same broadcast network, make sure to use different multicast addresses and ports, so you don't get conflicting communications that make your systems unreliable; using a dedicated link eliminates this potential issue. For this setup we've enabled debug mode to be able to troubleshoot the setup. In a live environment you want to have this turned off, as it generates a lot of log data. If this information is essential to you, consider sending it to a syslog server. We've used the recommended keepalive, warntime, deadtime and initdead values; you may want to modify these depending on the connectivity between your two nodes to avoid split-brain situations.
deb-fw1:~# cat /etc/ha.d/ha.cf
crm             respawn
debug           1
use_logd        false
logfacility     daemon
mcast           eth2 224.0.10.100 694 1 0
node            deb-fw1.example.com
node            deb-fw2.example.com
autojoin        none
udpport         694
keepalive       1
warntime        5
deadtime        10
initdead        20
debugfile       /var/log/ha-debug

In addition we must configure the authentication key used between the two nodes. Use the command below to generate it; make sure to replace yoursecret with the string you want to use. Copy this file to the secondary node, as the key needs to match on both ends.
echo "auth 1
> 1 sha1 `echo 'yoursecret' | sha1sum | awk '{ print $1 }'`" >> /etc/ha.d/authkeys ; chmod 600 /etc/ha.d/authkeys
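The key line is nothing more than "auth 1", then key index 1 with the SHA-1 digest of your secret. You can reproduce the digest by hand to verify what was written ('yoursecret' is the placeholder from above):

```shell
# Compute the digest exactly as the authkeys one-liner does; note that
# echo appends a newline, which is part of the hashed input.
echo 'yoursecret' | sha1sum | awk '{ print $1 }'
```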

Confirm that the output of the file is correct.


deb-fw1:~# cat /etc/ha.d/authkeys
auth 1
1 sha1 ee60909613cba07967d32e602dd98641f21fd111

Once you're finished you can go ahead and start heartbeat.


deb-fw1:~# /etc/init.d/heartbeat start Starting High-Availability services: Done.

If you encounter any issues have a look in /var/log/ha-debug for pointers.

CRM
Once heartbeat is started you'll be able to interact with it using crm, a CLI tool used to manage your nodes and resources. Notice that it may take a while before you can interact with heartbeat through the crm command. First off we want to disable stonith. You should eventually have stonith configured, but we'll not cover that in this document, and during setup it is beneficial to have it turned off to ease troubleshooting.
deb-fw1:~# crm configure property stonith-enabled false

Once disabled, have a quick look at the configuration; you should see the same data on both nodes if they've established a connection successfully.
deb-fw1:~# crm configure show

node $id="56654cb6-9f5b-442d-9367-f8dc4136c6e4" deb-fw2.example.com \
        attributes standby="off"
node $id="e2dbecb7-3568-4c34-a000-87dcf4df82d2" deb-fw1.example.com \
        attributes standby="off"
property $id="cib-bootstrap-options" \
        dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false"

Now let's have a look at the status of the nodes.


deb-fw1:~/scripts# crm_mon -1
============
Last updated: Thu Sep 9 16:49:36 2010
Stack: Heartbeat
Current DC: deb-fw1.example.com (e2dbecb7-3568-4c34-a000-87dcf4df82d2) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
0 Resources configured.
============

Online: [ deb-fw2.example.com deb-fw1.example.com ]

Custom OCF resources


Prior to configuring heartbeat resources we need to add support for conntrackd and our iptables script. These are handled using OCF resources. Multiple agents are pre-installed for other types of services, and in addition LSB resources can be used. LSB is used to call regular init.d scripts; however, this requires that the script handles start, stop and status. As conntrackd and iptables are services that should run at all times, they have to be handled a bit differently, since we don't want to turn them on or off upon fail-over.
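For reference, the start/stop/status contract that makes an init script usable as an LSB resource can be sketched as below (myservice is a stand-in name, not a script from this setup; a real init script must also report status truthfully via its exit code):

```shell
# Minimal shape of an LSB-compatible init script dispatcher.
myservice() {
    case "$1" in
        start)  echo "starting" ;;
        stop)   echo "stopping" ;;
        status) echo "running"; return 0 ;;
        *)      echo "usage: start|stop|status"; return 1 ;;
    esac
}
myservice status   # the cluster probes status before calling start/stop
```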

conntrackd
Create a file called conntrackd in /usr/lib/ocf/resource.d/ and populate it with the following script.
#!/bin/sh
#
# Conntrackd for initiating active/backup state sync between two nodes
# code based on Dummy template and primary-backup.sh
#
# Copyright (C) 2010 op5 AB, Jonathan Petersson <jpetersson@op5.com>
# All Rights Reserved.
# Copyright (C) 2008 by Pablo Neira Ayuso <pablo@netfilter.org>
# All Rights Reserved.
# Copyright (C) 2004 SUSE LINUX AG, Lars Marowsky-Bree
# All Rights Reserved.
#
# Disclaimer:
# This software has only been tested on Debian Lenny, modifications
# may be needed for other distributions and operating systems.
#
# Conntrackd will get started automatically if it's not already
# running. However there's no active error-handling for startup errors;
# please refer to conntrackd's regular error logs for troubleshooting.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like. Any license provided herein, whether implied or
# otherwise, applies only to this software file. Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#
#######################################################################
# Initialization:
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}
. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs
#######################################################################
# Fill in some defaults if no values are specified
OCF_RESKEY_bin_default="/usr/sbin/conntrackd"
OCF_RESKEY_cfg_default="/etc/conntrackd/conntrackd.conf"
OCF_RESKEY_lck_default="/var/lock/conntrack.lock"

: ${OCF_RESKEY_bin=${OCF_RESKEY_bin_default}}
: ${OCF_RESKEY_cfg=${OCF_RESKEY_cfg_default}}
: ${OCF_RESKEY_lck=${OCF_RESKEY_lck_default}}

# Note: set after the defaults above so both variables are populated
CONNTRACKD="${OCF_RESKEY_bin} -C ${OCF_RESKEY_cfg}"

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="Conntrackd" version="0.5">
<version>1.0</version>

<longdesc lang="en">
This is a Conntrackd resource to manage primary and secondary state
between two firewalls in a cluster.
</longdesc>
<shortdesc lang="en">Manages primary/backup conntrackd state</shortdesc>

<parameters>

<parameter name="bin" unique="1">
<longdesc lang="en">
Location of conntrackd binary.
</longdesc>
<shortdesc lang="en">Conntrackd bin</shortdesc>
<content type="string" default="/usr/sbin/conntrackd"/>
</parameter>

<parameter name="cfg" unique="1">
<longdesc lang="en">
Location of conntrackd configuration file.
</longdesc>
<shortdesc lang="en">Conntrackd config</shortdesc>
<content type="string" default="/etc/conntrackd/conntrackd.conf"/>
</parameter>

<parameter name="lck" unique="1">
<longdesc lang="en">
Location of conntrackd lock-file.
</longdesc>
<shortdesc lang="en">Conntrackd lock-file</shortdesc>
<content type="string" default="/var/lock/conntrackd.lock"/>
</parameter>

</parameters>

<actions>
<action name="start"        timeout="20" />
<action name="stop"         timeout="20" />
<action name="monitor"      timeout="20" interval="10" depth="0" />
<action name="reload"       timeout="20" />
<action name="migrate_to"   timeout="20" />
<action name="migrate_from" timeout="20" />
<action name="meta-data"    timeout="5" />
<action name="validate-all" timeout="20" />
</actions>
</resource-agent>
END
}

#######################################################################

conntrackd_usage() {
    cat <<END
usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}
Expects to have a fully populated OCF RA-compliant environment set.
END
}

conntrackd_start() {
    # Call monitor to verify that conntrackd is running
    conntrackd_monitor
    if [ $? = $OCF_SUCCESS ]; then
        # commit the external cache into the kernel table
        $CONNTRACKD -c
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
        # flush the internal and the external caches
        $CONNTRACKD -f
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
        # resynchronize my internal cache to the kernel table
        $CONNTRACKD -R
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
        # send a bulk update to backups
        $CONNTRACKD -B
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
        return $OCF_SUCCESS
    fi
}

conntrackd_stop() {
    # Call monitor to verify that conntrackd is running
    conntrackd_monitor
    if [ $? = $OCF_SUCCESS ]; then
        # shorten kernel conntrack timers to remove the zombie entries
        $CONNTRACKD -t
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
        # request resynchronization with the master firewall replica
        $CONNTRACKD -n
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
    fi
    return $OCF_SUCCESS
}

conntrackd_monitor() {
    # Define conntrackd_pid variable
    local conntrackd_pid=`pidof ${OCF_RESKEY_bin}`
    # Check for conntrackd lock-file
    if [ -f $OCF_RESKEY_lck ]; then
        # Check for conntrackd pid
        if [ $conntrackd_pid ]; then
            # Successful if both lock and pid exist
            return $OCF_SUCCESS
        else
            # Error if the lock exists but the daemon isn't running
            return $OCF_ERR_GENERIC
        fi
    else
        # Lock and pid missing: start the conntrackd daemon
        $CONNTRACKD -d
        return $OCF_NOT_RUNNING
    fi
}

conntrackd_validate() {
    # Check if conntrackd binary exists
    check_binary ${OCF_RESKEY_bin}
    if [ $? != 0 ]; then
        return $OCF_ERR_ARGS
    fi
    # Check if conntrackd config exists
    if [ ! -f ${OCF_RESKEY_cfg} ]; then
        return $OCF_ERR_ARGS
    fi
    return $OCF_SUCCESS
}

case $__OCF_ACTION in
meta-data)      meta_data
                exit $OCF_SUCCESS
                ;;
start)          conntrackd_start;;
stop)           conntrackd_stop;;
monitor)        conntrackd_monitor;;
migrate_to)     ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_to}."
                conntrackd_stop
                ;;
migrate_from)   ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrated_from}."
                conntrackd_start
                ;;
reload)         ocf_log err "Reloading..."
                conntrackd_start
                ;;
validate-all)   conntrackd_validate;;
usage|help)     conntrackd_usage
                exit $OCF_SUCCESS
                ;;
*)              conntrackd_usage
                exit $OCF_ERR_UNIMPLEMENTED
                ;;
esac
rc=$?
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
exit $rc

iptables
Create a file called iptables in /usr/lib/ocf/resource.d/ and populate it with the following script.
#!/bin/sh
#
# iptables sync wrapper
#
# Copyright (C) 2010 op5 AB, Jonathan Petersson <jpetersson@op5.com>
# All Rights Reserved.
# Copyright (C) 2004 SUSE LINUX AG, Lars Marowsky-Bree
# All Rights Reserved.
#
# Disclaimer:
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like. Any license provided herein, whether implied or
# otherwise, applies only to this software file. Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#
#######################################################################
# Initialization:
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}
. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs
#######################################################################
# Fill in some defaults if no values are specified
OCF_RESKEY_bin_default="/root/scripts/iptables/iptables.sh"

: ${OCF_RESKEY_bin=${OCF_RESKEY_bin_default}}

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="iptables" version="0.5">
<version>1.0</version>

<longdesc lang="en">
This is an iptables resource to manage synchronization between two nodes
</longdesc>
<shortdesc lang="en">Manages iptables sync</shortdesc>

<parameters>
<parameter name="bin" unique="1">
<longdesc lang="en">
Location of the iptables script.
</longdesc>
<shortdesc lang="en">iptables bin</shortdesc>
<content type="string" default="/root/scripts/iptables/iptables.sh"/>
</parameter>
</parameters>

<actions>
<action name="start"        timeout="20" />
<action name="stop"         timeout="20" />
<action name="monitor"      timeout="20" interval="10" depth="0" />
<action name="reload"       timeout="20" />
<action name="migrate_to"   timeout="20" />
<action name="migrate_from" timeout="20" />
<action name="meta-data"    timeout="5" />
<action name="validate-all" timeout="20" />
</actions>
</resource-agent>
END
}

#######################################################################

iptables_usage() {
    cat <<END
usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}
Expects to have a fully populated OCF RA-compliant environment set.
END
}

iptables_start() {
    bash $OCF_RESKEY_bin slave
    if [ $? -eq 0 ]; then
        return $OCF_SUCCESS
    else
        return $OCF_ERR_GENERIC
    fi
}

iptables_stop() {
    bash $OCF_RESKEY_bin master
    if [ $? -eq 0 ]; then
        return $OCF_SUCCESS
    else
        return $OCF_ERR_GENERIC
    fi
}

iptables_monitor() {
    return $OCF_SUCCESS
}

iptables_validate() {
    # Check if the iptables script exists
    check_binary ${OCF_RESKEY_bin}
    if [ $? != 0 ]; then
        return $OCF_ERR_ARGS
    fi
    return $OCF_SUCCESS
}

case $__OCF_ACTION in
meta-data)      meta_data
                exit $OCF_SUCCESS
                ;;
start)          iptables_start;;
stop)           iptables_stop;;
monitor)        iptables_monitor;;
migrate_to)     ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_to}."
                iptables_stop
                ;;
migrate_from)   ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrated_from}."
                iptables_start
                ;;
reload)         ocf_log err "Reloading..."
                iptables_start
                ;;
validate-all)   iptables_validate;;
usage|help)     iptables_usage
                exit $OCF_SUCCESS
                ;;
*)              iptables_usage
                exit $OCF_ERR_UNIMPLEMENTED
                ;;
esac
rc=$?
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
exit $rc

Once you've added both scripts to your resource directory, restart heartbeat on both nodes to make them available.

Shared IPs
The essence of an HA environment is shared resources. The primary resources we'll share are the routable IPs, which will also serve as service IPs for the VPNs. In this setup we have two of them, one for the WAN side and one for the LAN side. We'll configure them using the crm CLI tool. Notice that you can launch crm by itself and run help to get a list of the available commands.

deb-fw1:~# crm
crm(live)# help

This is the CRM command line interface program.
Available commands:

	cib              manage shadow CIBs
	resource         resources management
	node             nodes management
	options          user preferences
	configure        CRM cluster configuration
	ra               resource agents information center
	status           show cluster status
	quit,bye,exit    exit the program
	help             show help
	end,cd,up        go back one level

crm(live)#

When setting the IPs it's important that the NICs are connected identically on both firewalls, since heartbeat is told to allocate a certain IP to a certain NIC. Assigning the IP to the wrong NIC will leave you with a broken installation. If you have multiple networks or NICs you can add these the same way.
deb-fw1:~# crm configure primitive ExtIP ocf:heartbeat:IPaddr2 \
        params ip="130.131.132.130" cidr_netmask="25" nic="eth0" \
        op monitor interval="30s"
deb-fw1:~# crm configure primitive IntIP ocf:heartbeat:IPaddr2 \
        params ip="120.121.122.1" cidr_netmask="25" nic="eth1" \
        op monitor interval="30s"

In addition to adding the IPs we want to group them, to make sure that they're both present on the same node. Unless this is done there's a risk that the two nodes take one IP each.
deb-fw1:~# crm configure group IPs ExtIP IntIP \
        meta target-role="Started"

Further, we want to assign the resource to a primary node; in our case we want deb-fw1 to be the primary node.
deb-fw1:~# crm resource migrate IPs deb-fw1.example.com
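
Note that crm resource migrate implements this preference by inserting a location constraint into the configuration (it appears as cli-prefer-IPs in the final configuration later in this document). If you ever want to drop the preference again, the crm shell can remove the constraint for you:

```
deb-fw1:~# crm resource unmigrate IPs
```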

If everything is configured correctly, deb-fw1 should now own the shared IPs on NICs eth0 and eth1. Verify this by checking that the resource is started with crm_mon.
deb-fw1:~# crm_mon -1
============
Last updated: Fri Sep 10 13:21:26 2010
Stack: Heartbeat
Current DC: deb-fw2.example.com (56654cb6-9f5b-442d-9367-f8dc4136c6e4) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
2 Resources configured.
============

Online: [ deb-fw2.example.com deb-fw1.example.com ]

 Resource Group: IPs
     ExtIP	(ocf::heartbeat:IPaddr2):	Started deb-fw1.example.com
     IntIP	(ocf::heartbeat:IPaddr2):	Started deb-fw1.example.com

Verify that the IPs have been set.


deb-fw1:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:a7:25:0f brd ff:ff:ff:ff:ff:ff
    inet 130.131.132.131/25 brd 130.131.132.255 scope global eth0
    inet 130.131.132.130/25 brd 130.131.132.255 scope global secondary eth0
    inet6 fe80::a00:27ff:fea7:250f/64 scope link
       valid_lft forever preferred_lft forever
deb-fw1:~# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:8b:a1:3a brd ff:ff:ff:ff:ff:ff
    inet 120.121.122.2/25 brd 120.121.122.127 scope global eth1
    inet 120.121.122.1/25 brd 120.121.122.127 scope global secondary eth1
    inet6 fe80::a00:27ff:fe8b:a13a/64 scope link
       valid_lft forever preferred_lft forever
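
If you script this verification, a small helper can check for the shared address in "ip addr" style output. This sketch is an illustration, not part of the op5 scripts; the sample text is the eth0 output shown above.

```shell
#!/bin/sh
# has_ip: succeed if the given address appears as an inet entry in
# "ip addr" output read from stdin.
has_ip() {
    grep -q "inet $1/"
}

# Sample taken from the eth0 output above; the shared IP shows up
# as a "secondary" address on the active node.
sample="inet 130.131.132.131/25 brd 130.131.132.255 scope global eth0
inet 130.131.132.130/25 brd 130.131.132.255 scope global secondary eth0"

echo "$sample" | has_ip 130.131.132.130 && echo "shared IP present"
```

On a live node the same check would be: ip addr show dev eth0 | has_ip 130.131.132.130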

Conntrackd
To ensure that the firewall state is properly synced on fail-over, we'll call the custom OCF resource which we added above. If your system doesn't have the binaries or configuration files present in their default locations, you can modify these by adding parameters to the resource.

deb-fw1:~# crm configure primitive conntrackd ocf:heartbeat:conntrackd \
        params bin=/usr/sbin/conntrackd cfg=/etc/conntrackd/conntrackd.conf \
        lck=/var/lock/conntrack.lock \
        op monitor interval=30s

If conntrackd is available in the standard locations used by Debian, you can simply add the resource the following way.
deb-fw1:~# crm configure primitive conntrackd ocf:heartbeat:conntrackd \
        op monitor interval=30s

Iptables
To have proper synchronization of the firewall rules, we'll call our custom OCF resource, triggering our iptables script to activate and sync all rules. As a node takes ownership of the iptables resource it will make sure that all rules are consistent on both ends. Notice that you need to run bash iptables.sh master after modifying any rules to apply them and sync between the servers; otherwise the rules will get wiped upon fail-over.
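
The iptables.sh script itself is not reproduced in this document, but its master/slave contract could look roughly like the sketch below. The paths and the peer hostname are assumptions, not op5's shipped script; sync_cmds only prints the commands for a role, so the dispatch logic can be inspected (or piped to sh) without touching a live firewall.

```shell
#!/bin/sh
# Hypothetical sketch of the iptables.sh wrapper invoked by the OCF agent.
RULES=/etc/iptables/rules
PEER=deb-fw2.example.com

sync_cmds() {
    case "$1" in
        master)
            # Active node: dump the live ruleset and push it to the peer.
            echo "iptables-save > $RULES"
            echo "scp $RULES $PEER:$RULES"
            ;;
        slave)
            # Passive node: load the last ruleset published by the master.
            echo "iptables-restore < $RULES"
            ;;
    esac
}

sync_cmds "${1:-master}"
```

This mirrors how the OCF agent above uses the script: stop (the node becoming passive is no longer authoritative) runs master to publish, start runs slave to load.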

Upon adding the resource we'll set the state parameter to master.


deb-fw1:~# crm configure primitive iptables ocf:heartbeat:iptables \
        params state=master \
        op monitor interval=30s

In addition to this, we've installed iptables-persistent, which loads the rules upon boot of the server based on the rules written to /etc/iptables/rules by the script.

LSB-services
There are a couple of services we only want active on the active node; in our case these are OpenVPN and IPsec. Because of this there's no major need for OCF resources, and we'll simply call them using LSB, which will trigger the init.d scripts for each respective service. It's important to remember to have the services turned off during startup so they won't cause interference once heartbeat tries to start them.
deb-fw1:~# update-rc.d -f openvpn remove
deb-fw1:~# update-rc.d -f ipsec remove
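
Heartbeat drives LSB resources through the init.d scripts' start, stop and status actions, so those scripts need to be reasonably LSB compliant; in particular, per the LSB spec, status should exit 0 when the service is running and 3 when it is stopped (this sanity check is our addition, not from op5). A quick manual check:

```
deb-fw1:~# /etc/init.d/openvpn status; echo $?
deb-fw1:~# /etc/init.d/ipsec status; echo $?
```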

OpenVPN
Once you've configured OpenVPN with the appropriate tunnels, we'll call it with heartbeat using LSB. Once a node takes ownership of the service it will automatically be started using the init.d scripts.
deb-fw1:~# crm configure primitive openvpn lsb:openvpn \
        op monitor interval=30s

It's important that the OpenVPN service isn't started prior to setting the shared IPs, which are used by OpenVPN. If this happens the service won't start and you'll be left with a broken VPN. To ensure that OpenVPN isn't started before the IPs are assigned, set the following order rule.
deb-fw1:~# crm configure order IP_before_openvpn inf: IPs openvpn

IPsec
IPsec is configured in the same way as OpenVPN, utilizing LSB.
deb-fw1:~# crm configure primitive ipsec lsb:ipsec \
        op monitor interval=30s

Like OpenVPN, it's important to ensure that IPsec isn't started until the IPs have been assigned on the node on which it is to run.


deb-fw1:~# crm configure order IP_before_ipsec inf: IPs ipsec

Final configuration
node $id="56654cb6-9f5b-442d-9367-f8dc4136c6e4" deb-fw2.example.com \
	attributes standby="off"
node $id="e2dbecb7-3568-4c34-a000-87dcf4df82d2" deb-fw1.example.com \
	attributes standby="off"
primitive ExtIP ocf:heartbeat:IPaddr2 \
	params ip="130.131.132.130" cidr_netmask="25" nic="eth0" \
	op monitor interval="30s"
primitive IntIP ocf:heartbeat:IPaddr2 \
	params ip="120.121.122.1" cidr_netmask="25" nic="eth1" \
	op monitor interval="30s"
primitive conntrackd ocf:heartbeat:conntrackd \
	op monitor interval="30s"
primitive ipsec lsb:ipsec \
	op monitor interval="30s" \
	meta target-role="Started"
primitive iptables ocf:heartbeat:iptables \
	op monitor interval="30s" \
	meta target-role="Started"
primitive openvpn lsb:openvpn \
	op monitor interval="30s" \
	meta target-role="Started"
group IPs ExtIP IntIP \
	meta target-role="Started"
location cli-prefer-IPs IPs \
	rule $id="cli-prefer-rule-IPs" inf: #uname eq deb-fw1.example.com
location cli-prefer-conntrackd conntrackd \
	rule $id="cli-prefer-rule-conntrackd" inf: #uname eq deb-fw1.example.com
location cli-prefer-ipsec ipsec \
	rule $id="cli-prefer-rule-ipsec" inf: #uname eq deb-fw1.example.com
location cli-prefer-iptables iptables \
	rule $id="cli-prefer-rule-iptables" inf: #uname eq deb-fw1.example.com
location cli-prefer-openvpn openvpn \
	rule $id="cli-prefer-rule-openvpn" inf: #uname eq deb-fw1.example.com
order IP_before_ipsec inf: IPs ipsec
order IP_before_openvpn inf: IPs openvpn
property $id="cib-bootstrap-options" \
	dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
	cluster-infrastructure="Heartbeat" \
	stonith-enabled="false"
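
Before relying on the cluster for fail-overs, the configuration can be checked for errors with crm_verify, a standard Pacemaker tool; -L validates the live CIB and -V makes the output verbose:

```
deb-fw1:~# crm_verify -L -V
```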

Monitoring
As we want to monitor the services running our installation, we'll briefly cover some of the available options. It's expected that you have a Nagios or op5 Monitor installation which allows for NRPE checks. We won't cover the installation or configuration of the monitoring software; please refer to each respective project for details.

NRPE
NRPE allows the monitoring server to trigger a binary to execute on the monitored server, responding with the current status of the system. In our case we want to make sure that all the daemons we're relying on are running properly. First we'll install NRPE.
deb-fw1:~# apt-get install nagios-nrpe-server nagios-nrpe-plugin -y

We'll need to open up NRPE to allow the monitoring host: modify allowed_hosts in /etc/nagios/nrpe.cfg and add the IP. Remember to modify the firewall script to allow port 5666/TCP. Further, we'll add some proc-checks to the configuration.
command[proc_heartbeat]=/opt/plugins/check_procs -w 4: -c 4:5 -C heartbeat
command[proc_conntrackd]=/opt/plugins/check_procs -w 1: -c 1:2 -C conntrackd
command[proc_crm]=/opt/plugins/check_procs -w 1: -c 1:2 -C crmd
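
The -w and -c arguments to check_procs are ranges: "4:" means at least four matching processes, "4:5" means between four and five, and the process count is OK while it stays inside the range. As an illustration of how such ranges evaluate (a sketch of the threshold format, not the plugin's actual code):

```shell
#!/bin/sh
# in_range COUNT RANGE: succeed if COUNT lies inside a "min:max" range,
# where either bound may be omitted ("4:" = 4 or more).
in_range() {
    count=$1
    min=${2%%:*}
    max=${2#*:}
    [ -n "$min" ] && [ "$count" -lt "$min" ] && return 1
    [ -n "$max" ] && [ "$count" -gt "$max" ] && return 1
    return 0
}

# Four heartbeat processes sits inside both "4:" (-w) and "4:5" (-c).
in_range 4 "4:5" && echo "4 heartbeat processes: OK"
```

So with -w 4: -c 4:5, fewer than four heartbeat processes raises a warning and anything outside four to five is critical.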

Once finished, restart NRPE and you should be able to monitor the services above.
