Tuesday, May 14, 2013

OpenSIPS/Kamailio High Availability Clustering - 2


High Availability Setup

This post has been due for a very long time, ever since I wrote about the general design of an HA SIP proxy in one of my old blog posts.

Now let's start working on this. Using this setup we can cluster two or more machines behind one single public IP, held on the WAN interface of one node at a time. To ensure service availability in case the primary (current master) server crashes, the Heartbeat resource is configured to monitor the service's status every 30 seconds. If the service is found stopped, Heartbeat first tries to restart it on the same server a couple of times and then migrates the whole group of services to the other node.
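This restart-then-migrate behaviour maps onto Pacemaker's migration-threshold setting. Once the cluster is configured later in this post, it can be set for all resources in one go; a minimal sketch, where the value 2 is just an illustration:

Linux-console:~# crm configure rsc_defaults migration-threshold=2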

This tutorial works equally well for OpenSIPS, Kamailio, or any other service.


Active/Passive design diagram

Pre-Requisites:

- At least two servers with WAN interfaces unconfigured but cabled up, so that once the public IP is assigned it is immediately reachable from the Internet.

- LAN interfaces on both servers should be on the same subnet and should have static private IPs configured (see the sketch after this list).

- The WAN and LAN interfaces on both servers should have the same names, i.e. eth0 = WAN, eth1 = LAN.

- There should be NO default route configured on these servers; the cluster will manage it.
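For the LAN addressing mentioned above, here is a minimal sketch of /etc/network/interfaces on a Debian-like system, assuming the addresses used later in this post. Note there is no gateway line, and the WAN interface is brought up without an address since the cluster will assign it:

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet static
    address 192.168.100.148
    netmask 255.255.255.0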

Installing Packages:

- Temporarily add a default gateway for the LAN interface on both machines so packages can be downloaded (see the command just after the install step below).

- Install OpenSIPS, Kamailio, or any other tools as required.
- Install heartbeat and sipsak 

Linux-console:~# apt-get install heartbeat sipsak
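For the temporary default gateway mentioned in the first step, a minimal sketch; the gateway address 192.168.100.1 is an assumption, substitute your own LAN gateway:

Linux-console:~# route add default gw 192.168.100.1 eth1

Remove it again with "route del default" once the packages are installed, since the cluster will manage the real default route later.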

sipsak can be used in the OpenSIPS LSB init.d script to send a SIP OPTIONS packet to the OpenSIPS port; if the server replies, the script reports the service as running. This is optional and I suggest readers try it on their own. I recommend looking into the sample Asterisk LSB script provided by Heartbeat, where sipsak is used to monitor Asterisk's SIP port and decide whether the service is up or not.
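As a rough sketch of what such a check could look like inside the script's status function (the monitor URI and port here are assumptions, adjust to your setup):

# return LSB status codes based on a SIP OPTIONS probe
if sipsak -s sip:monitor@127.0.0.1:5060 > /dev/null 2>&1; then
    exit 0   # service is running and answering SIP
else
    exit 3   # service is not running
fi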

Configuring Files for Heartbeat


NOTE: All the files we are going to edit here should be copied to the second server as well.

1- Edit the /etc/hosts file to add hostnames for the two servers:

192.168.100.148 SIP-SERVER_HA1
192.168.100.62 SIP-SERVER_HA2
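Note that Heartbeat identifies nodes by the output of uname -n, so these names must actually be set as the hostnames on the respective machines; for example on the first server:

Linux-console:~# hostname SIP-SERVER_HA1
Linux-console:~# echo "SIP-SERVER_HA1" > /etc/hostname

Run the matching commands with SIP-SERVER_HA2 on the second server.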
2- Edit the /etc/heartbeat/ha.cf file and insert the following.

# enable pacemaker, without stonith
crm             yes
# log where ?
logfacility     local0
# warning of soon be dead
warntime        10
# declare a host (the other node) dead after:
deadtime        20
# dead time on boot (could take some time until net is up)
initdead        120
# time between heartbeats
keepalive       2
# the nodes
node            SIP-SERVER_HA2
node            SIP-SERVER_HA1
# heartbeats, over dedicated replication interface!
ucast           eth1 192.168.100.148 # ignored by node1 (owner of ip)
ucast           eth1 192.168.100.62  # ignored by node2 (owner of ip)
# ping the switch to assure we are online
ping            192.168.100.100
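Before starting Heartbeat it is worth confirming that the nodes can reach each other over the LAN interface and that the ping node answers; addresses as configured above:

Linux-console:~# ping -c 3 192.168.100.62
Linux-console:~# ping -c 3 192.168.100.100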


3- Edit the /etc/heartbeat/authkeys file and insert the following:

auth 1
1 sha1 S3cr3tP@ssw0rd
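Instead of a hand-typed password you can generate a random shared key; one possible sketch:

Linux-console:~# dd if=/dev/urandom count=4 2>/dev/null | sha1sum | awk '{print $1}'

Paste the resulting string in place of S3cr3tP@ssw0rd, identically on both nodes.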

- Restrict permissions on the authkeys file:


Linux-console:~# chmod 0600 /etc/heartbeat/authkeys

That's it for file editing. Copy the files to the other server(s).

- Start the heartbeat service on both servers:
Linux-console:~# /etc/init.d/heartbeat start
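Heartbeat logs through syslog with facility local0 (as set in ha.cf); on a default Debian setup these messages normally land in /var/log/syslog, so startup can be followed with:

Linux-console:~# tail -f /var/log/syslog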


- Wait for at least 30 seconds and then check the status of the cluster by issuing the following command on both servers.

The cluster's online nodes are listed at the end of the output.
Linux-console:~# crm status
============
Last updated: Tue Jan 22 08:02:17 2013
Stack: Heartbeat
Current DC: SIP-SERVER_ha2 (8b5cf63e-4f77-448c-9a75-6a91d4a00cb7) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
0 Resources configured.
============
Node SIP-SERVER_ha1 (fe3e635f-0d4e-4d8c-99e1-195d1952ac53): UNCLEAN (offline)
Online: [ SIP-SERVER_ha2 SIP-SERVER_ha1]



Note that the very last line above lists the nodes that have joined this Heartbeat cluster.

Configuring Heartbeat

Go to one of the active nodes in the cluster and issue the following commands sequentially on that server's console. Once executed on one server, the configuration is automatically replicated to the other servers in the cluster, so there is no need to repeat these commands elsewhere.


Linux-console:~# crm configure property stonith-enabled=false
Linux-console:~# crm configure primitive FAILOVER-IP ocf:heartbeat:IPaddr2 params ip="11.22.33.44" nic="eth0" cidr_netmask="255.255.255.240" op monitor interval="10s"
Linux-console:~# crm configure primitive OSIPS lsb:opensips op monitor interval="30s"
Linux-console:~# crm configure primitive SETRoute ocf:heartbeat:Route params destination="default" device="eth0" gateway="11.22.33.1" op monitor interval="10s"
Linux-console:~# crm configure group PIP-OSIP-ROUTE FAILOVER-IP SETRoute OSIPS
Linux-console:~# crm configure colocation OSIPS-WITH-PIP-ROUTE inf: FAILOVER-IP SETRoute OSIPS
Linux-console:~# crm configure order IP-ROUTE-OSIPS inf: FAILOVER-IP SETRoute OSIPS



The very first line is important: it disables STONITH (Shoot The Other Node In The Head), which we are not using in this setup.

In the second line we configure a resource for the public IP that will be assigned to interface eth0 (the WAN interface) and name it FAILOVER-IP.

In the third line we configure a resource for the OpenSIPS LSB script (/etc/init.d/opensips start/stop/status) and name it OSIPS.

In the fourth line we configure a resource for the Linux default route used to reach the Internet and name it SETRoute.

In the fifth line we create a group of the above resources.
In the next line we bind them together (colocation) so that they always move as one whenever they are shifted from one machine to another.

In the very last line we arrange the start order: FAILOVER-IP is assigned to the interface first, then the SETRoute resource inserts the default route to reach the Internet, and finally the OSIPS resource starts OpenSIPS.
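To verify that the configuration was accepted and to watch the resources come up, the following can be run on either node:

Linux-console:~# crm configure show
Linux-console:~# crm_mon -1

A simple way to rehearse a failover is to put the active node into standby, watch the group move to the other node, and then bring it back online (node name as per our setup):

Linux-console:~# crm node standby SIP-SERVER_HA1
Linux-console:~# crm node online SIP-SERVER_HA1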


References and Useful Links: