Best Practices from Oracle Development's A‑Team

ISV Implementation Details - Part 4A - Linux Clustering with Pacemaker and Corosync

Tal Altman
Sr. Manager

This is the fourth blog (Part 4A) in a series of blogs about the ISV Validated Design.

This blog series covers the following topics:

  1. ISV Home Page
  2. ISV Architecture Validated Design 
    • Requirements, Design, Solution
    • Life of a packet
    • High Availability (HA) Concepts
  3. Core Implementation
  4. Failover Implementation – You can choose from the two options below for implementation
  5. Operations
    • How to add a customer to an existing POD
    • How to create a new POD
    • References, key files and commands




In this blog we will focus on the OCI instances and how to configure them as clustered nodes that can take over from one another quickly, so the ISV never loses connectivity to any of its customers. You can start with a simple setup of two virtual routers and scale up to more virtual routers if desired. This blog covers the initial setup and configuration of the Pacemaker and Corosync packages on Oracle Linux.


Design and Strategy


To enable the design below we are going to utilize two Linux clustering packages, Pacemaker and Corosync. These packages build a cluster with a Virtual IP (VIP) that is monitored across the cluster via a heartbeat mechanism. If a node stops responding for some reason, Pacemaker and Corosync make a call to the IPaddr2 library.

We will customize this library to include details of our deployment (such as the VNIC OCIDs and IP addresses), and it will utilize those details when it makes a call to the Oracle Command Line Interface (CLI). The CLI will do the heavy lifting of asking the OCI control plane to migrate the secondary IP address from one node to the other. So in our example below, if vRouter1 is the primary router, Pacemaker and Corosync will invoke the IPaddr2 library on vRouter2. vRouter2 will then ask the OCI CLI to unassign the floating IP addresses from vRouter1 and assign them to vRouter2.

By allowing the OCI CLI to migrate the IP addresses, we do NOT have to update any route table entries or configurations.

In our testing we left a ping running through the router, and it did not drop any packets during failover. In practice this should provide an HA solution that works for most virtual routing workloads.

Note #1: High Availability (HA) solutions depend on the use case. For this blog, we focus on building a virtual router cluster using utilities such as Pacemaker and Corosync. In our testing, the utilities failed over and failed back as expected, but we did not subject the virtual routers to heavy load. With any deployment you put into your environment, we highly recommend validating not only the functionality but also the scale and load of the solution. This solution only provides HA for the Linux virtual IP address; it does not provide an HA solution for files, data, or any other workloads on the Linux hosts.


Note #2: For other use cases and applications, such as Oracle databases, it's better to utilize HA and Disaster Recovery (DR) solutions that are native to the use case. For example, Oracle has a combination of hardware- and software-based technologies built into the platform to provide HA and DR. Oracle also has other solutions, such as Data Guard and GoldenGate, that are designed to keep data consistent between nodes.


HA Considerations

  • For high availability you will want more than one virtual router, in case of a configuration or OS issue.
  • You can utilize a secondary IP that can "float" between the virtual routers.
  • Utilizing HA software, the floating IP can be moved (via commands issued through the OCI CLI).
  • There should be a floating IP facing the ISV network, and one facing EACH POD.
  • The benefit is that no route table entries or routing configurations need to be changed.

Make sure that you pre-configure the floating IPs on each vRouter.

### Execute as root user
[opc@vrouter1 ~]$ sudo bash

[root@vrouter1 ~]# ip addr add dev ens3 label ens3:0
[root@vrouter1 ~]# ip addr add dev ens5 label ens5:0


Note #3: These commands may fail, or you may see errors that a duplicate IP address exists. That is OK and can be safely ignored.


Installing the OCI CLI 


Install the OCI Command Line Interface (CLI) to enable your virtual routers to talk with the Oracle Cloud Infrastructure control plane. These instructions are based on the OCI CLI installation documentation on Oracle.com.


Note #4: We assume that the virtual routers have access to the internet to download, install, and execute the OCI CLI. If your virtual routers don't have access to the internet, you can implement a NAT gateway or an Internet Gateway to provide it.


Run the OCI CLI setup command on each virtual router to confirm connectivity.

### Execute as root user
[opc@vrouter1 ~]$ sudo bash

### Download the OCI CLI (URL subject to change)
[root@vrouter1 opc]# bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
### Create a symbolic link for scripts
[root@vrouter1 opc]# sudo ln -s /home/opc/bin/oci /usr/bin/oci
### Run the setup command
[root@vrouter1 opc]# oci setup config
### Verify that it works
[root@vrouter1 opc]# oci iam compartment list --all

This should generate a ~/.oci/config file. A sample configuration file is shown below; the string values have a suffix of XYZ appended for privacy reasons.
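For reference, a generated ~/.oci/config file typically has the shape below. This is a sketch: the OCID values, fingerprint, key path, and region here are hypothetical placeholders; your file will contain the values you entered during `oci setup config`.

```ini
[DEFAULT]
user=ocid1.user.oc1..XYZ
fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
key_file=/root/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..XYZ
region=ca-toronto-1
```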


For more information on using the OCI CLI please visit: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm

Pacemaker Installation

Make sure to change the cluster password from "ChangeMe1234$" to something else. If the password complexity requirements are not met, you may see errors when running the commands below.

### Execute as root user
[opc@vrouter1 ~]$ sudo bash

[root@vrouter1 opc]# yum -y install pacemaker pcs resource-agents
[root@vrouter1 opc]# systemctl start pcsd.service
[root@vrouter1 opc]# systemctl enable pcsd.service
[root@vrouter1 opc]# echo 'ChangeMe1234$' | passwd --stdin hacluster
[root@vrouter1 opc]# firewall-cmd --permanent --add-service=high-availability
[root@vrouter1 opc]# firewall-cmd --reload

Make sure to back up /usr/lib/ocf/resource.d/heartbeat/IPaddr2

cp /usr/lib/ocf/resource.d/heartbeat/IPaddr2 /usr/lib/ocf/resource.d/heartbeat/IPaddr2.ORIG

Understanding the IPaddr2 customizations

In the IPaddr2 file we add two blocks of code. The first block has the following customizations:

  • For each router, we define variables holding each VNIC's OCID value, such as vrouter1vnic and vrouter1vnicpod1.
  • We define the secondary (floating) IP addresses for each network that will float from one router to the next.
  • We use the hostname -s command to determine which vRouter is running the IPaddr2 library.

In the second block of code we have an if/elif/else block that runs commands depending on which virtual router is executing the IPaddr2 library. The idea is that if vRouter1 dies, Pacemaker may ask vRouter2, or vRouter3, to run the IPaddr2 library. The commands it would execute are:

  • On the VNIC1 interface, assign the floating IP (secondary IP) to this host, and unassign it if it's applied elsewhere.
  • On the VNIC2 interface (connected to POD #1), do the same: assign the floating IP to this host, and unassign it if it's applied elsewhere.
  • Restart the network service in Linux to ensure the interfaces come up.
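The dispatch pattern in that second block can be sketched as a self-contained snippet, with echo standing in for the actual OCI CLI calls (the hardcoded hostname below is only for illustration; the real library derives it from hostname -s):

```shell
#!/bin/sh
# Sketch of the IPaddr2 dispatch pattern: every node runs the same script,
# but acts only on its own VNIC OCIDs. echo stands in for the real oci calls.
server="vrouter2"   # in the real library this is: server="`hostname -s`"

if [ "$server" = "vrouter1" ]; then
    action="assign floating IPs to vrouter1's VNICs"
elif [ "$server" = "vrouter2" ]; then
    action="assign floating IPs to vrouter2's VNICs"
else
    action="assign floating IPs to vrouter3's VNICs"
fi
echo "$action"
```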

There are two ways to update the IPaddr2 file. For your first modification of the file, I recommend option #1, as the commands need to be inserted at specific points in the file.

Update IPaddr2 configuration file with SED (option 1).

In the example below we utilize the sed command to insert a number of lines. Use this the first time you modify the IPaddr2 file to add the OCI-specific logic; for subsequent updates, a text editor is a more reliable approach. In the configuration below I define three virtual routers, and the commands to run on each router in the event of a failure. Please replace the OCIDs with values for your own tenancy and region; I included partial OCIDs for privacy reasons.

### Execute as root user
[opc@vrouter1 ~]$ sudo bash

# Static variables (in theory you can copy this code block, and put it into a text editor to customize)....
sed -i '64i\##### OCI vNIC variables\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '65i\server="`hostname -s`"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# these are the VNIC OCIDs on each virtual router
# vRouter1 VNICS
sed -i '66i\vrouter1vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ1"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '67i\vrouter1vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ2"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# vRouter2 VNICS
sed -i '68i\vrouter2vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ3"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '69i\vrouter2vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ4"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# vRouter3 VNICS
sed -i '70i\vrouter3vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ5"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '71i\vrouter3vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ6"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# Secondary IP addresses for each Network
sed -i '72i\vnicip=""\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '73i\vnicippod1=""\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
 # These are the commands to execute if pacemaker/corosync triggers
sed -i '616i\##### OCI/IPaddr Integration\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '617i\        if [ $server = "vrouter1" ]; then\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '618i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnic  --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '619i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod1  --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '620i\                /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '621i\        elif [ $server = "vrouter2" ]; then\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '622i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnic  --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '623i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod1  --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '624i\                /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '625i\        else \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '626i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnic  --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '627i\                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnicpod1  --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '628i\                /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '629i\        fi \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
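If you want to sanity-check sed's `Ni\` line-insertion syntax before touching the real file, here is a minimal self-contained demonstration of the same technique on a throwaway file (the /tmp path is arbitrary):

```shell
# Demonstrate the 'Ni\' address-insert used above on a scratch file:
# insert a marker line before line 3, then confirm where it landed.
printf 'line1\nline2\nline3\nline4\n' > /tmp/ipaddr2_demo
sed -i '3i\##### OCI vNIC variables' /tmp/ipaddr2_demo
grep -n '^#####' /tmp/ipaddr2_demo
# -> 3:##### OCI vNIC variables
```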

Update IPaddr2 configuration file with VI (option 2).

  • If you want to modify the file in vi, jump to the blocks of text that start with "OCI".
  • Use the /OCI search command in vi to find each block.
  • There are two sections of the file with OCI-specific commands:
    • The first section contains the variables we define.
    • The second section contains the actual commands to execute.

##### OCI vNIC variables
server="`hostname -s`"
vrouter1vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ1"
vrouter1vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ2"
vrouter2vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ3"
vrouter2vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ4"
vrouter3vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ5"
vrouter3vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ6"
vnicip=""
vnicippod1=""
##### OCI/IPaddr Integration
        if [ $server = "vrouter1" ]; then
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnic  --ip-address $vnicip
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod1  --ip-address $vnicippod1
                /bin/systemctl restart network
        elif [ $server = "vrouter2" ]; then
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnic  --ip-address $vnicip
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod1  --ip-address $vnicippod1
                /bin/systemctl restart network
        else
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnic  --ip-address $vnicip
                /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnicpod1  --ip-address $vnicippod1
                /bin/systemctl restart network
        fi

Configure the cluster

Run the following on a single vRouter. Do NOT repeat it on the other routers. The heartbeat will utilize the floating IP address on the vRouter subnet; that subnet must allow the UDP and TCP ports used by Pacemaker and Corosync so the clustered vRouter nodes can communicate.

### Execute as root user
[opc@vrouter1 ~]$ sudo bash

### Start to configure the cluster
[root@vrouter1 opc]# pcs cluster auth vrouter1 vrouter2 vrouter3 -u hacluster -p ChangeMe1234$ --force
[root@vrouter1 opc]# pcs cluster setup --force --name virtualrouter vrouter1 vrouter2 vrouter3
[root@vrouter1 opc]# pcs cluster start --all
[root@vrouter1 opc]# pcs property set stonith-enabled=false
[root@vrouter1 opc]# pcs property set no-quorum-policy=ignore
[root@vrouter1 opc]# pcs resource defaults migration-threshold=1
[root@vrouter1 opc]# pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip= cidr_netmask=28 op monitor interval=20s
### Configure the host to enable pacemaker and corosync
[root@vrouter1 opc]# systemctl enable pacemaker
[root@vrouter1 opc]# systemctl enable corosync

Upon successful setup you should have the following configuration files on each node:

  • /etc/corosync/corosync.conf
  • /etc/pacemaker/authkey
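For reference, the generated /etc/corosync/corosync.conf typically looks something like the sketch below. This is a hedged example, not the exact file: the precise contents depend on your pcs and corosync versions, but the cluster name and node list should match what you passed to pcs cluster setup.

```
totem {
    version: 2
    cluster_name: virtualrouter
    transport: udpu
}

nodelist {
    node {
        ring0_addr: vrouter1
        nodeid: 1
    }
    node {
        ring0_addr: vrouter2
        nodeid: 2
    }
    node {
        ring0_addr: vrouter3
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
```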

Validating the Pacemaker Cluster status

Now that Pacemaker has been configured, you can check the status of the Pacemaker service by executing the "pcs status" command as shown below:

[root@vrouter1 log]# pcs status
Cluster name: virtualrouter
Stack: corosync
Current DC: vrouter3 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Wed Oct 16 17:34:38 2019
Last change: Wed Oct 16 17:32:19 2019 by hacluster via crmd on vrouter3
3 nodes configured
1 resource configured
Online: [ vrouter1 vrouter2 vrouter3 ]
Full list of resources:
 Cluster_VIP    (ocf::heartbeat:IPaddr2):   Started vrouter1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@vrouter1 log]#


Closing Thoughts

At this point in the blog series you should have a few Linux-based virtual routers configured for a high availability setup. Pacemaker and Corosync are complex tools and can be challenging to operate. In a future blog we will cover an alternative utilizing keepalived.
