This is the fourth blog (Part 4A) in a series of blogs regarding the ISV Validated Design.
This blog series will cover the following topics:
In this blog we will focus on the OCI instances and how to configure them as clustered nodes that can take over from one another quickly, so the ISV never loses connectivity to any of its customers. You can start with a simple setup of two virtual routers and scale up to more virtual routers if desired. This blog focuses on the initial setup and configuration of the Pacemaker and Corosync packages on Oracle Linux.
To enable the design below we are going to use two Linux clustering packages, Pacemaker and Corosync. Together these packages build a cluster with a Virtual IP (VIP) that is monitored across the cluster with a heartbeat mechanism. If a node stops responding for some reason, Pacemaker and Corosync make a call to the IPaddr2 library.
We will customize this library to include details of our deployment (such as the VNIC OCIDs and IP addresses), and it will use those details when it makes a call to the Oracle Command Line Interface (CLI). The CLI does the heavy lifting of asking the OCI control plane to migrate the secondary IP address from one node to the other. So in our example below, if vRouter1 is the primary router and fails, Pacemaker and Corosync invoke the IPaddr2 library on vRouter2. vRouter2 then asks the OCI CLI to unassign the floating IP addresses (172.20.136.140 and 1.1.1.10) from vRouter1 and assign them to vRouter2.
By allowing the OCI CLI to migrate the IP addresses, we do NOT have to update any route table entries or configurations.
In our testing we left a ping test running through the router during failover, and it did not drop any packets. In practice this should provide an HA solution that works for most virtual routing workloads.
Make sure that you pre-configure the floating IPs on each vRouter.
```
### Execute as root user
[opc@vrouter1 ~]$ sudo bash
[root@vrouter1 ~]# ip addr add 172.20.136.140/28 dev ens3 label ens3:0
[root@vrouter1 ~]# ip addr add 1.1.1.10/28 dev ens5 label ens5:0
```
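Note that `ip addr add` changes do not survive a reboot. One way to persist them (a sketch only, assuming Oracle Linux 7 with the traditional network-scripts; the device names and file below are illustrative and must match your instance) is an alias interface file:

```
# Hypothetical example: /etc/sysconfig/network-scripts/ifcfg-ens3:0
DEVICE=ens3:0
IPADDR=172.20.136.140
PREFIX=28
ONBOOT=yes
```

A matching file would be needed for the ens5:0 label; verify the convention against your Oracle Linux release before relying on it.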
Install the OCI Command Line Interface (CLI) to enable your virtual routers to talk with the Oracle Cloud Infrastructure control plane. These instructions are based on this document on Oracle.com.
Run the OCI CLI setup command on each virtual router to confirm connectivity.
```
### Execute as root user
[opc@vrouter1 ~]$ sudo bash

### Download the OCI CLI (URL subject to change)
[root@vrouter1 opc]# bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

### Create a symbolic link for scripts
[root@vrouter1 opc]# ln -s /home/opc/bin/oci /usr/bin/oci

### Run the setup command
[root@vrouter1 opc]# oci setup config

### Verify that it works
[root@vrouter1 opc]# oci iam compartment list --all
```
This should generate a ~/.oci/config file. A sample configuration file is shown below; the string values have an XYZ suffix appended for privacy reasons.
```
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaxyzwhqyu6sf3erfl7565uie73XYZ
fingerprint=31:20:88:fc:29:4a:f8:9d:b0:b5:50:7c:30:XYZ
key_file=/home/opc/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaamy3a46ljb5gdtruftfgXYZ
region=ca-toronto-1
```
For more information on using the OCI CLI please visit: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm
Make sure to change the cluster password from "ChangeMe1234$" to something else. If the password complexity requirements are not met, you may see errors when you run the commands below.
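A note on the `$` in the sample password: a trailing `$` followed by whitespace happens to be treated literally by the shell, but single-quoting the password is safer in case your replacement contains `$`-sequences the shell would expand. A quick illustration:

```shell
# Single quotes keep the $ literal no matter what follows it
pass='ChangeMe1234$'
printf '%s\n' "$pass"    # prints: ChangeMe1234$
```

The same quoting applies to the `passwd --stdin` and `pcs cluster auth` commands later in this post.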
```
### Execute as root user
[opc@vrouter1 ~]$ sudo bash
[root@vrouter1 opc]# yum -y install pacemaker pcs resource-agents
[root@vrouter1 opc]# systemctl start pcsd.service
[root@vrouter1 opc]# systemctl enable pcsd.service
[root@vrouter1 opc]# echo 'ChangeMe1234$' | passwd --stdin hacluster
[root@vrouter1 opc]# firewall-cmd --permanent --add-service=high-availability
[root@vrouter1 opc]# firewall-cmd --reload
```
Make sure to back up /usr/lib/ocf/resource.d/heartbeat/IPaddr2 before editing it:

```
cp /usr/lib/ocf/resource.d/heartbeat/IPaddr2 /usr/lib/ocf/resource.d/heartbeat/IPaddr2.ORIG
```
In the IPaddr2 file we add two blocks of code. The first block has the following customizations:
In the second block of code we have an if/elif/else block that runs commands depending on which virtual router is executing the IPaddr2 library. The idea is that if vRouter1 dies, Pacemaker may ask vRouter2 or vRouter3 to run the IPaddr2 library, and each node must claim the floating IPs on its own VNICs.
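The branch selection can be sketched in isolation. This is not the IPaddr2 code itself, just an illustration of how the `server` variable picks the right VNIC OCID, using the placeholder OCIDs from this blog:

```shell
# Illustration only: in IPaddr2 this comes from `hostname -s`;
# we hard-code it here so the sketch is self-contained
server="vrouter2"

# Pick the VNIC OCID that should receive the floating IP,
# based on which node is running this code
if [ "$server" = "vrouter1" ]; then
    vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ1"
elif [ "$server" = "vrouter2" ]; then
    vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ3"
else
    vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ5"
fi

echo "$vnic"    # prints the vRouter2 VNIC OCID in this sketch
```

In the real file, the chosen OCID is passed to the OCI CLI as shown in the blocks below.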
There are two ways to update the IPaddr2 file: scripted sed edits, or a text editor. For the first modification I recommend the sed approach, as the commands need to be inserted at specific line numbers in the file.
In the example below we use the sed command to insert the OCI-specific logic into the file at the correct line numbers. This works well the first time you modify the IPaddr2 file; for subsequent updates, a text editor is a more reliable approach. In the configuration below I define three virtual routers and the commands to run on each router in the event of a failure. Please replace and update the OCIDs for the region you need. I included partial OCIDs for privacy reasons.
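If sed's `Ni\` insert syntax is unfamiliar, here is a minimal, self-contained demonstration on a scratch file (GNU sed assumed; no IPaddr2 involved):

```shell
# Create a throwaway three-line file
tmp=$(mktemp)
printf 'line1\nline2\nline3\n' > "$tmp"

# "2i\" inserts the given text BEFORE line 2, exactly as the
# IPaddr2 edits below insert at lines 64-73 and 616-629
sed -i '2i\inserted text' "$tmp"

result=$(cat "$tmp")
printf '%s\n' "$result"    # line1 / inserted text / line2 / line3
rm -f "$tmp"
```

Because each insert shifts the lines below it, the real commands below are written to run in order; running them twice would duplicate the logic.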
```
### Execute as root user
[opc@vrouter1 ~]$ sudo bash

# Static variables (in theory you can copy this code block into a text editor to customize)
sed -i '64i\##### OCI vNIC variables\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '65i\server="`hostname -s`"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2

# These are the VNIC OCIDs on each virtual router
# vRouter1 VNICs
sed -i '66i\vrouter1vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ1"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '67i\vrouter1vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ2"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# vRouter2 VNICs
sed -i '68i\vrouter2vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ3"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '69i\vrouter2vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ4"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
# vRouter3 VNICs
sed -i '70i\vrouter3vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ5"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '71i\vrouter3vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ6"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2

# Secondary IP addresses for each network
sed -i '72i\vnicip="172.20.136.140"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '73i\vnicippod1="1.1.1.10"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2

# These are the commands to execute if pacemaker/corosync triggers a failover
sed -i '616i\##### OCI/IPaddr Integration\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '617i\ if [ $server = "vrouter1" ]; then\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '618i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnic --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '619i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod1 --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '620i\ /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '621i\ elif [ $server = "vrouter2" ]; then\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '622i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnic --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '623i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod1 --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '624i\ /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '625i\ else \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '626i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnic --ip-address $vnicip \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '627i\ /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnicpod1 --ip-address $vnicippod1 \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '628i\ /bin/systemctl restart network \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sed -i '629i\ fi \' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
```
Once edited, the inserted code in IPaddr2 should look like this:

```
##### OCI vNIC variables
server="`hostname -s`"
vrouter1vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ1"
vrouter1vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ2"
vrouter2vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ3"
vrouter2vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ4"
vrouter3vnic="ocid1.vnic.oc1.ca-toronto-1.XYZ5"
vrouter3vnicpod1="ocid1.vnic.oc1.ca-toronto-1.XYZ6"
vnicip="172.20.136.140"
vnicippod1="1.1.1.10"
```

and, further down:

```
##### OCI/IPaddr Integration
if [ $server = "vrouter1" ]; then
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnic --ip-address $vnicip
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod1 --ip-address $vnicippod1
    /bin/systemctl restart network
elif [ $server = "vrouter2" ]; then
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnic --ip-address $vnicip
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod1 --ip-address $vnicippod1
    /bin/systemctl restart network
else
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnic --ip-address $vnicip
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter3vnicpod1 --ip-address $vnicippod1
    /bin/systemctl restart network
fi
```
Run the following on a single vRouter only. Do NOT repeat it on the other routers. The heartbeat will use the floating IP address on the vRouter subnet (172.20.136.140). That subnet has the UDP and TCP ports used by Pacemaker and Corosync open so the clustered vRouter nodes can communicate.
```
### Execute as root user
[opc@vrouter1 ~]$ sudo bash

### Start to configure the cluster
[root@vrouter1 opc]# pcs cluster auth vrouter1 vrouter2 vrouter3 -u hacluster -p 'ChangeMe1234$' --force
[root@vrouter1 opc]# pcs cluster setup --force --name virtualrouter vrouter1 vrouter2 vrouter3
[root@vrouter1 opc]# pcs cluster start --all
[root@vrouter1 opc]# pcs property set stonith-enabled=false
[root@vrouter1 opc]# pcs property set no-quorum-policy=ignore
[root@vrouter1 opc]# pcs resource defaults migration-threshold=1
[root@vrouter1 opc]# pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=172.20.136.140 cidr_netmask=28 op monitor interval=20s

### Configure the host to enable pacemaker and corosync
[root@vrouter1 opc]# systemctl enable pacemaker
[root@vrouter1 opc]# systemctl enable corosync
```
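Once the cluster is up, you can rehearse a failover without rebooting anything by putting the active node into standby and watching the VIP move. A sketch, assuming the pcs 0.9.x syntax shipped with Oracle Linux 7 (verify the subcommands against your pcs version):

```
### Execute as root user on any cluster node
[root@vrouter1 opc]# pcs cluster standby vrouter1

### Watch Cluster_VIP restart on another node
[root@vrouter1 opc]# pcs status

### Bring the node back into the cluster
[root@vrouter1 opc]# pcs cluster unstandby vrouter1
```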
Upon successful setup you should have the following configuration files on each node:
Now that Pacemaker has been configured, you can check the status of the Pacemaker service by executing the "pcs status" command as shown below:
```
[root@vrouter1 log]# pcs status
Cluster name: virtualrouter
Stack: corosync
Current DC: vrouter3 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Wed Oct 16 17:34:38 2019
Last change: Wed Oct 16 17:32:19 2019 by hacluster via crmd on vrouter3

3 nodes configured
1 resource configured

Online: [ vrouter1 vrouter2 vrouter3 ]

Full list of resources:

 Cluster_VIP	(ocf::heartbeat:IPaddr2):	Started vrouter1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@vrouter1 log]#
```
At this point in the blog series you should have a few Linux-based virtual routers, now configured for a high availability setup. Pacemaker and Corosync are complex tools and can be challenging to operate. In a future blog we will cover an alternative using keepalived.