This is the fifth blog in our series on the ISV Validated Design.
This blog series covers the following topics:
This post focuses on operating the design itself, including operational references to key files and commands as well as troubleshooting suggestions. ISVs will need to address a few different use cases in this design. The use cases include:
As you might remember, a POD design contains a POD VCN with up to 20 Customer VCNs attached in a hub-and-spoke model via local peering gateways (LPGs). By default, a single VCN can have up to 10 LPG peerings; with a service limit increase we can raise that limit to 20. Operationally, once you start to approach the upper limit, you should plan, test, and implement a new POD into your existing set of vRouters. Some customers may also decide to pre-provision a number of PODs in anticipation of future customer growth.
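The 20-customer-per-POD ceiling makes capacity planning simple arithmetic. A back-of-the-envelope sketch (the projected customer count below is a hypothetical value):

```shell
# How many PODs does a projected customer count need, at the raised
# limit of 20 Customer VCNs per POD?
customers=75                          # hypothetical projection
pods=$(( (customers + 19) / 20 ))     # ceiling division
echo "$customers customers -> $pods PODs"
```

Pre-provisioning one POD beyond this number gives you headroom before the next service limit request.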
Existing connectivity. In this use case, the following connectivity is already established.
To add a new customer to this design, there are a few basic steps to take. Any time you create a new Customer VCN (in this example named "New Customer 2"), you will need to create a peering relationship by attaching Local Peering Gateways (LPGs) between the ISV-POD and the new customer. After your peering relationship is established, here are the steps to enable end-to-end routing between the Customer network and the ISV Management network:
Traffic flow and changes to the network. Make sure to re-evaluate your security lists as well to ensure that the new customer networks are able to reach the management servers.
Update the Linux host with static routes to the customer. The next hop should be the OCI default gateway for that segment; the OCI route tables will push the traffic to the correct LPG.
```
[opc@vrouter1 ~]$ sudo ip route add 172.20.138.0/24 via 1.1.1.1 dev ens5
[opc@vrouter1 ~]$ sudo vi /etc/sysconfig/network-scripts/route-ens5
172.20.138.0/24 via 1.1.1.1 dev ens5
```
Steps 4 and 5 above could leverage a "summary" route if the customer networks in a given POD are non-overlapping and contiguous. For example, in POD1 perhaps the following 16 customer VCN CIDRs are implemented as follows:
So on the route tables instead of having 16+ route table entries it could be summarized into one, such as:
So the routing table now would be simple:
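For instance (a sketch reusing the 1.1.1.1 next hop from the earlier static-route example, and assuming POD1's customer VCNs all fall inside 10.1.0.0/20), the persisted route file shrinks to a single entry:

```
# /etc/sysconfig/network-scripts/route-ens5 -- one summary route instead of 16
10.1.0.0/20 via 1.1.1.1 dev ens5
```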
IP Address Management (IPAM) is a large topic, and for various reasons requires some advanced planning with customers to ensure they have a solid IPAM strategy.
Continuing with our example, if you use route summarization, you could use a /20 to summarize 16 customer networks at a time.
POD | Summary Network | Netmask | Range of addresses |
---|---|---|---|
1 | 10.1.0.0/20 | 255.255.240.0 | 10.1.0.0 - 10.1.15.255 |
2 | 10.1.16.0/20 | 255.255.240.0 | 10.1.16.0 - 10.1.31.255 |
3 | 10.1.32.0/20 | 255.255.240.0 | 10.1.32.0 - 10.1.47.255 |
4 | 10.1.48.0/20 | 255.255.240.0 | 10.1.48.0 - 10.1.63.255 |
5 | 10.1.64.0/20 | 255.255.240.0 | 10.1.64.0 - 10.1.79.255 |
6 | 10.1.80.0/20 | 255.255.240.0 | 10.1.80.0 - 10.1.95.255 |
7 | 10.1.96.0/20 | 255.255.240.0 | 10.1.96.0 - 10.1.111.255 |
8 | 10.1.112.0/20 | 255.255.240.0 | 10.1.112.0 - 10.1.127.255 |
9 | 10.1.128.0/20 | 255.255.240.0 | 10.1.128.0 - 10.1.143.255 |
10 | 10.1.144.0/20 | 255.255.240.0 | 10.1.144.0 - 10.1.159.255 |
11 | 10.1.160.0/20 | 255.255.240.0 | 10.1.160.0 - 10.1.175.255 |
12 | 10.1.176.0/20 | 255.255.240.0 | 10.1.176.0 - 10.1.191.255 |
13* | 10.1.192.0/20 | 255.255.240.0 | 10.1.192.0 - 10.1.207.255 |
If you need help summarizing your networks check out the Visual Subnet Calculator: http://www.davidc.net/sites/default/subnets/subnets.html
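The /20 blocks in the table above follow a simple pattern: each POD's summary network starts 16 third-octet values after the previous one. A couple of lines of shell arithmetic reproduce it:

```shell
# Derive the /20 summary network for a given POD number within 10.1.0.0/16,
# where each POD owns a block of 16 consecutive /24s.
pod=3
third_octet=$(( (pod - 1) * 16 ))
echo "POD $pod summary: 10.1.$third_octet.0/20"   # POD 3 -> 10.1.32.0/20
```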
In use case #2 we are transitioning from the current network topology:
Our goal is to implement a new topology such as the following:
SSH into each vRouter and update the network configuration.
```
wget https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/secondary_vnic_all_configure.sh
chmod a+x secondary_vnic_all_configure.sh
./secondary_vnic_all_configure.sh
```
So if the previous command shows you a new IFACE (such as ens6), you will use that going forward as the new interface to the new POD.
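If you prefer to script the check, the sketch below lists interfaces that do not yet have an ifcfg-* file (assuming the RHEL/Oracle Linux network-scripts layout used throughout this design); a newly attached VNIC such as ens6 shows up as unconfigured:

```shell
# List interfaces with no ifcfg-* file yet -- likely newly attached VNICs.
unconfigured_ifaces() {
  local dir="${1:-/etc/sysconfig/network-scripts}"
  local path name
  for path in /sys/class/net/*; do
    name=$(basename "$path")
    [ "$name" = "lo" ] && continue            # skip loopback
    [ -e "$dir/ifcfg-$name" ] || echo "$name"
  done
}

unconfigured_ifaces
```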
ip link set ens6 mtu 9000
```
ip addr add 1.1.1.8/28 dev ens6 label ens6
ip addr add 1.1.1.10/28 dev ens6 label ens6:0
```
vi /etc/sysconfig/network-scripts/ifcfg-ens6

```
DEVICE="ens6"
BOOTPROTO=static
IPADDR=1.1.1.8
NETMASK=255.255.255.240
ONBOOT=yes
MTU=9000
```
systemctl restart network
In order to update the Pacemaker configuration, you'll have to stop the Corosync and Pacemaker services on each box. If you edit the configuration while the services are running, the cluster can act unpredictably.
```
systemctl stop pcsd.service
systemctl stop pacemaker
systemctl stop corosync
```
cp /usr/lib/ocf/resource.d/heartbeat/IPaddr2 /usr/lib/ocf/resource.d/heartbeat/IPaddr2.ORIG
In section 1 - ##### OCI vNIC variables, add 3 new variables:
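For instance, the three additions might look like this (the OCIDs are the placeholder values from the full example further down; substitute your own):

```shell
##### OCI vNIC variables -- additions for the new POD
vrouter1vnicpod2="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrizfu73egoxxvvixjXYZ3"
vrouter2vnicpod2="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrforjbxlo2kuxopzjXYZ6"
vnicippod2="2.2.2.10"
```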
In section 2 - ##### OCI/IPaddr Integration, add a new command on each vRouter to move the VIP for the new POD. Make sure to enter the command before the network restart commands.
/root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod2 --ip-address $vnicippod2
For vrouter2 -
/root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod2 --ip-address $vnicippod2
Full Example
```
##### OCI vNIC variables
server="`hostname -s`"
vrouter1vnic="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrzowh2sa6ucqq2wjmawi6XYZ1"
vrouter1vnicpod1="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrlujfgyav4frscb7uXYZ2"
vrouter1vnicpod2="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrizfu73egoxxvvixjXYZ3"
vrouter2vnic="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrfqopd3j6qdm3xlqhtsghXYZ4"
vrouter2vnicpod1="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljr7lkic77hy4geg65cXYZ5"
vrouter2vnicpod2="ocid1.vnic.oc1.ca-toronto-1.ab2g6ljrforjbxlo2kuxopzjXYZ6"
vnicip="172.20.136.140"
vnicippod1="1.1.1.10"
vnicippod2="2.2.2.10"

##### OCI/IPaddr Integration
if [ $server = "vrouter1" ]; then
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnic --ip-address $vnicip
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod1 --ip-address $vnicippod1
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter1vnicpod2 --ip-address $vnicippod2
    /bin/systemctl restart network
else
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnic --ip-address $vnicip
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod1 --ip-address $vnicippod1
    /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $vrouter2vnicpod2 --ip-address $vnicippod2
    /bin/systemctl restart network
fi
```
```
systemctl start pcsd.service
systemctl start pacemaker
systemctl start corosync
```
Verify PCS cluster status (pcs status command)
Test failover by stopping vRouter1 and verifying that the secondary IP addresses move to the new active router. So if vRouter1 is active, force it to stop and confirm that the VIP moved to vRouter2.
pcs cluster stop vrouter1
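After the failover, a quick sketch like this confirms where the VIPs landed (the interface names and VIP addresses are the ones used in this post's examples; adjust them to your environment):

```shell
# Check whether the expected POD VIPs are present on this vRouter.
has_ip() {
  # returns 0 if interface $1 currently carries address $2
  ip -o addr show dev "$1" 2>/dev/null | grep -qw "$2"
}

for pair in "ens5 1.1.1.10" "ens6 2.2.2.10"; do
  set -- $pair
  if has_ip "$1" "$2"; then
    echo "VIP $2 is active on $1"
  else
    echo "VIP $2 is NOT on $1"
  fi
done
```

Run it on both vRouters; the VIPs should appear only on the active node.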
# tcpdump -i ens5 host
cat /var/log/cluster/corosync.log
cat /var/log/pacemaker.log
```
systemctl stop pcsd.service
systemctl stop pacemaker
systemctl stop corosync
mv /etc/corosync/corosync.conf /etc/corosync/corosync.bad
mv /etc/pacemaker/authkey /etc/pacemaker/authkey.bad
systemctl start pcsd.service
```
```
/etc/sysctl.d/98-ip-forward.conf
/etc/sysctl.d/97-reverse-path-forwarding.conf
secondary_vnic_all_configure.sh
/etc/sysconfig/network-scripts/ (ifcfg-ens3, ifcfg-ens3:0, route-ens3, ifcfg-ens5, ifcfg-ens5:0, route-ens5)
~/.oci/config
/usr/lib/ocf/resource.d/heartbeat/IPaddr2
/usr/lib/ocf/resource.d/heartbeat/IPaddr2.ORIG
/etc/corosync/corosync.conf
/etc/pacemaker/authkey
/var/log/cluster/corosync.log
/var/log/pacemaker.log
/var/log/pcsd/pcsd.log
```
```
systemctl stop pcsd.service
systemctl stop pacemaker
systemctl stop corosync
pcs status
pcs cluster stop vrouter1
pcs cluster start vrouter1
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
systemctl stop firewalld
systemctl disable firewalld
systemctl restart network
oci setup config
ip route add
ip addr show
ip link show
ifconfig
tcpdump -i ens5 icmp
tcpdump -i ens5 host 1.1.1.1
```