Today the Internet can't function without self-adapting networks. Routing decisions are now made by protocols, and everything changes dynamically. Remember the days when everything was static? Well, I do: hundreds of static routes configured on each server and again on each router. Adding anything to the network was a nightmare.
Now think about the same thing in the IPsec context: I know a few customers who are not willing to route 0.0.0.0/0 into a VPN tunnel, and in that scenario maintaining static route entries for hundreds of subnets is a real challenge.
To make this self-adapting, a routing protocol is used, and among the routing protocols one stands out: BGP. It is very flexible and it runs over unicast, which makes it the protocol of choice in the cloud, where providers do not support multicast or broadcast.
In this article I will focus on getting BGP working on a Linux VM that uses Libreswan to connect to the DRG in OCI. Provisioning the VM and configuring Libreswan are out of scope here. For more information on how to build a VPN connection between Libreswan and OCI, follow the official documentation:
https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/libreswan.htm
A prerequisite for going further is a working IPsec tunnel.
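To confirm the tunnel is actually up before proceeding, Libreswan can report the state of its connections. A quick check (assuming the tunnel names used later in this article):

```shell
# List the state of the IPsec connections; an established tunnel
# shows "IPsec SA established"
ipsec status | grep oracle-tunnel

# Per-tunnel traffic counters; non-zero counters mean traffic flows
ipsec whack --trafficstatus
```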
The topology that I will create is depicted in the following picture:
Install Quagga. I used a VM running OEL7:
yum install quagga
Activate IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
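Note that this echo only lasts until the next reboot. To make forwarding persistent, a sysctl drop-in file can be used (a sketch; the file name is my own choice):

```shell
# Persist IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf
sysctl -p /etc/sysctl.d/99-ipforward.conf   # apply it now

# Verify the running value
sysctl net.ipv4.ip_forward
```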
Modify the Libreswan config to add an IP address to each VTI interface:
leftvti=10.10.10.1/30   # tunnel 1
leftvti=10.10.10.5/30   # tunnel 2
The config for the tunnels looks like this:
config setup
    plutoopts="--perpeerlog"
    protostack=auto

conn oracle-tunnel-1
    keyexchange=ike
    pfs=yes
    ikev2=no
    ike=aes256-sha2_384;modp1536
    phase2alg=aes256-sha1;modp1536
    right=x.x.x.x           # IP address of the DRG tunnel
    left=192.168.12.4
    leftid=192.168.12.4
    authby=secret
    rightsubnet=0.0.0.0/0
    leftsubnet=0.0.0.0/0
    ikelifetime=28800
    salifetime=3600
    auto=start
    mark=5/0xffffff         # Needs to be unique across all tunnels
    vti-interface=vti1
    vti-routing=no
    leftvti=10.10.10.1/30
    encapsulation=yes

conn oracle-tunnel-2
    keyexchange=ike
    pfs=yes
    ikev2=no
    ike=aes256-sha2_384;modp1536
    phase2alg=aes256-sha1;modp1536
    right=y.y.y.y           # IP address of the DRG tunnel
    left=192.168.12.4
    leftid=192.168.12.4
    authby=secret
    leftsubnet=0.0.0.0/0
    rightsubnet=0.0.0.0/0
    ikelifetime=28800
    salifetime=3600
    auto=start
    mark=6/0xffffff         # Needs to be unique across all tunnels
    vti-interface=vti2
    vti-routing=no
    leftvti=10.10.10.5/30
    encapsulation=no
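After IPsec is restarted with this config, the VTI interfaces should come up with the leftvti addresses assigned. A quick way to check (standard iproute2 commands; the exact output will vary):

```shell
# Each VTI should carry its point-to-point address from leftvti
ip addr show vti1
ip addr show vti2

# The per-tunnel marks configured above appear in the XFRM policies
ip xfrm policy | grep mark
```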
Edit the zebra configuration:
zebra.conf

!
! Zebra configuration saved from vty
!   2019/07/30 13:18:58
!
hostname caandrei-vpn2-fra
password zebra
enable password zebra
log file /var/log/quagga/quagga.log
!
interface ens3
 ipv6 nd suppress-ra
!
interface ens5
 ipv6 nd suppress-ra
!
interface ip_vti0
 ipv6 nd suppress-ra
!
interface lo
!
interface vti1
 ip address 10.10.10.1/30
 ipv6 nd suppress-ra
!
interface vti2
 ip address 10.10.10.5/30
 ipv6 nd suppress-ra
!
ip route 192.168.12.0/24 vti1
ip route 192.168.12.0/24 vti2
!
ip forwarding
!
!
line vty
!
Modify the bgpd config:
[root@caandrei-vpn2-fra quagga]# cat bgpd.conf
hostname caandrei-vpn2-fra
password zebra
enable password zebra1
router bgp 64555
 bgp router-id 10.10.10.1
 network 10.10.10.0/30
 network 10.10.10.4/30
 network 192.168.12.0/24
 neighbor 10.10.10.2 remote-as 31898
 neighbor 10.10.10.2 ebgp-multihop 255
 neighbor 10.10.10.2 next-hop-self
 neighbor 10.10.10.6 remote-as 31898
 neighbor 10.10.10.6 ebgp-multihop 255
 neighbor 10.10.10.6 next-hop-self
log file bgpd.log
log stdout
Start and enable the zebra and bgpd services:
systemctl start zebra
systemctl enable zebra
systemctl start bgpd
systemctl enable bgpd
Restart the ipsec service:
service ipsec restart
Navigate to the OCI web console, open the page of the IPSec connection you created in the beginning, and edit the tunnels: change the routing type from static routing to BGP and add the ASN and the IP addresses:
After a while it will look like this:
From the Libreswan VM, ping the other side of the tunnel:
Enter vtysh and look at the BGP summary:
It can be observed that the BGP adjacency is up, and below you can see the routes:
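For reference, the vtysh output above comes from the standard Quagga show commands (assuming the daemons configured earlier are running):

```shell
vtysh

# Inside vtysh:
#   show ip bgp summary   -> neighbor state; "Established" means the adjacency is up
#   show ip bgp           -> prefixes learned from and advertised to the DRG
#   show ip route         -> routes installed by zebra; "B" marks BGP-learned routes
```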
To test end-to-end connectivity we need a test VM on the OCI side. At the subnet level there will be a routing entry that points the on-premises subnets to the DRG.
On premises, there should be a routing rule that forwards traffic for the OCI subnets to the internal network interface of the Libreswan VM.
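As a sketch, such a rule on another on-premises host could look like this (the VCN CIDR 10.0.0.0/16 is a placeholder; 192.168.12.4 is the internal address of the Libreswan VM in this topology):

```shell
# Send traffic destined for the OCI VCN (hypothetical CIDR) to the
# internal interface of the Libreswan VM
ip route add 10.0.0.0/16 via 192.168.12.4
```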
The on-premise setup looks like this:
First, we ping the test VM in OCI from the Libreswan VM while capturing the traffic on the vti1 interface. We will notice that the traffic is injected into the IPsec tunnel, but the source is the VTI IP address, which is not known in the OCI routing table. The traffic reaches the VM and the VM responds, but the reply is not forwarded to the DRG because there is no route for it.
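The capture described above can be reproduced with tcpdump, run on the Libreswan VM while the ping is going:

```shell
# Watch ICMP entering the tunnel; note that the source is the VTI
# address (10.10.10.1), which the OCI route table knows nothing about
tcpdump -ni vti1 icmp
```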
To get a successful ping we need to adjust the source address of the packet:
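One way to do this is to force ping to use the internal interface address as its source (the OCI test VM address below is a placeholder):

```shell
# -I sets the source address; 192.168.12.4 is advertised over BGP,
# so the OCI side knows how to route the reply back through the DRG
ping -I 192.168.12.4 <oci-test-vm-ip>
```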