
Best Practices from Oracle Development's A‑Team

DRGv2 Hub and Spoke: HUB NVA inspecting the traffic

Andrei Stoian
Principal Solutions Architect | A-Team - Cloud Solution Architects

In this new blog post series dedicated to the new DRGv2 OCI feature, we will explore the DRGv2 capability of supporting a Hub and Spoke networking topology in which an NVA, or a pair of NVAs, in the HUB VCN is required to inspect the traffic passing between different Spoke VCNs and between the Spoke VCNs and the On-Premise network.

This networking construct was also available using the DRG-LPG construct (the previous feature), with one important difference: with LPGs there is a soft limit of 10 LPGs per VCN, while with the DRGv2 capabilities we can scale up to 300 VCNs attached to a single DRGv2.

Each case will be analyzed from the perspective of the traffic source and destination, and we will walk through the configuration we need to perform on the DRGv2 Import Route Distributions and Route Tables. At the VCN level, we need to ensure that the traffic sent to the different destinations uses the DRGv2 as the next hop.

1. Networking Topology

We are keeping the networking topology as clean as possible so that it highlights the most important parts we need to take care of. Remember, you can attach up to 300 VCNs to a DRGv2.

For a full list of DRGv2 capabilities: https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingDRGs.htm

We are using two Spoke VCNs and a Hub VCN, which holds our NVA at 172.29.0.245. The NVA will receive and analyze the traffic between:

- Spoke 1 <-> Spoke 2;

- Spoke 1 <-> On-Premise;

- Spoke 2 <-> On-Premise;

The Spoke 1 testing VM is at 10.0.0.3, the Spoke 2 testing VM is at 10.0.1.2, and the On-Premise testing VM is at 172.31.0.2.

The connectivity between OCI and On-Premise can be established using an IPSec VPN or FastConnect. In our example, we are using BGP over an IPSec tunnel to On-Premise.

2. DRGv2 Import Route Distributions and Route Tables configuration

When a DRGv2 is created, a default Import RD and RT are also created; this default configuration permits communication between the different attached VCNs and between the VCNs and the On-Premise network. The default configuration covers most use cases.

In our case, we want a strict traffic path, and to accomplish this we need an approach different from the default.

Note: you can use the following approach whenever you want more control over the route distribution between the route tables attached to the different attachment points.

First question: how many attachments do we have? There are four attachments: three VCN attachments and one FC VC attachment.

Second question: how many Import RDs and RTs do we need? There will be four Import RDs, each attached to a specific RT for an attachment:
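If you prefer to script this step instead of using the console, a minimal sketch with the OCI Python SDK could look like the following (the OCIDs and display names are placeholders, and the model/method names should be verified against your SDK version):

import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

drg_id = "ocid1.drg.oc1..example"                          # placeholder OCID
spoke1_attachment_id = "ocid1.drgattachment.oc1..example"  # placeholder OCID

# One Import RD per attachment
rd = network_client.create_drg_route_distribution(
    oci.core.models.CreateDrgRouteDistributionDetails(
        drg_id=drg_id,
        distribution_type="IMPORT",
        display_name="Spoke1-Import-RD")).data

# One DRG RT that uses that Import RD
rt = network_client.create_drg_route_table(
    oci.core.models.CreateDrgRouteTableDetails(
        drg_id=drg_id,
        display_name="Spoke1-RT",
        import_drg_route_distribution_id=rd.id)).data

# Assign the new RT to the Spoke 1 attachment; repeat for the other three attachments
network_client.update_drg_attachment(
    spoke1_attachment_id,
    oci.core.models.UpdateDrgAttachmentDetails(drg_route_table_id=rt.id))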

3. Spoke 1 <-> Spoke 2 configuration for traffic flow

Traffic flow: Spoke 1 (10.0.0.3) -> NVA (172.29.0.245) -> Spoke 2 (10.0.1.2)

The response from Spoke 2, as well as the traffic originated by 10.0.1.2 towards 10.0.0.3, must follow the path: Spoke 2 (10.0.1.2) -> NVA (172.29.0.245) -> Spoke 1 (10.0.0.3).

The key is that we need to direct all the traffic from Spoke 1 to Spoke 2, and vice versa, through the HUB VCN.

For a better understanding, we will start by showing what we are importing in the RD associated with the HUB VCN attachment's RT:
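A scripted equivalent of this import configuration, as a minimal sketch with the OCI Python SDK (the attachment OCIDs are placeholders, to be replaced with the real ones), could be:

import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

hub_import_rd_id = "ocid1.drgroutedistribution.oc1..example"  # placeholder: HUB VCN attachment Import RD
spoke1_att_id = "ocid1.drgattachment.oc1..spoke1"             # placeholder OCIDs
spoke2_att_id = "ocid1.drgattachment.oc1..spoke2"
vc_att_id = "ocid1.drgattachment.oc1..onprem"

# The HUB RD accepts the routes from Spoke 1, Spoke 2 and the On-Premise attachment,
# so the HUB VCN (and the NVA behind it) knows how to reach every destination
statements = [
    oci.core.models.AddDrgRouteDistributionStatementDetails(
        action="ACCEPT",
        priority=priority,
        match_criteria=[
            oci.core.models.DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                drg_attachment_id=att_id)])
    for priority, att_id in enumerate([spoke1_att_id, spoke2_att_id, vc_att_id], start=1)]

network_client.add_drg_route_distribution_statements(
    hub_import_rd_id,
    oci.core.models.AddDrgRouteDistributionStatementsDetails(statements=statements))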

The routes imported based on the RD above are used when the NVA sends the traffic to Spoke 1/2 and to On-Premise:

We need to enable transit routing and make the DRGv2 send all the traffic for 10.0.0.0/8 to the private IP address of our NVA. This is done by attaching a VCN route table to the DRG attachment; the route table is created in the HUB VCN:
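A minimal sketch of this transit routing step with the OCI Python SDK could look like this (the OCIDs are placeholders; the route table referenced below is a VCN route table created in the HUB VCN):

import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

hub_transit_rt_id = "ocid1.routetable.oc1..example"      # placeholder: VCN route table in the HUB VCN
nva_private_ip_id = "ocid1.privateip.oc1..example"       # placeholder: OCID of the NVA private IP (172.29.0.245)
hub_attachment_id = "ocid1.drgattachment.oc1..example"   # placeholder: HUB VCN DRG attachment

# Any traffic for 10.0.0.0/8 entering the HUB VCN from the DRG is sent to the NVA
network_client.update_route_table(
    hub_transit_rt_id,
    oci.core.models.UpdateRouteTableDetails(route_rules=[
        oci.core.models.RouteRule(
            destination="10.0.0.0/8",
            destination_type="CIDR_BLOCK",
            network_entity_id=nva_private_ip_id)]))

# Associate the VCN route table with the HUB VCN attachment so the DRG uses it
# for the traffic it delivers into the HUB VCN (transit routing towards the NVA)
network_client.update_drg_attachment(
    hub_attachment_id,
    oci.core.models.UpdateDrgAttachmentDetails(
        network_details=oci.core.models.VcnDrgAttachmentNetworkUpdateDetails(
            route_table_id=hub_transit_rt_id)))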

RD and RT for Spoke 1 VCN:

RD and RT for Spoke 2 VCN:

Both look the same because both Spoke VCNs are subject to the same routing policy: all the traffic must pass through the HUB VCN.
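The scripted equivalent for the Spoke Import RDs is a single ACCEPT statement that matches only the HUB VCN attachment; a minimal sketch with the OCI Python SDK (placeholder OCIDs) could be:

import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

spoke_import_rd_id = "ocid1.drgroutedistribution.oc1..example"  # placeholder: Spoke 1 (or Spoke 2) Import RD
hub_attachment_id = "ocid1.drgattachment.oc1..example"          # placeholder: HUB VCN DRG attachment

# Each Spoke imports only the routes coming from the HUB VCN attachment, so the only
# path it learns towards the other Spoke and On-Premise goes through the HUB (and the NVA)
network_client.add_drg_route_distribution_statements(
    spoke_import_rd_id,
    oci.core.models.AddDrgRouteDistributionStatementsDetails(statements=[
        oci.core.models.AddDrgRouteDistributionStatementDetails(
            action="ACCEPT",
            priority=1,
            match_criteria=[
                oci.core.models.DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                    drg_attachment_id=hub_attachment_id)])]))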

Let's test the traffic path and check if the NVA is receiving the traffic from 10.0.0.3 to 10.0.1.2. We will start a tcpdump on 172.29.0.245 matching the IP address of 10.0.0.3:

As we can clearly see, the traffic initiated by 10.0.0.3 and the response from 10.0.1.2 are passing through our NVA.

Let's test the traffic path and check if the NVA is receiving the traffic from 10.0.1.2 to 10.0.0.3; this time we are changing the initiator of the traffic. We will start a tcpdump on 172.29.0.245 matching the IP address of 10.0.1.2:

The same is true for this case: our traffic is inspected by the NVA.

4. Spoke 1/2 <-> On-premise configuration for traffic flow

At this point we need to pay attention to one very important detail: which IP prefixes are we advertising to the On-Premise network so that the traffic from On-Premise to Spoke 1 and 2 flows over the NVA?

In the RT for the VC attachment we will not import anything; instead, we will use just two static routes, one for the 10.0.0.0/8 range and one for the NVA HUB VCN CIDR:
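A minimal sketch of these two static routes with the OCI Python SDK could be (the OCIDs are placeholders, and 172.29.0.0/24 is only assumed here as the HUB VCN CIDR; replace it with your own):

import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

vc_drg_rt_id = "ocid1.drgroutetable.oc1..example"        # placeholder: DRG RT assigned to the VC attachment
hub_attachment_id = "ocid1.drgattachment.oc1..example"   # placeholder: HUB VCN DRG attachment

# Two static routes pointing at the HUB VCN attachment, so the traffic coming
# from On-Premise always lands in the HUB VCN and is forwarded to the NVA
network_client.add_drg_route_rules(
    vc_drg_rt_id,
    oci.core.models.AddDrgRouteRulesDetails(route_rules=[
        oci.core.models.AddDrgRouteRuleDetails(
            destination="10.0.0.0/8",
            destination_type="CIDR_BLOCK",
            next_hop_drg_attachment_id=hub_attachment_id),
        oci.core.models.AddDrgRouteRuleDetails(
            destination="172.29.0.0/24",  # assumed HUB VCN CIDR
            destination_type="CIDR_BLOCK",
            next_hop_drg_attachment_id=hub_attachment_id)]))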

And the routes redistributed in BGP to On-Premise by the DRGv2:

One important note: if we do not use the static routes and instead use the import function matching the HUB VCN attachment, a routing loop might occur on the On-Premise router if a proper BGP filter is not configured. I will let you find out why this might happen.

Traffic flow: Spoke 1/2 (10.0.0.3 / 10.0.1.2) -> NVA (172.29.0.245) -> On-Premise (172.31.0.2)

As we can see, our traffic to the On-Premise network is inspected by the NVA.
