A-Team Chronicles

OCI DRGv2 Routing and Microsoft Azure Access

August 5, 2021

In this blog post, we will answer some questions raised when we launched the DRGv2, with respect to the OCI-Azure Interconnect. The most important ones are:

- Is the DRGv2 capable of routing traffic from Microsoft Azure, via the Interconnect, to a remote OCI region when there is a Remote Peering Connection (RPC) between the OCI regions and an OCI-Azure Interconnect?

- Can our customers access the Microsoft Azure environment from On-Premise when there is an Interconnect between OCI and Azure and the customer is connected to OCI via IPSec or FastConnect?

The next sections answer these questions.

1. Networking Topology

The networking topology reflects the two cases we want to analyze. There is a BGP over IPSec connection from On-Premise to the OCI Ashburn region. The host 172.31.0.2 is located On-Premise and will be used to test IP connectivity in one of the scenarios described in the next sections.

Between the Ashburn and Phoenix OCI regions, the RPC is configured, together with two VMs used for IP connectivity testing: 10.0.0.3 in VCN 1 in the Ashburn region and 172.29.2.3 in VCN 2 in the Phoenix region.

The Interconnect is configured, in a redundant manner, between the DRGv2 in Ashburn and the Microsoft Azure Virtual Network Gateway in US East/East2. On the Azure side, we have created a test VM at 10.125.0.4 in VNet 1.

The blog will not cover the Microsoft Azure ExpressRoute configuration, which is well described in the public documentation: https://docs.microsoft.com/en-us/azure/expressroute/

2. OCI Configuration and Traffic Testing

2.1 Traffic from VCN 1 to VNet 1 and vice-versa

For this use case, we will configure the DRGv2 to announce the VCN 1 IP prefixes to the Azure side.

a) Create a new Import Route Distribution attached to a new Route Table used on the FastConnect VC configured with Microsoft Azure:

b) Import the VCN 1 IP prefixes in the Import Route Distribution:
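For readers who prefer to script these two steps, here is a minimal sketch using the OCI Python SDK (the oci package). All OCIDs, display names, and statement priorities below are hypothetical placeholders, not values from this environment:

import oci
from oci.core.models import (
    CreateDrgRouteDistributionDetails,
    AddDrgRouteDistributionStatementsDetails,
    AddDrgRouteDistributionStatementDetails,
    DrgAttachmentIdDrgRouteDistributionMatchCriteria,
    CreateDrgRouteTableDetails,
    UpdateDrgAttachmentDetails,
)

config = oci.config.from_file()              # reads ~/.oci/config
vnc = oci.core.VirtualNetworkClient(config)  # client for the Ashburn region

DRG_ID = "ocid1.drg.oc1.iad.xxx"                            # placeholder: Ashburn DRGv2
VCN1_ATTACHMENT_ID = "ocid1.drgattachment.oc1.iad.xxx"      # placeholder: VCN 1 attachment
AZURE_VC_ATTACHMENT_ID = "ocid1.drgattachment.oc1.iad.xxx"  # placeholder: Azure FastConnect VC attachment

# a) Create the new import route distribution on the DRG
dist = vnc.create_drg_route_distribution(
    CreateDrgRouteDistributionDetails(
        drg_id=DRG_ID,
        distribution_type="IMPORT",
        display_name="azure-vc-import",
    )
).data

# b) Import the VCN 1 IP prefixes by accepting routes from the VCN 1 attachment
vnc.add_drg_route_distribution_statements(
    dist.id,
    AddDrgRouteDistributionStatementsDetails(
        statements=[
            AddDrgRouteDistributionStatementDetails(
                action="ACCEPT",
                priority=1,
                match_criteria=[
                    DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                        drg_attachment_id=VCN1_ATTACHMENT_ID
                    )
                ],
            )
        ]
    ),
)

# Create the new DRG route table fed by that distribution and assign it
# to the Azure virtual circuit attachment
azure_vc_rt = vnc.create_drg_route_table(
    CreateDrgRouteTableDetails(
        drg_id=DRG_ID,
        display_name="azure-vc-rt",
        import_drg_route_distribution_id=dist.id,
    )
).data
vnc.update_drg_attachment(
    AZURE_VC_ATTACHMENT_ID,
    UpdateDrgAttachmentDetails(drg_route_table_id=azure_vc_rt.id),
)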

After this step, the DRGv2 will announce the VCN 1 IP prefixes to Microsoft Azure. Let's check the Azure route table to confirm that all the VCN 1 subnets were received:

The Azure side received the IP prefixes from the OCI side.

c) Import the VNet CIDR range into the Route Table associated with VCN 1:
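Assuming this refers to the DRG route table assigned to the VCN 1 attachment, a continuation of the sketch above could add an ACCEPT statement matching the Azure VC attachment (VCN1_RT_DIST_ID is a hypothetical placeholder for the OCID of that route table's import distribution):

# Accept the routes learned from the Azure VC attachment (including the
# VNet 1 CIDR) into the distribution feeding VCN 1's DRG route table
VCN1_RT_DIST_ID = "ocid1.drgroutedistribution.oc1.iad.xxx"  # placeholder
vnc.add_drg_route_distribution_statements(
    VCN1_RT_DIST_ID,
    AddDrgRouteDistributionStatementsDetails(
        statements=[
            AddDrgRouteDistributionStatementDetails(
                action="ACCEPT",
                priority=1,
                match_criteria=[
                    DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                        drg_attachment_id=AZURE_VC_ATTACHMENT_ID
                    )
                ],
            )
        ]
    ),
)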

d) Test the IP connectivity from Azure VNet 1 to OCI VCN 1 and vice-versa, after all the VCN routing and security rules have been configured:

The IP connectivity is established for this use case.

2.2 Traffic from VCN 2 (Phoenix region) to VNet 1 and vice-versa

a) Import the Phoenix VCN 2 IP prefixes received over the RPC into the Route Table associated with the Azure VC:
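Continuing the sketch from section 2.1, one way to express this step is an additional ACCEPT statement on the Azure VC import distribution that matches the RPC attachment (RPC_ATTACHMENT_ID is a hypothetical placeholder):

# Accept the routes learned over the RPC (the Phoenix VCN 2 prefixes)
# into the Azure VC import distribution created in section 2.1
RPC_ATTACHMENT_ID = "ocid1.drgattachment.oc1.iad.xxx"  # placeholder: Ashburn end of the RPC
vnc.add_drg_route_distribution_statements(
    dist.id,
    AddDrgRouteDistributionStatementsDetails(
        statements=[
            AddDrgRouteDistributionStatementDetails(
                action="ACCEPT",
                priority=2,
                match_criteria=[
                    DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                        drg_attachment_id=RPC_ATTACHMENT_ID
                    )
                ],
            )
        ]
    ),
)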

b) Check the Azure route table for Phoenix received IP prefixes:

The DRGv2 from Ashburn announced the Phoenix VCN 2 IP prefixes received over the RPC.

c) Import the Azure VNet CIDR, received on the Phoenix DRGv2 via RPC and announced by the Ashburn DRGv2, into the route table associated with VCN 2:
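On the Phoenix side this is the mirror operation. A sketch, reusing the imports from the first snippet and again with placeholder OCIDs; note the separate client, since the Phoenix DRG lives in another region:

# Client scoped to the Phoenix region
phx_config = dict(config, region="us-phoenix-1")
phx = oci.core.VirtualNetworkClient(phx_config)

PHX_RPC_ATTACHMENT_ID = "ocid1.drgattachment.oc1.phx.xxx"   # placeholder: Phoenix end of the RPC
VCN2_RT_DIST_ID = "ocid1.drgroutedistribution.oc1.phx.xxx"  # placeholder: VCN 2 route table's import distribution

# Accept the routes learned over the RPC (the Azure VNet CIDR among them)
# into the distribution feeding VCN 2's DRG route table
phx.add_drg_route_distribution_statements(
    VCN2_RT_DIST_ID,
    AddDrgRouteDistributionStatementsDetails(
        statements=[
            AddDrgRouteDistributionStatementDetails(
                action="ACCEPT",
                priority=1,
                match_criteria=[
                    DrgAttachmentIdDrgRouteDistributionMatchCriteria(
                        drg_attachment_id=PHX_RPC_ATTACHMENT_ID
                    )
                ],
            )
        ]
    ),
)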

Check if the route is imported in the route table for VCN 2:
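A quick programmatic way to verify this, as a sketch (VCN2_DRG_RT_ID is a hypothetical placeholder for the OCID of the DRG route table assigned to the VCN 2 attachment):

VCN2_DRG_RT_ID = "ocid1.drgroutetable.oc1.phx.xxx"  # placeholder
for rule in phx.list_drg_route_rules(VCN2_DRG_RT_ID).data:
    # Expect the VNet 1 CIDR (10.125.0.0/16 in this topology) in the output
    print(rule.destination, rule.route_type, rule.next_hop_drg_attachment_id)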

The VNet 1 CIDR is correctly received in the Phoenix VCN 2 route table.

d) Test the IP connectivity from VNet 1 to VCN 2 and vice-versa, after all the security and routing configuration has been applied on VCN 2:

The IP connectivity is established from Microsoft Azure, via the Interconnect in Ashburn, to a remote OCI region, where Ashburn and the remote OCI region are connected using the RPC.

2.3 Traffic from On-Premise to VNet 1 and vice-versa using OCI as a transit network

a) Over the BGP over IPSec session with the On-Premise CPE, the DRGv2 is receiving a default route. Let's import the default route received over this session into the route table associated with the Azure VC:
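Continuing the sketch, the attempted import could be expressed either by matching the specific IPSec attachment OCID, or, as in this hypothetical variant, by matching the attachment type:

from oci.core.models import DrgAttachmentTypeDrgRouteDistributionMatchCriteria

# Accept everything learned over IPSec tunnels (the default route included)
# into the Azure VC import distribution from section 2.1
vnc.add_drg_route_distribution_statements(
    dist.id,
    AddDrgRouteDistributionStatementsDetails(
        statements=[
            AddDrgRouteDistributionStatementDetails(
                action="ACCEPT",
                priority=3,
                match_criteria=[
                    DrgAttachmentTypeDrgRouteDistributionMatchCriteria(
                        attachment_type="IPSEC_TUNNEL"
                    )
                ],
            )
        ]
    ),
)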

b) Let's check if the default route has been imported in the route table associated with the Azure VC:
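Programmatically, a sketch of the same check (AZURE_VC_DRG_RT_ID is a hypothetical placeholder for the OCID of the DRG route table on the Azure VC attachment):

AZURE_VC_DRG_RT_ID = "ocid1.drgroutetable.oc1.iad.xxx"  # placeholder
rules = vnc.list_drg_route_rules(AZURE_VC_DRG_RT_ID, route_type="DYNAMIC").data
# The default route should appear in the DRG route table itself
print(any(r.destination == "0.0.0.0/0" for r in rules))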

The default route is imported.

c) Let's check if the default route has been announced by the DRGv2 to Azure:

As we can clearly see, there is no default route received on the Azure side. Why? This is because the OCI network cannot be used as a transit network to reach other external resources.

Routes imported from an IPSec tunnel or Virtual Circuit are never exported to other IPSec tunnel or Virtual Circuit attachments. This holds true regardless of how the export route distribution is configured. Packets which enter a DRG through an IPSec tunnel or Virtual Circuit attachment can never leave through an IPSec tunnel or Virtual Circuit attachment. If routing is configured using static routes such that packets originating from IPSec tunnel or Virtual Circuit attachments are sent to IPSec tunnel or Virtual Circuit attachments, the packets are dropped.

In the opposite direction, let's import the Azure VNet CIDR into the route table associated with the BGP over IPSec attachment and check if the On-Premise router receives the VNet CIDR:

For the very same reason explained above, the DRGv2 is not sending the VNet 1 CIDR to On-Premise.

d) Traffic testing from On-Premise to VNet 1 and vice-versa will reveal that OCI is NOT acting as a transit network:

Andrei Stoian

Principal Solutions Architect | A-Team - Cloud Solution Architects

