Advertising a VCN CIDR range to an on-premises network over BGP instead of the subnets’ CIDR ranges

October 29, 2020 | 9 minute read
Sergio J Castro
Principal instructor and consultant at Oracle University

Introduction

Currently, when you create a remote Border Gateway Protocol (BGP) connection over FastConnect or IPsec VPN, Oracle Cloud Infrastructure (OCI) advertises the virtual cloud network (VCN) subnets' CIDR ranges as routes instead of the VCN parent’s CIDR range.

The OCI engineering team made this design decision because many customers use 10.0.0.0/16 as their CIDR range, which in most cases overlaps with their on-premises network. Because VCN CIDR ranges can’t be modified, and because customers need the ability to use a large range, OCI advertises the individual subnets that are actually deployed in OCI. As a result, the specific BGP routes are sent to the on-premises network. Customers then can use features in their customer-premises equipment (CPE) devices to overwrite, summarize, or filter these routes as needed.
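To make the overlap concern concrete, here's a minimal Python sketch using only the standard `ipaddress` module (the ranges are illustrative, not taken from any real deployment) that checks whether a VCN CIDR collides with an on-premises range:

```python
import ipaddress

# Illustrative ranges: a common default-style VCN CIDR and an
# on-premises network carved out of the same 10.0.0.0/16 space.
vcn = ipaddress.ip_network("10.0.0.0/16")
on_prem = ipaddress.ip_network("10.0.0.0/22")

# overlaps() is True when the two ranges share any addresses,
# which is exactly the conflict described above.
print(vcn.overlaps(on_prem))  # True: these two networks would collide
```

A check like this is useful before accepting a BGP advertisement: if the advertised prefix overlaps a local range, the CPE is where you would summarize or filter it.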

However, sometimes a customer would prefer for OCI to advertise the whole VCN CIDR range instead of the native advertisement of the subnets’ CIDR ranges. In this post, I describe how you can accomplish this by using the hub-and-spoke transit routing feature.

Hub-and-spoke architecture

In a hub-and-spoke architecture, you can connect multiple VCNs to an on-premises network via one dynamic routing gateway (DRG). The CIDR ranges from the hub subnets are advertised by default to the on-premises network, but you can control what is advertised from the spoke VCNs and how.

For this post, we connect a VCN to another cloud provider (AWS) over a private virtual circuit to emulate an on-premises network.

Let’s start by linking the two cloud providers by using OCI FastConnect and AWS Direct Connect via Equinix, which is a telecom partner to both OCI and AWS.

Figure 1: Connect OCI and AWS via Equinix.

For a step-by-step FastConnect configuration, see the Configure a FastConnect Direct Link with Equinix Cloud Exchange Fabric blog post. And the Equinix website provides documentation for configuring Direct Connect. We’re using the Ashburn region for OCI and the US-East-1 region (N. Virginia) for AWS.

View network and FastConnect settings in OCI

We’ve configured a VCN with two regional subnets, one private and one public. The VCN’s CIDR range is 10.20.0.0/16, and the subnet CIDR ranges are 10.20.30.0/24 (public) and 10.20.40.0/24 (private). These two subnet CIDR ranges are the routes advertised to AWS.
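As a quick sanity check, Python's standard `ipaddress` module can confirm that both advertised subnet routes fall inside the VCN's parent CIDR (a minimal sketch using the ranges above):

```python
import ipaddress

vcn = ipaddress.ip_network("10.20.0.0/16")
subnets = [ipaddress.ip_network("10.20.30.0/24"),   # public subnet
           ipaddress.ip_network("10.20.40.0/24")]   # private subnet

# subnet_of() confirms each advertised route is contained in the VCN CIDR.
for s in subnets:
    print(s, "inside", vcn, "->", s.subnet_of(vcn))
```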

Figure 2: A VCN with a private and a public subnet

Now, let’s look at the default route table for this VCN. The remote network at AWS is 192.168.0.0/16.

Figure 3: Route rule for the DRG

The DRG, of course, hosts the FastConnect 1-Gbps circuit to AWS Direct Connect.

Figure 4: DRG with FastConnect circuit to Direct Connect

If you click the circuit name, you can view the BGP state.

Figure 5: Virtual circuit details

View network and Direct Connect settings in AWS

Now let’s view the AWS Direct Connect and network settings.

The following figure shows a virtual private gateway (VGW) attached to a Virtual Private Cloud (VPC), which is the one that we’re using to interconnect with OCI. The VGW is the AWS equivalent to the DRG in OCI. The ID string for this VPC is vpc-0fd882a938ce2c040.

Figure 6: Virtual private gateway details in AWS Direct Connect

If you click the Direct Connect gateway link, you see the virtual interface details page, which provides the details about the Direct Connect configuration, including the link status.

Figure 7: Virtual interface details in AWS Direct Connect

Now let’s look at the VPC details. As stated earlier, the CIDR range of the VPC is 192.168.0.0/16.

Figure 8: VPC details in AWS

This VPC also has two subnets. One has a CIDR range of 192.168.0.0/24, and the other one has a CIDR range of 192.168.199.0/24.

Figure 9: Subnets in the VPC

Both subnets are associated with the same route table. The following figure shows the subnet routes advertised by OCI via the BGP session, indicating that they route via the VGW.

Figure 10: Route table and routes connecting OCI and AWS

To test the connection, let’s add a new subnet to the VCN in OCI with a CIDR range of 10.20.50.0/24. It should propagate to AWS.

Figure 11: New subnet added in OCI

Then if we refresh the AWS route table, the new route is displayed.

Figure 12: New route in the VPC route table

If we keep adding subnets to the OCI VCN, the AWS route table keeps growing; the soft limit on the number of subnets in a VCN is 30, so the route table on the remote site can become crowded quickly. All of these OCI subnet prefixes, of course, belong to the 10.20.0.0/16 VCN CIDR range.
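A short Python sketch (standard `ipaddress` module, with illustrative third-octet values) shows why these per-subnet routes are redundant: every one of them is covered by a single 10.20.0.0/16 summary route:

```python
import ipaddress

vcn = ipaddress.ip_network("10.20.0.0/16")

# Routes AWS would receive as more subnets are added (illustrative values).
advertised = [ipaddress.ip_network(f"10.20.{n}.0/24")
              for n in (30, 40, 50, 60, 70)]

# Each specific route is contained in the parent VCN CIDR, so one
# summary route to 10.20.0.0/16 would make all of them unnecessary.
print(all(route.subnet_of(vcn) for route in advertised))   # True
print(f"{len(advertised)} routes could be replaced by 1 summary route")
```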

Advertise the VCN CIDR range

Now let’s configure OCI so it advertises only the VCN CIDR range instead of the CIDR range for each one of the VCN’s subnets.

Note that I’m reconfiguring a sandbox tenancy. If you perform these steps in a production environment, you might experience downtime. However, you can design your configuration from the start to advertise your VCN CIDR, creating a hub-and-spoke deployment just for this purpose.

Figure 13: Hub-and-spoke deployment

Step 1: Create the hub VCN 

The first step is to create a hub VCN in OCI. You can use any RFC 1918 CIDR range, as long as it doesn’t overlap with the ranges already in use. Here, we use 172.17.0.0/16 (although it’s a best practice to use a smaller prefix just for the hub VCN).
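Before committing to a hub CIDR, it's worth verifying that the candidate is a private (RFC 1918) range and doesn't collide with anything already in use. A minimal Python sketch with the ranges from this walkthrough:

```python
import ipaddress

# Ranges already in use in this walkthrough.
in_use = [ipaddress.ip_network("10.20.0.0/16"),    # spoke VCN
          ipaddress.ip_network("192.168.0.0/16")]  # AWS VPC

candidate = ipaddress.ip_network("172.17.0.0/16")  # proposed hub VCN CIDR

# is_private covers the RFC 1918 blocks; overlaps() guards against collisions.
ok = candidate.is_private and not any(candidate.overlaps(n) for n in in_use)
print(ok)  # True: 172.17.0.0/16 is safe to use as the hub CIDR here
```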

Figure 14: New VCN in the compartment

This hub VCN doesn’t have subnets, and the default route table is empty.

Now let’s detach the current VCN (labeled FastConnect) from the DRG and then attach the hub VCN we just created.

Figure 15: DRG with new hub VCN attached

After the attachment is complete, you should no longer see routes to OCI in AWS because this VCN doesn’t have any subnets to advertise.

Figure 16: Route table updated in AWS

Step 2: Peer the OCI VCNs 

Next we create local peering gateways (LPGs) on both OCI VCNs, and then peer them together.

The following figure shows the original VCN, which now serves as the spoke VCN.

Figure 17: Spoke VCN with an LPG

We created an LPG on it called 2HubVCN, and it’s connected to the LPG created on the hub VCN (shown in the following figure as 2SpokeVCN).

Figure 18: Hub VCN with an LPG

Step 3: Update the spoke VCN route table 

Let’s modify the default route table of the spoke VCN to remove the route to the DRG and replace it with one that points to the AWS CIDR range via the LPG we just created.

Figure 19: New route rule

Step 4: Create route tables in the hub VCN  

The next step is to create two route tables in the hub VCN. One is assigned to the LPG of the hub VCN and routes to the DRG, and the other is assigned to the DRG and routes to the LPG of the hub VCN.
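Conceptually, the hub now performs a simple longest-prefix-match handoff between the two gateways. The following Python sketch models the two route tables as plain dictionaries (the table names match the figures; the lookup function is purely illustrative, not an OCI API):

```python
import ipaddress

# Illustrative model of the two hub route tables described above.
lpg_table = {"192.168.0.0/16": "DRG"}   # "2DRG": spoke traffic bound for AWS
drg_table = {"10.20.0.0/16": "LPG"}     # "2Spoke": AWS traffic bound for the spoke

def next_hop(table, dest_ip):
    """Return the longest-prefix-match next hop for dest_ip, or None."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), hop)
               for cidr, hop in table.items()
               if dest in ipaddress.ip_network(cidr)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# A packet arriving from AWS for a spoke host is handed to the LPG...
print(next_hop(drg_table, "10.20.30.5"))    # LPG
# ...and spoke traffic reaching the hub LPG for AWS is handed to the DRG.
print(next_hop(lpg_table, "192.168.0.10"))  # DRG
```

Note that the route table assigned to the DRG carries the single 10.20.0.0/16 entry, which is why the on-premises side now learns only the VCN CIDR.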

Figure 20: Hub-and-spoke deployment with route tables associated with the DRG and LPG

The following figure shows the route table assigned to the LPG. It’s called 2DRG, and it routes to the DRG.

Figure 21: Route table assigned to the LPG

The following figure shows the route table assigned to the DRG. It’s called 2Spoke, and it routes to the LPG of the hub VCN.

Figure 22: Route table assigned to the DRG

The route table routing to the DRG (2DRG) is associated with the LPG.

Figure 23: 2SpokeVCN LPG associated with the 2DRG route table

And the route table routing to the LPG (2Spoke) is associated with the DRG.

Figure 24: 2Interconnect DRG associated with the 2Spoke route table

Step 5: View the change in AWS

Now, let’s look at the route table in AWS. There is just one route through the virtual private gateway now, and it’s directly to the CIDR range of the spoke VCN in Oracle Cloud Infrastructure. 

Figure 25: Updated AWS route table

Success! The route table is now much cleaner.

Conclusion

Any route that you add to the route table associated with the DRG is advertised to the on-premises network; this is how you control what is advertised from OCI.

 


Sergio joined Oracle America in 2017. He currently is a Principal Oracle Cloud Infrastructure (OCI) instructor and consultant at Oracle University. He has 8 years of cloud computing experience, and 27 years of overall IT experience. He is an OCI Certified Architect, Professional; and AWS Certified Solutions Architect, Associate. He holds a BSCS from the University of Baja California, and an MSc from Cetys University. He focuses on networking and next-generation IT services. He can be reached at Sergio.Castro@Oracle.com

 

P.S. This blog post on OCI WAF is his too. It’s a guest collaboration on the OCI A-TEAM Chronicles.

