Oracle – Azure Interconnect Use Cases

August 20, 2020 | 16 minute read
Javier Ramirez
Principal Cloud Solution Architect

Overview

The purpose of this blog is to guide you through deploying the Oracle OCI – Azure Interconnect and to walk through some typical use cases. For general information about the Interconnect, review Overview of the Interconnect Between Oracle and Microsoft and the reference architecture. This blog goes through the following use cases to confirm what is supported:

Use Case | Supported
VCN to VNET (Basic Deployment) | Yes
On-prem Private Connectivity to OCI using VPN Connect or FastConnect | No
Local Peering Gateway (LPG) | Yes
Service Gateway | Yes
Remote Peering Connection (RPC) | No

 

Basic Deployment

This is the basic configuration of the Interconnect from the Oracle Console and the Azure Portal to create a private path between the Oracle VCN and the Azure VNET. The diagram below shows the topology of the solution.

Azure Portal

Log in to the Azure Portal and perform the following tasks as needed (a CLI sketch of these steps follows the list):

  1. Create a virtual network
  2. Create subnets as needed for your VMs. The Gateway Subnet can't have other resources in it; it is created automatically when the Virtual Network Gateway is created, and Azure assigns a /28 subnet to it
  3. Create an ExpressRoute
    • Region – Select the region where the Interconnect is available
    • Type - Provider
    • Create New
    • Provider - Oracle Cloud FastConnect
    • Peering Location - This is the location where the Interconnect is available
    • SKU – Select the proper option for your needs
  4. IMPORTANT - Record the service key associated with the ExpressRoute you just created. You will need it when creating FastConnect in the Oracle Console
  5. Create a Virtual Network Gateway
    • Gateway type – ExpressRoute
    • SKU – Select the proper SKU based on the performance you need
    • Virtual network – Select the virtual network you will be connecting to
    • Public IP Address - Create new
    • Public Name – Give it a name
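
For reference, the Azure-side setup above could also be scripted with the Azure CLI. This is only a minimal sketch: the resource group, resource names, region, peering location, bandwidth, and address ranges are placeholders or come from this blog's example topology, so adjust them to your environment.

# Virtual network and subnets (10.20.0.0/24 is the Azure VNET used later in this blog)
az network vnet create --resource-group myRG --name myVNet \
  --location eastus --address-prefixes 10.20.0.0/24
az network vnet subnet create --resource-group myRG --vnet-name myVNet \
  --name VM-Subnet --address-prefixes 10.20.0.0/25
# With the CLI the GatewaySubnet is created explicitly (the portal flow can create it for you)
az network vnet subnet create --resource-group myRG --vnet-name myVNet \
  --name GatewaySubnet --address-prefixes 10.20.0.240/28

# ExpressRoute circuit on the Oracle Cloud FastConnect provider (bandwidth in Mbps)
az network express-route create --resource-group myRG --name myExpressRoute \
  --location eastus --provider "Oracle Cloud FastConnect" \
  --peering-location "Washington DC" --bandwidth 1000 \
  --sku-tier Standard --sku-family MeteredData

# Virtual Network Gateway of type ExpressRoute
az network public-ip create --resource-group myRG --name myGatewayIP
az network vnet-gateway create --resource-group myRG --name myVNetGateway \
  --gateway-type ExpressRoute --sku Standard \
  --vnet myVNet --public-ip-address myGatewayIP

The service key to record in step 4 appears in the circuit properties, for example in the output of az network express-route show.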

 

Oracle Console

Log in to the Oracle Console and perform the following tasks as needed (a CLI sketch of selected steps follows the list):

  1. Create A VCN
  2. Create subnets as needed for your VMs
  3. Create FastConnect. From the Networking menu, select Virtual Cloud Networks, select FastConnect, and click Create FastConnect
    • Select FastConnect Partner
    • Select Microsoft Azure: ExpressRoute
    • Give it a name
    • Select Private Virtual Circuit
    • Select the DRG where FastConnect will connect to
    • Select bandwidth
    • Enter the service key you recorded in the previous section (Step 4) when ExpressRoute was created
    • Assign IPs for each BGP peering. Assign the first IP of the /30 to Oracle and the second one to customer (Azure)
  4. Go back to the Azure Portal and link the Virtual Gateway to ExpressRoute by creating a Connection
    • Select the Virtual Network Gateway
    • Select Connections
      • Give it a Name
      • Connection type – ExpressRoute
      • Select the ExpressRoute circuit you created previously
  5. Once the provisioning is complete on both sides, go to the Azure Portal, select the ExpressRoute circuit you just created, click Azure private under the Peering section, and click Get Route Table. You should be able to see the subnets created in your OCI VCN
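
A few of these steps can also be sketched on the command line. The names and OCIDs below are placeholders, and FastConnect itself is created in the console as described in step 3; this sketch only covers the VCN and subnet, the Azure-side connection from step 4, and the route-table check from step 5.

# OCI CLI: TEST VCN (10.0.10.0/24) and a subnet for the VMs (placeholder OCIDs)
oci network vcn create --compartment-id $COMPARTMENT_OCID \
  --cidr-block 10.0.10.0/24 --display-name TEST
oci network subnet create --compartment-id $COMPARTMENT_OCID --vcn-id $VCN_OCID \
  --cidr-block 10.0.10.0/24 --display-name A-Subnet

# Azure CLI: link the Virtual Network Gateway to the ExpressRoute circuit (step 4)
az network vpn-connection create --resource-group myRG --name Azure-OCI-Connection \
  --vnet-gateway1 myVNetGateway --express-route-circuit2 myExpressRoute

# Azure CLI: view the routes advertised by the DRG over the Interconnect (step 5)
az network express-route list-route-tables --resource-group myRG --name myExpressRoute \
  --path primary --peering-name AzurePrivatePeering --output table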

As you can see above, the DRG advertises to Azure all the subnets created within the VCN. The Interconnect is now configured. The next step is to verify connectivity between the OCI VCN and the Azure VNET.

 

Connectivity Test

  1. Create a VM in Azure (VM-Azure)
  2. Create and Associate a network security group to the VM subnet and allow traffic from/to OCI
  3. Create a route table in your resource group and associate it with the VM subnet. This might not be needed if the Virtual Network Gateway is already propagating routes to the subnets (a CLI sketch follows the ping output below)
    • Add a route to TEST VCN (10.0.10.0/24) pointing to the Virtual Network Gateway you created previously
  4. Create a VM in OCI (VMOCI)
  5. Update the routing table associated with the subnet and add a route rule for the Azure Virtual Network (10.20.0.0/24) pointing to the DRG
  6. Update the security list to allow traffic from/to Azure
  7. Open a terminal window on each VM and ping the other. As you can see below, the VMs can reach each other
From OCI
[opc@vmoci ~]$ ping 10.20.0.4
PING 10.20.0.4 (10.20.0.4) 56(84) bytes of data.
64 bytes from 10.20.0.4: icmp_seq=1 ttl=62 time=2.42 ms
64 bytes from 10.20.0.4: icmp_seq=2 ttl=62 time=4.92 ms
64 bytes from 10.20.0.4: icmp_seq=3 ttl=62 time=1.96 ms
64 bytes from 10.20.0.4: icmp_seq=4 ttl=62 time=2.08 ms
64 bytes from 10.20.0.4: icmp_seq=5 ttl=62 time=1.99 ms
^C
--- 10.20.0.4 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 1.961/2.675/4.920/1.136 ms
From Azure
[opcuser@VM-Azure ~]$ ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=61 time=2.13 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=61 time=2.22 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=61 time=2.24 ms
64 bytes from 10.0.10.2: icmp_seq=4 ttl=61 time=2.19 ms
64 bytes from 10.0.10.2: icmp_seq=5 ttl=61 time=2.29 ms
^C
--- 10.0.10.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 2.132/2.217/2.290/0.053 ms
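
The routing pieces of this test (steps 3 and 5 above) could also be done from the command line; this is a minimal sketch with placeholder names and the example prefixes used in this blog.

# Azure: route the TEST VCN prefix (10.0.10.0/24) to the Virtual Network Gateway
az network route-table create --resource-group myRG --name myRouteTable
az network route-table route create --resource-group myRG --route-table-name myRouteTable \
  --name to-oci --address-prefix 10.0.10.0/24 --next-hop-type VirtualNetworkGateway
az network vnet subnet update --resource-group myRG --vnet-name myVNet \
  --name VM-Subnet --route-table myRouteTable

# OCI: route the Azure VNET prefix (10.20.0.0/24) to the DRG
# (route-table update replaces the full rule list, so include any existing rules)
oci network route-table update --rt-id $A_SUBNET_RT_OCID --force --route-rules \
  '[{"destination": "10.20.0.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "'$DRG_OCID'"}]'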
 

On-Prem Private Connectivity to OCI

From on-prem you can connect to OCI privately via VPN Connect or FastConnect. For the purpose of this blog, on-prem will connect to OCI via VPN Connect, but the same concept and results apply to FastConnect.

IMPORTANT - By design, the Oracle-Azure Interconnect does NOT work as a transit network, meaning you CANNOT connect to Azure from on-prem through OCI, or to OCI from on-prem through Azure.

To connect from on-prem to OCI you need VPN Connect or FastConnect, and to connect from on-prem to Azure you need ExpressRoute.

The diagram below represents the solution for this use case

On-prem is represented by the section at the bottom right with address space 10.0.0.0/24. For this test, VPN Connect uses static routing, as shown below.
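
The VPN Connect piece could be sketched with the OCI CLI as follows, using the on-prem prefix from the diagram and placeholder OCIDs (the CPE IP is the public address of the on-prem VPN device).

oci network cpe create --compartment-id $COMPARTMENT_OCID \
  --ip-address <on-prem-vpn-device-public-ip> --display-name OnPrem-CPE
oci network ip-sec-connection create --compartment-id $COMPARTMENT_OCID \
  --cpe-id $CPE_OCID --drg-id $DRG_OCID \
  --static-routes '["10.0.0.0/24"]' --display-name OnPrem-VPN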

From the VCN point of view, the A-Subnet has the default route table and it has an entry to reach on-prem pointing to the DRG

To verify connectivity, perform a ping test from a VM in OCI (VMOCI) to a VM on-prem (VMOP).

From OCI
[opc@vmoci ~]$ ping 10.0.0.35
PING 10.0.0.35 (10.0.0.35) 56(84) bytes of data.
64 bytes from 10.0.0.35: icmp_seq=1 ttl=62 time=61.4 ms
64 bytes from 10.0.0.35: icmp_seq=2 ttl=62 time=60.2 ms
64 bytes from 10.0.0.35: icmp_seq=3 ttl=62 time=60.4 ms
64 bytes from 10.0.0.35: icmp_seq=4 ttl=62 time=60.1 ms
64 bytes from 10.0.0.35: icmp_seq=5 ttl=62 time=60.8 ms
^C
--- 10.0.0.35 ping statistics ---
6 packets transmitted, 5 received, 16% packet loss, time 5007ms
rtt min/avg/max/mdev = 60.130/60.626/61.402/0.552 ms
From On-prem
[opc@vmop ~]$ ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=62 time=61.3 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=62 time=60.4 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=62 time=60.1 ms
64 bytes from 10.0.10.2: icmp_seq=4 ttl=62 time=60.8 ms
64 bytes from 10.0.10.2: icmp_seq=5 ttl=62 time=61.0 ms
^C
--- 10.0.10.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 60.167/60.762/61.310/0.431 ms

This shows that connectivity from OCI to on-prem is working. In the previous section we established connectivity to Azure as well. The DRG for the TEST VCN has all the routes to reach both networks. Now let’s verify if Azure is receiving the routes for on-prem. Go to the Azure Portal, select the ExpressRoute you just created, click Azure Private under the Peering section, click Get Route Table.

As you can see above, it only has routes for the TEST VCN in OCI but does NOT have a route for on-prem (10.0.0.0/24). This confirms that OCI can't be used as a transit route to reach Azure from on-prem. The same applies if the customer has ExpressRoute from on-prem to Azure and tries to reach OCI via Azure. The Interconnect is only for VCN to VNET communication, cloud-to-cloud.

 

Local Peering Gateway

In this scenario, the customer has multiple VCNs within OCI that are peered using a Local Peering Gateway (LPG). This also applies if the customer has a hub-and-spoke deployment. The spokes, or peered VCNs, can talk to Azure using the Interconnect.

The diagram below represents the solution for this use case

First, create the SPOKE VCN, create the S subnet, and launch the VMOCI-Spoke VM. Then peer the two VCNs using an LPG (a CLI sketch follows the checklist below). For step-by-step instructions on how to use an LPG, refer to the public documentation. Once the configuration is done, this is what you should have:

SPOKE VCN

Local Peering Gateway (LPG-S) peered with LPG-Test (remote LPG at Test VCN). It receives a summarized route from the LPG-Test

Route Table associated with the S subnet. Note it has an entry to Azure and On-prem pointing to the local LPG-S

Security list is updated to allow traffic from the SPOKE VCN to the TEST VCN, Azure, and on-prem

 

TEST VCN

Local Peering Gateway (LPG-Test) peered with LPG-S (the remote LPG in the SPOKE VCN). It receives a route from LPG-S. Note that LPG-Test also has a route table associated with it, because you need to tell the LPG how to get to Azure and to on-prem; by default, LPGs only advertise prefixes for the VCN they are attached to

This is the route table associated with LPG-Test, the entries for Azure and on-prem are pointing to the DRG

Route table associated with the A Subnet. As you can see, it has an entry for the SPOKE VCN pointing to the local LPG-Test, in addition to the previous routing entries

Security list is updated to allow traffic from the TEST VCN to SPOKE VCN
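
As referenced above, the LPG peering and the S subnet routes could be sketched with the OCI CLI. The OCIDs are placeholders and the prefixes are the ones used in this blog.

# Create an LPG in each VCN and connect them (both VCNs are in the same region)
oci network local-peering-gateway create --compartment-id $COMPARTMENT_OCID \
  --vcn-id $SPOKE_VCN_OCID --display-name LPG-S
oci network local-peering-gateway create --compartment-id $COMPARTMENT_OCID \
  --vcn-id $TEST_VCN_OCID --display-name LPG-Test
oci network local-peering-gateway connect \
  --local-peering-gateway-id $LPG_S_OCID --peer-id $LPG_TEST_OCID

# S subnet route table: Azure and on-prem traffic goes to the local LPG-S
oci network route-table update --rt-id $S_SUBNET_RT_OCID --force --route-rules \
  '[{"destination": "10.20.0.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "'$LPG_S_OCID'"},
    {"destination": "10.0.0.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "'$LPG_S_OCID'"}]'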

Now that all the infrastructure for local VCN peering is in place, let's verify connectivity between the VCNs.

From TEST VCN
[opc@vmoci ~]$ ping 10.0.100.2
PING 10.0.100.2 (10.0.100.2) 56(84) bytes of data.
64 bytes from 10.0.100.2: icmp_seq=1 ttl=64 time=0.201 ms
64 bytes from 10.0.100.2: icmp_seq=2 ttl=64 time=0.165 ms
64 bytes from 10.0.100.2: icmp_seq=3 ttl=64 time=0.143 ms
64 bytes from 10.0.100.2: icmp_seq=4 ttl=64 time=0.144 ms
64 bytes from 10.0.100.2: icmp_seq=5 ttl=64 time=0.154 ms
^C
--- 10.0.100.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 7197ms
rtt min/avg/max/mdev = 0.119/0.146/0.201/0.029 ms
From SPOKE VCN
[opc@vmoci-spoke ~]$ ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=64 time=0.150 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=64 time=0.140 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=64 time=0.137 ms
64 bytes from 10.0.10.2: icmp_seq=4 ttl=64 time=0.126 ms
64 bytes from 10.0.10.2: icmp_seq=5 ttl=64 time=0.116 ms
^C
--- 10.0.10.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 6158ms
rtt min/avg/max/mdev = 0.116/0.137/0.150/0.018 ms
 

The ping test is successful, so local peering is working correctly. In the previous steps you created a route table for LPG-Test to send traffic to the DRG when destined for Azure. Now you need a route entry in the DRG pointing to LPG-Test for any traffic destined to the SPOKE VCN. If the DRG does not have a route table, create one and assign it to the DRG, as sketched below.

Associate the Route Table with the DRG
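
A sketch of these two steps with the OCI CLI, using placeholder OCIDs (10.0.100.0/24 is the SPOKE VCN):

# Route table in the TEST VCN sending SPOKE VCN traffic to LPG-Test
oci network route-table create --compartment-id $COMPARTMENT_OCID --vcn-id $TEST_VCN_OCID \
  --display-name DRG-RT --route-rules \
  '[{"destination": "10.0.100.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "'$LPG_TEST_OCID'"}]'

# Associate it with the DRG attachment of the TEST VCN
oci network drg-attachment update --drg-attachment-id $DRG_ATTACHMENT_OCID \
  --route-table-id $DRG_RT_OCID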

At this point all the infrastructure is in place. Let's check if Azure is receiving the route for the SPOKE VCN. Go to the Azure Portal, select the ExpressRoute you created, click Azure private under the Peering section, and click Get Route Table. As you can see, 10.0.100.0/24, which belongs to the SPOKE VCN, is on the list.

Now let’s verify connectivity between the SPOKE VCN and Azure. Make sure your security list is allowing this traffic in both directions

From Azure
[opcuser@VM-Azure ~]$ ping 10.0.100.2
PING 10.0.100.2 (10.0.100.2) 56(84) bytes of data.
64 bytes from 10.0.100.2: icmp_seq=1 ttl=61 time=1.88 ms
64 bytes from 10.0.100.2: icmp_seq=2 ttl=61 time=2.00 ms
64 bytes from 10.0.100.2: icmp_seq=3 ttl=61 time=2.10 ms
64 bytes from 10.0.100.2: icmp_seq=4 ttl=61 time=2.02 ms
64 bytes from 10.0.100.2: icmp_seq=5 ttl=61 time=2.06 ms
^C
--- 10.0.100.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 1.883/2.049/2.389/0.137 ms
From SPOKE VCN
[opc@vmoci-spoke ~]$ ping 10.20.0.4
PING 10.20.0.4 (10.20.0.4) 56(84) bytes of data.
64 bytes from 10.20.0.4: icmp_seq=1 ttl=62 time=2.19 ms
64 bytes from 10.20.0.4: icmp_seq=2 ttl=62 time=2.14 ms
64 bytes from 10.20.0.4: icmp_seq=3 ttl=62 time=2.08 ms
64 bytes from 10.20.0.4: icmp_seq=4 ttl=62 time=1.99 ms
64 bytes from 10.20.0.4: icmp_seq=5 ttl=62 time=2.11 ms
^C
--- 10.20.0.4 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 14019ms
rtt min/avg/max/mdev = 1.823/2.019/2.235/0.133 ms

 

These results confirm that this is a valid use case for the Interconnect.

 

Service Gateway

In this use case, the customer has a service gateway so that resources in the TEST VCN or on-prem can privately reach SaaS, Object Storage, and other services located in the Oracle Services Network (OSN). Ideally, resources in Azure can access services in OSN as well.

The diagram below represents the solution for this use case

The first step will be to deploy a Service Gateway in the TEST VCN

Update the route table for the A Subnet with an entry to reach services in OSN via the TEST-SGW
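
These two steps could be sketched with the OCI CLI. The service OCID and its CIDR label come from the output of oci network service list for your region; everything else below is a placeholder.

# Find the Object Storage (or all-services) entry for the region
oci network service list

# Service gateway in the TEST VCN enabling that service
oci network service-gateway create --compartment-id $COMPARTMENT_OCID --vcn-id $TEST_VCN_OCID \
  --services '[{"serviceId": "'$SERVICE_OCID'"}]' --display-name TEST-SGW

# A Subnet route rule towards the service gateway, using the service CIDR label
# (remember that route-table update replaces the existing rules, so include them too)
oci network route-table update --rt-id $A_SUBNET_RT_OCID --force --route-rules \
  '[{"destination": "'$SERVICE_CIDR_LABEL'", "destinationType": "SERVICE_CIDR_BLOCK", "networkEntityId": "'$SGW_OCID'"}]'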

At this time, only the A Subnet is aware of OSN via the TEST-SGW; neither the DRG nor Azure knows about it. For this reason, the next step is to update the route table associated with the DRG to point to the TEST-SGW, as shown below.

Now that the DRG has knowledge about the routes for OSN, it should advertise them to Azure via the Interconnect. Let’s check if Azure has received these routes. Go to the Azure Portal, select the ExpressRoute you just created, click Azure Private under the Peering section, click Get Route Table

As you can see above, the last two entries are for Object Storage, which is the route we added to the DRG. You can modify the route table for the DRG to advertise all the OSN services in the region by changing the previous route entry to what is shown below.

If you then check the Azure routing table, you should see all the routes for OSN in that region.

With this configuration you have one-way traffic: Azure knows how to get to OSN, but OSN does not know how to get back to Azure or on-prem. The next step is to create a route table and assign it to the TEST-SGW, pointing back to the DRG.

Next, assign the route table to the service gateway, as sketched below.
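
A sketch of this return path with the OCI CLI, using placeholder OCIDs (10.20.0.0/24 is the Azure VNET):

# Route table sending Azure-bound traffic from the service gateway back to the DRG
oci network route-table create --compartment-id $COMPARTMENT_OCID --vcn-id $TEST_VCN_OCID \
  --display-name SGW-RT --route-rules \
  '[{"destination": "10.20.0.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "'$DRG_OCID'"}]'

# Assign it to the service gateway
oci network service-gateway update --service-gateway-id $SGW_OCID --route-table-id $SGW_RT_OCID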

At this point, the configuration is done from the routing perspective. Check the security lists in Azure and OCI to make sure traffic is allowed to reach OSN, then try to connect to Object Storage or any other service in OSN that is supported by the Service Gateway.
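
For example, from the Azure VM a simple reachability check could be a request to the regional Object Storage endpoint (the region below is a placeholder):

curl -sI https://objectstorage.us-ashburn-1.oraclecloud.com

Any HTTP response indicates the endpoint is reachable; to confirm the path goes over the Interconnect, check that the OSN prefixes in the Azure route table point to the Virtual Network Gateway.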

 

Remote Peering Connection

In this use case the customer has VCNs in two different regions. The TEST VCN is peered with another VCN in a different region via a Remote Peering Connection (RPC).

The diagram below represents the solution for this use case

For this use case, a REMOTE VCN is deployed in the Phoenix region; it has a subnet and a VM for testing. The security list is updated to allow traffic from the TEST VCN and from Azure. The R-subnet route table also has routes for the TEST VCN and Azure pointing to the local DRG.

Also update the routing table for the A-Subnet on the TEST VCN to make sure it has a route for the REMOTE VCN pointing to the DRG

In the REMOTE VCN, create the Remote Peering Connection (RPC) called RPC-TEST. For information on how to perform this task, refer to the public documentation. Copy the OCID of this connection, as you will need it to establish the peering relationship.

In the TEST VCN, create the Remote Peering Connection (RPC) called RPC-REMOTE

The next step is to establish the connection between the two RPCs. This can be done from either VCN, but you always need the OCID of the other RPC to complete the task (a CLI sketch follows).
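
A sketch of creating and connecting the two RPCs with the OCI CLI, using placeholder OCIDs; the peer region name assumes the REMOTE VCN is in Phoenix, as in this example.

# RPC on each DRG (run against the respective region)
oci network remote-peering-connection create --compartment-id $COMPARTMENT_OCID \
  --drg-id $REMOTE_DRG_OCID --display-name RPC-TEST --region us-phoenix-1
oci network remote-peering-connection create --compartment-id $COMPARTMENT_OCID \
  --drg-id $TEST_DRG_OCID --display-name RPC-REMOTE

# Connect from the TEST VCN side using the OCID of the RPC in the REMOTE VCN
oci network remote-peering-connection connect \
  --remote-peering-connection-id $RPC_REMOTE_OCID \
  --peer-id $RPC_TEST_OCID --peer-region-name us-phoenix-1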

To test the RPC perform a ping test from VMOCI to VMOCI-Remote

From TEST VCN
[opc@vmoci ~]$ ping 10.0.200.2
PING 10.0.200.2 (10.0.200.2) 56(84) bytes of data.
64 bytes from 10.0.200.2: icmp_seq=1 ttl=62 time=58.0 ms
64 bytes from 10.0.200.2: icmp_seq=2 ttl=62 time=58.0 ms
64 bytes from 10.0.200.2: icmp_seq=3 ttl=62 time=58.0 ms
64 bytes from 10.0.200.2: icmp_seq=4 ttl=62 time=58.0 ms
64 bytes from 10.0.200.2: icmp_seq=5 ttl=62 time=58.0 ms
^C
--- 10.0.200.2 ping statistics ---
5 packets transmitted, 8 received, 0% packet loss, time 7007ms
rtt min/avg/max/mdev = 58.011/58.066/58.096/0.123 ms
From REMOTE VCN
[opc@vmoci-remote ~]$ ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=62 time=58.0 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=62 time=58.0 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=62 time=58.0 ms
64 bytes from 10.0.10.2: icmp_seq=4 ttl=62 time=58.0 ms
64 bytes from 10.0.10.2: icmp_seq=5 ttl=62 time=58.0 ms
^C
--- 10.0.10.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 58.060/58.082/58.131/0.021 ms

 

The RPC is working properly. The next step is to check if Azure is receiving the route for the REMOTE VCN. Note that the RPC is established between DRGs. There is no need to add any routes to the DRG routing table.

As you can see above, there is NO route for 10.0.200.0/24 (the REMOTE VCN) in the routing table. By design, this use case does NOT work with the Interconnect.

 

Reference

Oracle - How to configure Interconnect

Azure - How to configure Interconnect

 
