
Best Practices from Oracle Development's A‑Team

Using IPv6 as a tool for overcoming VCN IPv4 CIDR overlaps.

Sergio J Castro
Cloud Solutions Engineer.

IPv4 is still the dominant version for layer 3 addressing on the Internet. It has been a decade since ICANN, the Internet Corporation for Assigned Names and Numbers, announced that it had released its last batch of IPv4 addresses. However, corporations still have plenty of IPv4 addresses today, which is why your resources, such as load balancers and compute instances, get public IPv4 addresses with no restrictions. Classless Inter-Domain Routing (CIDR), RFC 1918, and Network Address Translation (NAT) keep extending the life of IPv4. RFC 1918, for example, provides clarity for multi-tier architectures: having a block of IPs that is routable only in the private space makes it easier to allocate private IP addresses in a tier that is supposed to be private by design. In IPv6, however, there is no need to economize on IP address allocation, and there is no need for NAT either, because all IPv6 CIDRs provided by OCI are publicly routable.

 

One problem with RFC 1918 is the overlapping of CIDR prefixes. Every VCN that you launch with the VCN Wizard in OCI has 10.0.0.0/16 as the default CIDR prefix, and if you launch a VCN without the wizard, 10.0.0.0/16 is the CIDR used as the example for the CIDR field. Other cloud providers use this same prefix: if you create a VNet in Azure, the default CIDR is also 10.0.0.0/16, and if you launch a VPC in AWS with the VPC wizard, the default IPv4 CIDR prefix is 10.0.0.0/16 as well.
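You can check whether two prefixes collide with Python's standard ipaddress module. A minimal sketch, using the default wizard prefix from above for two hypothetical VCNs:

```python
import ipaddress

# Both VCNs were created with the wizard's default prefix
vcn_one = ipaddress.ip_network("10.0.0.0/16")
vcn_two = ipaddress.ip_network("10.0.0.0/16")

# overlaps() is True when the two prefixes share any address
print(vcn_one.overlaps(vcn_two))                                # True

# A differently planned prefix avoids the conflict
print(vcn_one.overlaps(ipaddress.ip_network("172.17.0.0/16")))  # False
```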

 

One can always use a different range and carefully plan the network to make sure there is no overlap. However, there are situations where two well-designed networks, independent of each other, need to interconnect, and the potential for IP addresses to overlap is there: connecting two VCNs from the same tenancy, connecting two VCNs in different OCI tenancies, or interconnecting a VCN with a virtual network from another cloud provider. If that is the case, routing among them will be a problem.

 

OCI now offers tools that can help solve CIDR overlapping problems. For example, you can modify the size of your CIDR block, or you can add other CIDRs to the VCN.

 

Let’s see a scenario within an OCI tenancy.

 

Figure 1 shows two VCNs in the same region with overlapping CIDR prefixes:

 

 

The enhanced features of the DRG allow the local peering of two VCNs in the same region via DRG attachments. You can attach them to the DRG even if they have overlapping CIDRs, as seen in figure 2. However, you will not be able to route traffic between them.

 

 

If you click on the Autogenerated Drg Route Table for VCN attachments link, you immediately get a Conflict message that indicates CIDR prefix overlap, as shown in figure 3:

 

 

The main reason for interconnecting VCNs is for the resources in them to communicate with each other. For the two VCNs shown in figure 1, you can add a different IPv4 CIDR prefix to each one: for example, 172.17.0.0/16 to VCNOne and 192.168.0.0/16 to VCNTwo. However, you cannot add a second IPv4 CIDR range to a subnet. This is not convenient, because in order for your existing resources in each VCN to use a new IPv4 CIDR, new subnets with these prefixes are needed. And, in the case of Virtual Machines (VMs), new vNICs will be needed to connect to these new subnets.
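The constraint can be illustrated with the ipaddress module: a subnet carved from the original 10.0.0.0/16 cannot draw addresses from a newly added VCN prefix, which is why fresh subnets (and vNICs) are unavoidable. A sketch, using the example prefixes above and a hypothetical 10.0.0.0/24 public subnet:

```python
import ipaddress

vcn_one_original = ipaddress.ip_network("10.0.0.0/16")
vcn_one_added = ipaddress.ip_network("172.17.0.0/16")   # new secondary CIDR
public_subnet = ipaddress.ip_network("10.0.0.0/24")     # existing subnet

# The existing subnet lives entirely inside the original prefix...
print(public_subnet.subnet_of(vcn_one_original))   # True

# ...and is disjoint from the added prefix, so a new subnet
# such as 172.17.0.0/24 would have to be created instead.
print(public_subnet.subnet_of(vcn_one_added))      # False
```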

 

Alternatively, you can enable IPv6 on both the VCN and the subnets, which is a cleaner option.

 

This post details the steps for enabling IPv6 on two VCNs with overlapping IPv4 CIDRs, making it possible for them to route traffic to each other. For this, I preconfigured two Wizard-created VCNs, accepting the default IPv4 CIDR prefix of 10.0.0.0/16 on both. I also launched two compute instances with VM.Standard.E4.Flex shapes and Oracle Linux images, one in the public subnet of each VCN. Figure 4 shows these two compute instances:

 

 

  1. The first step is to enable IPv6 on both VCNs. Navigate to the main OCI menu and select Networking, and then select Virtual Cloud Networks. Open one of the VCNs and click on the Add IPv6 CIDR Block button under Resources, as shown in figure 5:

 

 

  2. Confirm your selection.

 

Your IPv6 prefix will be a /56. Notice that this prefix is enough for 4,722,366,482,869,645,213,696 IP addresses! (This amount is strictly for informational purposes; a VCN has a limit of 65,000 IP addresses.)
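That figure is simply 2^(128-56) = 2^72, which the ipaddress module can confirm. The /56 below uses the RFC 3849 documentation prefix 2001:db8::/56 as a stand-in for the prefix OCI actually assigns:

```python
import ipaddress

# Documentation prefix standing in for the OCI-assigned /56
vcn_prefix = ipaddress.ip_network("2001:db8::/56")

print(vcn_prefix.num_addresses)                       # 4722366482869645213696
print(vcn_prefix.num_addresses == 2 ** (128 - 56))    # True
```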

 

  3. Now, also under Resources, click on the Subnets link and select the one that you want to enable for IPv6. In this post, we will use the public one. Click on the Edit button. Refer to figure 6:

 

 

  4. Check the ENABLE IPv6 CIDR BLOCK check box.

 

You will be presented with a /64 IPv6 prefix for your subnet. You will need to complete the prefix by entering a hexadecimal number in the 00-FF range, as shown in figure 7.

 

  5. Complete the IPv6 CIDR prefix.
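The hexadecimal byte you enter selects one of the 256 possible /64 subnets inside the VCN's /56. A sketch of that math, again using the RFC 3849 documentation prefix in place of the OCI-assigned one:

```python
import ipaddress

vcn_prefix = ipaddress.ip_network("2001:db8::/56")

# A /56 contains exactly 2**(64-56) == 256 possible /64 subnets
subnets = list(vcn_prefix.subnets(new_prefix=64))
print(len(subnets))      # 256

# Entering hex byte AB in the console picks subnet number 0xAB
print(subnets[0xAB])     # 2001:db8:0:ab::/64
```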

 

 

  6. Repeat these steps for the other VCN and subnet.

 

If you take a second look at the Autogenerated Drg Route Table for VCN attachments, you will now see a route for each IPv6 CIDR prefix, with the corresponding VCN attachment as its next hop. Refer to figure 8. Take note of both IPv6 prefixes.

 

 

The next step is to assign IPv6 addresses to the compute instances being hosted in these two subnets.

 

  7. Navigate to the main OCI menu and select Compute, and then Instances. Open one of the compute instances.

 

  8. Under Resources, click on the Attached VNICs link.

 

  9. Click on the link under the Name field for the Primary vNIC. Refer to figure 9:

 

 

  10. Under Resources, select IPv6 Addresses.

 

  11. Click on the Assign IPv6 Address button.

 

  12. Click the Assign button. Figure 10 shows the completion of this step:

 

 

  13. Repeat these steps for the other compute instance.

 

In order for a compute instance to work with IPv6, you need to configure its operating system. This Oracle document provides detailed information.

 

  14. SSH into one of the compute instances and run the following commands:
  • route (NOTE: this command is for retrieving the network interface associated with the instance’s IPv4 address, which in this case is ens3)
  • sudo dhclient -6 ens3
  • sudo firewall-cmd --add-service=dhcpv6-client

 

  15. SSH into the other compute instance and repeat step 14 on it.

 

Now it is time to route! However, the Security Lists for the VCNs only have security rules for IPv4. We need to add IPv6-specific rules. We will enable SSH and ICMP.

 

  16. Navigate to your VCNs and add IPv6 SSH and ICMP rules to the respective Security Lists assigned to the subnets where these compute instances reside (the Default Security List, in this case). There is a special IPv6 ICMP rule that you can find in the rule picklist. Figure 11 shows a completed Security List:

 

 

  17. Also, add an IPv6 egress rule for All Protocols, as shown in figure 12.

 

 

All that is left is to add routes to the respective route tables.

 

  18. In each of the Route Tables associated with the subnets that host the compute instances, enter a rule that directs all IPv6 traffic (::/0) to the DRG. Refer to figure 13:
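The selection logic these route tables apply is longest-prefix match: the most specific matching prefix wins, and ::/0 catches everything else. A minimal sketch with made-up prefixes and target names (the ::/0 entry plays the role of the rule added in this step):

```python
import ipaddress

# Hypothetical route table: destination prefix -> route target
routes = {
    ipaddress.ip_network("2001:db8:0:ab::/64"): "local subnet",
    ipaddress.ip_network("::/0"): "DRG",   # catch-all route to the DRG
}

def lookup(dest: str) -> str:
    """Return the target of the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("2001:db8:0:ab::10"))   # local subnet
print(lookup("2001:db8:0:cd::10"))   # DRG (falls through to ::/0)
```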

 

 

The resulting route table looks as depicted in figure 14:

 

 

The next step is to test our configuration. We should be able to ping from one compute instance to the other via IPv6. For IPv6 ICMP, the command is ping6. Refer to figure 15:

 

 

Success! Compute instances hosted in VCNs that have overlapping IPv4 CIDR prefixes are communicating with each other over IPv6. We also enabled port 22, so let’s SSH as well, this time in the reverse direction. Refer to figure 16:

 

 

Success!

 

References:

 

* OCI IPv6 Routing and Security by Andrei (Bogdan) Stoian

* Oracle Cloud Infrastructure Documentation, IPv6 Addresses

* RFC 4291

* ICANN assigns its last IPv4 addresses

 
