
Best Practices from Oracle Development's A‑Team

Provisioning Custom DNS Resolvers for FQDN Resolution

Last Validation: October 21, 2020 

Introduction

The Domain Name System (DNS) exists to translate a Fully Qualified Domain Name (FQDN) to one or more numerical IP addresses. As I write this, the name www.oracle.com translates to the address 23.64.153.141. The translation is produced by DNS name servers, which are queried by DNS resolvers. A hybrid DNS resolver also caches the translations returned to it.

In Oracle Cloud Infrastructure (OCI), FQDNs are generated for some but not all service instances. Of those generated, some are publicly resolvable via DNS servers on the internet. This post concerns only FQDNs generated for service instances provisioned in a Virtual Cloud Network (VCN) configured to use DNS Hostnames. By default, these FQDNs are resolvable only by a VCN's default resolver.

If your service or device needs one of these FQDNs translated, a custom DNS resolver is a method to consider. Another method is to add an entry for the FQDN in a different name server. The most common examples are translations required from on-premises data centers, remote workstations, peered VCNs, and other cloud environments.

This post is a step-by-step guide for provisioning a free, lightweight hybrid DNS resolver named Dnsmasq on Linux compute instances in OCI for development purposes. It is referenced by other posts where the topic requires remote FQDN translation. The Dnsmasq official website is here.

Custom resolvers are usually deployed in pairs. A resolver residing outside of a target instance's VCN recognizes the requested domain as belonging to the target VCN and forwards the DNS query to that VCN's custom resolver for further resolution.

Validations

October 21, 2020

Topics

Before You Begin

Preparing to Provision Custom DNS Resolvers

Provisioning Custom DNS Resolvers

Reconfiguring Subnets to use a Custom DNS Resolver

Configuring a Custom Resolver to Forward Queries

Validating the Custom DNS Resolvers

 Before You Begin

Determining the Need for a Custom Resolver

There are multiple reasons a device or instance may fail to connect to another instance, e.g. security rules, routing rules, or gateways. One utility common to most devices for checking whether a name can be resolved is nslookup, as shown below.
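A quick check resolves a name directly from the device in question, assuming nslookup is installed (it is typically provided by the bind-utils package on Oracle Linux):

nslookup www.oracle.com

If the name resolves, the output lists the resolver that answered and the translated addresses; if not, it reports an error such as NXDOMAIN.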

Using the OpenSSH Configuration File

For the examples in this post, back up your current configuration file and create a new one. Below is an example using the default location. The ~ character translates to your home directory.

mv ~/.ssh/config ~/.ssh/configsave; touch ~/.ssh/config

Use a text editor to place the details of the two compute instances into the new configuration file. Below are sample contents based on the information in the initial state diagram shown later in this section. Change the details to reflect your environment. To avoid copy-and-paste spacing issues a downloadable file is here.

User opc
IdentityFile /Users/dcarley/privateKey

Host Instance1
HostName 10.10.10.2

Host Instance2
HostName 10.20.20.2
ProxyJump Instance1

This configuration allows you to connect to Instance2 via Instance1 and assumes the same private key.

SSH to Instance1 and run the utility to see if the FQDN of Instance2 is resolvable.

ssh Instance1 nslookup www.oracle.com shows that www.oracle.com is resolvable and returns something like:

Server:        192.168.1.254
Address:    192.168.1.254#53

Non-authoritative answer:
www.oracle.com    canonical name = ds-www.oracle.com.edgekey.net.
ds-www.oracle.com.edgekey.net    canonical name = e2581.dscx.akamaiedge.net.
Name:    e2581.dscx.akamaiedge.net
Address: 104.65.165.39

ssh Instance1 nslookup I2.sn2.vcn2.oraclevcn.com shows that I2.sn2.vcn2.oraclevcn.com is not resolvable and returns something like:

Server:        192.168.1.254
Address:    192.168.1.254#53

** server can't find I2.sn2.vcn2.oraclevcn.com: NXDOMAIN

Proceeding with the guide assumes the following is in place:

A user account in an OCI tenancy

Compartment privileges to create networking components and compute instances.

Two VCNs configured to use DNS Hostnames. Use existing VCNs or create these two examples, either in the Console or with the OCI CLI as sketched after this list. Refer here for documentation.

VCN1 10.10.10.0/24

VCN2 10.20.20.0/24

A Local Peering Gateway (LPG) or Dynamic Routing Gateway (DRG) used in each VCN for peering. Refer here for VCN peering documentation.

An Internet Gateway enabled in each VCN.

A public subnet in each VCN. Use existing subnets or create these two examples. 

Subnet1 10.10.10.0/27

Subnet2 10.20.20.0/27

The default security list or similar assigned to each subnet allowing ingress for SSH and ICMP traffic and egress for all traffic.

A route table assigned to each subnet directing traffic to the other VCN via an LPG or DRG and the remaining traffic to the internet gateway.

The Default DHCP Options or similar assigned to each subnet with the DNS Type set to Internet and VCN Resolver.

A Linux compute instance in each of the subnets. Use existing instances or create these two examples. Refer here for documentation.

Instance 1 ( Generate an SSH key pair while creating if you do not have one )

Instance 2 ( use the same OpenSSH public key )

An SSH utility on your client workstation with a private key to access the Linux instances.
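If you prefer to script the example VCNs and subnets with the OCI CLI instead of the Console, a minimal sketch is below. It assumes the oci CLI is installed and configured, and that <compartment-ocid> and the returned VCN OCIDs are substituted; the remaining prerequisites (gateways, security lists, route tables, instances) still need to be created.

oci network vcn create --compartment-id <compartment-ocid> --cidr-block 10.10.10.0/24 --display-name VCN1 --dns-label vcn1
oci network subnet create --compartment-id <compartment-ocid> --vcn-id <vcn1-ocid> --cidr-block 10.10.10.0/27 --display-name Subnet1 --dns-label sn1
oci network vcn create --compartment-id <compartment-ocid> --cidr-block 10.20.20.0/24 --display-name VCN2 --dns-label vcn2
oci network subnet create --compartment-id <compartment-ocid> --vcn-id <vcn2-ocid> --cidr-block 10.20.20.0/27 --display-name Subnet2 --dns-label sn2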

Below is a diagram depicting this initial state using local peering.

Note the example instance names, IP addresses and FQDNs.

Validating the Initial State

Use SSH commands issued from your client workstation for validations. If you are using a Windows OS use OpenSSH ( refer here ) or PuTTY (here). Note: If using PuTTY, your private SSH key needs to be in PuTTY's PPK format.
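If you have the puttygen command-line tool (included with PuTTY packages on Linux and macOS), one way to convert an OpenSSH private key to PPK format is shown below; the file names are illustrative. On Windows the same conversion can be done in the PuTTYgen GUI by loading the key and saving it as a private key.

puttygen ~/.ssh/privateKey -O private -o ~/.ssh/privateKey.ppk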

Issuing the Validation Command

Run the following command to verify that Instance1 can connect to Instance2 using IP addresses via the gateway.

ssh Instance2 or ssh -v Instance2 for debug mode

You should see something like below:

[opc@Instance2 ~]$ 

 Preparing to Provision Custom DNS Resolvers

Custom DNS Resolvers require additional subnets that must be configured to use the default VCN resolver.

Deploying Additional Subnets

Create additional public subnets, Subnet3 and Subnet4, one in each VCN. For each VCN navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Subnets

These may be private subnets if all access to the hosting VCN is via a DRG or an LPG. Remote workers and devices without access to either require the resolver to be in a public subnet. This guide uses public subnets.

Note: Ensure that the DNS Type in the DHCP Options is set to Internet and VCN Resolver. Unless it has been changed, the Default DHCP Options is configured correctly. Refer here for subnet documentation.

Creating Custom DNS Subnet Security Lists

Security rules are necessary to allow designated traffic types to and from designated sources into and out of designated ports in the subnet. A security list contains both ingress and egress rules. The default security list provisioned with a subnet allows SSH and ICMP ingress traffic and egress traffic for all ports and all protocols. 

Navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Security Lists. Create an additional security list for each new DNS subnet allowing access via the UDP protocol to the DNS listening port (default is 53). For development purposes, choose from the rules below. Refer here for documentation.

Least Restrictive Rule
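For example, a single stateful ingress rule that accepts DNS queries from any source (illustrative values; adjust to your environment):

Source Type: CIDR, Source CIDR: 0.0.0.0/0, IP Protocol: UDP, Destination Port Range: 53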

Moderately Restrictive Rules
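For example, stateful ingress rules that accept DNS queries only from the two example VCNs (illustrative values based on the example CIDRs in this guide):

Source Type: CIDR, Source CIDR: 10.10.10.0/24, IP Protocol: UDP, Destination Port Range: 53
Source Type: CIDR, Source CIDR: 10.20.20.0/24, IP Protocol: UDP, Destination Port Range: 53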

 

Creating Custom DNS Subnet Routing Tables

Routing rules are necessary to direct VCN egress traffic to an appropriate gateway. The default route table provisioned with a subnet is empty, allowing no traffic out of the subnet.

Navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Route Tables. Create and associate an additional route table for each new DNS subnet. Refer here for documentation. At least one rule directs traffic to either a DRG or an LPG, depending on how the VCNs are peered, with the remaining traffic directed to the internet gateway.

Moderately Restrictive Rules
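For example, the route table for the DNS subnet in VCN1 could contain the rules below (illustrative values assuming local peering and the example CIDRs in this guide):

Destination: 10.20.20.0/24, Target: LPG peered with VCN2
Destination: 0.0.0.0/0, Target: Internet Gateway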

Updating the Custom DNS Subnets

For each VCN navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Subnets. Edit each new DNS subnet to use the new security list and route table.

 Provisioning Custom DNS Resolvers

Provisioning Custom DNS Resolvers requires installing the Dnsmasq software on each Linux compute instance.

About OpenSSH Key Pairs on Linux Instances

Instances launched using Oracle Linux use an SSH key pair instead of a password to authenticate. A key pair consists of a private key and a public key. You keep the private key on your computer and provide the public key (in OpenSSH format) when you create an instance. If you need a new key pair, OCI can generate one for you when you create the first instance. This post uses the same key pair for both instances. When you connect to an instance using SSH, you provide the path to the private key in the SSH command or in the SSH configuration file. Refer here for key pair documentation.
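If you would rather generate the key pair yourself with OpenSSH on your workstation, a command like the one below works; the file name is illustrative and should match the IdentityFile entry in your SSH configuration.

ssh-keygen -t rsa -b 2048 -f ~/.ssh/privateKey

This creates the private key ~/.ssh/privateKey and the public key ~/.ssh/privateKey.pub, whose contents you paste into the Console when creating each instance.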

Provisioning Linux Compute Instances

Navigate to Compute >> Instances. Create two new Linux compute instances, one in each new DNS subnet. Use the default Oracle Linux image and shape. Be sure to select Assign a public IP address. Refer here for Compute documentation.

Provide an OpenSSH public key from your key pair. If you don't have a pair, before clicking Create, select the Generate SSH key pair button and save both the public and private keys as shown below.

When the provisioning completes note the FQDN, public IP address and private IP address.

Below is a diagram depicting the new DNS instances using local peering.

Adding the DNS Servers to your SSH Configuration

Use a text editor to place the details of the two new DNS compute instances into the SSH configuration file (~/.ssh/config). Below are sample contents. For final validation purposes, add an additional entry for Instance2 that uses the FQDN instead of the IP address. Change the details to reflect your environment. To avoid copy-and-paste spacing issues a downloadable file is here.

User opc
IdentityFile /Users/dcarley/privateKey

Host Instance1
HostName 172.58.48.10

Host Instance2
HostName 172.58.48.20
ProxyJump Instance1

Host DNS-1
HostName 172.58.48.30

Host DNS-2
HostName 172.58.48.40

Host Instance2-FQDN
HostName I2.sn2.vcn2.oraclevcn.com
ProxyJump Instance1

 

Validating your SSH Configuration

SSH connections are used to install Dnsmasq. Use these commands to validate that DNS-1 and DNS-2 are reachable. DNS-1 is used as an example.

ssh DNS-1 or ssh -v DNS-1 for debug mode

You should see something like below:

[opc@DNS-1 ~]$ 

Installing and Configuring Dnsmasq

Installing and configuring Dnsmasq entails connecting to each instance using SSH and running commands. These commands do the following. DNS-1 is used as an example.

Install the Dnsmasq software

Install the netcat (nc) utility for validations

Open the DNS port within the Linux instance

Disable caching within Dnsmasq for development purposes

Validate the Dnsmasq configuration file

Enable Dnsmasq to be started after reboots

Restart Dnsmasq

Display the Dnsmasq status

SSH to the DNS instance

Connect using the SSH command below:

ssh DNS-1

Switch to the Superuser

Switch to the root user using the command below:

sudo su -

Run the Commands

Run the commands below. To avoid copy-and-paste spacing issues a downloadable file is here.

yum install dnsmasq -y
yum install nc -y
firewall-cmd --add-port=53/udp
firewall-cmd --permanent --add-port=53/udp
echo "log-queries" >>/etc/dnsmasq.conf
echo "log-facility=/var/log/dnsmasq.log" >>/etc/dnsmasq.conf
echo "cache-size=0" >>/etc/dnsmasq.conf
dnsmasq --test
systemctl enable dnsmasq
systemctl restart dnsmasq
systemctl status dnsmasq

 

The last status command should show the following:
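Something like the sample below, with Dnsmasq reported as active (running); the exact wording varies by Oracle Linux release:

● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
   Active: active (running)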

Validating the UDP Configuration

UDP is the protocol used for DNS connections between instances and DNS resolvers. Use the nc utility to validate the UDP connections. Instance1 and DNS-1 are used as examples.

SSH to Instance1

Instance1 in this guide requires the resolution of Instance2's FQDN. To do that, Instance1's subnet directs the DNS query to DNS-1. Perform the following to validate the connectivity.

ssh Instance1

Check if the nc utility exists

which nc -- if it exists you should see something like this:

[opc@podi-pub-lin ~]$ which nc

/usr/bin/nc

If necessary, install nc

sudo yum install nc -y

Validate connectivity to DNS-1

This command opens a UDP session to port 53 on DNS-1.

nc -v -u -i 2 10.10.10.34 53 

With the -v option you see something like:

Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.10.10.34:53.
Ncat: Idle timeout expired (2000 ms).

SSH to DNS-1

DNS-1 in this guide forwards the query from Instance1 to DNS-2 in the same VCN as Instance2. Perform the following to validate the connectivity.

ssh DNS-1

Validate connectivity to DNS-2

This command opens a UDP session to port 53 on DNS-2.

nc -v -u -i 2 10.20.20.34 53 

With the -v option you see something like:

Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.20.20.34:53.
Ncat: Idle timeout expired (2000 ms).

 Reconfiguring the Original Subnets to use Custom DNS Resolvers

Subnets hosting instances requiring the use of custom resolvers must be reconfigured to use them.

Adding an Additional Security Rule

Each instance receives replies from the custom DNS resolver. A security rule must be in place to allow this UDP ingress traffic on port 53 (the default DNS port). For each VCN navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Subnets

Click on the Subnet hosting the original instance to view the associated security lists.

Click on a Security List then click Add Ingress Rules.

The following example for Instance2's subnet is moderately restrictive and grants ingress from the DNS subnet. Add the details and click Add Ingress Rules.
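One way to express this rule (illustrative values; the DNS subnet CIDR is an assumption since this guide does not fix the CIDRs of the new DNS subnets):

Source Type: CIDR, Source CIDR: 10.20.20.32/27 (the subnet hosting DNS-2), IP Protocol: UDP, Source Port Range: 53, Destination Port Range: All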

Creating Custom DNS DHCP Options

A custom DNS DHCP option directs all DNS queries originating in the subnet to the custom resolver. For each VCN navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> DHCP Options. Click Create DHCP Options. The following example uses Instance1's subnet.

Enter a Name, select Custom Resolver, and enter the private IP address of the VCN's custom DNS server.
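For VCN1, for example (the name is illustrative; the IP address is DNS-1's private IP address from the earlier validations):

Name: Custom-DNS-VCN1
DNS Type: Custom Resolver
DNS Server: 10.10.10.34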

Modifying the Original Subnet's DHCP Options

For each VCN navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details >> Subnets. Click Instance1's subnet and click Edit.

Select the new DHCP Option from the dropdown and click Save Changes.

Restarting the Original Instances

Restart the original instances to use the new DNS instance for resolutions. This changes the value of the nameserver in the instance's /etc/resolv.conf file to match the value in the subnet's Custom DNS DHCP Options.

For each original instance navigate to Compute >> Instances >> Instance Details and click Reboot.
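After the reboot you can confirm the change from your workstation, for example:

ssh Instance1 cat /etc/resolv.conf

The output should contain a nameserver line matching the custom resolver's private IP address (10.10.10.34 in this guide's examples).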

 Configuring Custom Resolvers to Forward Queries

Resolving an FQDN of an instance residing in a different VCN requires the custom DNS resolver in the origination VCN to forward the DNS query to the custom DNS resolver in the target VCN. 

For each custom resolver requiring DNS forwarding, SSH to the resolver's instance and append a server statement to the DNS resolver's configuration file. DNS-1 is used as an example.

Connecting to the Instance

ssh DNS-1

Switching to the Superuser

Switch to the root user using the command below:

sudo su -

Obtaining the Target VCN

Navigate to Networking >> Virtual Cloud Networks >> Virtual Cloud Network Details and copy the DNS Domain Name

Appending the Server Statement

Use the echo command to append the DNS domain name for VCN2 and the private IP address of DNS-2.

echo "server=/vcn2.oraclevcn.com/10.228.10.130" >>/etc/dnsmasq.conf 

Restarting Dnsmasq

Run these commands to restart Dnsmasq and show its status:

systemctl restart dnsmasq

systemctl status dnsmasq

 Validating the Custom DNS Resolution

Rerunning the nslookup Utility

Rerun the command used in the Before You Begin section.

ssh Instance1 nslookup I2.sn2.vcn2.oraclevcn.com now shows that I2.sn2.vcn2.oraclevcn.com is resolvable and returns:

Server:        10.10.10.34
Address:    10.10.10.34#53

Non-authoritative answer:
Name:    I2.sn2.vcn2.oraclevcn.com
Address: 10.20.20.2

 

Using SSH to Establish a Connection using the FQDN 

As specified in the SSH configuration file above, connecting to the FQDN of Instance2 requires Instance1 to have the FQDN translated and to connect via the translated IP address.

ssh Instance2-FQDN succeeds and returns:

[opc@Instance2 ~]$ 

The diagram below shows the final state with the traffic flows. 

You have now resolved the FQDN of an instance in a different VCN and successfully connected. In addition to the example in this guide of resolving the FQDN of a compute instance, this method is also used to translate FQDNs of databases, service URLs, and other OCI components.

 Summary

This post described a generic method of connecting from an instance to another instance in a different VCN using custom DNS resolvers to translate the FQDN of the target instance.

For other posts relating to analytics and data integration visit http://www.ateam-oracle.com/dayne-carley
