The figure above illustrates our two VCNs: OKE running in compartment A in Virtual Cloud Network VCN1, and the ATP-D database running in compartment B in its own VCN2 – both in the same tenancy in the Ashburn region. When connecting from a different VCN to ATP-D, it is critical that your CIDR ranges do not overlap. The diagrams below show examples of the ATP-D and OKE VCNs.
CIDR Block: 10.0.0.0/16
CIDR Block: 10.1.0.0/16
The network path to an Autonomous Transaction Processing dedicated (ATP-D) database runs through a VCN (virtual cloud network) and a subnet defined by the dedicated infrastructure hosting the database. Usually, the subnet is defined as private, meaning that there is no public Internet access to the databases.
For easier development, I created a bastion server in the same VCN as the ATP-D instance – it has a public IP address and is reachable over the public Internet. On this bastion, I installed a graphical UI and SQL Developer, and enabled a VNC connection. Start a connection to this bastion using the private key in PuTTY and create an SSH tunnel to allow the VNC client connection.
The next step is to copy the DB wallet to the bastion, and create a connection in SQL Developer on the bastion:
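As a sketch, copying the wallet and opening the VNC tunnel could look like this – the key file name, wallet file name, bastion IP, and VNC port are placeholders you would replace with your own values:

```shell
# Copy the downloaded DB wallet to the bastion (all names are placeholders)
scp -i ~/.ssh/bastion_key Wallet_usersvcdb.zip opc@<bastion-public-ip>:~

# Forward the bastion's VNC port (5901 assumed) to the local machine,
# then point the local VNC client at localhost:5901
ssh -i ~/.ssh/bastion_key -L 5901:localhost:5901 opc@<bastion-public-ip>
```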
Using this environment, you can now connect using ADMIN, create the schema for the usersvc application, connect with the usersvc user and finally create the DB table using the SQL statements from GitHub (see References below).
The next step is to create a new OKE Kubernetes cluster in the Developer Services section of the OCI Console. Alternatively, you can use Cloud Shell or Terraform scripts. You can choose the Quick Create option, which includes creation of a VCN with associated route tables and security lists. However, make sure that the VCN CIDRs do not overlap with the CIDRs used in the ATP-D VCN.
In my case, the ATP-D VCN already used the default CIDR range of 10.0.0.0/16, so I first had to create a custom VCN with all its sub-components and use the "Custom Create" option of OKE to create a new cluster based on this custom VCN.
If you are not familiar with CIDR ranges, you can use one of the various CIDR-to-IP-address calculators to check that the two IP ranges do not overlap. Otherwise, the next step will fail.
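As a quick sanity check, the two example CIDR blocks from above can also be tested for overlap on the command line (any machine with python3 will do):

```shell
# Prints True if the two CIDR blocks overlap, False otherwise
python3 -c "import ipaddress; print(ipaddress.ip_network('10.0.0.0/16').overlaps(ipaddress.ip_network('10.1.0.0/16')))"
```

For 10.0.0.0/16 and 10.1.0.0/16 this prints False, so the two ranges are safe to peer.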
Because we have two separate VCNs for ATP-D and OKE, we need VCN peering, which allows connections from one VCN to the other.
There are two types of VCN peering connections: local and remote. Because our two VCNs are in the same tenancy and region, a local VCN peering is sufficient.
Details can be found in the VCN documentation: https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/localVCNpeering.htm
First you have to create a local peering gateway on both sides.
On ATP-D side:
And on OKE side:
Then establish the local peering connection by selecting "Establish Peering Connection" on the OKE-side local peering gateway and entering the details of the target peering gateway on the ATP-D side.
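The same steps can also be sketched with the OCI CLI; all OCIDs below are placeholders, and the display names are just examples:

```shell
# Create a local peering gateway (LPG) in each VCN
oci network local-peering-gateway create \
    --compartment-id <atp-compartment-ocid> --vcn-id <atp-vcn-ocid> --display-name "lpg-atp"
oci network local-peering-gateway create \
    --compartment-id <oke-compartment-ocid> --vcn-id <oke-vcn-ocid> --display-name "lpg-oke"

# Establish the peering from the OKE-side LPG to the ATP-D-side LPG
oci network local-peering-gateway connect \
    --local-peering-gateway-id <lpg-oke-ocid> --peer-id <lpg-atp-ocid>
```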
If everything worked well, both peering gateways now show the status "Peered - Connected to a peer."
We still need one more step to connect from the OKE VCN to the ATP-D VCN:
On the ATP-D side, we need to add a route table entry and a security list rule to allow ingress traffic coming from the local peering gateway:
Finally, on the OKE side, we need a similar route table entry and security list rule to allow egress traffic to the ATP-D gateway.
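The OKE-side rules could be sketched with the CLI like this. The OCIDs are placeholders, and note that these update commands replace the full rule list, so in practice you would merge the new rules with the existing ones:

```shell
# Send traffic destined for the ATP-D VCN (10.0.0.0/16) through the OKE-side LPG
oci network route-table update --rt-id <oke-route-table-ocid> --route-rules \
  '[{"destination": "10.0.0.0/16", "destinationType": "CIDR_BLOCK", "networkEntityId": "<lpg-oke-ocid>"}]'

# Allow egress to the SQL*Net listener port of the database (protocol 6 = TCP)
oci network security-list update --security-list-id <oke-seclist-ocid> --egress-security-rules \
  '[{"destination": "10.0.0.0/16", "protocol": "6", "isStateless": false,
     "tcpOptions": {"destinationPortRange": {"min": 1521, "max": 1521}}}]'
```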
If everything has been set up correctly, you can run the same connection test to the database from the OKE VCN (for example, again by using a bastion server).
Now that the network connection between OKE VCN and ATP-D is working, we can deploy the usersvc app to OKE, and connect it to ATP-D.
One remaining issue which we need to solve is the DNS resolution of the ATP-D database service.
The tnsnames.ora entry in the DB wallet looks like this:
The scan listener DNS in the ATP-D VCN resolves to:
However, VCN-internal DNS resolution only works within a VCN's own domain; hostnames in the peered ATP-D VCN cannot be resolved from the OKE VCN. As a result, we cannot simply use the DB wallet in the OKE VCN, because the scan listener hostname "host-xxxx-scan.subxxxx.atpdvcn.oraclevcn.com" will not be found.
There are different solutions for this – such as configuring CoreDNS or setting up dnsmasq – but for simplicity we will describe the easiest approach in this blog:
With Kubernetes, you can use a HostAlias to define an entry in /etc/hosts of the pods.
Add this to usersvc deployment:
- ip: 10.0.0.6
Add this to the app.yaml in the usersvc git project. After deploying the modified usersvc app to your cluster, you can check for the new entry in /etc/hosts:
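Put together, the relevant part of the deployment spec might look like this – the IP and scan hostname are taken from the examples above, and you would substitute the values from your own wallet:

```yaml
spec:
  template:
    spec:
      hostAliases:
      - ip: "10.0.0.6"
        hostnames:
        - "host-xxxx-scan.subxxxx.atpdvcn.oraclevcn.com"
```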
kubectl exec user-svc-helidon-77d94cd885-rzx77 -c user-svc-helidon -- cat /etc/hosts
# Kubernetes-managed hosts file.
::1 localhost ip6-localhost ip6-loopback
# Entries added by HostAliases.
In the usersvc example, the DB wallet is copied by commands in the Dockerfile to the path /helidon/wallet in the container.
The credentials for the DB connection to ATP-D are then passed as environment variables after reading them from a Kubernetes secret and base64-decoding them.
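Creating such a secret could look like the following sketch; the secret name, key names, and password are assumptions here and must match what the usersvc deployment actually references:

```shell
# Create the secret holding the DB credentials (names are assumptions)
kubectl create secret generic usersvc-db-secret \
  --from-literal=username=usersvc \
  --from-literal=password='<db-password>'

# Verify: secret values are stored base64-encoded
kubectl get secret usersvc-db-secret -o jsonpath='{.data.username}' | base64 --decode
```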
As an improvement, the secret should instead be stored in and retrieved from the OCI Vault service. This can be done similarly to what is described for Oracle Functions here: https://blogs.oracle.com/developers/oracle-functions-connecting-to-an-atp-database-with-a-wallet-stored-as-secrets
You should now be able to execute REST calls against the user service to insert entries into, and retrieve entries from, the database table in your ATP-D instance.