In the past, PaaS customers could provision their instances only over the flat network of Oracle Public Cloud (OPC). Support for PaaS computes with IP networks has now been released. As a result, customers have a choice to provision PaaS instances, e.g. SOA Cloud Service (SOACS), MFT Cloud Service (MFTCS) or Database Cloud Service (DBCS), using IP networks.
This option comes with 2 distinct advantages. First, there is no need to configure GRE tunnels between the PaaS computes and the VPN gateway running in OPC. Second, customers have the flexibility to define their own network subnet and topology for the PaaS instances running within OPC.
Since this feature has only recently been released, this blog provides guidance on the basic setup and configuration of PaaS computes using IP networks.
To demonstrate the use case, a 2-node SOACS cluster, with Oracle Traffic Director (OTD) serving as the load balancer, is first provisioned using IP networks. The steps outlined here are conceptually the same for an MFTCS cluster as well. The cluster is then connected to an on-premises private network over VPN.
The overall solution architecture, with the network topology and VPN connectivity, is shown in Fig. 1.
Fig. 1 PaaS Computes over IP Network with VPN connectivity to on-premises network
Before provisioning the SOACS cluster, it is necessary to create a DBCS instance for the SOA Infrastructure repository.
Therefore, in summary, 5 compute instances are provisioned in the identity domain within OPC, as listed below:
To demonstrate the end-to-end VPN connectivity using the IP network subnets, a customer's corporate network has been simulated by 2 virtual machines on a laptop. Their functionality is described below.
As seen in Fig. 1 above, the OPC setup consists of provisioning the DBCS instance and then the SOACS cluster over IP networks. The prerequisite is access to a region that has IP networks enabled. Over time, this should not be a concern, but during the roll-out phases this setup can only be carried out in regions where IP networks are enabled.
The corporate network has been simulated by running multiple VirtualBox (VBox) virtual machines on a laptop. One VBox virtual machine (VM) serves as the VPN gateway and a second VBox VM serves as the on-premises corporate Linux server sitting behind the VPN gateway.
The 7 distinct machines used in our test environment are listed below.
The setup and configuration of the PaaS computes with an IP network within OPC are described in the first part. Next, the setup of the simulated corporate private network with VirtualBox images on a laptop is described. Finally, the setup of the IPsec tunnel between the 2 VPN gateways to test the end-to-end VPN connectivity is described.
The key tasks for the setup within OPC are listed below.
An IP network reserves a pool of IP addresses to be allocated to the computes provisioned later. In this case, an IP network with the subnet of 192.168.1.0/24 is created for association with all the PaaS computes to be created in this exercise.
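Since every PaaS compute in this exercise draws its address from this pool, it helps to be clear about what a /24 block actually provides. The sketch below uses plain bash arithmetic (standard CIDR math, not an OPC tool) to derive the network, broadcast, and usable host range; the subnet value matches the one created here.

```shell
#!/bin/bash
# Derive the network, broadcast, and usable host range of an IPv4 CIDR block.
# The subnet below is the IP network created in this exercise.
cidr="192.168.1.0/24"

ip="${cidr%/*}"; prefix="${cidr#*/}"
IFS=. read -r o1 o2 o3 o4 <<< "$ip"
addr=$(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( addr & mask ))                   # network address
bcast=$(( net | (~mask & 0xFFFFFFFF) ))  # broadcast address

to_dotted() { printf '%d.%d.%d.%d' $(( $1 >> 24 & 255 )) $(( $1 >> 16 & 255 )) $(( $1 >> 8 & 255 )) $(( $1 & 255 )); }

echo "Network:   $(to_dotted "$net")"
echo "Broadcast: $(to_dotted "$bcast")"
echo "Hosts:     $(to_dotted $(( net + 1 ))) - $(to_dotted $(( bcast - 1 ))) ($(( bcast - net - 1 )) usable)"
```

The 254 usable addresses are more than ample for the 5 compute instances provisioned in this exercise.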
The window for creating an IP network is shown in Fig. 2 below.
To create a new IP network subnet, follow the navigation path outlined below.
The parameters and values provided below are entered for creation of the IP Network.
The IP network subnet pool, 192.168.1.0/24, will now be available for PaaS computes in the identity domain. Next, each of the required PaaS services is provisioned using this newly created IP network.
The primary window for provisioning the DBCS instance with IP network is shown in Fig. 3 below.
Fig. 3 Provision DBCS instance with IP network
To create the DBCS instance, follow the navigation path outlined below.
In the first screen, the Region field initially has the value No Preference. Keeping this value creates the DBCS instance in the Shared Network. Selecting an appropriate value for the Compute Region automatically exposes the next field to populate the relevant IP network.
The parameters and values provided below are entered for creation of the DBCS instance.
The rest of the entries in the various screens of the wizard are standard for DBCS provisioning and are hence skipped here. As specified earlier, database Enterprise Edition Release 12.2.0.1 was selected in the wizard to create this instance, which serves as the SOACS infrastructure repository.
Next, the 2-node SOACS cluster with Load Balancer (OTD) is provisioned. The same concept of specifying the appropriate IP network after selecting the appropriate Compute Region, as shown earlier for DBCS provisioning, is followed.
The primary window for provisioning the SOACS instance with IP network is shown in Fig. 4 below.
Fig. 4 Provision SOACS cluster with IP network
To create the SOACS cluster, follow the navigation path outlined below.
As noted earlier for DBCS, the Region field initially has the value No Preference. Keeping this value creates the SOACS instance in the Shared Network. Selecting an appropriate value for the Compute Region automatically exposes the next field to populate the relevant IP network.
The parameters and values provided below are entered for creation of the SOACS cluster.
The rest of the entries are standard for SOACS provisioning and not relevant to IP networks, so the details are skipped here; they can be obtained from the Oracle SOACS product documentation [1].
For this exercise, software release 12.2.1.2.0 was chosen with a cluster size of 2. In the next screen, the load balancer option was selected to provision the cluster with OTD, and the Service Type was chosen as SOA, SB & B2B from the drop-down list.
For an MFTCS cluster, the Service Type should be chosen as MFT Cluster from the drop-down list. The rest of the process is identical to the one outlined in this blog.
Before creating the Corente Service Gateway (CSG) in OPC, a reserved public IP address should be made available for this compute instance.
The window for creating a public IP reservation is shown in Fig. 5 below.
Fig. 5 Create IP Reservation for CSG in OPC
To create the Public IP reservation, follow the navigation path outlined below.
The parameters and values provided below are entered for creation of the public IP reservation.
Note that the IP Reservation option exists under both menus for IP Network and Shared Network. However, we need to reserve the public IP for the CSG via the Shared Network menu.
Next, we create the CSG using this IP reservation within IP network. The window for creating a VPN Gateway is shown in Fig. 6 below.
To create the VPN Gateway, follow the navigation path outlined below.
The parameters and values provided below are entered for creation of the VPN gateway.
A NAT network for VBox reserves a pool of IP addresses to be allocated to the VMs running within a physical host. In this case, a NAT network with the subnet of 10.9.8.0/24 is created for association with all the VMs used to simulate the private corporate network in this exercise.
A terminal session transcript creating a NAT network and validating the creation within a Linux host is shown below.
slahiri@slahiri-lnx:~$ VBoxManage natnetwork add --netname MyNatNetwork --network 10.9.8.0/24
slahiri@slahiri-lnx:~$
slahiri@slahiri-lnx:~$ VBoxManage natnetwork list
NAT Networks:
Name: NatNetwork
Network: 10.0.2.0/24
Gateway: 10.0.2.1
IPv6: No
Enabled: Yes
Name: MyNatNetwork
Network: 10.9.8.0/24
Gateway: 10.9.8.1
IPv6: No
Enabled: Yes
2 networks found
slahiri@slahiri-lnx:~$
To provision VBox VMs on this custom NAT network, the network adapter of each VBox image is associated with the custom NAT network, MyNatNetwork.
The NAT network subnet pool, 10.9.8.0/24, is now available to VBox VMs within the laptop host. Next, each of the required VMs is created using this newly created NAT network.
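The adapter association mentioned above can also be scripted with VBoxManage instead of the VBox GUI. The sketch below uses hypothetical VM names (corp-vpn-gw, corp-linux); substitute your own. The VBM variable is parameterized so the loop can be dry-run (e.g. VBM=echo) on a machine without VirtualBox installed.

```shell
#!/bin/bash
# Attach each simulation VM's first network adapter to the custom NAT network.
# VM names here are hypothetical placeholders for the two on-premises VMs.
VBM="${VBM:-VBoxManage}"

attach_to_natnet() {
  local natnet="$1"; shift
  local vm
  for vm in "$@"; do
    # --nic1 natnetwork selects the NAT-network mode for adapter 1;
    # --nat-network1 names the specific network created earlier.
    "$VBM" modifyvm "$vm" --nic1 natnetwork --nat-network1 "$natnet"
  done
}

# Usage (with VirtualBox installed):
#   attach_to_natnet MyNatNetwork corp-vpn-gw corp-linux
# Verify afterwards:
#   VBoxManage showvminfo corp-vpn-gw | grep "NIC 1"
```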
The setup of the on-premises network with VBox VMs is not within the primary scope of this blog, so many details are skipped here. The process is described thoroughly elsewhere, in product manuals and other published documents. Some relevant details of the on-premises network setup are discussed below.
We create the metadata for the on-premises VPN gateway location in the design-time GUI tool, App Net Manager, using the Location wizard, and save the configuration. Some of the key parts of the process are outlined below.
To create the VPN Gateway, follow the navigation path outlined below.
The key parameters and values provided below are entered during creation of the VPN gateway for the on-premises side.
An ISO image for the Corente Service Gateway is available on the Oracle Technology Network portal.
Next, we download this ISO image to a local machine. It is used as a boot device to create a new VBox VM for running the CSG on the on-premises side. The network adapter for this VM is configured to use the NAT network, MyNatNetwork.
During the startup sequence of the VBox VM, the CSG configuration metadata is downloaded from the Corente Service Control Point (SCP) at www.corente.com by matching the location name entered in the last step.
Any existing VBox VM running Oracle Linux or any other OS can be used for this purpose. As mentioned in the last step, this is achieved by associating the network adapter of this VBox VM with the custom NAT network.
A VPN connection is an IPsec tunnel between the 2 CSGs, in OPC and in the local VBox VM, that transmits encrypted data securely. The OPC Cloud UI can only create connections between the CSG in OPC and a 3rd-party hardware VPN device on the on-premises side. Most customers, in real life, have such a 3rd-party hardware VPN device. For our exercise, however, we are simulating the on-premises VPN gateway with another CSG running in a VBox VM. Hence, the connection between these 2 CSGs is created using the App Net Manager tool [2].
To create the VPN connection, follow the navigation path outlined below.
The key parameters and values provided below are entered during creation of the VPN connection between the 2 CSGs.
The wizard can be completed by selecting most of the default values and adding the default User Groups associated with the 2 CSGs.
At the end of the exercise, the App Net Manager tool shows a line connecting the 2 CSGs, which eventually turns from yellow to green. This signifies that the IPsec tunnel between the 2 VPN gateways is operational, as shown in Fig. 7.
Fig. 7 VPN Tunnel up and running between CSG OPC and CSG on-premises (App Net Manager view)
As cited earlier, most real-life scenarios will have the VPN connection established with a 3rd-party device on the on-premises side. In those situations, the VPN connection can be created from the OPC Console and the basic routing is created automatically.
However, since the VPN tunnel in this setup is created between 2 CSGs using the App Net Manager tool and not the Cloud UI, the routing has to be created manually via the Cloud UI. This task is completed in 3 parts.
A vNICset is created that contains the vNIC of the CSG in OPC. This vNICset will, in turn, be used to define the routing.
The window for creating a vNICset is shown in Fig. 8 below.
To create the vNICset, follow the navigation path outlined below.
The parameters and values provided below are entered for creation of the vNICset.
In our simplistic use case, the default vNICset is updated to include the vNICs of all 5 OPC compute VMs created. There are other ways to set up the vNICsets and routes for a production system, but for the purposes of our simple use case, the configuration is kept basic here.
The update window for default vNICset is shown in Fig. 9.
To update the default vNICset, follow the navigation path outlined below.
The parameters and values provided below are entered for the update of the default vNICset.
VI.C Create Route
A route entry will transfer the traffic for the on-premises subnets through the vNICset created in Step VI.A. The window to create the route is shown in Fig. 10.
To create the route, follow the navigation path outlined below.
The parameters and values provided below are entered for creation of the route.
A routing rule to allow communication with the IP network subnet in OPC has to be added to the VBox VM running in the on-premises NAT network. The transcript of the terminal session below shows the addition of the route.
[root@soa-training ~]# route add -net 192.168.1.0 netmask 255.255.255.0 gw 10.9.8.8
[root@soa-training ~]#
The OPC IP network subnet is 192.168.1.0/24, and the NAT network IP address of the CSG in VBox is 10.9.8.8.
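One caveat: a route added with the route command does not survive a reboot. On an Oracle Linux / RHEL style system, which the transcript suggests, the same route can be declared persistently in a route-&lt;ifname&gt; file read by the network service at interface startup. The sketch below only prints the file content rather than writing under /etc; the interface name eth7 and the addresses are taken from the transcripts in this blog.

```shell
#!/bin/bash
# Persistent counterpart of:  route add -net 192.168.1.0 netmask 255.255.255.0 gw 10.9.8.8
subnet="192.168.1.0/24"   # OPC IP network subnet
gateway="10.9.8.8"        # NAT-network address of the on-premises CSG
ifname="eth7"             # CSG-facing interface on the on-premises Linux VM

route_file="/etc/sysconfig/network-scripts/route-${ifname}"
route_spec="${subnet} via ${gateway} dev ${ifname}"

# Print the one-line route declaration; as root, redirect it into $route_file.
echo "$route_spec"

# Apply immediately without a reboot (iproute2 equivalent of "route add"):
#   ip route add ${subnet} via ${gateway} dev ${ifname}
```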
At this point, all the setup and configuration is complete, and the connectivity from the IP network to on-premises servers over VPN can be tested. To confirm, 2 sample terminal sessions are provided below: the first from one of the SOACS cluster nodes and the other from the on-premises VBox VM running Oracle Linux.
As can be recalled from Fig. 1, listed below are the IP addresses in the IP network and the NAT network that will be used for testing connectivity.
[root@soacswipn-wls-1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 02:78:6A:DA:ED:DD
inet addr:192.168.1.5 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::78:6aff:feda:eddd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:8900 Metric:1
RX packets:11232969 errors:0 dropped:0 overruns:0 frame:0
TX packets:14533869 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2351171724 (2.1 GiB) TX bytes:3862274975 (3.5 GiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1976694 errors:0 dropped:0 overruns:0 frame:0
TX packets:1976694 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:878045475 (837.3 MiB) TX bytes:878045475 (837.3 MiB)
[root@soacswipn-wls-1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0
[root@soacswipn-wls-1 ~]#
[root@soacswipn-wls-1 ~]# ping 192.168.1.8
PING 192.168.1.8 (192.168.1.8) 56(84) bytes of data.
64 bytes from 192.168.1.8: icmp_seq=1 ttl=64 time=1.58 ms
64 bytes from 192.168.1.8: icmp_seq=2 ttl=64 time=0.493 ms
^C
--- 192.168.1.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1922ms
rtt min/avg/max/mdev = 0.493/1.041/1.589/0.548 ms
[root@soacswipn-wls-1 ~]# ping 10.9.8.8
PING 10.9.8.8 (10.9.8.8) 56(84) bytes of data.
64 bytes from 10.9.8.8: icmp_seq=1 ttl=63 time=36.2 ms
^C
--- 10.9.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 975ms
rtt min/avg/max/mdev = 36.279/36.279/36.279/0.000 ms
[root@soacswipn-wls-1 ~]# ping 10.9.8.6
PING 10.9.8.6 (10.9.8.6) 56(84) bytes of data.
64 bytes from 10.9.8.6: icmp_seq=1 ttl=62 time=35.6 ms
64 bytes from 10.9.8.6: icmp_seq=2 ttl=62 time=34.7 ms
^C
--- 10.9.8.6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1373ms
rtt min/avg/max/mdev = 34.778/35.202/35.626/0.424 ms
[root@soacswipn-wls-1 ~]#
[root@soa-training ~]# ifconfig
eth7 Link encap:Ethernet HWaddr 08:00:27:F7:4D:DE
inet addr:10.9.8.6 Bcast:10.9.8.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fef7:4dde/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:63 errors:0 dropped:0 overruns:0 frame:0
TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8917 (8.7 KiB) TX bytes:8857 (8.6 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:314 errors:0 dropped:0 overruns:0 frame:0
TX packets:314 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:41943 (40.9 KiB) TX bytes:41943 (40.9 KiB)
[root@soa-training ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.9.8.1 0.0.0.0 UG 0 0 0 eth7
10.9.8.0 0.0.0.0 255.255.255.0 U 1 0 0 eth7
192.168.1.0 10.9.8.8 255.255.255.0 UG 0 0 0 eth7
[root@soa-training ~]#
[root@soa-training ~]# ping 10.9.8.8
PING 10.9.8.8 (10.9.8.8) 56(84) bytes of data.
64 bytes from 10.9.8.8: icmp_seq=1 ttl=64 time=0.619 ms
64 bytes from 10.9.8.8: icmp_seq=2 ttl=64 time=0.594 ms
^C
--- 10.9.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1693ms
rtt min/avg/max/mdev = 0.594/0.606/0.619/0.027 ms
[root@soa-training ~]# ping 192.168.1.8
PING 192.168.1.8 (192.168.1.8) 56(84) bytes of data.
64 bytes from 192.168.1.8: icmp_seq=1 ttl=63 time=35.2 ms
64 bytes from 192.168.1.8: icmp_seq=2 ttl=63 time=33.8 ms
^C
--- 192.168.1.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1621ms
rtt min/avg/max/mdev = 33.841/34.558/35.275/0.717 ms
[root@soa-training ~]#
[root@soa-training ~]# ping 192.168.1.5
PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data.
64 bytes from 192.168.1.5: icmp_seq=1 ttl=62 time=36.3 ms
64 bytes from 192.168.1.5: icmp_seq=2 ttl=62 time=36.6 ms
^C
--- 192.168.1.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1471ms
rtt min/avg/max/mdev = 36.316/36.500/36.684/0.184 ms
[root@soa-training ~]#
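The ad-hoc pings in the transcripts above can also be collected into a small reachability script for repeatable testing. This is a sketch rather than part of the original setup; the address list in the usage comment mirrors Fig. 1, and the PROBE variable is parameterized so the loop logic can be exercised without real ICMP.

```shell
#!/bin/bash
# Check a list of hosts for reachability and report OK/FAIL per host.
# Returns the number of unreachable hosts as the exit status.
check_hosts() {
  local probe="${PROBE:-ping -c 2 -W 3}"  # default: 2 echoes, 3 s timeout per reply
  local failed=0 host
  for host in "$@"; do
    if $probe "$host" > /dev/null 2>&1; then
      echo "OK   $host"
    else
      echo "FAIL $host"
      failed=$(( failed + 1 ))
    fi
  done
  return "$failed"
}

# Usage, e.g. from a SOACS cluster node: local subnet peer, OPC CSG, then
# across the tunnel to the on-premises Linux server:
#   check_hosts 192.168.1.8 10.9.8.8 10.9.8.6
```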
The test case described here is one way to demonstrate VPN connectivity of PaaS computes using IP networks. As mentioned before, we have used a CSG on the on-premises side, but in real life most customers will have a 3rd-party hardware VPN device. The process of configuring the PaaS computes with IP networks should, however, remain largely unaffected.
The other caveat is the set of simplifying assumptions made while setting up this test case, especially with the vNICsets and the routing rules. In real-life scenarios, these may be more complex, depending on actual customer requirements.
For further details, please contact the VPN Product Management team or the IaaS group within A-Team.
The SOACS and MFTCS Product Management teams have been providing extensive support in the development of this solution for many months. It would not have been possible to deliver such a solution to customers without their valuable contribution.
Finally, a big thanks to my teammates, Kumar Allamraju and Andrei Stoian, who help me on a regular basis with the complexities of the VPN solution stack.
VPNaaS can also now be used to establish VPN connectivity between PaaS computes within an IP network and a 3rd-party VPN device on the on-premises side. The setup will be very similar, except that there is no need to create an external compute for the CSG in OPC; it is created automatically, along with the necessary vNICsets and routing rules. More details can be found in the Oracle VPN product documentation [3].