Hybrid Cloud Integration with MFT Cloud Service and VPN Gateway in Oracle Public Cloud

October 23, 2017 | 14 minute read

Executive Overview

MFT customers often receive files via SFTP in the MFT Cloud Service (MFTCS) and need to send those files across to an on-premise server. This can be achieved by using a secure VPN tunnel between the MFTCS Compute VM and the on-premise server. This blog describes a possible way to set up such a transfer, so that files received by the embedded SFTP server within MFTCS can be transferred directly to an on-premise file server via a secure IPSec tunnel using the VPN Gateway in Oracle Public Cloud (OPC).

Solution Approach

Use Case Basic Requirements

The overall use case can be described as follows and is also illustrated in Fig. 1 below.

  • An external SFTP client sends a file via SFTP to the embedded SFTP server running in MFT Cloud Service (MFTCS) within Oracle Public Cloud (OPC)
  • MFT Server, upon receipt of the file, transfers it to a file server on premise, hosted within a private corporate network behind the corporate firewall.

Fig.1 High-Level Use Case Diagram

Solution Architecture

The configuration of MFT to receive files via SFTP has been discussed in one of my earlier blogs[1]. In that post, we had shown how MFT can receive files via its embedded SFTP server and save them in a local file system. In this article, we extend the use case by modifying the file system of the target endpoint to point to an on-premise file system, accessible via NFS. The NFS mount, in turn, is enabled by establishing a secure VPN tunnel between Oracle Public Cloud and the on-premise private network.

The key components in the solution architecture are listed below and are also shown in Fig.2 thereafter.

  • Embedded SFTP Server running within MFT Cloud Service (MFTCS) hosted in Oracle Public Cloud (OPC)
  • Oracle Traffic Director is used as a Load Balancer in front of the MFT Cloud Service
  • Customer's Corporate Private Network sits behind a VPN Gateway which can be a 3rd party VPN hardware device
  • Customer's on-premise file server (can be a network storage device) is located in the private network behind the on-premise VPN Gateway and corporate firewall
  • Corente Services Gateway (CSG) running within OPC serves as the VPN Gateway for the Oracle Cloud compute nodes
  • A secure VPN tunnel is established between the CSG and the VPN Gateway

Fig. 2 Solution Architecture

Implementation Details

Fig. 3 below shows how the solution architecture has been implemented in our test environment. The corporate network has been simulated by running multiple VBox images in a laptop. One VBox image serves as the VPN Gateway and a second VBox image serves as the on-premise file server sitting behind the VPN Gateway.

Fig. 3 Test Case Implementation Details

The 4 distinct machines used in our test environment are listed below.

  • OPC PaaS Compute running MFTCS Release 12.1.3.0.160719
  • OPC Compute running Corente Services Gateway (CSG) in Oracle Public Cloud (OPC)
  • VirtualBox image running Corente Services Gateway (CSG) for the on premise side
  • VirtualBox image running a generic Linux OS to serve as the on-premise file server

It should be noted here that the IP addresses mentioned in Fig. 3 are all private IP addresses and are used in configuring the endpoints for the MFT transfer. This ensures that the file transmission is carried out over a secure IPSec communication channel between Oracle Public Cloud and the on-premise private network.

Key Tasks and Activities

As seen from the solution architecture, the implementation tasks fall into 2 primary areas, namely, the Corente VPN solution and the MFT Cloud Service (MFTCS) configuration. The basic MFTCS configuration process has been discussed in an earlier A-Team blog[1], and the VPN setup between OPC and on-premise has also been discussed in another A-Team blog[3]. This blog extends the concepts discussed earlier and combines them to present a solution for hybrid integration between Oracle Public Cloud and an on-premise corporate network.

The key tasks for the entire exercise are listed below.

  • Configure VPN Gateway in Oracle Public Cloud (OPC), e.g. Corente Services Gateway (CSG)
  • Configure VPN Gateway for on premise network, e.g. CSG running on a VBox image in a laptop
  • Establish a VPN tunnel between the 2 gateways in OPC and the local VBox image
  • Set up GRE tunnels between the MFT Cloud Service PaaS node and the CSG in OPC for VPN traffic
  • Set up NAT networking between the second VBox image and the VBox image running the CSG to simulate private corporate network traffic using private IPs
  • Establish connectivity between the MFT Cloud compute node and the local VBox image using private IPs
  • Set up NFS mounts in the MFT Cloud compute node pointing to a file system in the local VBox image
  • Configure an MFT transfer using an embedded SFTP source and a local file target. The target endpoint will use a directory under the NFS mount point.
  • Deploy and test

I. Configure VPN Gateway in OPC

A VPN gateway within OPC can be created using a shared network or an IP network. The details of setting up a VPN gateway for IP networks have already been discussed in another blog within the A-Team Chronicles collection[3]. The VPN gateway in this blog uses a shared network and is referred to as the Corente Services Gateway (CSG). The details of creating a CSG can also be found in the Oracle product documentation[2].

It should be kept in mind that, if the identity domain in OPC has multiple zones, the CSG should be created in the same zone as the PaaS compute nodes.

I.A Create IP Reservation in Compute Console

Prior to creating the VPN Gateway, an IP reservation is created to reserve a public IP address that can be associated with this VPN instance.

Fig. 4 Create IP Reservation for CSG in OPC

Navigation

To create the Public IP reservation, follow the navigation path outlined below.

  • Tool: Oracle Cloud UI in browser
  • Console: Compute Cloud Service
  • Top Tab: Network
  • Left Side Menu: Shared Network
  • Sub-Menu: IP Reservation
  • Click on Button: Create IP Reservation

Parameter Entry

The parameters and values provided below are entered for creation of the public IP reservation.

  • Name: sl-csgsoa-opc-ip

I.B Create VPN Gateway in Compute Console

Next, a VPN Gateway is created using this IP reservation as shown below in Fig. 5.

Fig. 5 Create CSG in OPC

Navigation

To create the VPN Gateway, follow the navigation path outlined below.

  • Tool: Oracle Cloud UI in browser
  • Console: Compute Cloud Service
  • Top Tab: Network
  • Left Side Menu: VPN
  • Sub-Menu: VPN Gateway
  • Click on Button: Create VPN Gateway

Parameter Entry

The parameters and values provided below are entered for creation of the VPN gateway.

  • Name: sl-csgsoa-opc (Any meaningful name for the CSG, free format)
  • IP Reservation: sl-csgsoa-opc-ip (From drop-down list, select the IP reservation created earlier)
  • Image: corente_gateway_images-9.4.141a (latest version available at the time)
  • Interface Type: Single-homed (Single homed is for CSG in Shared Network)
  • Subnets: 10.9.8.0/24 (This is the subnet of the on-premises network for which the CSG will act as a VPN Gateway)

I.C Configure GRE Tunnels for CSG in AppNetManager

Although the CSG is created via the Compute Cloud Console, a GRE tunnel cannot be associated with it in the Compute Cloud Console at present. We will use the GUI design-time tool, AppNetManager, to configure a GRE tunnel for the CSG using the pop-up window shown in Fig. 6. The process is also documented in the Corente product documentation[2].

Fig. 6 Configure GRE Tunnel for CSG in OPC with AppNetManager

Navigation

To configure GRE tunnel for the VPN Gateway, follow the navigation path outlined below.

  • Tool: Oracle AppNetManager
  • Left Menu: Domains -> Locations -> sl-csgsoa-opc (CSG created earlier in Step I.B) -> Network interface -> WAN/LAN Interface
  • Double-Click on Option: WAN/LAN Interface

Parameter Entry

The key parameters and values provided below are used to enable the GRE tunnel for CSG in OPC in the pop-up window.

  • Check box: Use GRE Tunnel
  • GRE Tunnel IP: 172.16.254.1

II. Configure VPN Gateway for on premises

The VPN gateway for the on-premise network is set up by using a VBox image on a laptop and running a local version of the CSG. Details of the configuration process have been described in another blog[3].

III. Establish a VPN tunnel between the 2 gateways in OPC and local VBox image

A VPN connection is an IPSec tunnel between the 2 CSGs, in OPC and in the local VBox VM, to transmit encrypted data securely. The OPC Cloud UI can only create connections between a CSG in OPC and a 3rd-party hardware VPN device on the on-premises side. Most customers, in real life, have a 3rd-party hardware VPN device. But, for our exercise, we are simulating the on-premises VPN gateway with another CSG running in a VBox VM. Hence, the connection between these 2 CSGs is created using the App Net Manager (ANM) tool[2].

Navigation

To create the VPN connection, follow the navigation path outlined below.

  • Tool: Oracle AppNetManager
  • Top Menu: File
  • Sub-Menu: Wizards
  • Click on Option: Partner Location

Parameter Entry

The key parameters and values provided below are entered during creation of the VPN connection between the 2 gateways.

  • First location: sl-csgsoa-opc
  • Second location: sl-csg-laptop

The wizard can be completed by selecting most of the default values and adding the default User Groups associated with the 2 CSGs.

At the end of the exercise, a line can be seen in the ANM tool connecting the 2 CSGs; it will eventually turn from yellow to green, signifying that the IPSec tunnel between the 2 VPN gateways is operational, as shown in Fig. 7.

Fig. 7 VPN Tunnel up and running between CSG in OPC and CSG on-premise (AppNetManager view)

IV. Set up GRE tunnel between MFTCS PaaS node and CSG in OPC for VPN traffic

The communication between PaaS compute nodes and a CSG in a shared network within OPC can only be established over a GRE tunnel at present. This is, however, not necessary with IP networks, as seen in the other blog[3], which shows the configuration of PaaS compute nodes (SOACS) with IP networks and VPN.

The following sections show the key tasks in configuring a GRE tunnel between the MFTCS compute node and the CSG.

IV.A Associate internal security group of CSG with PaaS compute

The MFTCS compute node must include the internal security list of the CSG to enable access using the private IP. The configuration is updated using the pop-up window shown in Fig. 8.


Fig. 8 Addition of Security List to MFTCS instance

Navigation

To add the security list, follow the navigation path outlined below.

  • Tool: Oracle Cloud UI in browser
  • Console: Compute Cloud Service
  • Top Tab: Instances
  • Left Side Menu: Instances
  • Click on: Server icon to the left of PaaS Compute, MFTTest-jcs
  • In the next screen that appears, follow the navigation outlined below.
  • Left Side Menu: Overview
  • Center Panel Section: Security Lists
  • Click on Button: Add Security Lists

Parameter Entry

The parameters and values provided below are entered to add the security list.

  • Security List: SL-CSGSOA-internal (From drop-down list, select entry for internal security list, associated with CSG)

The newly added entry should now show up in the table of security lists associated with the PaaS compute node, as shown in Fig. 9.

Fig. 9 Security List Added to MFTCS instance

IV.B Create GRE tunnel in PaaS Compute

Next, the GRE tunnel is created by running a shell script in nohup mode as root on the MFTCS node. This is done by executing the following steps as the root user in an ssh terminal session on the PaaS compute node:

  • Copy the GRE tunnel script (available on OTN) to /usr/bin: /usr/bin/oc-config-corente-tunnel
  • Make it executable: chmod +x /usr/bin/oc-config-corente-tunnel
  • Install bind-utils: yum install bind-utils
  • Create log directory: mkdir -p /var/log/opc-compute
  • Test connectivity to CSG: ping sl-csgsoa.compute-ateamemea.oraclecloud.internal
  • Start GRE tunnel: /usr/bin/oc-config-corente-tunnel --local-tunnel-address=172.16.254.2 --csg-hostname=sl-csgsoa.compute-ateamemea.oraclecloud.internal --csg-tunnel-address=172.16.254.1 --onprem-subnets=10.9.8.0/24,ww.xx.yy.zz/32 &

A terminal session transcript for the key steps is shown below.

MFTCS Compute VM:
[root@mfttest-jcs-wls-1 ~]# nohup /usr/bin/oc-config-corente-tunnel --local-tunnel-address=172.16.254.2 --csg-hostname=sl-csgsoa.compute-ateamemea.oraclecloud.internal --csg-tunnel-address=172.16.254.1 --onprem-subnets=10.9.8.0/24,ww.xx.yy.zz/32 &
[1] 970
[root@mfttest-jcs-wls-1 ~]# nohup: ignoring input and appending output to `nohup.out'

[root@mfttest-jcs-wls-1 ~]# ps -fade | grep corente
root       970   821  0 16:48 pts/0    00:00:00 /bin/sh /usr/bin/oc-config-corente-tunnel --local-tunnel-address=172.16.254.2 --csg-hostname=sl-csgsoa.compute-ateamemea.oraclecloud.internal --csg-tunnel-address=172.16.254.1 --onprem-subnets=10.9.8.0/24,ww.xx.yy.zz/32
root       979   821  0 16:48 pts/0    00:00:00 grep corente
[root@mfttest-jcs-wls-1 ~]#

[root@mfttest-jcs-wls-1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr C6:B0:51:1A:2C:70
inet addr:10.196.198.158  Bcast:10.196.198.159  Mask:255.255.255.252
inet6 addr: fe80::c4b0:51ff:fe1a:2c70/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:539577767 errors:0 dropped:0 overruns:0 frame:0
TX packets:627246269 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:126222508187 (117.5 GiB)  TX bytes:202329509697 (188.4 GiB)

gre1      Link encap:UNSPEC  HWaddr 0A-C4-C6-9E-FF-FF-80-AB-00-00-00-00-00-00-00-00
inet addr:172.16.254.2  Mask:255.255.255.255
inet6 addr: fe80::5efe:ac4:c69e/64 Scope:Link
UP RUNNING NOARP  MTU:1472  Metric:1
RX packets:975 errors:0 dropped:0 overruns:0 frame:0
TX packets:975 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:81900 (79.9 KiB)  TX bytes:89700 (87.5 KiB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:328199125 errors:0 dropped:0 overruns:0 frame:0
TX packets:328199125 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:561141115845 (522.6 GiB)  TX bytes:561141115845 (522.6 GiB)

[root@mfttest-jcs-wls-1 ~]#
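For readers interested in what the wrapper script automates, a GRE tunnel of this kind can also be constructed manually with standard iproute2 commands. The sketch below is a generic illustration only, using the tunnel addresses and on-premise subnet from Fig. 3 and placeholder underlay IPs; it is not the exact sequence performed by oc-config-corente-tunnel.

# Create a GRE interface between this node and the CSG (underlay IPs are placeholders)
ip tunnel add gre1 mode gre local <mftcs-private-ip> remote <csg-private-ip> ttl 255
# Assign the local GRE tunnel address and bring the interface up with a reduced MTU
ip addr add 172.16.254.2/32 dev gre1
ip link set gre1 mtu 1472 up
# Route the CSG tunnel endpoint and the on-premise subnet over the tunnel
ip route add 172.16.254.1/32 dev gre1
ip route add 10.9.8.0/24 via 172.16.254.1 dev gre1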

V. Set up NAT networking between the VBox image running the local CSG and a second VBox image to simulate private corporate network traffic using private IPs.

A NAT network for VBox reserves a pool of IP addresses to be allocated to the VMs running within a physical host. In this case, a NAT network with the subnet of 10.9.8.0/24 is created for association with all the VMs used to simulate the private corporate network in this exercise.

The details of the NAT network setup are described in a separate blog[3]. So, we will assume that a second VBox image is already available at this stage, in the same NAT network subnet pool of 10.9.8.0/24. In our example, its private NAT IP address is 10.9.8.6.
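If the NAT network has not been created yet, it can also be defined from the host command line with VBoxManage, as in the sketch below. The network name onprem-natnet and the VM name onprem-fileserver are hypothetical; adapter attachment can equally be done in the VirtualBox UI.

# Create and start a VirtualBox NAT network with the 10.9.8.0/24 subnet
VBoxManage natnetwork add --netname onprem-natnet --network "10.9.8.0/24" --enable --dhcp on
VBoxManage natnetwork start --netname onprem-natnet
# Attach the first adapter of a (powered-off) VM to the NAT network
VBoxManage modifyvm "onprem-fileserver" --nic1 natnetwork --nat-network1 onprem-natnet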

VI. Verify connectivity between MFTCS compute node and local VBox image using private IPs

At this stage, the MFTCS compute node and the VBox image on the laptop should be able to ping each other using the GRE IP (172.16.254.2) and the private NAT IP (10.9.8.6). They should also be able to ping the VPN gateways in OPC (172.16.254.1) and in the on-premise network (10.9.8.5). We verify this as shown in the terminal session transcripts below.

MFTCS Compute VM:

[root@mfttest-jcs-wls-1 ~]# route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
0.0.0.0          10.196.198.157   0.0.0.0          UG    0      0        0 eth0
10.9.8.0         172.16.254.1     255.255.255.0    UG    0      0        0 gre1
10.196.198.156   0.0.0.0          255.255.255.252  U     1      0        0 eth0
66.77.134.249    172.16.254.1     255.255.255.255  UGH   0      0        0 gre1
172.16.254.1     0.0.0.0          255.255.255.255  UH    0      0        0 gre1
[root@mfttest-jcs-wls-1 ~]# ping 10.9.8.5
PING 10.9.8.5 (10.9.8.5) 56(84) bytes of data.
64 bytes from 10.9.8.5: icmp_seq=1 ttl=63 time=159 ms
64 bytes from 10.9.8.5: icmp_seq=2 ttl=63 time=156 ms
^C
--- 10.9.8.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1448ms
rtt min/avg/max/mdev = 156.049/157.761/159.474/1.757 ms
[root@mfttest-jcs-wls-1 ~]# ping 10.9.8.6
PING 10.9.8.6 (10.9.8.6) 56(84) bytes of data.
64 bytes from 10.9.8.6: icmp_seq=1 ttl=62 time=161 ms
64 bytes from 10.9.8.6: icmp_seq=2 ttl=62 time=158 ms
64 bytes from 10.9.8.6: icmp_seq=3 ttl=62 time=157 ms
^C
--- 10.9.8.6 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2245ms
rtt min/avg/max/mdev = 157.832/159.516/161.784/1.727 ms
[root@mfttest-jcs-wls-1 ~]#

Local VBox image:

[root@soa-training ~]# route add -net 172.16.254.0 netmask 255.255.255.0 gw 10.9.8.5
[root@soa-training ~]# route -n

Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
0.0.0.0          10.9.8.1         0.0.0.0          UG    0      0        0 eth7
10.9.8.0         0.0.0.0          255.255.255.0    U     1      0        0 eth7
172.16.254.0     10.9.8.5         255.255.255.0    UG    0      0        0 eth7
[root@soa-training ~]# ping 172.16.254.1
PING 172.16.254.1 (172.16.254.1) 56(84) bytes of data.
64 bytes from 172.16.254.1: icmp_seq=1 ttl=63 time=155 ms
64 bytes from 172.16.254.1: icmp_seq=2 ttl=63 time=156 ms
^C
--- 172.16.254.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1181ms
rtt min/avg/max/mdev = 155.204/155.656/156.109/0.600 ms
[root@soa-training ~]# ping 172.16.254.2
PING 172.16.254.2 (172.16.254.2) 56(84) bytes of data.
64 bytes from 172.16.254.2: icmp_seq=1 ttl=62 time=154 ms
64 bytes from 172.16.254.2: icmp_seq=2 ttl=62 time=157 ms
64 bytes from 172.16.254.2: icmp_seq=3 ttl=62 time=160 ms
^C
--- 172.16.254.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2411ms
rtt min/avg/max/mdev = 154.192/157.401/160.097/2.459 ms
[root@soa-training ~]#
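Note that the route added above with the route command does not survive a reboot of the VBox image. On an Oracle Linux or RHEL-style guest it can be made persistent with an interface route file, as in the minimal sketch below, assuming the adapter is eth7 as shown in the routing table:

# /etc/sysconfig/network-scripts/route-eth7
# Send traffic for the GRE tunnel subnet to the on-premise CSG
172.16.254.0/24 via 10.9.8.5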

VII. Configure NFS mounts in MFT CS node pointing to a file system in the local VBox image

Configuration of NFS mounts is a standard Linux administrative task. We set up a basic NFS file system that is exported from the local VBox image and mounted from the MFTCS compute node. In doing this, the private NAT IP of the VBox image and the GRE IP of the MFTCS node are used for communication.

Transcripts of sample terminal sessions from MFTCS Compute VM and local VBox image are shown below.

Local VBox Image:

[root@soa-training ~]# mkdir /onpremshare
[root@soa-training ~]# chown oracle:oinstall /onpremshare
[root@soa-training ~]# chmod 777 /onpremshare
[root@soa-training ~]# cat /etc/exports

/onpremshare 172.16.254.0/255.255.255.0(rw,sync)
[root@soa-training ~]# service nfs start
Starting NFS services: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]
[root@soa-training ~]#

[root@soa-training ~]# showmount -e localhost
Export list for localhost:
/onpremshare 172.16.254.0/255.255.255.0
[root@soa-training ~]#

MFTCS Compute VM:

[root@mfttest-jcs-wls-1 ~]# showmount -e 10.9.8.6
Export list for 10.9.8.6:
/onpremshare 172.16.254.0/255.255.255.0
[root@mfttest-jcs-wls-1 ~]# mount -t nfs4 -v 10.9.8.6:/onpremshare /vpnshare
mount.nfs4: timeout set for Tue Mar 28 15:59:21 2017
mount.nfs4: trying text-based options 'addr=10.9.8.6,clientaddr=172.16.254.2'
10.9.8.6:/onpremshare on /vpnshare type nfs4 (rw)
[root@mfttest-jcs-wls-1 ~]# df -h
Filesystem                                            Size    Used  Avail   Use%  Mounted on
/dev/mapper/vg_main-lv_root               16G    12G    3.8G   75%     /
tmpfs                                                     7.4G   80K    7.4G    1%      /dev/shm
/dev/xvdb1                                             477M  62M    386M   14%    /boot
/dev/mapper/vg_binaries-lv_tools         9.8G   2.3G    7.0G   25%    /u01/app/oracle/tools
/dev/mapper/vg_backup-lv_backup      20G    226M    19G   2%      /u01/data/backup
/dev/mapper/vg_domains-lv_domains  9.8G   1.5G     7.8G 16%    /u01/data/domains
/dev/mapper/vg_binaries-lv_mw           9.8G   3.2G     6.1G  35%   /u01/app/oracle/middleware
/dev/mapper/vg_binaries-lv_jdk             2.0G   303M    1.6G  17%   /u01/jdk
10.9.8.6:/onpremshare                         19G    4.9G    13G   29%   /vpnshare
[root@mfttest-jcs-wls-1 ~]# ls -l /vpnshare
total 0
[root@mfttest-jcs-wls-1 ~]#
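The mount command shown above is not persistent across reboots. If the NFS mount should come back automatically after a restart of the MFTCS node, an /etc/fstab entry along the lines of the sketch below can be added; the soft, timeo and _netdev options are one reasonable choice, not a prescription, and the mount will only succeed while the GRE tunnel and VPN connection are up.

# /etc/fstab entry on the MFTCS compute node (sketch)
10.9.8.6:/onpremshare  /vpnshare  nfs4  rw,soft,timeo=100,retrans=3,_netdev  0  0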

VIII. Configure an MFT transfer using embedded SFTP source and local file target.

The task of setting up an MFT transfer with an embedded SFTP source and a local file target has been covered in an earlier blog[1]. The process here is identical, with the difference that the local file target will have an endpoint defined as a directory under the NFS mount point, i.e. /vpnshare.

As a result, although MFT treats the target endpoint as a local filesystem directory, the NFS mount will propagate the file via the VPN tunnel to the remote on-premise server.
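Before building the transfer, a quick optional sanity check from the MFTCS node confirms that files written under the NFS mount actually land on the on-premise server. The file name below is just an example:

# On the MFTCS compute node: create a test file under the NFS mount point
touch /vpnshare/nfs-write-test.txt
# On the local VBox image: the same file should now appear in the exported directory
ls -l /onpremshare/nfs-write-test.txt
# Remove the test file afterwards
rm /vpnshare/nfs-write-test.txt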

IX. Deploy and Test

After deploying the MFT transfer, we are ready to test the entire flow.

We initiate the test by starting a simple, command-line SFTP client from a remote machine (slahiri-lnx) and connecting to the embedded SFTP server running within MFTCS. After logging in with a pre-configured userid and password, we transfer a file in the sftp session. The terminal session transcript is shown below.

Any machine in public internet:

slahiri@slahiri-lnx:~/stage/cloud/sftptest$ sftp -oPort=7522 sftpuser@CloudSFTP
sftpuser@cloudsftp's password:
Connected to CloudSFTP.
sftp> put sftpfile0.txt
Uploading sftpfile0.txt to /sftpuser/sftpfile0.txt
sftpfile0.txt 100% 1002 1.0KB/s 00:00
sftp> quit
slahiri@slahiri-lnx:~/stage/cloud/sftptest$
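For repeatable testing, the same upload can also be driven non-interactively with the batch mode of the OpenSSH sftp client, as sketched below. Batch mode requires non-interactive authentication, for example the key-based setup described in the earlier blog[1]; the file names are illustrative.

# Write the sftp commands to a batch file and run them against the embedded SFTP server
echo "put sftpfile0.txt" > sftp_batch.txt
sftp -b sftp_batch.txt -oPort=7522 sftpuser@CloudSFTP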

After the SFTP operation is completed, the MFT transfer takes over. MFT picks up the file from the embedded SFTP source and places it in the directory defined as the target. An example screenshot from the Monitoring tab of the MFT UI is shown below.


Fig. 10 Successful MFT Transfer Flow

Finally, we verify that our test file is saved, not in a local directory, but in a directory residing on the on-premise server, mapped via the NFS mount from the MFTCS compute node.

Local VBox Image:

[oracle@soa-training onpremshare]$ pwd
/onpremshare
[oracle@soa-training onpremshare]$ ls -ltr
total 4
-rw-r-----. 1 1001 1001 1002 Mar 28 09:13 sftpfile0.txt
[oracle@soa-training onpremshare]$

Summary

The test case described here is one way to achieve hybrid cloud integration with secure transfers using MFTCS and VPN tunnels. There are other use cases where MFTCS can be effectively used for hybrid integration with Oracle PaaS/SaaS services in OPC and on-premise servers in a corporate private network.

For further details, please contact the MFT Product Management team or SOACS/MFTCS group within A-Team.

Acknowledgements

MFTCS Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contributions.

References

  1. MFT - Setting up SFTP Transfers using Key-based Authentication - Oracle A-Team Blog
  2. Setting Up Corente Services Gateway in Oracle Cloud - Oracle Product Documentation
  3. Setup of PaaS Computes (SOACS/MFTCS/DBCS) over IP Network for VPN Connectivity - Oracle A-Team Blog

Appendix

The Linux package installed to provide the GRE functionality on a standard PaaS compute node is listed below.

  • bind-utils-9.8.2-0.47.rc1.el6.x86_64

Shub Lahiri

