MFT Cloud Service clusters in Oracle Cloud Infrastructure Classic (OCI-C) are provisioned with the Database File System (DBFS) for shared storage, as discussed in one of our earlier blogs. In Oracle Cloud Infrastructure (OCI), customers also have the option of using the File Storage Service (FSS) for shared storage. FSS suits high-throughput use cases where a large number of large files must be processed within MFTCS. However, this performance gain comes at a cost in resiliency: backup and recovery of DBFS is achieved automatically by backing up the database, whereas for FSS, although the backup and recovery recommendations are well documented, the implementation has to be managed in a custom layer.
Fig. 1 Solution Architecture
This blog describes how to set up a high-volume file transfer process within MFTCS in OCI, where files are received by the embedded SFTP server and then transferred to a remote Object Storage endpoint in OCI-Classic within the Oracle Public Cloud (OPC).
The overall use case is described below and illustrated in Fig. 2.
The configuration of MFT to receive files via SFTP was discussed in one of my earlier blogs. In that post, we showed how MFT can receive files via its embedded SFTP server and save them in a local file system. In this article, we extend the use case by modifying the file system of the target endpoint to point to an Object Storage Service endpoint within an OCI-Classic domain. The shared storage layer of DBFS is replaced with FSS. Apache JMeter is used to simulate concurrent upload and download traffic, comprising files of different sizes.
Fig. 2 Use Case Architecture
The key components in the hybrid solution architecture listed below are also shown in Fig. 2.
Fig. 3 shows how the solution architecture has been implemented in our test environment. A laptop on the public internet runs one JMeter session for uploading files to the embedded SFTP server within MFTCS, and an OCI compute instance in a different AD within the same region hosts the second JMeter session for downloading files from the MFTCS server.
Fig. 3 Use Case Implementation
Thus, the three distinct machines used in our test environment are listed below.
It should be noted here that the endpoints for the MFTCS transfer span OCI and OCI-Classic regions. This is intentional, in order to include the various elements of a typical hybrid solution architecture implementation. To summarize, the flow of files is shown in Fig. 3.
Based on the solution architecture cited, the key tasks for the entire exercise are listed below.
A File Storage Service exposes a shared storage mount point that can be accessed via NFS from any server within the same availability domain (AD). The details of creating an FSS file system can be found in the Oracle product documentation.
Fig. 4 Create File System in OCI
The File System creation screen can be launched by following the navigation path outlined below (Fig. 4).
The parameters and values provided below are entered for creation of the file system.
After all the parameters are entered, scroll down to the bottom of the pop-up window.
At this stage, we have an NFS mount point available in availability domain AD-1, with a private IP of 10.0.2.x and an export path of /, as shown below in Fig. 5.
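For readers who prefer the command line, the same file system can be sketched out with the OCI CLI. The block below is a minimal illustration, not the exact commands from our setup; the AD name, compartment, subnet and other OCIDs are placeholders, and the export path of / matches what we use later for the NFS mount:

oci fs file-system create --availability-domain "<AD-1 name>" --compartment-id <compartment-ocid> --display-name mft-fss
oci fs mount-target create --availability-domain "<AD-1 name>" --compartment-id <compartment-ocid> --subnet-id <subnet-ocid> --display-name mft-fss-mt
oci fs export create --export-set-id <export-set-ocid> --file-system-id <file-system-ocid> --path /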
Next, we configure a storage container with the Object Storage Service in OCI-Classic. As mentioned earlier, we could have used any other type of endpoint as the target of the MFT transfer, but this selection adds an OCI to OCI-Classic dimension to our MFT transfers. Moreover, it allows us to highlight the support for an Object Storage Service endpoint within MFTCS, a feature added in a later release.
Fig. 6 Create Object Storage Container in OCI-C
To create the Storage Classic container, we follow the navigation path outlined below (Fig. 6).
This selection pops up the Create Storage Container window, where the object storage container is created.
The parameters and values provided below are entered for creation of the Object Storage container.
The container thus created can hold a hierarchical directory structure with files, like any standard file system. As described later, MFTCS can directly create the subsequent directory structures and files in this container, provided the top-level root directory of the container is configured in the target definition of MFTCS. The empty Object Storage container created here is shown in Fig. 7.
Fig. 7 Empty Object Storage Container created
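As an aside, the same container can also be created programmatically via the Swift-compatible REST API of Object Storage Classic. The sketch below is illustrative only; the identity domain, credentials and container name are placeholder assumptions:

# request an auth token (returned in the X-Auth-Token response header)
curl -i -X GET -H "X-Storage-User: Storage-<identity-domain>:<username>" -H "X-Storage-Pass: <password>" https://<identity-domain>.storage.oraclecloud.com/auth/v1.0

# create the container using the returned token
curl -i -X PUT -H "X-Auth-Token: <token>" https://<identity-domain>.storage.oraclecloud.com/v1/Storage-<identity-domain>/<container-name>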
Provisioning an MFTCS cluster within OCI is a standard process and hence is not covered in detail here. Individual steps are described in the Oracle product documentation for MFTCS. The only restriction is that the MFTCS cluster must be created within the same AD and region as the FSS file system created in Step 1.
For our purposes, the key features of the MFTCS cluster are listed below.
The database instance associated with the MFTCS has the following features:
Each of the servers in the MFTCS cluster has to mount the NFS export exposed by FSS. A local directory (/mnt/fss) is created on each of the servers to serve as the NFS mount point.
FSS was configured in Step 1, which generated a private IP and an export path. We use the same IP address and export path to configure an NFS shared storage filesystem on each of the MFTCS servers. After creating the NFS mount, we create four directories under it that will be used by MFTCS later. The four directories are listed below:
A typical Linux terminal session from an MFTCS server is shown below.
slahiri@slahiri-lap:~/stage/cloud$ ssh -i ./shubsoa_key opc@ocimft1
Last login: Fri Jun 1 14:07:48 2018 from www-xxx-yyy-zzz
[opc@mftsftp-wls-1 ~]$ sudo mount -v 10.0.2.8:/ /mnt/fss
mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Fri Jun 1 19:52:31 2018
mount.nfs: trying text-based options 'vers=4,addr=10.0.2.8,clientaddr=10.0.2.5'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=10.0.2.8'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.0.2.8 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 10.0.2.8 prog 100005 vers 3 prot UDP port 2048
10.0.2.8:/ on /mnt/fss type nfs (rw)
[opc@mftsftp-wls-1 ~]$ sudo su - oracle
[oracle@mftsftp-wls-1 ~]$ df -h
Filesystem                                Size  Used Avail Use% Mounted on
/dev/sda3                                  38G  5.5G   31G  16% /
tmpfs                                     6.8G     0  6.8G   0% /dev/shm
/dev/sda1                                 512M  280K  512M   1% /boot/efi
/dev/sdc1                                  22G  1.3G   20G   7% /u01/app/oracle/tools
/dev/mapper/vg_backup-lv_backup            50G  263M   47G   1% /u01/data/backup
/dev/mapper/vg_domain-lv_domain            50G  1.7G   45G   4% /u01/data/domains
/dev/mapper/vg_middleware-lv_middleware    22G  3.4G   18G  17% /u01/app/oracle/middleware
/dev/mapper/vg_jdk-lv_jdk                 5.8G  385M  5.1G   7% /u01/jdk
dbfs-@ORCL:/                              6.3G  143M  6.1G   3% /u01/soacs/dbfs_directio
dbfs-@ORCL:/                              6.3G  143M  6.1G   3% /u01/soacs/dbfs
10.0.2.8:/                                200G  6.0M  200G   1% /mnt/fss
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/callout
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/control_dir
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/ftp_root
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/storage
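Note that a mount issued interactively from the shell does not survive a reboot. To make the FSS mount persistent on each MFTCS node, an entry along the following lines can be added to /etc/fstab; the mount options shown are a reasonable starting point, not a tuned recommendation:

10.0.2.8:/    /mnt/fss    nfs    defaults,nofail    0 0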
An MFTCS cluster, by default, uses DBFS for shared storage. After provisioning, the configuration has to be changed manually to replace DBFS with FSS for shared storage. There are four places where this change needs to occur. The steps to achieve this configuration change are described below.
To replace DBFS with FSS, follow the navigation path outlined below.
The key parameters and values provided below are entered for the three directories, replacing the corresponding DBFS values.
Click on the Save button to save your changes.
The key parameter and value provided below are entered for the FTP/sFTP root directory, replacing the corresponding DBFS value.
Click on the Save button to save your changes.
Finally, restart the MFTCS servers for the changes to take effect; one way to do this from the command line is sketched below.
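The restart can be performed from the WebLogic console or with the standard WebLogic domain scripts on each node. The sketch below is a minimal illustration; the domain directory, server name and admin URL are assumptions based on our environment and should be adjusted to match the actual domain:

cd /u01/data/domains/<domain_name>/bin
./stopManagedWebLogic.sh <managed_server_name> t3://<admin_host>:7001
./startManagedWebLogic.sh <managed_server_name> t3://<admin_host>:7001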
The activation of the SFTP server within an MFTCS instance is a standard task. The steps are described in the Oracle product documentation, so the details are skipped here.
At the end of this part, we should have the sFTP server enabled and an sFTP user, sftpuser, created with home directory /sftpuser, as shown in Fig. 10 below. It should be noted that besides the home directory, there is another directory (/downloads) created in the FSS mount point under ftp_root, to which sftpuser is granted access.
Fig.10 sftpuser Access Setup
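Connectivity can then be verified with any command-line sftp client. The sketch below is illustrative and assumes the embedded SFTP server is listening on its default port of 7522, with the MFTCS host name as a placeholder:

sftp -oPort=7522 sftpuser@<mftcs-host>
sftp> ls /downloads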
The MFTCS product documentation can be used to design and deploy an MFT transfer with the following elements:
A compute instance is created from the OCI console to install Apache JMeter, which serves as the sFTP client for downloading files, as shown in Fig. 2. The creation process is well documented in the OCI product documentation and hence skipped here.
To make the test case a bit more generic, we chose to create the compute instance in a different availability domain (e.g. AD-2) from that of our MFTCS instance, within the same region.
Apache JMeter documentation is readily available in the public domain, so we will skip the installation and configuration details. Instead, we highlight the files and directories set up for upload during our test from a laptop residing on the public internet.
During the test, a total of 500 files of varying sizes, ranging from 10 MB to 950 MB, were uploaded into different directories. A summary of the file distribution and the corresponding directories is listed below in Table 1; a sketch for generating such test files follows the table.
Table 1. Directories and Files for SFTP Upload
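The test payload itself can be generated with standard Linux tools. As an illustrative sketch (the directory and file names below are placeholders; Table 1 defines the actual distribution), dd creates files of any required size:

# create a 10 MB and a 950 MB file of random content
dd if=/dev/urandom of=/stage/upload/size10mb/file_001.dat bs=1M count=10
dd if=/dev/urandom of=/stage/upload/size950mb/file_001.dat bs=1M count=950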
An SSH plugin for Apache JMeter was installed to provide support for the SFTP protocol. Finally, a simple script was developed to automate the upload of the files listed in Table 1.
We added a separate 500 GB block volume to our OCI compute instance for saving the downloaded files. A mount point (/mnt/vol1) was created within this block volume to stage the downloaded files. The files and directories configured for download via sFTP are listed below in Table 2, and a simple JMeter script was developed to automate their download. A sketch of preparing such a block volume follows the table.
Table 2. Directories and Files for SFTP Download
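For completeness, the sequence below sketches how a newly attached block volume is typically made usable on an OCI compute instance; the iSCSI target IQN, device name and IP are placeholders for the attach commands shown in the OCI console:

sudo iscsiadm -m node -o new -T <volume-iqn> -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T <volume-iqn> -n node.startup -v automatic
sudo iscsiadm -m node -T <volume-iqn> -p 169.254.2.2:3260 -l
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/vol1
sudo mount /dev/sdb /mnt/vol1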
We wanted to establish a baseline with minimal tuning effort. As a result, our configuration changes were few; they are listed below.
To increase the concurrent processing capacity of the MFTCS engine, we allocated more processor threads to the source, target and instance pools. We also eliminated the processing overhead of checksum computation for our test. The steps to achieve this are described below.
The key parameters and values provided below are entered.
Click on the Save button to save your changes.
Before running the tests, we start a JFR recording on both MFTCS servers in the cluster to capture JVM behavior during the test. The command to achieve this is well documented; a representative example is shown below.
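One illustrative way to start a recording on an Oracle JDK 8 JVM is via jcmd against each managed server process; the PID, recording name, duration and file name below are placeholders:

jcmd <pid> VM.unlock_commercial_features
jcmd <pid> JFR.start name=mft_test duration=60m filename=/tmp/mft_test.jfr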
The two JMeter scripts are kicked off in command-line mode at the same time; the invocation commands are listed for reference at the end of this post.
The system was monitored for errors; if any were encountered, the tests were repeated after troubleshooting. The tests were concluded when a number of runs produced consistent results. The target directories were inspected to confirm the successful transfer of all files listed in Tables 1 and 2.
Results were collected from various sources as listed below:
As can be seen from the list above, a substantial amount of data was captured from these sources; the key findings are summarized below.
The breakdown of upload times for files of different sizes is listed below in Table 3.
Table 3. SFTP Upload Performance in MFTCS
The count of member files and the minimum, average and maximum upload times for the SFTP transfer of each file-size group are plotted in a bar chart, shown below in Fig. 11.
Fig. 11 Average SFTP Upload Time by File Size
Network bandwidth was measured using the Linux utility iperf3 during the test cycle; the results are listed here.
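For reference, a typical iperf3 measurement runs the tool in server mode on one endpoint and in client mode on the other; the host name below is a placeholder:

# on the MFTCS server
iperf3 -s
# on the JMeter client machine
iperf3 -c <mftcs-host> -t 30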
JFR Recordings from both managed servers did not reveal any unusual bottlenecks in the JVM. Key indicators recorded during the test interval are listed below.
The test results described here demonstrate that MFTCS is capable of handling SFTP transfers of large data files with a high degree of concurrency when the cluster is configured with FSS as the underlying shared storage.
For further details, please contact the MFTCS Product Management team or the SOACS/MFTCS group within the A-Team.
MFTCS Product Management and Engineering teams have been actively involved in the setup of this test case for many months. It would not have been possible to complete our studies without their valuable contributions.
Primary configuration parameters in Apache JMeter script are listed below:
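The outline below is a representative sketch of such a script, with illustrative values rather than the exact ones used in our runs:

# Thread Group
Number of Threads (users): 10
Ramp-Up Period (seconds):  10
Loop Count:                1

# SSH/SFTP sampler (from the JMeter SSH plugin)
Hostname:  <mftcs-host>
Port:      7522
Username:  sftpuser
Key file:  <path-to-private-key>
Source and destination paths: per Tables 1 and 2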
Commands used to invoke JMeter scripts are provided below for reference:
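The test-plan and log file names below are hypothetical, but the non-GUI invocation form is standard for JMeter:

# upload session, run from the laptop
jmeter -n -t sftp_upload.jmx -l upload_results.jtl

# download session, run from the OCI compute instance
jmeter -n -t sftp_download.jmx -l download_results.jtl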