Performance of MFT Cloud Service (MFTCS) with File Storage Service (FSS) using a Hybrid Solution Architecture in Oracle Cloud Infrastructure (OCI)

Executive Overview

MFT Cloud Service clusters in Oracle Cloud Infrastructure Classic (OCI-C) are provisioned with the Database File System (DBFS) for shared storage, as discussed in one of our earlier blogs[1]. In Oracle Cloud Infrastructure (OCI), customers also have the option of using the File Storage Service (FSS) for shared storage. FSS suits high-throughput use cases where a large number of large files must be processed within MFTCS. This performance, however, comes at a cost in resiliency: backup and recovery of DBFS is achieved automatically by the backup of the database, whereas for FSS, although the backup and recovery recommendations are well documented, the implementation has to be managed in a custom layer.

This blog shows the usage of FSS for shared storage in an MFTCS cluster, but the same concepts can be applied to meet the shared storage requirements of SOACS as well.

Fig. 1 Solution Architecture

This blog describes a way to set up a high-volume file transfer process within MFTCS in OCI, where files are received by the embedded SFTP server and then transferred to a remote Object Storage endpoint in OCI-Classic within the Oracle Public Cloud (OPC).

Solution Approach

Use Case Basic Requirements

The overall use case can be described as follows and is also exemplified in Fig.2 below.

  • An external SFTP client sends multiple files of different sizes concurrently via SFTP to the embedded SFTP server running in MFT Cloud Service (MFTCS) within OCI.
  • MFT Server, upon receipt of the files, transfers them to an Object Storage Classic endpoint URL in the OCI-Classic domain.
  • As the MFT transfers are being executed, multiple concurrent file downloads are also processed by the SFTP server, embedded within MFTCS.

Solution Architecture

The configuration of MFT to receive files via SFTP was discussed in one of my earlier blogs[2]. In that post, we showed how MFT can receive files via its embedded SFTP server and save them in a local file system. In this article, we extend the use case by changing the target endpoint to an Object Storage Classic endpoint within an OCI-Classic domain. The shared storage layer of DBFS is replaced with FSS. Apache JMeter is used to simulate the concurrent upload and download traffic volume, comprising files of different sizes.

Fig. 2 Use Case Architecture

 

The key components in the hybrid solution architecture listed below are also shown in Fig.2.

  • Embedded SFTP Server running within MFT Cloud Service (MFTCS) hosted in OCI
  • Oracle Traffic Director is used as a Load Balancer in front of the MFT Cloud Service
  • File Storage Service (FSS) is used with NFS mounts as shared storage filesystem for MFTCS
  • Object Storage Service (OSS) within OCI-C is used as the target endpoint to deliver the files in different directories
  • Apache JMeter is used to simulate the high volume of load used for test runs

Implementation Details

Fig. 3 shows how the solution architecture has been implemented in our test environment. A laptop on the public internet runs one JMeter session for uploading files to the embedded SFTP server within MFTCS, and a second OCI compute instance in a different AD within the same region hosts the second JMeter session for downloading files from the MFTCS server.

Fig. 3 Use Case Implementation

Thus, the 3 distinct machines used in our test environment are listed below:

  • OCI PaaS Compute running MFTCS Release 12.2.1.2.0
  • OCI Compute with Oracle-Linux-7.5-2018.05.09-1 of shape VM.Standard1.1 running Apache JMeter V4.0 r1823414
  • Laptop with Ubuntu 17.04 running Apache JMeter V4.0 r1823414, simulating an on-premises endpoint in a customer environment

It should be noted here that the endpoints for the MFTCS transfer span the OCI and OCI-Classic regions. This is intentional, to include various elements of a typical hybrid solution architecture implementation. To summarize, the flow of files is as shown in Fig. 3.

Key Tasks and Activities

Based on the solution architecture cited, the key tasks for the entire exercise are listed below.

  • Configure File Storage Service (FSS) in OCI
  • Configure Storage Container with Storage Classic Service in OCI Classic
  • Provision MFTCS cluster in OCI
  • Configure MFTCS servers to attach FSS for shared storage
  • Configure MFTCS instance to replace DBFS with FSS
  • Configure and Activate SFTP server in MFTCS cluster
  • Configure MFT transfer with SFTP embedded source and Object Storage Service target
  • Provision OCI compute for SFTP download client
  • Install and Configure Apache JMeter in Linux laptop for upload of files
  • Install and Configure Apache JMeter in OCI compute for download of files
  • Tune MFTCS Cluster
  • Start test capture scripts in MFTCS cluster
  • Run JMeter scripts for upload and download of files
  • Collect and analyze test data

I. Configure Shared File Storage Service in OCI

A File Storage Service exposes a shared storage mount point that can be accessed via NFS from any server within the same AD. The details of creating an FSS file system can be found in the Oracle product documentation[3].

Fig. 4 Create File System in OCI

Navigation

The File System creation screen can be launched by following the navigation path outlined below (Fig. 4).

  • Tool: OCI UI in browser
  • Console: OCI main console
  • Click on Menu: Top Left Hamburger
  • Left Side Menu: Storage
  • Select/Click Sub-Menu Item: File Systems
    This selection pops up the File Systems screen, where the File System needs to be created.
  • Click on Button: Create File System
Parameter Entry

The parameters and values provided below are entered for creation of the File Storage System.

  • Create in Compartment: MyCompartment (From drop-down list, select any compartment created earlier)
  • Name: mftfs (Any meaningful name for the file system, free format – optional)
  • Availability Domain: VVcZ-US-ASHBURN-AD3 (From drop-down list, select any availability domain configured earlier)
  • Select Radio Button: Create Mount Target (this section will define the storage mount point to be used in remote machines)
  • Name: mftmt (Any meaningful name for the mount target, free format)
  • Virtual Cloud Network: PaaSVCN (From drop-down list, select any VCN configured earlier)
  • Subnet: Public Subnet VVcZ-US-ASHBURN-AD3 (From drop-down list, select any subnet configured earlier within the selected VCN and selected AD)
  • Path: / (enter the mount point path that will be used for remote mounts, free format but should start with /)
  • Maximum Free Space (in GiB): 200 (From the drop-down list, select any suitable pre-defined value or enter a custom value for the size to be allocated) – a custom value was chosen for our exercise
Navigation

After all the parameters are entered in the pop-up window, scroll down to the bottom of the pop-up window.

  • Click on Button: Create File System

At this stage, we have an NFS mount point available in the selected availability domain, with a private IP of 10.0.2.x and an export path of /, as shown below in Fig. 5.

Fig. 5 File Storage System created with Private IP and Mount Point Export Path
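For teams that prefer scripting over the console, the same file system, mount target and export can also be created with the OCI CLI. The sketch below assumes a configured `oci` CLI; every `<...>` value is a placeholder for your own tenancy, and the exact option names should be verified against your installed CLI version.

```shell
# Sketch: create the FSS file system, mount target and export via the OCI CLI.
# All <...> values are placeholders; replace them with your own OCIDs and names.
oci fs file-system create \
  --availability-domain "<AD-name>" \
  --compartment-id "<compartment-ocid>" \
  --display-name mftfs

oci fs mount-target create \
  --availability-domain "<AD-name>" \
  --compartment-id "<compartment-ocid>" \
  --subnet-id "<subnet-ocid>" \
  --display-name mftmt

# The export ties the file system to the mount target's export set at path /
oci fs export create \
  --export-set-id "<export-set-ocid>" \
  --file-system-id "<filesystem-ocid>" \
  --path /
```

These commands only sketch the console steps above; since they run against a live tenancy, they are shown as a configuration fragment rather than a runnable test.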

II. Configure Storage Container with Object Storage Service in OCI Classic

Next, we configure a storage container with the Object Storage Classic service in OCI-C. As mentioned earlier, any other type of endpoint could have been used as the target of the MFT transfer, but this selection adds an OCI to OCI-Classic dimension to our MFT transfers. It also lets us highlight the support for an Object Storage Classic endpoint within MFTCS, a feature added in a later patch of release 12.2.1.2.

Fig. 6 Create Object Storage Container in OCI-C

Navigation

To create the Storage Classic container, we follow the navigation path outlined below (Fig.6).

  • Tool: OCI Classic UI in browser
  • Console: Dashboard
  • Click on Menu: Top Left Hamburger
  • Left Side Menu: OCI Object Storage Classic

This selection pops up the Create Storage Container window, where the object storage container is created.

  • Click on Button: Create Container
Parameter Entry

The parameters and values provided below are entered for creation of the storage container.

  • Name: MFTTargetContainer (Any meaningful name for the container, free format)
  • Click on Button: Create

The container thus created can hold a hierarchical directory structure with files, like any standard file system. As described later, MFTCS can directly create the subsequent directory structures and files in this container, as long as the top-level root directory of the container is configured in the MFTCS target definition. The empty Object Storage container created here is shown in Fig. 7.

Fig. 7 Empty Object Storage Container created
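The container can also be created over the Object Storage Classic REST API (a Swift-style interface): authenticate once to obtain a token, then issue a PUT for the container. The commands below are a sketch following the header names in the Object Storage Classic documentation; every `<...>` value is a placeholder for your account.

```shell
# Sketch: create the container via the Object Storage Classic REST API.
# Replace the <...> placeholders with your tenancy, user and password.
# Step 1: authenticate and capture the X-Auth-Token response header.
curl -sS -i \
  -H "X-Storage-User: Storage-<tenancy>:<username>" \
  -H "X-Storage-Pass: <password>" \
  "https://<tenancy>.storage.oraclecloud.com/auth/v1.0"

# Step 2: create the container using the token returned above.
curl -sS -X PUT \
  -H "X-Auth-Token: <token-from-step-1>" \
  "https://<tenancy>.storage.oraclecloud.com/v1/Storage-<tenancy>/MFTTargetContainer"
```

As these calls target a live OCI-C account, they are shown as a configuration fragment for reference only.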

III. Provision MFTCS cluster in OCI

Provisioning an MFTCS cluster within OCI is a standard process and hence is not covered in detail here. Individual steps are described in the Oracle product documentation for MFTCS[4]. The one restriction is that the MFTCS cluster must be created within the same AD and region as the FSS file system used in Step I.

For our purposes, the key features of the MFTCS cluster are listed below.

  • MFTCS Release: 12.2.1.2.0
  • Cluster Size: 2
  • Compute Shape: 2 OCPUs, 14 GB memory
  • Load Balancer Shape: 1 OCPU, 7 GB memory
  • Load Balancer Algorithm: Least Connection Count

The database instance associated with the MFTCS has the following features:

  • DBCS Release: 12.1.0.2
  • Compute Shape: 1 OCPU, 7 GB memory

IV. Configure MFTCS servers to attach FSS for Shared Storage

Each of the servers in the MFTCS cluster has to mount the NFS export exposed by FSS. A local directory (/mnt/fss) is created on each server to serve as the NFS mount point.

FSS was configured in Step I, and during that step a private IP and export path were generated. We use that same IP address and export path to configure an NFS shared storage filesystem on each of the MFTCS servers. After creating the NFS mount, we create 4 directories under it that will be used by MFTCS later. The 4 directories are listed below:

  • /mnt/fss/mftroot/mft/callout
  • /mnt/fss/mftroot/mft/control_dir
  • /mnt/fss/mftroot/mft/ftp_root
  • /mnt/fss/mftroot/mft/storage

A typical Linux terminal session from an MFTCS server is shown below.

slahiri@slahiri-lap:~/stage/cloud$ ssh -i ./shubsoa_key opc@ocimft1
Last login: Fri Jun 1 14:07:48 2018 from www-xxx-yyy-zzz
[opc@mftsftp-wls-1 ~]$ sudo mount -v 10.0.2.8:/ /mnt/fss
mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Fri Jun 1 19:52:31 2018
mount.nfs: trying text-based options 'vers=4,addr=10.0.2.8,clientaddr=10.0.2.5'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=10.0.2.8'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.0.2.8 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 10.0.2.8 prog 100005 vers 3 prot UDP port 2048
10.0.2.8:/ on /mnt/fss type nfs (rw)
[opc@mftsftp-wls-1 ~]$ sudo su - oracle
[oracle@mftsftp-wls-1 ~]$ df -h
Filesystem                                Size  Used  Avail  Use%  Mounted on
/dev/sda3                                  38G  5.5G    31G   16%  /
tmpfs                                     6.8G     0   6.8G    0%  /dev/shm
/dev/sda1                                 512M  280K   512M    1%  /boot/efi
/dev/sdc1                                  22G  1.3G    20G    7%  /u01/app/oracle/tools
/dev/mapper/vg_backup-lv_backup            50G  263M    47G    1%  /u01/data/backup
/dev/mapper/vg_domain-lv_domain            50G  1.7G    45G    4%  /u01/data/domains
/dev/mapper/vg_middleware-lv_middleware    22G  3.4G    18G   17%  /u01/app/oracle/middleware
/dev/mapper/vg_jdk-lv_jdk                 5.8G  385M   5.1G    7%  /u01/jdk
dbfs-@ORCL:/                              6.3G  143M   6.1G    3%  /u01/soacs/dbfs_directi
dbfs-@ORCL:/                              6.3G  143M   6.1G    3%  /u01/soacs/dbfs
10.0.2.8:/                                200G  6.0M   200G   46%  /mnt/fss
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/callout
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/control_dir
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/ftp_root
[oracle@mftsftp-wls-1 ~]$ mkdir -p /mnt/fss/mftroot/mft/storage
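The mount shown in the session above does not survive a reboot. A sketch of persisting it via /etc/fstab on each node follows, reusing the private IP and mount point from our environment and the NFS version that the mount negotiation settled on (adjust all three for your setup).

```shell
# Persist the FSS NFS mount across reboots (run on each MFTCS node).
# 10.0.2.8 and /mnt/fss are the values from our environment; nfsvers=3 matches
# the protocol version the mount negotiation fell back to in the session above.
echo '10.0.2.8:/  /mnt/fss  nfs  nfsvers=3,rw  0 0' | sudo tee -a /etc/fstab
sudo mount -a    # verifies the new entry mounts cleanly
```

This is a system-configuration fragment and so is shown for reference rather than as a runnable test.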

V. Configure MFTCS instance to replace DBFS with FSS

An MFTCS cluster, by default, uses DBFS for shared storage. After provisioning, the configuration has to be changed manually to replace DBFS with FSS. There are 4 settings where this change needs to occur. The steps to achieve this configuration change are described below.

Navigation

To replace DBFS with FSS, follow the navigation path outlined below.

  • Tool: MFT Console in browser
  • Top Tab on Right: Administration
  • Navigation Menu on Left: Server Properties
Parameter Entry

The key parameters and values provided below are entered for the 3 directories, replacing the corresponding DBFS values.

  • Payload Storage Directory: /mnt/fss/mftroot/mft/storage
  • Callout Directory: /mnt/fss/mftroot/mft/callout
  • Control Directory: /mnt/fss/mftroot/mft/control_dir

Click on Save button to save your changes.

Fig. 8 FSS setting for storage, callout and control directories

Navigation
  • Navigation Menu on Left: Embedded Servers
Parameter Entry

The key parameter and value provided below are entered for the FTP/sFTP root directory, replacing the corresponding DBFS value.

  • Root Directory: /mnt/fss/mftroot/mft/ftp_root

Click on Save button to save your changes.

Fig. 9 FSS setting for sFTP root directory

Restart MFTCS servers for the changes to take effect.

VI. Configure and Activate SFTP server in MFTCS cluster

The activation of SFTP server within an MFTCS instance is a standard task. The steps are described in Oracle product documentation[4]. Hence, the details are skipped here.

At the end of this step, we should have the sFTP server enabled and an sFTP user, sftpuser, created with home directory /sftpuser, as shown in Fig. 10 below. It should be noted that besides the home directory, another directory (/downloads) is created under ftp_root on the FSS mount point, to which sftpuser is granted access.

Fig.10 sftpuser Access Setup
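Once the embedded server is up, a quick smoke test from any SFTP client confirms that sftpuser can reach both directories. The sketch below assumes the embedded server listens on its default SFTP port, 7522, behind the load balancer; the host is a placeholder, and both host and port should be adjusted to your setup.

```shell
# Smoke test: list the home and downloads directories as sftpuser.
# <load-balancer-host> is a placeholder; 7522 is assumed as the default
# embedded SFTP port of MFT - adjust if your configuration differs.
sftp -oPort=7522 sftpuser@<load-balancer-host> <<'EOF'
ls /sftpuser
ls /downloads
EOF
```

Since this requires a live MFTCS endpoint, it is shown as an operational fragment rather than a runnable test.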

VII. Configure MFT transfer with SFTP embedded source and Object Storage Service target

The MFTCS product documentation can be used to design and deploy an MFT transfer with the following elements:

  • Source Type: Embedded sFTP
  • Location: /sftpuser
  • Target Type: Storage Cloud Service
  • Content Folder: /MFTFolder
  • Username: <Username for OCI-Classic Account>
  • Password: <Password for OCI-Classic account>
  • Confirm Password: <Password for OCI-Classic account>
  • Location: https://<OCI-tenancy>.storage.oraclecloud.com
  • Service Name: Storage-<OCI-C tenancy>
  • Container Name: MFTTargetContainer

VIII. Provision OCI compute for SFTP download client

A compute instance is created from OCI console to install Apache JMeter that will work as the sFTP client for downloading files as shown in Fig. 2. The creation process is well documented in the OCI product documentation and hence skipped here.

To make the test case a bit generic, we chose to create the compute instance in a different Availability Domain (e.g. AD2) from that of our MFTCS instance within the same region.

IX. Install and Configure Apache JMeter in a laptop for upload of files

Apache JMeter documentation is readily available in the public domain, so we will skip the install and configuration details. Instead, we highlight the files and directories set up for upload during our test on a laptop residing on the public internet.

During the test, a total of 500 files of varying sizes, ranging from 10 MB to 950 MB, were uploaded into different directories. A summary of the file distribution and corresponding directories is listed below in Table 1.

Table 1. Directories and Files for SFTP Upload

An SSH plugin for Apache JMeter was installed to provide support for the SFTP protocol. Finally, a simple JMeter script was developed to automate the upload of the files listed in Table 1.

X. Install and Configure Apache JMeter in OCI compute for download of files

We added a separate 500 GB block volume to our OCI compute instance for saving the downloaded files. A mount point (/mnt/vol1) was created on this block volume to stage the downloaded files. The files and directories configured for download via sFTP are listed below in Table 2. A simple JMeter script was developed to automate the download of the files listed in Table 2.

 

Table 2. Directories and Files for SFTP Download
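Preparing the block volume amounts to attaching, formatting and mounting it. The sketch below assumes the volume appears as /dev/sdb after the iSCSI attach commands shown in the OCI console for the attachment; confirm the actual device name with lsblk before formatting.

```shell
# Format and mount the 500 GB block volume used to stage downloads.
# /dev/sdb is an assumption - run lsblk after the iSCSI attach to confirm
# which device the new volume appeared as before running mkfs.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/vol1
sudo mount /dev/sdb /mnt/vol1
df -h /mnt/vol1    # sanity check on size and mount point
```

As a destructive system-configuration fragment, this is shown for reference only.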

XI. Tune MFTCS Cluster

We wanted to establish a baseline with minimal tuning effort. As a result, our configuration changes were few and are listed below.

Adjust number of MFTCS processor threads

To increase the concurrent processing capacity within the MFTCS engine, we allocated more processor threads to the source, target and instance pools. We also eliminated the processing overhead of checksum computation for our test. The steps to achieve this are described below.

Navigation
  • Tool: MFT Console in browser
  • Top Tab on Right: Administration
  • Navigation Menu on Left: Server Properties
Parameter Entry

The key parameters and values provided below are entered.

  • Source Processors: 25
  • Target Processors: 25
  • Instance Processors: 25
  • Generate Checksum Button: Uncheck

Click on Save button to save your changes.

XII. Start test capture scripts in MFTCS cluster

Before running the tests, we start a JFR recording on both MFTCS servers in the cluster to capture JVM behavior during the test. The command to achieve this is well-documented and hence the details are skipped.
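For quick reference, a typical JDK 8 invocation looks like the sketch below; the pgrep pattern and recording names are hypothetical and must be matched to your managed server and environment.

```shell
# Start a 60-minute JFR recording on a running managed server (repeat per node).
# The pgrep pattern and file name below are hypothetical examples.
PID=$(pgrep -f 'weblogic.Name=<managed-server-name>')
# JDK 8 requires unlocking commercial features before JFR can start.
jcmd "$PID" VM.unlock_commercial_features
jcmd "$PID" JFR.start name=mfttest filename=/tmp/mfttest.jfr duration=60m
```

Since it must attach to a live WebLogic JVM, this is shown as an operational fragment rather than a runnable test.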

XIII. Run JMeter scripts for upload and download of files

The 2 JMeter scripts are kicked off in command-line mode at the same time, as listed below:

  • Apache JMeter script from laptop in public internet for upload of files via SFTP
  • Apache JMeter script from OCI compute for download of files via SFTP (invoked twice serially, the second run after the first completed)

The system was monitored for errors; if any were encountered, the tests were repeated after troubleshooting. The tests were concluded when a number of runs produced consistent results. The target directories were inspected to confirm the successful transfer of all files listed in Tables 1 and 2.

XIV. Collection and Analysis of Results

Results were collected from various sources as listed below:

  • JFR recordings for JVM behavior
  • MFTCS Repository Database for transfer times recorded in the server
  • JMeter reports to tally the transfer times seen from the client side
  • REST API calls to compare and validate the creation timestamps for files uploaded to the Object Storage endpoint in OCI-Classic
  • Linux utilities to compare and validate the creation timestamps for files downloaded in OCI compute
  • Simple network tests to estimate the network latency/bandwidth
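The Linux-utility check in the list above boils down to listing the downloaded files in modification-time order. The snippet below illustrates the idea on a scratch directory with hypothetical file names, purely so it is self-contained; against the real data, point it at /mnt/vol1 instead.

```shell
# Illustration: list files in modification-time order to validate when each
# download completed. The scratch directory and file names are hypothetical,
# only to make the snippet self-contained; we ran the equivalent on /mnt/vol1.
DIR=$(mktemp -d)
touch -d '2018-06-01 10:00' "$DIR/first.dat"
touch -d '2018-06-01 10:05' "$DIR/second.dat"

# %T@ is the epoch mtime and %f the file name (GNU find);
# a numeric sort on the timestamp yields the completion order.
ORDER=$(find "$DIR" -type f -printf '%T@ %f\n' | sort -n | awk '{print $2}' | xargs)
echo "$ORDER"    # prints "first.dat second.dat"
```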
Overall Test Results (Completion Time)

As can be seen from the list above, a substantial amount of data was captured; the key findings are summarized below.

  • Total time taken to complete the upload JMeter script: ~35 minutes
  • Total time taken to complete the download JMeter script: ~24 minutes (2 times: ~12 minutes each)
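As a rough sanity check, these completion times translate into an aggregate throughput once a total payload size is assumed. The figure below is a hypothetical example chosen only for illustration, not a measured total from our runs.

```shell
# Hypothetical example: back-of-the-envelope aggregate upload throughput.
# TOTAL_MB is an ASSUMED payload size, not the measured volume from our test.
TOTAL_MB=100000      # assumed total upload volume in MB
ELAPSED_MIN=35       # upload script completion time from our runs

THROUGHPUT=$(awk -v mb="$TOTAL_MB" -v min="$ELAPSED_MIN" \
  'BEGIN { printf "%.1f", mb / (min * 60) }')
echo "Approximate aggregate throughput: ${THROUGHPUT} MB/s"    # 47.6 MB/s
```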
Breakdown of Test Completion Time by File Size

The breakdown of upload times for files of different size is listed below in Table 3.

Table 3. SFTP Upload Performance in MFTCS

The count of member files and the minimum, average and maximum upload times for the SFTP transfer of each file-size group are plotted in a bar chart, shown below in Fig. 11.

Fig. 11 Average SFTP Upload Time by File Size

Network Bandwidth Test Results

Network bandwidth was measured with the Linux utility iperf3 during the test cycle; the results are listed here.

  • On-prem SFTP Upload Client to MFTCS Load Balancer: 298 Mbits/sec (Send), 296 Mbits/sec (Receive)
  • SFTP Download Client in OCI Compute to MFTCS Load Balancer: 629 Mbits/sec (Send), 628 Mbits/sec (Receive)
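For reference, measurements like these can be reproduced by running iperf3 in server mode on one endpoint and client mode on the other; the host below is a placeholder for whichever machine runs the server side.

```shell
# On the endpoint acting as the iperf3 server:
iperf3 -s

# From the other endpoint: measure send, then receive with the reverse flag.
# <target-host> is a placeholder for the machine running 'iperf3 -s'.
iperf3 -c <target-host>
iperf3 -c <target-host> -R
```

Since these commands need two live endpoints, they are shown as an operational fragment rather than a runnable test.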
JFR Data Analysis Overview

JFR Recordings from both managed servers did not reveal any unusual bottlenecks in the JVM. Key indicators recorded during the test interval are listed below.

  • Avg Heap Usage: 4.53 GB (MS1), 4.43 GB (MS2)
  • Avg CPU Usage: 79.2% (MS1), 81% (MS2)
  • Avg GC Pause Time: 47 ms (MS1), 46 ms (MS2)

Summary

The test results described here demonstrate that MFTCS can handle SFTP file transfers of large data files with a high degree of concurrency when the cluster is configured with FSS as the underlying shared storage.

For further details, please contact the MFTCS Product Management team or the SOACS/MFTCS group within A-Team.

Acknowledgements

MFTCS Product Management and Engineering teams have been actively involved in the setup of this test case for many months. It would not have been possible to complete our studies without their valuable contributions.

References

  1. Remounting DBFS Shared Storage in SOACS and MFTCS Clusters – Oracle A-Team Blog
  2. MFT – Setting up SFTP Transfers using Key-based Authentication – Oracle A-Team Blog
  3. File Storage Service (FSS) – Oracle Product Documentation
  4. Managed File Transfer Cloud Service (MFTCS) – Oracle Product Documentation

Appendix

Primary configuration parameters in Apache JMeter script are listed below:

  • Number of Threads: 500 (Upload), 100 (Download)
  • Ramp-Up Period: 0 seconds (Upload), 0 seconds (Download)
  • Loop Count: 1 (Upload), 2 (Download)

Commands used to invoke JMeter scripts are provided below for reference:

  • ./jmeter -n -t SFTPOCI.jmx -l ./results/ocifss_combo_run1.out -e -o ./results/ocifss_combo_run1_web (On-prem machine)
  • ./jmeter -n -t SFTPOCID.jmx -l ./results/ocifss_download_combo_run2.out -e -o ./results/ocifss_download_combo_run2_web (OCI Compute)

 

Comments

  1. Shashank T says:

    Can FSS be used even for 1 node MFT setup. Instead of storing files on local file system, will it be able to store of FSS

    • Shub Lahiri says:

      Hi Shashank,

      Any size MFTCS cluster, including single-node, has DBFS configured during provisioning. So, the possibility of FSS replacing DBFS is surely applicable to single-node MFTCS clusters.

      In our test case, if you notice, we saved the files, to be downloaded from the MFTCS cluster, in an FSS file system.

      Hope this helps.
      – Shub
