In a previous article I discussed using the Enterprise Scheduler Service (ESS) to poll MFT for files on a scheduled basis. In that article we discussed how to process many files that have been posted to the SFTP server, and at the end I mentioned the use of the push pattern for file processing.
This article will cover how to implement that push pattern with Managed File Transfer (MFT) and the Integration Cloud Service (ICS). We’ll walk through configuring MFT, creating the connections in ICS, and developing the integration in ICS.
The following figure is a high-level diagram of this file-based integration using MFT, ICS, and an Oracle SaaS application.
Create the Integration Cloud Service Flow
This integration will be a basic integration with an orchestrated flow. The purpose is to demonstrate how the integration is invoked and how the message is processed as it enters the ICS application. For this implementation we only need two endpoints: the first is a SOAP connection that MFT will invoke, and the second is a connection back to MFT to write the file to an output directory.
The flow could include other endpoints, but for this discussion additional endpoints would not add anything to the understanding of the push model.
Create the Connections
The first thing to do is to create the connections to the endpoints required for the integration. For this integration we will create two connections.
- SOAP connection: MFT will use this connection to trigger the integration as soon as the file arrives in the specified directory within MFT (this is covered in the MFT section of this article).
- SFTP connection: This connection will be used to write the file to an output directory on the FTP server. It exists only to demonstrate the flow: processing the file and then writing it to an endpoint. Any endpoint could have been used here; for instance, we could have used the input file to invoke a REST, SOAP, or one of many other endpoints.
Let’s define the SOAP connection.
Identifier: Provide a name for the connection
Adapter: When selecting the adapter type choose the SOAP Adapter
Connection Role: There are three choices for the connection role: Trigger, Invoke, and Trigger and Invoke. We will use a role of Trigger, since MFT will be triggering our integration.
Figure 2 shows the properties that define the endpoint. The WSDL URL may be added by specifying the actual WSDL as shown above, or the WSDL can be consumed by specifying host:port/uri/?WSDL.
In this connection the WSDL was retrieved from the MFT embedded server. This can be found at $MW_HOME/mft/integration/wsdl/MFTSOAService.wsdl.
Suppression of the timestamp is set to true, since the policy being used by MFT does not require the timestamp to be passed.
For this scenario we will be using the Basic Authentication token policy. The policy specified on this connection needs to match the policy that is specified for the MFT SOAP invocation.
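To make the policy pairing concrete, the following Python sketch builds a SOAP envelope carrying a WS-Security UsernameToken header. This is a minimal illustration only: the header element names follow the WS-Security standard, but the body payload and credentials are hypothetical, and in practice the MFT and ICS runtimes construct these messages for you.

```python
# Sketch: a SOAP envelope with a WS-Security UsernameToken header.
# Note that no wsu:Timestamp element is included, matching the
# suppressed-timestamp setting described above.

WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def build_envelope(username: str, password: str, body: str) -> str:
    """Wrap `body` in a SOAP envelope carrying a UsernameToken header."""
    return f"""<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="{WSSE_NS}">
      <wsse:UsernameToken>
        <wsse:Username>{username}</wsse:Username>
        <wsse:Password>{password}</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>{body}</soapenv:Body>
</soapenv:Envelope>"""

# Hypothetical credentials and payload, for illustration only.
envelope = build_envelope("ics_user", "secret", "<ProcessFile/>")
```

The key point is that both sides must agree: the token policy configured here on the ICS connection must match the policy MFT uses when it makes the SOAP callout.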
The second connection, as mentioned previously, exists only to demonstrate an end-to-end flow; it is not essential to the push pattern itself. It is a connection back to the MFT server.
Identifier: Provide a unique name for the connection
Adapter: When selecting the adapter type choose the FTP Adapter
Connection Role: For this connection we will specify “Trigger and Invoke”.
FTP Server Host Address: The IP address of the FTP server.
FTP Server Port: The listening port of the FTP Server
SFTP Connection: Specify “Yes”, since the invocation will be over SFTP.
FTP Server Time Zone: The time zone where the FTP server is located.
Security Policy: FTP Server Access Policy
User Name: The name of the user that has been created in the MFT environment.
Password: The password for the specified user.
As a side note, it is recommended to use a host key for SFTP connectivity, although it is not important for the purpose of this demonstration. To better understand the use of host keys, refer to this blog.
Create the Integration
Now that the connections have been created we can begin to create the integration flow. When the flow is triggered by the MFT SOAP request, the file is passed by reference: the SOAP request carries a reference to the file rather than its contents. The first step in the triggered integration is to capture the size of the file, which is used to determine the path to traverse through the flow. A file size of greater than one megabyte is the determining factor.
The selected path is determined by the incoming file size. When MFT passes the file reference it also passes the size of the file. We can then use this file size to determine the path to take. Why do we want to do this?
If the file is of significant size then reading the entire file into memory could cause an out-of-memory condition. Keep in mind that memory requirements are not just about reading the file but also the XML objects that are created and the supporting objects needed to complete any required transformations.
ICS provides a feature to prevent an OOM condition when reading large files. The top path shown in Figure 7 demonstrates how to handle the processing of large files. When processing a file of significant size, it is best to download the file to ICS (an option provided by the FTP adapter when configuring the flow) and then process it using a “stage” action. The stage action can chunk the large file and read it across multiple threads. This article will not provide an in-depth discussion of the stage action; to better understand it, refer to the Oracle ICS documentation.
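Conceptually, the chunked reading that the stage action performs looks like the following Python sketch. This is an illustration only, not the ICS implementation; the generator simply bounds how many records are held in memory at once.

```python
def chunk_records(records, chunk_size=200):
    """Yield fixed-size chunks of records so that only one chunk is in
    memory at a time, instead of materializing the whole file."""
    chunk = []
    for rec in records:
        chunk.append(rec)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:  # flush the final, possibly partial, chunk
        yield chunk

# Usage sketch: iterate a file lazily, line by line, chunk by chunk.
# `process` is a hypothetical per-chunk handler.
# with open("large.csv") as f:
#     for batch in chunk_records(f):
#         process(batch)
```

The design point is the same one ICS exploits: because the file is consumed incrementally, peak memory is proportional to the chunk size, not the file size.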
The “otherwise” path in the execution flow above is taken when the file size is less than the configured maximum. For the scenario in this blog, I set the maximum size to one megabyte.
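Putting the two paths together, the size-based routing can be sketched as follows. The function and path names are hypothetical stand-ins for the ICS orchestration steps, not product APIs.

```python
MAX_IN_MEMORY_BYTES = 1 * 1024 * 1024  # the one-megabyte threshold used in this blog

def choose_path(file_size_bytes: int) -> str:
    """Route files larger than the threshold to the download-and-stage
    path; smaller files are safe to read directly into memory."""
    if file_size_bytes > MAX_IN_MEMORY_BYTES:
        return "download-and-stage"  # the top path in Figure 7
    return "read-into-memory"        # the "otherwise" path
```

MFT supplies the file size alongside the reference, so this decision can be made before any file content is read.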
The use case being demonstrated involves passing the file by reference. Therefore, in order to read or download the file we must obtain the reference location from MFT. The incoming request provides the reference location. We must provide this reference location and the target filename to the read or download operation. This is done with the XSLT mapping shown in Figure 8.
The result mapping is shown in Figure 9.
The mapping of the fields is provided below.
InboundSOAPRequestDocument.Headers.SOAPHeaders.MFTHeader.TargetFilename -> DownloadFileToICS.DownloadRequest.filename
InboundSOAPRequestDocument.Headers.SOAPHeaders.MFTHeader.TargetFilename -> DownloadFileToICS.DownloadRequest.directory
Since the scenario is doing a pass-by-reference, MFT will pass the location of the file as something similar to the following: sftp://<hostname>:7522/payloads/ref/172/52/<filename>. The location being passed is not the location of the directory where the file was placed by the source system. Since the reference directory is determined by MFT, the name of the directory must be derived as demonstrated by the XSLT mapping shown above.
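In plain Python, the derivation performed by the XSLT mapping amounts to splitting the reference URL into its directory and filename parts. This is a sketch only; the host and filename below are made-up examples standing in for the values MFT generates at runtime.

```python
from urllib.parse import urlparse
from posixpath import dirname, basename

def split_reference(ref_url: str) -> tuple[str, str]:
    """Split an MFT pass-by-reference URL into (directory, filename)."""
    path = urlparse(ref_url).path  # e.g. /payloads/ref/172/52/orders.csv
    return dirname(path), basename(path)

# Hypothetical reference URL of the shape MFT passes in the SOAP header.
directory, filename = split_reference(
    "sftp://mft.example.com:7522/payloads/ref/172/52/orders.csv")
```

The directory portion (here `/payloads/ref/172/52`) is what the download operation needs, which is why the XSLT must derive it from the reference rather than use the source system's drop directory.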
As previously stated, this is a basic scenario intended to demonstrate the push process. The integration flow may be as simple or complex as necessary to satisfy your specific use case.
Now that the integration has been completed it is time to implement the MFT transfer and configure the SOAP request for the callout. We will first configure the MFT Source.
Create the Source
The source specifies the location of the incoming file. For our scenario the directory we place our file in will be /users/zern/in. The directory location is your choice, but it must be relative to the embedded FTP server and you must have permission to read from that directory. Figure 10 shows the configuration for the MFT Source.
As soon as the file is placed in the directory an “event” is triggered for the MFT target to perform the specified action.
Create the Target
The MFT target specifies the endpoint of the service to invoke. In Figure 11, the URL has been specified to the ICS integration that was implemented above.
The next step is to specify the security policy. This policy must match the one specified on the connection defined in the ICS platform. We are specifying the username_token_over_ssl_policy, as seen in Figure 12.
Besides specifying the security policy, we must also specify that the timestamp be ignored in the response. Since the policy is a username_token policy, the request must also include the credentials, which are retrieved from the keystore by providing the csf-key value.
Create the Transfer
The last step in this process is to bring the source and target together in a transfer. It is within the transfer configuration that we specify the delivery preferences. In this example we set the “Delivery Method” to “Reference” and the “Reference Type” to “sFTP”.
Putting it all together
- A “*.csv” file is dropped at the source location, /users/zern/in.
- MFT invokes the ICS integration via a SOAP request.
- The integration is triggered.
- The integration inspects the size of the incoming file and determines the path of execution.
- The file is either downloaded to ICS or read into memory. This is determined by the path of execution.
- The file is transformed and then written back to the output directory specified by the FTP write operation.
- The integration is completed.
Push versus Polling
There is no right or wrong choice between the push and poll patterns; each has its benefits. Here are a few points to consider for each.

Push:
- The file gets processed as soon as it arrives in the input directory.
- You need to create two connections: one SOAP connection and one FTP connection.
- Normally used to process only one file.
- Files can arrive at any time, and there is no need to set up a schedule.

Polling:
- You must create a schedule to consume the file(s). The polling schedule can run either at specific intervals or at a given time.
- You only create one connection for the file consumption.
- Many files can be placed in the input directory, and the scheduler will make sure each file is consumed by the integration flow.
- File processing is delayed by up to the maximum interval of the polling schedule.
Oracle offers many SaaS cloud applications such as Fusion ERP and several of these SaaS solutions provide file-based interfaces. These products require the input files to be in a specific format for each interface. The Integration Cloud Service is an integration gateway that can enrich and/or transform these files and then pass them along directly to an application or an intermediate storage location like UCM where the file is staged as input to SaaS applications like Fusion ERP HCM.
With potentially many source systems interacting with Oracle SaaS applications it is beneficial to provide a set of common patterns to enable successful integrations. The Integration Cloud Service offers a wide range of features, functionality, and flexibility and is instrumental in assisting with the implementation of these common patterns.
All site content is the property of Oracle Corp. Redistribution not allowed without written permission