Content validated on 9/31/2022 with
Autonomous Data Warehouse - Oracle Database 19c Enterprise Edition Release - Production Version 19.11.0.0.0
Oracle Cloud Application 22C (11.13.22.07.0)
Fusion Analytics Warehouse Application Version 22.R2R4
If you wish to evaluate the solution described in this blog, you can download the ODI Project SmartExport here.
Oracle Fusion Analytics (FAW) provides analytics for Oracle Cloud applications, powered by Autonomous Data Warehouse (ADW) and Oracle Analytics (OAC). More specifically:
Fusion Accounting Hub (FAH) is a cloud application that unifies data from different financial systems, giving customer finance teams a complete view of financial data to increase forecasting accuracy, shorten reporting cycles, and simplify decision-making.
This blog is part of a series of blogs that details the Reference Architecture leveraging the FAW extensibility framework to use FAW as:
Figure 1: FAW and FAH Integration
This blog will focus on the Inbound Data Flows and Orchestration to FAH Reference Architecture.
Suggested Prerequisite Reading Material:
* Using Oracle Cloud Marketplace
Overall prerequisites
* Have access to a Fusion Analytics Instance (OAC, ADW) with Administrative privileges
* Have access to an Oracle Cloud Application instance with
* Have provisioned an Oracle Data Integrator Marketplace instance
* Follow the steps in the first blog for instructions on:
Policies Assignment (Step 1)
SSH key-pair Generation (Step 4-b and 4-c)
ODI Marketplace Provisioning (Step 2)
ADW Configuration (Step 5-h, 5-i, 5-j and 6-d)
Banks feed transactions from many upstream source systems to the Fusion Accounting Hub. These could be trading systems, loans, retail banking, foreign exchange deals, hedges, etc. Key characteristics of the upstream feeds:
Customers have requirements to make source transactions available in a central, persistent and robust reporting platform, also known as a Transaction Data Store (TDS).
The source system feeds must be prepared prior to being uploaded to the Accounting Hub. Transformations include:
In addition to the existing FAW value proposition (reporting tools, extensibility, pre-defined subject areas, etc.), the FAW FAH Integration architecture offers a single platform for pre-processing, enrichment and reporting. It eliminates data migration from the TDS to the enrichment and/or reporting platform, along with the associated reconciliation. The end-to-end process is faster, more reliable and cheaper.
The FAW to FAH Architecture Blueprint is depicted in Figure 2 below. This architecture has a set of prerequisites:
Figure 2: FAW FAH Integration Reference Architecture Blueprint
The high-level data flow steps are as follows:
In this blog, we will leverage Oracle Data Integrator (ODI) on the Marketplace and the AHC_Loan FAH Application. In a subsequent blog, we will use the OCI Native Data Integration serverless service to implement this inbound architecture.
Figure 3: FAW FAH Integration Reference Architecture Blueprint
The first step in the Inbound data flow is to onboard the Source System data into OCI. The best route to do so is to copy the files to an OCI Object Storage bucket.
This document describes how to put files in Object Storage. The example in this blog covers:
The following picture shows the two files in a bucket named bucket-faw-fah :
Figure 4: Source System Files in OCI Object Storage Bucket
Once the data is available in OCI Object Storage bucket, the cleansing, validation and transformation process can start. Oracle Data Integrator (ODI) has tools to get files from an object storage bucket and copy them locally to the ODI compute instance file system. Please refer to this document to land the object storage file to a local folder on the ODI compute instance.
A sample command to download the files from the bucket-faw-fah bucket to the local file system (LS_FAH logical schema in the ODI topology):
OdiObjectStorageDownload "-SRC_LOGICAL_SCHEMA=bucket-faw-fah" "-TRG_LOGICAL_SCHEMA=LS_FAH" "-FILE_NAMES_FILTER=SRC_IN_XlaTrx*" "-OVERWRITE=YES"
Figure 5: OdiObjectStorageDownload Command
Data quality usually covers two processes:
The steps below rely heavily on the customer's master data definitions: having all these functional rules available in database objects (stored procedures, mapping and lookup tables, ...) allows Oracle Data Integrator to apply the rules to profile and cleanse the source system data.
All the database objects will be created in a custom schema named FAW_FAH_STG in the FAW ADW instance.
This section describes a simple transformation step where we map the products from the source system data to products as they are expected by FAH.
a) In ODI Studio / Designer, we have defined a FAW-FAH-Integration project. Within this project, under the 01.Load and Transform SRC Data to FAW_FAH_STG folder, the package FAH_PKG_LOAD_TRANSFROM_SRC_TO_FAW_FAH_STG runs two ODI Mappings:
Figure 6: Load and Transform to FAW_FAH_STG schema package
b) The 00_MAP_LOAD_XLA_TRX_H_STG ODI Mapping loads (File to Oracle LKM and Oracle Insert IKM) the Header Source file data SRC_IN_XlaTrxH.csv from the local file system into a temporary table XLA_TRX_H_TMP_IN in FAW_FAH_STG. It then joins this temp table with a product lookup table XLA_FAH_PRD_LKP to map the source products to the FAH product (here LOAN_TYPE) and loads all other columns into the table XLA_TRX_H_STG.
Figure 7: Load Headers to FAW_FAH_STG schema Mapping
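The lookup logic applied by the Header mapping above can be sketched in plain Python. The table and column names (XLA_FAH_PRD_LKP, LOAN_TYPE) come from the mapping; the sample product codes and row columns are hypothetical:

```python
# Hypothetical rows standing in for the XLA_FAH_PRD_LKP lookup table:
# source system product code -> FAH product (LOAN_TYPE).
XLA_FAH_PRD_LKP = {
    "MORTGAGE_30Y": "FIXED_RATE_LOAN",
    "AUTO_LN": "CONSUMER_LOAN",
}

def map_header_row(row: dict) -> dict:
    """Join a header row against the lookup, as the ODI mapping does,
    keeping all other columns unchanged."""
    mapped = dict(row)
    mapped["LOAN_TYPE"] = XLA_FAH_PRD_LKP[row["SRC_PRODUCT"]]
    return mapped

header = {"TRX_ID": "1001", "SRC_PRODUCT": "AUTO_LN", "AMOUNT": "250.00"}
print(map_header_row(header)["LOAN_TYPE"])  # prints CONSUMER_LOAN
```

In the real flow this join happens inside ADW via the generated IKM SQL, not in Python; the sketch only illustrates the mapping rule.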
c) The 00_MAP_LOAD_XLA_TRX_L_STG ODI Mapping loads (File to Oracle LKM and Oracle Insert IKM) the Lines Source file data SRC_IN_XlaTrxL.csv from the local file system into the XLA_TRX_L_STG table without any transformation, for the simplicity of the example.
Figure 8: Load Lines to FAW_FAH_STG schema Mapping
d) Once the package has run, the tables are populated in the FAW_FAH_STG schema.
Figure 9: FAW Staging tables for Headers and Lines
Now that the Source System data is loaded, cleansed and transformed into FAW ADW tables, we need to extract it into flat files and package them so they can be loaded into FAH.
Extracting the AH Input Analyses (Header and Lines) to comma-separated values (CSV) files is done by the ODI package FAH_PKG_EXTRACT_HEADER_LINES:
a) Within the FAW-FAH-Integration project, under the 02.Extract From FAW_FAH_STG and Load to FAH folder, the package FAH_PKG_EXTRACT_HEADER_LINES runs two ODI Mappings:
Figure 10: Extracting AH Input Analyses - ODI Package
b) Both Mappings extract the Header and Lines tables to their corresponding CSV files, using the SQL to File LKM.
Figure 11: Extracting AH Input Analyses - ODI Mappings
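What the SQL to File LKM produces can be sketched in a few lines of Python: serialize the query result set to CSV text with a header row. The column names and rows below are hypothetical:

```python
import csv
import io

def rows_to_csv(columns, rows) -> str:
    """Serialize query results to CSV text, header row first,
    mimicking the output of the SQL to File LKM."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    writer.writerows(rows)
    return buf.getvalue()

csv_text = rows_to_csv(["TRX_ID", "LOAN_TYPE"], [("1001", "CONSUMER_LOAN")])
print(csv_text)
```

In the blog's flow the LKM writes directly to XlaTrxH.csv and XlaTrxL.csv on the ODI compute instance; the sketch only shows the shape of the file.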
c) When these two steps are executed, the CSV files are generated, in our case under the /u01/oracle/fah folder: the headers file is XlaTrxH.csv and the lines file is XlaTrxL.csv. We will see the Operator logs in the next subsection, as the FAH file preparation step is part of the same package as the extraction.
Figure 12: Extracting AH Input Analyses - ODI target files
When loading data into FAH, the file needs to adhere to a certain set of rules and requirements:
Figure 13: Extracting AH Input Analyses - compressed zip file
To download a template file for FAH data upload from Fusion, follow the steps below:
a) In Fusion, navigate to Home > Others > Setup and Maintenance > Fusion Accounting Hub and, under Accounting Transformation > Manage Subledger Application, select the AHC_Loan Application
Figure 14: Manage AHC Loan
b) Click Apply and Go to Task
Figure 15: AHC Loan apply and go
c) Click on Download template
Figure 16: Extracting AH Input Analyses - Download template
Back in ODI,
a) Within the FAW-FAH-Integration project, under the 02.Extract From FAW_FAH_STG and Load to FAH folder, the package FAH_PKG_EXTRACT_HEADER_LINES, after running the two mappings, runs a bash script via an OdiOsCommand. The script compresses the two header and lines CSV files along with the metadata definition file, and converts the zip file to base64 so it can be used in a subsequent step of this blog:
Figure 17: Generate FAH Compressed Zip and b64 file from ODI
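The compression and encoding step can equally be sketched in Python rather than bash. The CSV file names come from the blog; the folder, archive name and helper function are assumptions for illustration:

```python
import base64
import os
import tempfile
import zipfile

def package_for_fah(folder: str, zip_name: str, members: list) -> str:
    """Zip the header/lines CSVs (plus metadata file) and write a .b64
    sidecar with the base64 text of the archive, as the OdiOsCommand
    bash step does."""
    zip_path = os.path.join(folder, zip_name)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in members:
            zf.write(os.path.join(folder, name), arcname=name)
    with open(zip_path, "rb") as f:
        b64_text = base64.b64encode(f.read()).decode("ascii")
    with open(zip_path + ".b64", "w") as f:
        f.write(b64_text)
    return b64_text

# Demo with throwaway files standing in for XlaTrxH.csv / XlaTrxL.csv.
folder = tempfile.mkdtemp()
for name in ("XlaTrxH.csv", "XlaTrxL.csv"):
    with open(os.path.join(folder, name), "w") as f:
        f.write("TRX_ID,AMOUNT\n1001,250.00\n")
b64 = package_for_fah(folder, "XlaTransaction_AHC_Loan.zip",
                      ["XlaTrxH.csv", "XlaTrxL.csv"])
print(len(b64) > 0)
```

The b64 text is what ends up in the DocumentContent attribute of the upload payload in the next section.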
b) When running the package from ODI, we can review the logs in Operator. Figure 13 above shows the zip and b64 files.
Figure 18: Extracting AH Input Analyses - Operator logs in ODI
Now that the FAH zip file is prepared with cleansed and formatted data, it is ready to be uploaded to FAH, after which the Accounting Engine can be invoked so the subledger journal can be updated. This is achieved via three steps and three sequential REST API calls against the /fscmRestApi/resources/11.13.18.05/erpintegrations endpoint:
The REST API call examples below are:
The user calling these REST APIs needs to have the following duties:
For more details about the ERP Integrations REST APIs, please refer to this document.
The first step of the process is to load the FAH file in b64 format to UCM. This step is divided into three sub-steps:
a) First, ODI generates the Python script to load the file to UCM. Below is a code snippet describing a sample script with the required payload, where:
import requests
import json

url = "https://<your saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations"
payload = json.dumps({
    "OperationName": "uploadFileToUCM",
    "DocumentContent": "UEsDBBQAAAAIAKU+J1UNeUIQlgEAAG8HAAALAAAAWGxhVHJ4SC5jc3bNlM1OwzAMx+88RR8gSFnaruvRS7MS1iRbmgDtpSow0CTYpIkLb0+yDzYUCYnDxE517PzjX+zURoOsgRquZGeaGUMVK0qmOwmCIXMSLMD8dEgrxkyjMcgpAkqVlebgOyyp1ZpJ2iClecklVN0YKpCUHRzHkwUYq7lpdqsZNGoy2edkWiAu3YfVptPeVSnY01JbGyVcyoPRUbehVLrZbXKncOrClaLbZFc77zE7snU02yzf+81nVC2eXxcbRDAh1zi/JgTVbUO5l2A07ldP66hYRrB6XbytEUmTNHXqAg0wPtU4M9ua2ak5wChGE3eTnTyq+9VHL/qXZe/uVvA7X",
    "DocumentAccount": "fin$/fusionAccountingHub$/import$",
    "ContentType": "zip",
    "FileName": "XlaTransaction_AHC_Loan20220914_69.zip",
    "DocumentId": None
})
headers = {
    'Authorization': 'Basic RklOX0lNUEw6c0FMOFZeND8===',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
b) Then ODI executes the Python script to POST the REST request to Fusion. Below is a sample response:
{ "OperationName": "uploadFileToUCM", "DocumentId": "3845042", "DocumentContent": "UEsDBBQAAAAIAKU+J1UNeUIQlgEAAG8HAAALAAAAWGxhVHJ4SC5jc3bNlM1OwzAMx+88RR8gSFnaruvRS7MS1iRbmgDtpSow0CTYpIkLb0+yDzYUCYnDxE517PzjX+zURoOsgRquZGeaGUMVK0qmOwmCIXMSLMD8dEgrxkyjMcgpAkqVlebgOyyp1ZpJ2iClecklVN0YKpCUHRzHkwUYq7lpdqsZNGoy2edkWiAu3YfVptPeVSnY01JbGyVcyoPRUbehVLrZbXKncOrClaLbZFc77zE7snU02yzf+81nVC2eXxcbRDAh1zi/JgTVbUO5l2A07ldP66hYRrB6XbytEUmTNHXqAg0wPtU4M9ua2ak5wChGE3eTnTyq+9VHL/qXZe/uVvA7X", "FileName": "XlaTransaction_AHC_Loan20220914_69.zip", "ContentType": "zip", "FileType": null, "DocumentAccount": "fin$/fusionAccountingHub$/import$", "Comments": null, "ProcessName": null, "LoadRequestId": null, "JobPackageName": null, "JobDefName": null, "ReqstId": null, "RequestStatus": null, "JobName": null, "ParameterList": null, "NotificationCode": null, "CallbackURL": null, "JobOptions": null, "StatusCode": null, "ESSParameters": null, "links": [ { "rel": "self", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/uploadFileToUCM", "name": "erpintegrations", "kind": "item" }, { "rel": "canonical", "href": "https://<saas pod>/fscmRestApi/resources/11.13.18.05/erpintegrations/uploadFileToUCM", "name": "erpintegrations", "kind": "item" } ] }
c) Finally, ODI parses the REST call response using jq and persists the DocumentId, as it will be required for the next REST call.
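The jq extraction is equivalent to this small Python sketch. The sample response below is abbreviated from the uploadFileToUCM response shown earlier:

```python
import json

# Abbreviated sample of the uploadFileToUCM response.
response_text = ('{"OperationName": "uploadFileToUCM", '
                 '"DocumentId": "3845042", "ContentType": "zip"}')

def extract_document_id(text: str) -> str:
    """Mirror of the jq step: pull DocumentId out of the REST response."""
    return json.loads(text)["DocumentId"]

print(extract_document_id(response_text))  # prints 3845042
```

The returned DocumentId is the value passed, together with the file name, in the ESSParameters of the next call.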
d) The Fusion screenshot below shows the file in UCM, after navigating to Home > Tools > File Import and Export
Figure 19: Extracting AH Input Analyses - File uploaded to UCM
The second step of the process is to submit the ESS job to import the file from UCM to FAH and invoke the Accounting Engine. This is achieved in three sub-steps:
a) With the DocumentId from the previous response, ODI generates the Python script to submit the ESS job. Below is a snippet of the script, where the payload contains:
import requests
import json

url = "https://<saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations"
payload = json.dumps({
    "OperationName": "submitESSJobRequest",
    "JobPackageName": "/oracle/apps/ess/financials/subledgerAccounting/shared/",
    "JobDefName": "XLATXNIMPORT",
    "ESSParameters": "3845042,XlaTransaction_AHC_Loan20220914_69.zip"
})
headers = {
    'Authorization': 'Basic RklOX0lNUEw6c0FMOFZeND8====',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
b) Then ODI executes the Python script to POST the REST request to Fusion. Below is a sample response:
{ "OperationName": "submitESSJobRequest", "DocumentId": null, "DocumentContent": null, "FileName": null, "ContentType": null, "FileType": null, "DocumentAccount": null, "Comments": null, "ProcessName": null, "LoadRequestId": null, "JobPackageName": "/oracle/apps/ess/financials/subledgerAccounting/shared/", "JobDefName": "XLATXNIMPORT", "ReqstId": "5182576", "RequestStatus": null, "JobName": null, "ParameterList": null, "NotificationCode": null, "CallbackURL": null, "JobOptions": null, "StatusCode": null, "ESSParameters": "3845042,XlaTransaction_AHC_Loan20220914_61.zip", "links": [ { "rel": "self", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/submitESSJobRequest", "name": "erpintegrations", "kind": "item" }, { "rel": "canonical", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/submitESSJobRequest", "name": "erpintegrations", "kind": "item" } ] }
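Chaining the two calls together comes down to formatting ESSParameters as the DocumentId and zip file name joined by a comma. A sketch (the helper function name is ours; the job package and definition names come from the blog):

```python
import json

def build_submit_payload(document_id: str, file_name: str) -> str:
    """Build the submitESSJobRequest body from the upload step's outputs."""
    return json.dumps({
        "OperationName": "submitESSJobRequest",
        "JobPackageName": "/oracle/apps/ess/financials/subledgerAccounting/shared/",
        "JobDefName": "XLATXNIMPORT",
        # ESSParameters = "<DocumentId>,<zip file name>"
        "ESSParameters": f"{document_id},{file_name}",
    })

payload = build_submit_payload("3845042",
                               "XlaTransaction_AHC_Loan20220914_69.zip")
print(payload)
```

This is the payload POSTed to the same erpintegrations endpoint as the upload call.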
c) Finally, ODI parses the REST call response using jq and persists the request ID (ReqstId), as it will be required for the next REST call.
The last step is to monitor the ESS job status and validate the transactions have been posted to the subledger for AHC_Loan.
a) To monitor the status of the ESS job, ODI generates and executes a Python script that GETs the status of the job and polls it until it is completed. Below is a snippet of the script:
import requests

url = "https://<saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations?finder=ESSExecutionDetailsRF;requestId=5182357"
payload = {}
headers = {
    'Authorization': 'Basic RklOX0lNUEw6alRYM2leOD8===='
}

response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
b) Then ODI executes the Python script to send the GET request to Fusion. Below is a sample response:
{ "items": [ { "OperationName": "getESSExecutionDetails", "DocumentId": null, "DocumentContent": null, "FileName": null, "ContentType": null, "FileType": null, "DocumentAccount": null, "Comments": null, "ProcessName": null, "LoadRequestId": null, "JobPackageName": null, "JobDefName": null, "ReqstId": "5182576", "RequestStatus": "{\"JOBS\":{\"JOBNAME\":\"Import Accounting Transactions\",\"JOBPATH\":\"/oracle/apps/ess/financials/subledgerAccounting/shared\",\"REQUESTID\":\"5182576\",\"STATUS\":\"SUCCEEDED\"}}", "JobName": null, "ParameterList": null, "NotificationCode": null, "CallbackURL": null, "JobOptions": null, "StatusCode": null, "ESSParameters": null, "links": [ { "rel": "self", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/getESSExecutionDetails", "name": "erpintegrations", "kind": "item" }, { "rel": "canonical", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/getESSExecutionDetails", "name": "erpintegrations", "kind": "item" } ] } ], "count": 1, "hasMore": false, "limit": 25, "offset": 0, "links": [ { "rel": "self", "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations", "name": "erpintegrations", "kind": "collection" } ] }
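Note that RequestStatus in this response is itself a JSON string embedded inside the JSON document, so the polling logic has to parse it twice. A sketch of the status check, using an abbreviated version of the response above:

```python
import json

def ess_job_status(response_text: str) -> str:
    """Extract JOBS.STATUS from a getESSExecutionDetails response.
    RequestStatus is a JSON string nested inside the JSON response,
    so it needs a second json.loads."""
    item = json.loads(response_text)["items"][0]
    request_status = json.loads(item["RequestStatus"])
    return request_status["JOBS"]["STATUS"]

# Abbreviated sample response, built the same way Fusion nests it.
inner = {"JOBS": {"JOBNAME": "Import Accounting Transactions",
                  "REQUESTID": "5182576", "STATUS": "SUCCEEDED"}}
sample = json.dumps({"items": [{"ReqstId": "5182576",
                                "RequestStatus": json.dumps(inner)}]})
print(ess_job_status(sample))  # prints SUCCEEDED
```

In the real flow, one would loop with a short sleep, re-issuing the GET until the status is SUCCEEDED (or an error state) before moving on.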
c) When executing the ODI package that makes the sequential REST calls above, we can review the status in Operator
Figure 20: Importing AH Input Analyses - REST Calls from ODI
d) When reviewing the status of the scheduled job in Fusion, we can confirm that the Import Accounting Transactions job is successful. We can also review any accounting warnings.
Figure 21: Importing AH Input Analyses - Status in ESS
e) One can then validate that the Journal Entries are actually posted in the Subledger module
Figure 22: Importing AH Input Analyses - Subledger Journal Entries
This concludes the activities in the blog.
In this blog, we leveraged Oracle Data Integrator (ODI) on the Marketplace to prepare and load data to the AHC_Loan FAH Application. The high-level data flow steps were as follows:
In a subsequent blog, we will use the OCI Native Data Integration serverless service to implement this inbound architecture and also demonstrate the Outbound Architecture from within FAW.
Bookmark this post to stay up-to-date on changes made to this blog as our products evolve.
The Oracle A-Team is a central, outbound, highly technical team of enterprise architects, solution specialists, and software engineers.
The Oracle A-Team works with external customers and Oracle partners around the globe to provide guidance on implementation best practices, architecture design reviews, troubleshooting, and how to use Oracle products to solve customer business challenges.
I focus on data integration, data warehousing, Big Data, cloud services, and analytics (BI) products. My role included acting as the subject-matter expert on Oracle Data Integration and Analytics products and cloud services such as Oracle Data Integrator (ODI), and Oracle Analytics Cloud (OAC, OA For Fusion Apps, OAX).