Fusion Analytics - Fusion Accounting Hub Cloud Integration - Reference Architecture For Inbound Data Flows

September 30, 2022 | 21 minute read
Matthieu Lombard
Consulting Solution Architect

Co-authors

Neil Ramsey, Senior Director, Cloud ERP Development

Angelo Santaga, Architect, A-Team

Manoj Shetty, Cloud Solutions Architect, A-Team

Validation

Content validated on 9/30/2022 with

  • Autonomous Data Warehouse - Oracle Database 19c Enterprise Edition Release - Production Version 19.11.0.0.0

  • Oracle Cloud Application 22C (11.13.22.07.0) 

  • Fusion Analytics Warehouse Application Version  22.R2R4

Code Asset

If you wish to evaluate the solution described in this blog, you can download the ODI Project SmartExport here.

 

 Background

Oracle Fusion Analytics (FAW) provides analytics for Oracle Cloud applications, powered by Autonomous Data Warehouse (ADW) and Oracle Analytics (OAC). More specifically:

  • Fusion ERP Analytics provides accounting data sourced from ERP Cloud in a warehouse designed to support a broad set of analytics use cases. 
  • Fusion HCM Analytics provides comprehensive workforce and turnover data sourced from Fusion HCM Cloud.
  • Fusion SCM Analytics gives customers the ability to gain visibility into Supply Chain performance, correlate Supply Chain processes with business goals, and detect, understand and predict Supply Chain issues.

Fusion Accounting Hub (FAH) is a cloud application that unifies data from different financial systems, giving customer finance teams a complete view of financial data to increase forecasting accuracy, shorten reporting cycles, and simplify decision-making.

This blog is part of a series that details the Reference Architecture leveraging the FAW extensibility framework to use FAW as:

  • an Inbound Data Store layer to:
    • Standardize and group customer source financial systems data
    • Transform customer source financial systems data into the FAH data file format
    • Upload the transformed FAH data file to Fusion
    • Run the Accounting Engine to generate Subledger Accounting entries in FAH
  • an Outbound Data Store layer to:
    • Enable analytics for the FAH data foundation
    • Enable enrichment of the FAH data foundation via the AI / Machine Learning capabilities of the FAW platform
    • Allow fine-grained subledger analysis, general ledger balances and supporting references drill-downs with end-to-end traceability for operational and financial insights


Figure 1: FAW and FAH Integration

This blog will focus on the Inbound Data Flows and Orchestration to FAH Reference Architecture.

Suggested Prerequisite Reading Material:

* Using Oracle Cloud Marketplace

Overall prerequisites

* Have access to a Fusion Analytics instance (OAC, ADW) with Administrative privileges

* Have access to an Oracle Cloud Applications instance

* Have provisioned an Oracle Data Integrator Marketplace instance

* Follow the steps in the first blog for instructions on:

  • Policies Assignment (Step 1)

  • SSH key-pair Generation (Step 4-b and 4-c)

  • ODI Marketplace Provisioning (Step 2)

  • ADW Configuration (Step 5-h, 5-i, 5-j and 6-d)

 Reference Architecture Overview

Rationale

Banks feed transactions from many upstream source systems to the Fusion Accounting Hub. These could be trading systems, loans, retail banking, foreign exchange deals, hedges, etc. Key characteristics of the upstream feeds:

  • Very high volumes at a level of detail unnecessary for accounting purposes. 
  • Acquisitions often mean that source systems are duplicated. That is, transactions from different source systems warrant a similar accounting treatment.
  • A large number of source system transaction attributes are irrelevant to accounting, but must still be made available to downstream operational reporting.
  • Source systems are often not transactional in nature, in the sense that a current status can be extracted at end of day, but no coherent / robust history of transaction status is possible.

Customers have requirements to make source transactions available in a central, persistent and robust reporting platform, aka Transaction Data Store (TDS).

The source systems feeds must be prepared prior to the upload to the Accounting Hub. Transformations include:

  • standardization - mapping source system attributes to a common list of values. For example, 0 and N = No; and 1 and Y = Yes.
  • summarization - merging many source system lines using a subset of attributes and facts (i.e. amounts and counts).
  • derived "records", derived fields, sequences, etc.
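As a minimal, generic sketch of the first two transformations (the column names and values below are hypothetical, not part of the actual customer feeds), the same logic could be expressed with pandas:

import pandas as pd

# Hypothetical source feed extract: raw flag values and amounts.
src = pd.DataFrame({
    "deal_id":    ["D1", "D1", "D2", "D3"],
    "active_flg": ["1", "Y", "0", "N"],
    "amount":     [100.0, 250.0, 75.0, 30.0],
})

# Standardization: map raw flag values to a common list of values.
src["active_flg"] = src["active_flg"].map({"0": "No", "N": "No", "1": "Yes", "Y": "Yes"})

# Summarization: merge many source lines using a subset of attributes and facts.
summary = (src.groupby(["deal_id", "active_flg"], as_index=False)
              .agg(amount=("amount", "sum"), line_count=("amount", "count")))
print(summary)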

Value Proposition

In addition to the existing FAW value proposition (reporting tools, extensibility, pre-defined subject areas, etc.), the FAW FAH Integration architecture offers a single platform for pre-processing, enrichment and reporting. It eliminates data migration from the TDS to the enrichment and / or reporting platform, and the associated reconciliation. The end-to-end process is faster, more reliable and cheaper.

Architecture Blueprint

The FAW to FAH Architecture Blueprint is depicted in Figure 2 below. This architecture has a set of prerequisites:

  • Having a clear Master Data Management strategy
  • Having derived the Metadata Object Model for Financials products from the Master Data Management strategy, including mapping from source systems data elements to FAH data elements (such as Accounts, Products, Party, Events, FX rates lookups and mappings)
  • Having defined the functional configuration in FAH applications according to best practices 

 


Figure 2: FAW FAH Integration Reference Architecture Blueprint

The high-level data flow steps are as follows:

  1. Loading source transactions and instrument balances into FAW tables (staging tables and instrument balances).
  2. Based on the Metadata Object Model, invoking the process to cleanse, validate and re-format the source transactions and instrument balances in preparation for their upload to the Accounting Hub. The results will be "stored" in AH Input Analyses. The architecture technical components will capture and process any errors that occur.
  3. Extracting AH Input Analyses from FAW Staging tables, generating Accounting Hub input files and invoking the Accounting Engine.
  4. The Accounting Engine will process the transactions to generate detailed journal entries and financial balances. The architecture technical components will invoke SR balance calculations.
  5. FAW Data Pipeline will load the Accounting Hub results (detailed journal entries and financial balances) into FAW ADW tables.
  6. FAW Data Pipeline will combine Accounting Hub detailed journal entries and financial balances with source transactions (staging tables and instrument balances). This process is known as enrichment. The resulting enriched source system transactions and financial balances will be available for further processing by Financial Services Industry specific analytical applications and operational reporting (FAW OAC).

In this blog, we will leverage Oracle Data Integrator (ODI) on the Marketplace and the AHC_Loan FAH Application. In a subsequent blog, we will use the OCI native, serverless Data Integration service to implement this inbound architecture.


Figure 3: AHC_Loan FAH Application

 Standardization and Grouping of Source Systems Data in FAW ADW

Loading Source Systems Data to OCI

The first step in the Inbound data flow is to onboard the Source System data into OCI. The best route to do so is to copy the files to an OCI Object Storage bucket.

This document describes how to put files in object storage. The example in this blog covers:

  • a transactions header file named SRC_IN_XlaTrxH.csv
  • a transactions lines file named SRC_IN_XlaTrxL.csv

The following picture shows the two files in a bucket named bucket-faw-fah:


Figure 4: Source System Files in OCI Object Storage Bucket
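For reference, copying the two files into the bucket can also be scripted with the OCI Python SDK. The sketch below assumes a standard ~/.oci/config profile and only illustrates the call sequence:

import oci

config = oci.config.from_file()                     # assumes ~/.oci/config, DEFAULT profile
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

for file_name in ["SRC_IN_XlaTrxH.csv", "SRC_IN_XlaTrxL.csv"]:
    with open(file_name, "rb") as f:
        # Stream each source file into the bucket-faw-fah bucket.
        object_storage.put_object(namespace, "bucket-faw-fah", file_name, f)
        print("Uploaded", file_name)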

Cleansing, Validating and re-formatting Source Systems Data into FAH Data Format

Once the data is available in OCI Object Storage bucket, the cleansing, validation and transformation process can start. Oracle Data Integrator (ODI) has tools to get files from an object storage bucket and copy them locally to the ODI compute instance file system. Please refer to this document to land the object storage file to a local folder on the ODI compute instance.

A sample command to download the files from the bucket-faw-fah bucket to the local file system (LS_FAH logical schema in the ODI topology):

OdiObjectStorageDownload "-SRC_LOGICAL_SCHEMA=bucket-faw-fah" "-TRG_LOGICAL_SCHEMA=LS_FAH" "-FILE_NAMES_FILTER=SRC_IN_XlaTrx*" "-OVERWRITE=YES"


Figure 5: OdiObjectStorageDownload Command

Data quality usually covers two processes:

  1. Data profiling: analyzes the frequency of data values and patterns in your existing data based on predefined rules and rules you define.
  2. Data cleansing: validates and modifies data based on predefined rules and rules you define, deleting duplicate, invalid, incorrectly formatted or incomplete data based on the data profiling process output.
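As a simple, generic illustration of the profiling step (not part of the ODI project described here), value frequencies and null counts can be inspected with pandas before the cleansing rules are defined:

import pandas as pd

# Profile the downloaded header file: null counts and value frequencies per column.
headers = pd.read_csv("SRC_IN_XlaTrxH.csv")
for column in headers.columns:
    print(column, "-", headers[column].isna().sum(), "nulls")
    print(headers[column].value_counts(dropna=False).head())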

The steps below rely heavily on the customer's master data definition: having all these functional rules available in database objects (stored procedures, mapping and lookup tables, ...) allows Oracle Data Integrator to apply the rules to profile and cleanse the source system data.

All the database objects will be created in a custom schema named FAW_FAH_STG in the FAW ADW instance.

This section describes a simple transformation step where we map the products from the source system data to the products expected by FAH.

 a) In ODI Studio / Designer, we have defined a FAW-FAH-Integration project. Within this project, under the 01.Load and Transform SRC Data to FAW_FAH_STG folder, the package FAH_PKG_LOAD_TRANSFROM_SRC_TO_FAW_FAH_STG will run two ODI Mappings:

  1. 00_MAP_LOAD_XLA_TRX_H_STG 
  2. 00_MAP_LOAD_XLA_TRX_L_STG


Figure 6: Load and Transform to FAW_FAH_STG schema package

b) The 00_MAP_LOAD_XLA_TRX_H_STG ODI Mapping will load (File to Oracle LKM and Oracle Insert IKM) the Header Source file data SRC_IN_XlaTrxH.csv from the local file system into a temporary table XLA_TRX_H_TMP_IN in the FAW_FAH_STG schema. It then joins this temp table with a product lookup table XLA_FAH_PRD_LKP to map the source products to the FAH product (here LOAN_TYPE) and loads all other columns into the table XLA_TRX_H_STG.


Figure 7: Load Headers to FAW_FAH_STG schema Mapping
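The join logic performed by this mapping is conceptually equivalent to the SQL sketched below, shown here executed with python-oracledb against the FAW_FAH_STG schema. The column names other than LOAN_TYPE, as well as the connection details, are assumptions for illustration only:

import oracledb

# Placeholder connection details for the FAW ADW instance.
conn = oracledb.connect(user="FAW_FAH_STG", password="<password>", dsn="<adw_tns_alias>")

# Join the temporary header table with the product lookup to derive LOAN_TYPE,
# then load the result into the header staging table (column lists are illustrative).
insert_sql = """
    INSERT INTO XLA_TRX_H_STG (TRX_ID, LOAN_TYPE, TRX_DATE, TRX_AMOUNT)
    SELECT t.TRX_ID, l.LOAN_TYPE, t.TRX_DATE, t.TRX_AMOUNT
      FROM XLA_TRX_H_TMP_IN t
      JOIN XLA_FAH_PRD_LKP  l
        ON l.SRC_PRODUCT_CODE = t.PRODUCT_CODE
"""
with conn.cursor() as cur:
    cur.execute(insert_sql)
conn.commit()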

 c) The 00_MAP_LOAD_XLA_TRX_L_STG ODI Mapping will load (File to Oracle LKM and Oracle Insert IKM) the Lines Source file data SRC_IN_XlaTrxL.csv from the local file system into the XLA_TRX_L_STG table without any transformation, for the simplicity of the example.


Figure 8: Load Lines to FAW_FAH_STG schema Mapping

d) Once run, the tables are populated in the FAW_FAH_STG schema.


Figure 9: FAW Staging tables for Headers and Lines

 Preparing Standardized Data to load to FAH

Now that the Source System data is loaded, cleansed and transformed into FAW ADW tables, we need to extract it into flat files and package them so they can be loaded into FAH.

Extracting AH Input Analyses from FAW Staging tables

Extracting the AH Input Analyses (Header and Lines) to comma separated value files (CSV) is done by the ODI package FAH_PKG_EXTRACT_HEADER_LINES :

a)  Within the FAW-FAH-Integration project, under the 02.Extract From FAW_FAH_STG and Load to FAH folder, the package FAH_PKG_EXTRACT_HEADER_LINES will run two ODI Mappings :

  1. 00_Extract_XLA_TRX_H_STG 
  2. 00_Extract_XLA_TRX_L_STG


Figure 10: Extracting AH Input Analyses - ODI Package

 

b) Both Mappings extract the Header and Lines tables to their corresponding CSV files, using the SQL to File LKM.


Figure 11: Extracting AH Input Analyses - ODI Mappings 
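Outside ODI, the same extraction could be sketched with python-oracledb and the csv module. The connection details are placeholders, and the header row should be adjusted to match the FAH template layout:

import csv
import oracledb

conn = oracledb.connect(user="FAW_FAH_STG", password="<password>", dsn="<adw_tns_alias>")

# Extract the header staging table to the CSV file expected by FAH.
with conn.cursor() as cur, open("/u01/oracle/fah/XlaTrxH.csv", "w", newline="") as out:
    cur.execute("SELECT * FROM XLA_TRX_H_STG")
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # column names
    writer.writerows(cur)                                  # data rows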

c) When these two steps are executed, the CSV files are generated, in our case under the /u01/oracle/fah folder: the headers file is XlaTrxH.csv and the lines file is XlaTrxL.csv. We will review the Operator logs in the next sub-section, as the FAH file preparation step runs in the same package as the extraction.


Figure 12: Extracting AH Input Analyses - ODI target files

Preparing Accounting Hub input files

When loading data into FAH, the file needs to adhere to a certain set of rules and requirements:

  • The final file to upload to FAH needs to be a compressed zip file
  • The name of the final file to upload to FAH needs to start with XlaTransaction
  • The final file to upload to FAH needs to contain three files:
    • The transaction header file named XlaTrxH.csv
    • The transaction lines file named XlaTrxL.csv
    • The metadata definition file named Metadata_LOAN.txt. The picture below details the metadata definition file content


Figure 13: Extracting AH Input Analyses - compressed zip file

To download a template file for FAH data upload from Fusion, follow the steps below:

a)  In Fusion, navigate to Home > Others > Setup and Maintenance > Fusion Accounting Hub and under Accounting Transformation > Manage Subledger application select the AHC_Loan Application


Figure 14: Manage AHC Loan

b)  Click Apply and Go to Task


Figure 15: AHC Loan apply and go

 

c)  Click on Download template 


Figure 16: Extracting AH Input Analyses - Download template

 

Back in ODI, 

a)  Within the FAW-FAH-Integration project, under the 02.Extract From FAW_FAH_STG and Load to FAH folder, the package FAH_PKG_EXTRACT_HEADER_LINES, after running the two mappings, will run a bash script via an OdiOsCommand that compresses the two header and lines CSV files along with the metadata definition file and converts the zip file to base64 so it can be used in a subsequent step of this blog:


Figure 17: Generate FAH Compressed Zip and b64 file from ODI
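For illustration, a minimal Python equivalent of that packaging step (the zip file name is illustrative; it only has to start with XlaTransaction, and the three file names follow the rules listed above):

import base64
import zipfile
from pathlib import Path

work_dir = Path("/u01/oracle/fah")
zip_path = work_dir / "XlaTransaction_AHC_Loan.zip"

# Package the header, lines and metadata definition files into one compressed zip.
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for name in ["XlaTrxH.csv", "XlaTrxL.csv", "Metadata_LOAN.txt"]:
        zf.write(work_dir / name, arcname=name)

# Base64-encode the zip so it can be embedded in the uploadFileToUCM payload.
encoded = base64.b64encode(zip_path.read_bytes()).decode("ascii")
Path(str(zip_path) + ".b64").write_text(encoded)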

b)  When running the package from ODI, we can review the logs in Operator. Figure 13 above shows the zip and b64 files.


Figure 18: Extracting AH Input Analyses - Operator logs in ODI

 Loading AH Input Analyses and Invoking the Accounting Engine

Now that the FAH zip file is prepared with cleansed and formatted data, it is ready to be uploaded to FAH, after which the Accounting Engine can be invoked so the subledger journal can be updated. This is achieved via three steps and three sequential REST API calls against the /fscmRestApi/resources/11.13.18.05/erpintegrations endpoint:

  1. Loading the FAH prepared file to the Universal Content Management (UCM)
  2. Submitting the Enterprise Scheduler Service (ESS) job to import the file to Accounting Hub and running the Accounting Engine. The job is named “Import Accounting Transactions”.
  3. Monitoring the ESS job request status for completion

The REST API call examples below are:

  • coded in Python
  • using Basic Authentication
  • called by ODI via Python scripts and OdiOsCommand

The user calling these REST APIs needs to have the following duties:

  • ORA_XLA_ACCOUNTING_HUB_INTEGRATION_DUTY
  • ORA_XLA_ACCOUNTING_HUB_INTEGRATION_DUTY_OBI
  • FIN_FUSIONACCOUNTINGHUB_IMPORT_RWD

For more details about the ERP Integrations REST APIs, please refer to this document.

Loading AH Input Analyses to FAH

The first step of the process is to load the FAH file in b64 format to UCM. This step is divided into the following sub-steps:

a)  First ODI will generate the Python script to load the file to UCM. Below is a code snippet that describes a sample script with the required payload, where:

  • The OperationName is uploadFileToUCM
  • The DocumentContent is the zip file converted to base64
  • The DocumentAccount is fin$/fusionAccountingHub$/import$
  • The ContentType has to be zip
  • The FileName is the file generated by ODI
import requests
import json

url = "https://<your saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations"

# uploadFileToUCM payload: the base64-encoded zip file and its target UCM account.
payload = json.dumps({
    "OperationName": "uploadFileToUCM",
    "DocumentContent": "UEsDBBQAAAAIAKU+J1UNeUIQlgEAAG8HAAALAAAAWGxhVHJ4SC5jc3bNlM1OwzAMx+88RR8gSFnaruvRS7MS1iRbmgDtpSow0CTYpIkLb0+yDzYUCYnDxE517PzjX+zURoOsgRquZGeaGUMVK0qmOwmCIXMSLMD8dEgrxkyjMcgpAkqVlebgOyyp1ZpJ2iClecklVN0YKpCUHRzHkwUYq7lpdqsZNGoy2edkWiAu3YfVptPeVSnY01JbGyVcyoPRUbehVLrZbXKncOrClaLbZFc77zE7snU02yzf+81nVC2eXxcbRDAh1zi/JgTVbUO5l2A07ldP66hYRrB6XbytEUmTNHXqAg0wPtU4M9ua2ak5wChGE3eTnTyq+9VHL/qXZe/uVvA7X",
    "DocumentAccount": "fin$/fusionAccountingHub$/import$",
    "ContentType": "zip",
    "FileName": "XlaTransaction_AHC_Loan20220914_69.zip",
    "DocumentId": None
})

headers = {
    'Authorization': 'Basic RklOX0lNUEw6c0FMOFZeND8===',
    'Content-Type': 'application/json'
}

# POST the uploadFileToUCM request to the ERP Integrations endpoint.
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)

b) Then ODI will execute the Python script to POST the REST request to Fusion. Below is a sample response:

{
  "OperationName": "uploadFileToUCM",
  "DocumentId": "3845042",
  "DocumentContent": "UEsDBBQAAAAIAKU+J1UNeUIQlgEAAG8HAAALAAAAWGxhVHJ4SC5jc3bNlM1OwzAMx+88RR8gSFnaruvRS7MS1iRbmgDtpSow0CTYpIkLb0+yDzYUCYnDxE517PzjX+zURoOsgRquZGeaGUMVK0qmOwmCIXMSLMD8dEgrxkyjMcgpAkqVlebgOyyp1ZpJ2iClecklVN0YKpCUHRzHkwUYq7lpdqsZNGoy2edkWiAu3YfVptPeVSnY01JbGyVcyoPRUbehVLrZbXKncOrClaLbZFc77zE7snU02yzf+81nVC2eXxcbRDAh1zi/JgTVbUO5l2A07ldP66hYRrB6XbytEUmTNHXqAg0wPtU4M9ua2ak5wChGE3eTnTyq+9VHL/qXZe/uVvA7X",
  "FileName": "XlaTransaction_AHC_Loan20220914_69.zip",
  "ContentType": "zip",
  "FileType": null,
  "DocumentAccount": "fin$/fusionAccountingHub$/import$",
  "Comments": null,
  "ProcessName": null,
  "LoadRequestId": null,
  "JobPackageName": null,
  "JobDefName": null,
  "ReqstId": null,
  "RequestStatus": null,
  "JobName": null,
  "ParameterList": null,
  "NotificationCode": null,
  "CallbackURL": null,
  "JobOptions": null,
  "StatusCode": null,
  "ESSParameters": null,
  "links": [
    {
      "rel": "self",
      "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/uploadFileToUCM",
      "name": "erpintegrations",
      "kind": "item"
    },
    {
      "rel": "canonical",
      "href": "https://<saas pod>/fscmRestApi/resources/11.13.18.05/erpintegrations/uploadFileToUCM",
      "name": "erpintegrations",
      "kind": "item"
    }
  ]
}

c) Finally, ODI will parse the REST call response using jq and persist the DocumentId, as it will be required for the next REST call.
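As a simple alternative to the jq parsing, the same value can be read directly at the end of the upload script shown in step a):

# Appended to the upload script: extract the DocumentId for the submitESSJobRequest call.
document_id = response.json()["DocumentId"]
print("UCM DocumentId:", document_id)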

d) The Fusion screenshot below shows the uploaded file in UCM, after navigating to Home > Tools > File Import and Export.


Figure 19: Extracting AH Input Analyses - File uploaded to UCM

Invoking the Accounting Engine

The second step of the process is to submit the ESS job to import the file from UCM to FAH and invoke the Accounting Engine. This is achieved in three sub-steps:

a)  With the DocumentId from the previous response, ODI will generate the Python script to submit the ESS job. Below is a snippet of the script where the payload contains:

  • The OperationName is submitESSJobRequest
  • The JobPackageName is /oracle/apps/ess/financials/subledgerAccounting/shared/
  • The JobDefName is XLATXNIMPORT
  • The ESSParameters are the DocumentId from the previous REST call response and the file name
import requests
import json

url = "https://<saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations"

# submitESSJobRequest payload: ESSParameters carries the UCM DocumentId and the zip file name.
payload = json.dumps({
    "OperationName": "submitESSJobRequest",
    "JobPackageName": "/oracle/apps/ess/financials/subledgerAccounting/shared/",
    "JobDefName": "XLATXNIMPORT",
    "ESSParameters": "3845042,XlaTransaction_AHC_Loan20220914_69.zip"
})

headers = {
    'Authorization': 'Basic RklOX0lNUEw6c0FMOFZeND8====',
    'Content-Type': 'application/json'
}

# POST the submitESSJobRequest request to the ERP Integrations endpoint.
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)

b) Then ODI will execute the Python script to POST the REST request to Fusion. Below is a sample response:

{
  "OperationName": "submitESSJobRequest",
  "DocumentId": null,
  "DocumentContent": null,
  "FileName": null,
  "ContentType": null,
  "FileType": null,
  "DocumentAccount": null,
  "Comments": null,
  "ProcessName": null,
  "LoadRequestId": null,
  "JobPackageName": "/oracle/apps/ess/financials/subledgerAccounting/shared/",
  "JobDefName": "XLATXNIMPORT",
  "ReqstId": "5182576",
  "RequestStatus": null,
  "JobName": null,
  "ParameterList": null,
  "NotificationCode": null,
  "CallbackURL": null,
  "JobOptions": null,
  "StatusCode": null,
  "ESSParameters": "3845042,XlaTransaction_AHC_Loan20220914_61.zip",
  "links": [
    {
      "rel": "self",
      "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/submitESSJobRequest",
      "name": "erpintegrations",
      "kind": "item"
    },
    {
      "rel": "canonical",
      "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/submitESSJobRequest",
      "name": "erpintegrations",
      "kind": "item"
    }
  ]
}

c) Finally, ODI will parse the REST call response using jq and persist the request id (ReqstId), as it will be required for the next REST call.

Reviewing AH Input Analyses in Fusion Accounting Hub UI

The last step is to monitor the ESS job status and validate the transactions have been posted to the subledger for AHC_Loan.

a)  To monitor the status of the ESS job, ODI will generate and execute a Python script that will GET the status of the job and poll it until it is completed. Below is a snippet of the script:

import requests

# getESSExecutionDetails: query the status of the ESS job submitted in the previous step.
url = "https://<saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations?finder=ESSExecutionDetailsRF;requestId=5182576"

headers = {
  'Authorization': 'Basic RklOX0lNUEw6alRYM2leOD8===='
}

response = requests.request("GET", url, headers=headers)
print(response.text)

 

b) Then ODI will execute the Python script to send the GET request to Fusion. Below is a sample response:

{
  "items": [
    {
      "OperationName": "getESSExecutionDetails",
      "DocumentId": null,
      "DocumentContent": null,
      "FileName": null,
      "ContentType": null,
      "FileType": null,
      "DocumentAccount": null,
      "Comments": null,
      "ProcessName": null,
      "LoadRequestId": null,
      "JobPackageName": null,
      "JobDefName": null,
      "ReqstId": "5182576",
      "RequestStatus": "{\"JOBS\":{\"JOBNAME\":\"Import Accounting Transactions\",\"JOBPATH\":\"/oracle/apps/ess/financials/subledgerAccounting/shared\",\"REQUESTID\":\"5182576\",\"STATUS\":\"SUCCEEDED\"}}",
      "JobName": null,
      "ParameterList": null,
      "NotificationCode": null,
      "CallbackURL": null,
      "JobOptions": null,
      "StatusCode": null,
      "ESSParameters": null,
      "links": [
        {
          "rel": "self",
          "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/getESSExecutionDetails",
          "name": "erpintegrations",
          "kind": "item"
        },
        {
          "rel": "canonical",
          "href": "https://saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations/getESSExecutionDetails",
          "name": "erpintegrations",
          "kind": "item"
        }
      ]
    }
  ],
  "count": 1,
  "hasMore": false,
  "limit": 25,
  "offset": 0,
  "links": [
    {
      "rel": "self",
      "href": "https://<saas pod>:443/fscmRestApi/resources/11.13.18.05/erpintegrations",
      "name": "erpintegrations",
      "kind": "collection"
    }
  ]
}
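Since RequestStatus is returned as an embedded JSON string, a minimal polling sketch that repeats the GET above until the job reaches a terminal state could look like the following (terminal states other than SUCCEEDED are assumptions to adapt to your own error handling):

import json
import time

import requests

request_id = "5182576"  # ReqstId persisted from the submitESSJobRequest response
url = ("https://<saas pod url>/fscmRestApi/resources/11.13.18.05/erpintegrations"
       "?finder=ESSExecutionDetailsRF;requestId=" + request_id)
headers = {'Authorization': 'Basic RklOX0lNUEw6alRYM2leOD8===='}

while True:
    # RequestStatus is a JSON string embedded in the first item of the response.
    items = requests.get(url, headers=headers).json()["items"]
    status = json.loads(items[0]["RequestStatus"])["JOBS"]["STATUS"]
    print("Import Accounting Transactions status:", status)
    if status in ("SUCCEEDED", "ERROR", "WARNING", "CANCELLED"):
        break
    time.sleep(30)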

c) When executing the ODI package that makes the sequential REST calls above, we can review the status in Operator.


Figure 20: Importing AH Input Analyses - REST Calls from ODI

 

d) When reviewing the status of the scheduled job in Fusion, we can confirm that the Import Accounting Transactions job is successful. We can also review any accounting warnings.


Figure 21: Importing AH Input Analyses - Status in ESS

e) One can then validate that the Journal Entries are actually posted in the Subledger module.


Figure 22: Importing AH Input Analyses - Subledger Journal Entries

This concludes the activities in the blog.

 Want to Learn More?

Click here to sign up to the RSS feed to receive notifications for when new A-team blogs are published.

 Summary

In this blog, we leveraged Oracle Data Integrator (ODI) on the Marketplace to prepare and load data to the AHC_Loan FAH Application. The high-level data flow steps were as follows:

  1. Loading source transactions and instrument balances into FAW tables (staging tables and instrument balances).
  2. Based on the Metadata Object Model, invoking the process to cleanse, validate and re-format the source transactions and instrument balances in preparation for their upload to the Accounting Hub. The results will be "stored" in AH Input Analyses. The architecture technical components will capture and process any errors that occur.
  3. Extracting AH Input Analyses from FAW Staging tables, generating Accounting Hub input files and invoking the Accounting Engine.
  4. The Accounting Engine will process the transactions to generate detailed journal entries and financial balances. The architecture technical components will invoke SR balance calculations.

In a subsequent blog, we will use the OCI native, serverless Data Integration service to implement this inbound architecture and also demonstrate the Outbound Architecture from within FAW:

  1. FAW Data Pipeline will load the Accounting Hub results (detailed journal entries and financial balances) into FAW ADW tables.
  2. FAW Data Pipeline will combine Accounting Hub detailed journal entries and financial balances with source transactions (staging tables and instrument balances). This process is known as enrichment. The resulting enriched source system transactions and financial balances will be available for further processing by Financial Services Industry specific analytical applications and operational reporting (FAW OAC).

Bookmark this post to stay up-to-date on changes made to this blog as our products evolve.

Matthieu Lombard

Consulting Solution Architect

The Oracle A-Team is a central, outbound, highly technical team of enterprise architects, solution specialists, and software engineers.

The Oracle A-Team works with external customers and Oracle partners around the globe to provide guidance on implementation best practices, architecture design reviews, troubleshooting, and how to use Oracle products to solve customer business challenges.

I focus on data integration, data warehousing, Big Data, cloud services, and analytics (BI) products. My role includes acting as the subject-matter expert on Oracle Data Integration and Analytics products and cloud services such as Oracle Data Integrator (ODI) and Oracle Analytics Cloud (OAC, OA for Fusion Apps, OAX).

