Best Practices from Oracle Development's A‑Team

  • July 30, 2021

Reference Architecture - Fusion HCM Data Replication into FAW ADW Using ODI Marketplace and HCM Extract - Part 3

Matthieu Lombard
Consulting Solution Architect


Content validated on 7/30/2021 with

  • ODI Version
  • ADW Version Oracle Database 19c Enterprise Edition Release - Production Version
  • Oracle Cloud Application 21B
  • Fusion Analytics Warehouse Application Version 21.R2.P3


Fusion HCM Analytics provides comprehensive workforce and turnover data sourced from Fusion HCM Cloud in a warehouse designed to support a broad set of analytics use cases. In some specific use cases, Fusion HCM Analytics data can be complemented with additional Payroll, Cumulative Earning – Deduction, or Costing data sets. For such data, Fusion HCM Cloud provides an extract mechanism named HCM Extracts. HCM Extracts uses BI Publisher as the delivery mechanism for extracted data.

This blog is part of a series covering how to use Oracle ETL / Data Integration tools to extract data from various data sources to complement Fusion Analytics Warehouse factory data. Future blog topics will cover extracting from Oracle SaaS applications, Oracle on-premises applications (e.g. EBS), other Oracle applications such as Responsys and Eloqua, and third-party data sources like Salesforce or Workday.

This blog walks through how to configure Oracle Data Integrator (ODI) on Oracle Cloud Marketplace to replicate data from Fusion HCM Cloud, using HCM Extracts, into an Autonomous Data Warehouse (ADW) database that is part of the FAW product.

The "Master" topic is divided into four parts:

This current article covers Part 3 - Configuring ODI Artefacts to consume the HCM Extract.

It assumes all the steps described in the Part 1 and Part 2 blogs have been executed.

Suggested Prerequisite Reading Material:

Using Oracle Cloud Marketplace

* Working with HCM Extracts 

Overall prerequisites

* Have access to a Fusion HCM Cloud instance

* Have access to an ODI Marketplace instance

* Follow the steps in the first blog for instructions on:

Policies Assignment (Step 1)

SSH key-pair Generation (Step 4-b and 4-c)

ODI Marketplace Provisioning (Step 2)

ADW Configuration (Step 5-h, 5-i, 5-j and 6-d)

 Architecture Overview

The diagram below details the architecture blueprint for this use case:

Figure 1 : Architecture overview

The data flow is as follows:

  • ODI Marketplace (MP) will be installed on a Linux VM
  • An object storage bucket will be mounted as a folder mount point on the ODI MP VM (with s3fs-fuse)
  • An SFTP server will be configured with private key authentication with the opc user on the ODI VM
  • Create and Prepare an HCM Payroll Data Extract
    • Edit/ Create HCM Extract Definition in Fusion HCM : Extract Parameters, Blocks + Filter Records, Data Elements, Validate Extract
    • Integrate HCM Extract with BIP: Create RTF template and report in BI Publisher 
    • Set Up Report Options: Define Delivery Method (SFTP) and Associate Report location in BI
    • Run / Schedule Extract using FlowActionService SOAP API to push the HCM Extract to SFTP
    • ODI MP will wait for the HCM Extract to be delivered to the SFTP before integrating it into ADW with the LKM File to Oracle
  • All additional transformations will happen inside the ADW
  • HCM extracts can be scheduled once every day
  • The semantic model will be extended by customizing an existing subject area or adding a new subject area to allow reporting on HCM data extracted into ADW, as per the CEAL Team blog available here
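The wait step in the flow above can be sketched as a simple polling loop. In ODI this is typically done with the OdiFileWait tool inside a package; the standalone Python sketch below only illustrates the same behavior, and the file path is a hypothetical location on the mounted object storage bucket.

```python
import os
import time

def wait_for_extract(path, timeout_s=3600, poll_s=30):
    """Poll until the HCM Extract file appears or the timeout expires.
    Returns True if the file was found, False on timeout."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if os.path.isfile(path):
            return True
        time.sleep(poll_s)
    return False

# Hypothetical path: the object storage bucket mounted on the ODI VM.
extract_file = "/u01/oracle/oss/s3fs-fuse-sftp-bucket/HCMX_PAYMENT_REGISTER.csv"
```

Only after the wait succeeds would the load into ADW proceed, which keeps the mapping from running against a partial or absent extract.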

 Set up ODI Topology in ODI Marketplace

In the Part 1 and Part 2 blogs, the SFTP server, BI Publisher, and Object Storage configuration was completed, as well as the HCM Extract configuration, preparation, and execution. The HCM Extract output a CSV file on object storage. Now, in order for ODI to consume the generated HCM Extract CSV file, the ODI Topology needs to be set up, specifically the File and ADW technologies. The following steps assume that ODI Studio is open and that the user is connected to the ODI Repository with a user that has access to define ODI Topology.

Create the File Physical, Logical architectures and set Global context

a)  In Topology > Physical Architecture, create a new File Data Server and name it HCM_EXTRACT_FILES. Save -> Test the connection.

Figure 2 : HCM Extract Physical Architecture overview

b) Create a Physical Schema and set the Schema and Work Schema to /u01/oracle/oss/s3fs-fuse-sftp-bucket

Figure 3 : HCM Extract Physical Schema overview

c) Create a Logical Schema and map the contexts ensuring the default context is mapped:

Figure 4 : HCM Extract Logical Schema overview

Create the ADW Physical, Logical architectures and set Global context

Refer to this blog to review your ADW topology - Steps 5 h), i) and j).

Create ODI Models and Reverse Engineer HCM Extract Data files

In the Part 2 blog, the HCM Extract generated a CSV file named HCMX_PAYMENT_REGISTER.csv in /u01/oracle/oss/s3fs-fuse-sftp-bucket. Follow the steps below to create the ODI Model and Data Store so that the HCM Extract file can be mapped as a source in ODI.

a) In ODI Designer, create a new model folder -> Create a new model. Name it HCM_Extract_Files, select the File technology and the LS_HCM_EXTRACT_FILES logical schema:

Figure 5 : HCM Extract Model

b) Create a new data store named and aliased HCMX_PAYMENT_REGISTER. Select the csv file the HCM Extract has produced.

Figure 6: HCM Extract Data Store

c) Click on the Files tab in the data store and set the format to Delimited, the Heading to 1, the Record Separator to Unix, the Field Separator to comma, the Text Delimiter to double quotes, and the Decimal Separator to the period (.).

Figure 7: HCM Extract Data Store
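The file format chosen above maps directly onto standard CSV parsing settings. The sketch below shows the same settings applied with Python's csv module; the column names and values are illustrative, not taken from an actual extract.

```python
import csv
import io

# Sample shaped like the extract: comma-delimited, double-quoted fields,
# Unix line endings, one heading row (column names are hypothetical).
sample = (
    '"PERSON_NUMBER","PAYMENT_DATE","AMOUNT"\n'
    '"1001","2021-07-30","1234.56"\n'
)

def read_extract(text):
    """Parse with the same settings chosen in the ODI data store:
    comma field separator, double-quote text delimiter, heading of one."""
    reader = csv.DictReader(io.StringIO(text), delimiter=",", quotechar='"')
    return list(reader)

rows = read_extract(sample)
```

If the parse comes back with the expected column names and row count here, the same settings should reverse engineer cleanly in ODI.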

d) Click on Attributes and reverse engineer the attributes in the CSV file header. The column sizes and data types may need to be adjusted based on the actual values of the columns (here, all columns are VARCHAR(255)).

NOTE: Because BI Publisher reports occasionally use the same names for parameter columns and actual data columns, the column names in the CSV file may need to be updated so there are no duplicates; otherwise ODI raises the error com.sunopsis.dwg.SQLWorkReposException: ORA-00001: unique constraint (HA_ODI_REPO.AK_REV_COL) violated. This operation only needs to be done once, for the initial reverse engineering.

Figure 8: HCM Extract Data Store Columns
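One way to remove the duplicate header names mentioned in the note is to suffix repeats before reverse engineering. This is a minimal sketch; the numeric suffix scheme is an assumption, and any renaming that leaves the headers unique would satisfy ODI's constraint.

```python
def dedupe_headers(headers):
    """Make duplicate CSV header names unique by appending a numeric
    suffix, so ODI's reverse engineering does not hit the AK_REV_COL
    unique-constraint violation."""
    seen = {}
    result = []
    for name in headers:
        if name in seen:
            seen[name] += 1
            result.append(f"{name}_{seen[name]}")
        else:
            seen[name] = 0
            result.append(name)
    return result
```

Rewrite the heading row of the CSV with the deduplicated names once, then reverse engineer as usual.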

 Create the ADW ODI diagram, Create target ADW Datastore and Create / Run ODI Package and Mapping

Create the ADW ODI diagram

a) Reuse the existing model from Topology defined as per the previous blog. Below are the figures for the physical, logical schemas and context mapping.

Figure 9: HCM Extract ADW Topology

b) In Designer, create a new model named ODI_DATA_HCM

Figure 10: HCM Extract ADW model

c) Create a new ADW diagram named ODI_DATA_HCM -> Drag and drop the HCMX_PAYMENT_REGISTER File datastore onto the diagram -> Click OK.

Figure 11: HCM Extract ADW Diagram

d) Rename and alias the Data store HCMX_PAYMENT_REGISTER. Enter HCMX_PAYMENT_REGISTER in the resource name and in the Alias.

Figure 12: HCM Extract ADW Data Store

e) Click on Attributes > Reverse Engineer to create the datastore on the ADW diagram.

Figure 13: HCM Extract ADW Data Store Columns

f) Click OK.

Figure 14: Save HCM Extract ADW Data Store

g) Confirm that the data store is created and appears in the ADW diagram.

Create and Run ODI Mapping

Creating the ODI Mapping

a) From Designer -> Create a new Project named HCM Data Replication With HCM Extracts. Rename the first folder to HCM Extract BIP to OSS. Under the folder -> Create a new Mapping named Load_HCMX_PAYMENT_REGISTER.

Figure 15:  HCM Extract Project, Folder and Mapping

b) Drag and drop the File HCMX_PAYMENT_REGISTER datastore on the left side of the Mapping diagram. Drag and drop the ADW HCMX_PAYMENT_REGISTER datastore on the right side of the Mapping diagram.

c) Connect the two datastores -> select "Match Options" = "By Name".

Figure 16:  Map source and target HCM Extract data stores columns
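The "Match Options = By Name" auto-mapping above pairs source and target attributes by name equality. The sketch below approximates that behavior (assuming case-insensitive matching, and leaving unmatched columns unmapped, which is how the auto-map behaves when names differ).

```python
def match_by_name(source_cols, target_cols):
    """Pair source attributes with target attributes whose names match,
    case-insensitively; columns without a same-named counterpart are
    simply left out of the mapping."""
    targets = {c.upper(): c for c in target_cols}
    return {s: targets[s.upper()] for s in source_cols if s.upper() in targets}
```

Since both datastores here were reverse engineered from the same CSV header, every column should find its counterpart.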

d) Click on the Physical implementation -> Select the first element in the target group. Confirm the Loading Knowledge Module is set to LKM File to Oracle (Built in).GLOBAL:

Figure 17:  Define LKM

e) Select the second element in the TARGET_GROUP -> Confirm the IKM is set to IKM Oracle Insert.GLOBAL -> Ensure the CREATE_TARGET_TABLE and TRUNCATE_TARGET_TABLE options are set to true.

Figure 18:  Define IKM
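With CREATE_TARGET_TABLE and TRUNCATE_TARGET_TABLE both true, each run creates the target if it is missing, empties it, and reloads the full extract. The sketch below mimics that load pattern; SQLite stands in for ADW purely for illustration, and the two-column table shape is an assumption.

```python
import sqlite3

def load_rows(conn, rows):
    """Mimic the effect of IKM Oracle Insert with CREATE_TARGET_TABLE and
    TRUNCATE_TARGET_TABLE set to true: create the target if absent, empty
    it, then insert the new batch."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS HCMX_PAYMENT_REGISTER "
        "(PERSON_NUMBER TEXT, AMOUNT TEXT)"
    )
    conn.execute("DELETE FROM HCMX_PAYMENT_REGISTER")  # TRUNCATE equivalent
    conn.executemany("INSERT INTO HCMX_PAYMENT_REGISTER VALUES (?, ?)", rows)
    conn.commit()
```

This full-refresh pattern suits a daily extract: the ADW table always reflects the latest file, with no incremental merge logic to maintain.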

f) Click on the Logical tab and view the data on the HCMX_PAYMENT_REGISTER source. Save the mapping.

Figure 19: View source data

Running the ODI Mapping

a) Run the mapping and confirm the successful execution.

Figure 20: Execute HCM Extract File to ADW load mapping

b) Verify that the table has been created in ADW and that the HCM Extract data was loaded.

Figure 21: View ADW table with HCM Extract File data loaded

This concludes the activities in the blog.

 Want to Learn More?

Click here for more A-Team Oracle Data Integrator (ODI) Blogs.

Click here to sign up to the RSS feed to receive notifications for when new A-team blogs are published.

Click here to access the ODI Marketplace documentation library.


This article walked through the steps to configure Oracle Data Integrator (ODI) on Oracle Cloud Marketplace to replicate data from HCM Extracts into an Autonomous Data Warehouse (ADW) database.

Bookmark this post to stay up-to-date on changes made to this blog as our products evolve.
