How to perform a Technical Setup for HR2HR Integration in Fusion HCM on-premise


HCM Coexistence is an offering in Fusion Apps that provides tight integration between two HCM systems. Synchronization between the two systems is performed via a standard interface: Fusion Apps provides an extract of metadata first, and the mapped data from the source system is imported back second. This interface can be used for the initial data load as well as scheduled for incremental updates via periodic synchronization processes. The existing pre-defined integration solution is called HR2HR Integration and is used by existing customers. For upcoming customer implementations, starting with Release 5, a newer solution called File Based Loader (FBL) will replace it.

Main Article

In this post we focus on and summarize the activities to set up HR2HR Integration. Another post about the technical FBL setup will follow in a while, and a quick overview of FBL can be found at the bottom of this post.

The implementation of the HCM Coexistence offering is fully documented in the Fusion Apps Doc Library (follow the links for the appropriate release like 11.1.6, then the Human Capital Management tab, finally open the document named Oracle Fusion Applications Coexistence for HCM Implementation Guide), which covers both the functional and the technical integration steps. Other helpful technical information can be found in the post-installation doc for Fusion Apps under the section Human Capital Management. Last but not least, this post is intended to give a comprehensive overview of the technical HR2HR setup as part of the HCM Coexistence offering.

HCM Coexistence implementation at a glance

As stated above, HCM Coexistence uses a predefined integration between Fusion HCM and a source system like Peoplesoft. It allows existing customers of E-Business Suite or Peoplesoft to use the best of both worlds: continue using a well-established HCM system in production while implementing new functionality in Fusion HCM like Talent Management. Furthermore, HCM Coexistence allows the smooth adoption of other product offerings in Fusion Apps like Fusion Product Data Management, Fusion Accounting Hub and others, as core HCM data can be synced between the source system and Fusion HCM.

1) HCM Coexistence Functionality

The integration between both systems is CSV file based and uses an FTP server for data exchange. Fusion HCM invokes an outbound interface for sharing metadata with the source system. These are implementation-specific values for enterprise structures to be used for reference purposes on the source system side. Once the data extract and mapping have been finished on the HCM source side, the loader files are transferred back to the FTP server. The Fusion HCM inbound transaction must then be triggered via a web service invocation to load the data into Fusion HCM.

2) HCM Integration Overview

Skill profiles used in implementation process

There are at least three different skill sets required in the implementation of the HCM Coexistence offering, where the setup of HR2HR Integration is just one task amongst many others. A categorization of these skill sets could look like this:

  • Functional Setup Specialist with knowledge to maintain the Enterprise Structures and performing a Business Setup for HCM in Fusion Apps
  • Data Exchange Specialist with technical and functional skills about underlying HCM Business Objects and their usage in Fusion Apps
  • Technical Setup Specialist especially for HR2HR Integration components setup and maintenance (ODI, FTP, SOA etc)

These three roles can partially overlap, so this is only intended as rough guidance for implementation task planning. In reality it's up to the implementing individuals to fulfill one or more roles.

Functional Setup Specialist

These skills are required to perform the functional bootstrapping on a freshly installed Fusion Apps instance. Many of these tasks are common implementation steps and independent of the offering to set up. In essence the list of activities looks like this, and they are documented in MOS note with document ID 1395863.1:

  • Preparing the Oracle Fusion Applications Super User for User Management and Configuration (perhaps some technical skills or support required for this task too)
  • Preparing the IT Security Manager Role for User and Role Management (perhaps some technical skills or support required for this task too)
  • Generating the Setup Task List by using the HCM for Coexistence offering
  • Defining Implementation Users
  • Setting Up Basic Enterprise Structures like Locations, Legal Addresses, Business Units, Legal Entities, Jobs etc – in a coexistence scenario this setup must be mappable with the according structures in HCM source system
  • Defining Application Users as the process owners later to run the coexistence interface like extraction and load

The implementation user must set up many more HCM-specific structures and processes depending on the features used in Fusion HCM, like Talent Management or Workforce Compensation. These details aren't covered here. If not the same individual, this specialist must cooperate closely with the corresponding specialists on the HCM source system like Peoplesoft or EBS. This specialist will later test and run the HR2HR Integration in close cooperation with the Data Exchange Specialist.

Data Exchange Specialist

This skill set is required to perform the HCM data mapping for initial or incremental data loads. The files being exchanged are in CSV format and can be bundled in zip files with a tree structure for the different entity groups like Person, Salary, Job Family, Job etc. Such structures depend on the implemented business process and might vary in detail between Talent Management and Workforce Compensation. A Data Exchange Specialist must be able to set up the mapping between the values in the HCM source system and Fusion HCM. As this is a tight integration between two HCM systems, the values are matched directly by unique keys (BUSINESS_GROUP_ID, ORGANIZATION_ID etc). Once the core functional setup in Fusion HCM is done, an extract of HCM configuration parameters must be sent to the HCM source system to announce the unique identifiers used in communication (see figure #2 above, steps 1, 2 and 3). The Data Exchange Specialist must be able to manage data extraction, data mapping and import/loader fault handling. This role therefore requires technical (Business Object representation in DB, XML, CSV etc) and functional (meaning of Business Objects and their relations to each other) knowledge. The Data Exchange Specialist will usually start with these activities once a core HCM setup has been entered in the system.

Technical Setup Specialist

The technical setup of HR2HR Integration requires some deep technical skills in Fusion Applications, Fusion Middleware and Fusion Apps system administration. Compared to the two skill sets above, the technical integration setup has a well-defined scope in terms of activities and duration. It's not required to start these tasks at the very beginning of an implementation cycle, as many of these setup steps can run in isolation from the functional setup. The main topic of this post is the technical HR2HR Integration setup, so the sections below go into more detail.

HR2HR Integration overview

Involved technologies

The technical integration setup is mostly a set of activities directly on the Fusion Apps host. There are three technology components involved:

  • SOA Suite with BPEL Adapter for FTP
  • Oracle Data Integrator (ODI)
  • FTP server on a remote host

The SOA and ODI components reside in Fusion Apps while the FTP server is an external component. If there is already an existing server up and running, this task can be skipped and the appropriate configuration parameters can be taken from there. The details of the technical HR2HR Integration setup are described in the next chapter.

Integration process flow

This integration process implements a tight integration between two HCM systems, as mentioned earlier. In contrast to AIA, this solution doesn't make use of any abstraction layer, as the source and target system are closely linked with each other and run in-house. In essence it is a data synchronization between two HCM systems sharing the same set of entities and specialized in different features. The process runs through these logical steps:

  • Extract of Fusion HCM metadata and transfer the set of CSV files to source system via FTP (manually triggered)
  • Make use of the Fusion HCM reference data in source system and create a mapping between entities in both systems
  • Create extraction tools to export data and create import files for Fusion HCM including the data mapping
  • Trigger the extraction of initial data in source system and transfer these import files to a common area on FTP server
  • Call a web service in Fusion HCM to initiate inbound data transfer and data load/import
  • Rerun this data load on Fusion HCM side via ESS in case of errors or per Business Object to ensure right order for logical object relations (manually triggered)
  • Setup a repetitive process in HCM source system to feed Fusion HCM with updates periodically (automated process)

The current integration solution expects a sub directory E_1 on the FTP server and in the ODI staging area in the Fusion Apps file system. Creating it is a fixed setup step when preparing the FTP server and the ODI staging area.
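The FTP-side part of this fixed setup step can be sketched as a single command. The path /home/ftp_hr_soa is the sample FTP_ROOT used in this post; the fallback to a scratch directory is only there so the sketch can run outside a real environment.

```shell
# Prepare the mandatory E_1 sub directory under the FTP exchange root.
# On a real server set FTP_ROOT to the actual exchange directory,
# e.g. FTP_ROOT=/home/ftp_hr_soa (sample value from this post).
FTP_ROOT="${FTP_ROOT:-$(mktemp -d)}"
mkdir -p "$FTP_ROOT/E_1"
ls -ld "$FTP_ROOT/E_1"
```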

3) HR2HR Integration Data Flow

Technical Setup HR2HR Integration

The sections below describe the technical setup steps for HR2HR Integration, sorted by component:

  • external FTP Server
  • Oracle Data Integrator (Fusion Apps Tech Stack)
  • SOA (Fusion Apps Tech Stack)
  • Fusion Apps HCM Functional Setup Manager

For some parameter entries below, common placeholders are used:

  • FA_TOP is the top directory for the Fusion Apps installation (above sub directory fusionapps)
  • FA_INSTANCE_DIR is the top directory for FA domain and OHS configurations
  • FTP_ROOT in this doc is the remote top directory for FTP data exchange, like /home/ftp_hr_soa as shown in some samples and screenshots below

Setup FTP Server

If a remote FTP server for data exchange already exists, it is probably sufficient to create a specific user for HR2HR Integration and a dedicated exchange directory. Secure FTP is recommended but not required.
There are multiple options for installing an FTP server; for the sample below the package vsftpd has been chosen. It comes as an optional package in Oracle Enterprise Linux but is also available for other platforms. Alternative packages include proftpd and others. Given the sample below, we find some config files to edit as super user to configure vsftpd. After package installation the config files are located under /etc/vsftpd. The FTP daemon can be configured to start automatically when the system comes up. File user_list is a positive list of users that grants them access rights explicitly. File ftpusers contains a list of users who are not allowed to connect via (s)ftp.

4) FTP Server vsftpd – Config Files

The main configuration file is named vsftpd.conf. It is used to set all sorts of configuration parameters for vsftpd. In the sample below the highlighted value configures a connection from port 20, which only makes sense when using an insecure plain FTP connection.

5) FTP Server Settings in vsftpd.conf
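Since the screenshot is not reproduced here, a minimal vsftpd.conf fragment along the lines discussed above might look like the following. The specific values are assumptions for illustration, not a hardened configuration; adjust them to your security policy.

```
# /etc/vsftpd/vsftpd.conf - minimal sketch, values are assumptions
listen=YES                 # run standalone
anonymous_enable=NO        # only real users may connect
local_enable=YES
write_enable=YES           # the integration user must upload files
userlist_enable=YES        # only users listed in user_list may connect
userlist_deny=NO           # interpret user_list as a positive list
connect_from_port_20=YES   # highlighted value: only relevant for plain (insecure) ftp
```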

Setup Oracle Data Integrator

Database Setup

Check in a database client whether user FUSION_ODI_STAGE has select rights on FUSION.PER_ROLES_DN. Secondly, check whether a synonym FUSION_ODI_STAGE.PER_ROLES_DN for FUSION.PER_ROLES_DN exists. If not, run the SQL statement below:
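The statement itself did not survive in this post. Based on the schema, table and synonym names given in the text, it presumably looked like the following sketch; verify the object names on your instance before running it.

```sql
-- Grant read access on the Fusion table to the ODI staging schema
GRANT SELECT ON FUSION.PER_ROLES_DN TO FUSION_ODI_STAGE;

-- Create the synonym in the staging schema pointing at the Fusion table
CREATE SYNONYM FUSION_ODI_STAGE.PER_ROLES_DN FOR FUSION.PER_ROLES_DN;
```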


Staging Directory in file system

Create an ODI staging directory in your file system according to the settings you've registered earlier. In the sample below the value is /u01/app/oracle/fa.nfs/Integration/ODI_FILE_ROOT_HCM/ and it shows the directory content after the integration is in place. Don't forget to create a sub directory E_1, as HR2HR Integration requires this structure.

6) HR2HR ODI Staging Tree with data samples
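Creating the staging tree is a one-liner. The path below is the sample value from this post; the fallback to a scratch directory is only there so the sketch can run outside a real environment.

```shell
# Create the ODI staging root including the mandatory E_1 sub directory.
# On a real system set STAGE to the registered value, e.g.
# STAGE=/u01/app/oracle/fa.nfs/Integration/ODI_FILE_ROOT_HCM (sample from this post).
STAGE="${STAGE:-$(mktemp -d)/ODI_FILE_ROOT_HCM}"
mkdir -p "$STAGE/E_1"
```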

ODI Config Tool

The ODI configuration for HR2HR Integration is deployed via a Python script. It can be obtained Oracle-internally via this link and comes as a file hcmPythonUtil.jar. In a first step we have to extract the file hcmPythonUtil.jar, followed by some editing of configuration entries in various files, before running the deployment script.
File config/jps-config.xml must be edited with the correct IDM information; change the values in lines 55, 57, 59, 63, 69 below as documented inline. This applies the IDM config data of the customer environment to ODI, as it is needed at runtime.

<?xml version="1.0" encoding="UTF-8" standalone='yes'?>
<jpsConfig xmlns="" xmlns:xsi="" xsi:schemaLocation="" schema-major-version="11" schema-minor-version="1">
    <!-- This property is for jaas mode. Possible values are "off", "doas" and "doasprivileged" -->
    <property name="" value="off"/>

    <!-- SAML Trusted Issuer -->
    <propertySet name="saml.trusted.issuers.1">
        <property name="name" value=""/>
    </propertySet>

    <serviceProvider type="CREDENTIAL_STORE" name="credstoressp" class="">
        <description>SecretStore-based CSF provider</description>
    </serviceProvider>

    <!-- add ldap provider -->
    <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider">
        <description>LDAP-based IdentityStore Provider</description>
    </serviceProvider>

    <serviceProvider type="IDENTITY_STORE" name="idstore.xml.provider" class="">
        <description>XML-based IdStore Provider</description>
    </serviceProvider>

    <serviceProvider type="POLICY_STORE" name="policystore.xml.provider" class="">
        <description>XML-based PolicyStore Provider</description>
    </serviceProvider>

    <serviceProvider type="LOGIN" name="jaas.login.provider" class="">
        <description>This is Jaas Login Service Provider and is used to configure login module service instances</description>
    </serviceProvider>

    <serviceProvider type="KEY_STORE" name="keystore.provider" class="">
        <description>PKI Based Keystore Provider</description>
        <property name="" value="owsm"/>
    </serviceProvider>

    <serviceProvider type="AUDIT" name="audit.provider" class="">
        <description>Audit Service</description>
    </serviceProvider>

    <serviceInstance name="credstore" provider="credstoressp" location="./">
        <description>File Based Credential Store Service Instance</description>
    </serviceInstance>

    <!-- JPS OID LDAP Identity Store Service Instance -->
    <serviceInstance name="idstore.oid" provider="idstore.ldap.provider">
        <property name="" value="dc=access_path" /> <!-- Enter access path for your top-level domain component, sample "dc=mycompany,dc=com" -->
        <property name="idstore.type" value="OID" />
        <property name="cleartext.ldap.credentials" value="cn=OID_Super_User:Password"/> <!-- Enter your values for OID_Super_User and Password, sample "cn=orcladmin:Secret" -->
        <property name="ldap.url" value="ldap://oid_host_name:oid_port" /> <!-- Enter your values for LDAP host name and port -->
        <value>cn=user_name,dc=access_path</value> <!-- top level access path for users, sample "cn=Users,dc=mycompany,dc=com" -->
        <value>cn=group_name,dc=access_path</value> <!-- top level access path for groups, sample "cn=Groups,dc=mycompany,dc=com" -->
        <property name="username.attr" value="uid" />
        <property name="groupname.attr" value="cn" />
    </serviceInstance>
</jpsConfig>

The next step is to modify the properties for the ODI configuration in file hcmConfig.prop as listed below. Change the values in lines 20, 29, 32, 41, 43, 58, 67, 69, 71, 74, 87, 97, 98, 101 and 102 by replacing the placeholders with installation-specific values from the customer's environment.

# Copyright (c) 2009, 2010, Oracle and/or its affiliates. All rights reserved.
#Title                  :    hcmConfig.prop
#Description            :    
#Author                 :    Srinivas Nachuri
#Date                   :    20120131
#Version                :    1.0.0
#Usage                  :    
#Notes                  :
#Python_version         :    2.6.6

# Version History
# -------------------------------------------------------------------------------
# snachuri 01/31/2012 - creation
# -------------------------------------------------------------------------------

#HCM WLS Instance home

# Update the updateMode flag. Default is readonly.
# read from cmd line args
# default value false, to update set the value to true

#Location where the fusion techstack is present

#Location where the fusion techstack is present

# jps config file location

# ODI Config Specifics

#ODI Master Repository

#ODI Work repository
#Format - Oracle JDBC Thin driver 11g 
# Non-RAC
# jdbc:oracle:thin:@host:port:sid 
# For RAC
#    (HOST=lcqsol25)(PORT=1521))(FAILOVER=on)(LOAD_BALANCE=off))
# Reference :



# ODI Configure Database Connections
# Provide the DB Schema details for the HCMDomain
#ODI DB link Value
#Format - <host>:<port>/<instance_sid_name>

# ODI JDBC URL value
#Format - Oracle JDBC Thin driver 11g 
# Non-RAC
# jdbc:oracle:thin:@host:port:sid 
# For RAC
#    (HOST=lcqsol25)(PORT=1521))(FAILOVER=on)(LOAD_BALANCE=off))
# Reference :



# ODI File Connections
# The File connection value corresponds to the physical directory on 
# the HcmDomain host server hosting the ODI managed server.
# Ensure the physical directories exists.



One clarification might be needed: how to obtain the password for user FUSION_APPS_HCM_ODI_SUPERVISOR_APPID in line 43 above? This value is internal and set by the provisioning tool when installing Fusion Apps.
Run the following commands in a shell and WLST to see the password in clear text. Copy the password and keep it in your (secure) records! It will be needed for editing file hcmConfig.prop, but also later when configuring HCM Data Extraction inside Fusion Apps (see below in this post).

$ sh <FA_TOP>/fusionapps/oracle_common/common/bin/ 
wls:/offline> connect('fa_super_user', 'password', 't3://<hcmDomainAdminServerHostName>:<port>') 
wls:/HCMDomain/serverConfig> listCred(map="",key="FUSION_APPS_HCM_ODI_SUPERVISOR_APPID-KEY")

Sample output:

[Name : FUSION_APPS_HCM_ODI_SUPERVISOR_APPID, Description : Identifies roles with elevated access aimed at developers to help achieve code
based access control that is beyond the access of the current operator to manage batch applications supporting the enterprise data warehouse
with supervisor privileges., expiry Date : null]

According to the script above, the password for FUSION_APPS_HCM_ODI_SUPERVISOR_APPID is V31e,sruohqnzb. Please notice that this password is installation specific and must be obtained for each Fusion Apps instance.
Declare a variable ODI_ORACLE_HOME pointing to <FA_TOP>/odi, as we use it in the scripts below.
Change the value in line 30 of the script below to reflect the specific path value in your environment.


# Copyright (c) 2009, 2010, Oracle and/or its affiliates. All rights reserved.
#Title                  :
#Description            :    
#Author                 :    Srinivas Nachuri
#Date                   :    20120131
#Version                :    1.0.0
#Usage                  :    
#Notes                  :       This script intializes the HcmDomain env values based on the 
#Python_version         :    2.6.6

# Determine the location of this script...
case ${SCRIPTNAME} in
 /*)  SCRIPTPATH=`dirname "${SCRIPTNAME}"` ;;
  *)  SCRIPTPATH=`dirname "${mypwd}/${SCRIPTNAME}"` ;;
esac

# Modify the path of the HcmDomain if needed to match the env.
. <FA_INSTANCE_DIR>/domains/

# Test Value should uncommented/removed
#. /scratch2/mw_local/FMWTOOLS_11.

# Change the working directory to the HCM script location

java  -cp "$ODI_ORACLE_HOME/oracledi.sdk/lib/*" org.python.util.jython $SCRIPTPATH/ $SCRIPTPATH/hcmConfig.prop

Run the script above twice. First with parameter updateMode=false (file hcmConfig.prop above, line 26). Check log file viewodiconfig.log for errors. If no errors exist, change the parameter to updateMode=true and run the same script again, including the check for errors.

SQL Loader for ODI

For an import of text files we must create a file called sqlldr in $ODI_ORACLE_HOME/bin pointing to sqlldr in the $FA_TOP/dbclient/bin sub directory.
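One way to create that pointer is a symbolic link (a one-line wrapper script would work as well). The sketch below builds stand-in directories only so it is self-contained; on a real installation, substitute the actual FA_TOP and ODI_ORACLE_HOME paths and skip the stand-in steps.

```shell
# Link $ODI_ORACLE_HOME/bin/sqlldr to the sqlldr binary shipped with the
# Fusion Apps DB client. The mkdir/printf lines only fake the directory
# layout so this sketch can run anywhere; they are not part of the real step.
FA_TOP="${FA_TOP:-$(mktemp -d)/fa}"
ODI_ORACLE_HOME="${ODI_ORACLE_HOME:-$FA_TOP/odi}"
mkdir -p "$FA_TOP/dbclient/bin" "$ODI_ORACLE_HOME/bin"
printf '#!/bin/sh\n' > "$FA_TOP/dbclient/bin/sqlldr"   # stand-in for the real binary
chmod +x "$FA_TOP/dbclient/bin/sqlldr"

# The actual step: create the link in the ODI bin directory.
ln -sf "$FA_TOP/dbclient/bin/sqlldr" "$ODI_ORACLE_HOME/bin/sqlldr"
```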


ODI Setup Check

Check the correctness of the settings in ODI Console as shown below:
Log in to the ODI Console application under <HCM_EXTERNAL_HOST>:<ODI_CONSOLE_PORT> as the FA Super User. If the host name and port are unknown, contact your system administrator for these values. The super user name is weblogic_fa in the sample below:

7) Login to ODI Console

As a next step, check under access path Topology -> File for the existence of data servers FILE_ROOT_HCM and FILE_OUTPUT_HCM. If they are missing, the deployment of the config entries may have failed.

8) ODI Console Setup Technologies File

The screenshot below shows the assigned values for ODI HCM File Data Server FILE_ROOT_HCM. This must be the same value as configured in the config files above.

9) Check value for FILE_ROOT_HCM

The screenshot below shows the assigned values for ODI HCM File Data Server FILE_OUTPUT_HCM. This must be the same value as configured in the config files above.

10) Check value for FILE_OUTPUT_HCM

Check for Agent FusionHcmOdiAgent being assigned to context Development. This context will be used as co-existence parameter ODI Context (see screenshot #13 below).

11) Check for context (here “Development”) and Fusion HCM ODI Agent to be set

Setup SOA Server

For communication with the external FTP server we must enter the connection data in file FtpAdapter.rar. The sample below shows a configuration for SFTP, as this is the preferred option.
Extract file <FA_TOP>/fusionapps/soa/soa/connectors/FtpAdapter.rar to a temp directory and modify file META-INF/weblogic_ra.xml for JNDI entry eis/Ftp/FtpAdapter as shown in the snippet below in lines 24, 29, 34, 39 and 47. After this modification repackage the file FtpAdapter.rar. Keep a backup of the original file and move/copy the modified file to the location mentioned earlier. Please notice that we use port 22 for SFTP connections as listed below.

               <wls:description>Ftp Adapter</wls:description>
               <wls:property>
                        <wls:name>host</wls:name>
                        <wls:value>ftp_server_name</wls:value> <!-- host name of FTP server -->
               </wls:property>
               <wls:property>
                        <wls:name>port</wls:name>
                        <wls:value>22</wls:value> <!-- port, here we use sftp with the ssh port 22 -->
               </wls:property>
               <wls:property>
                        <wls:name>username</wls:name>
                        <wls:value>ftp_hr_soa</wls:value> <!-- remote FTP user -->
               </wls:property>
               <wls:property>
                        <wls:name>password</wls:name>
                        <wls:value>secret</wls:value> <!-- FTP user password -->
               </wls:property>
               <wls:property>
                        <wls:name>useSftp</wls:name> <!-- flag to determine whether secure FTP is used or not -->
                        <wls:value>true</wls:value>
               </wls:property>

As the next step of the SOA preparation, enter the credentials of the HCM Data Extract user into the SOA configuration in HCMDomain. As shown in the screenshot below, open Enterprise Manager for HCMDomain and follow the path Farm_HCMDomain -> WebLogic Domain -> HCMDomain. In the right UI pane open the WebLogic Domain menu -> Security. In the Credential Store Provider look for key FUSION_APPS_HCM_HR2HR_APPLOGIN-KEY. As shown in the dialog box below, enter the Fusion Apps user account that will be used when the data exchange is initiated.

12) Setup HCM Data Exchange User in HCMDomain via Enterprise Manager

After implementing all the SOA-specific configuration steps above, perform a bounce of HCMDomain.

Configuration Fusion Apps HCM (FSM)

Connect to the Fusion HCM application as Fusion Apps Setup User or HCM Data Exchange Manager and open the task Manage HCM Configuration for Coexistence in Functional Setup Manager (Setup and Maintenance). In the UI pane Parameters we can enter the parameters as previously defined in the ODI and FTP server setup. A sample of configuration entries is shown in the screenshot below:

13) Manage HCM Configuration for Coexistence Parameter

The value for ODI Password has been obtained in section Setup Oracle Data Integrator as described above and is the same value as for internal user FUSION_APPS_HCM_ODI_SUPERVISOR_APPID.

Testing HR2HR integration

Manage HCM Configuration for Coexistence

Testing a full integration cycle should start with an HCM Data Extraction, which includes an outbound notification about Fusion HCM metadata. For this purpose we connect as a user who can run the task Manage HCM Configuration for Coexistence. This task can be assigned to any HCM user, but it might be wise to restrict the number of users to a minimum. It might be practical to have one user dedicated to data exchange activities. For inbound integration we must assign a user in the WSM configuration (see screenshot #13), and this could be the same as for outbound operations.
In the sample below we find a UI pane named Generate Mapping File for HCM Business Objects. Activating the Submit button in this pane will initiate an outbound transaction to extract and transfer the Fusion HCM metadata to the external FTP server. The Search button can be used to look for previously scheduled processes. Don't confuse this with the Submit button in the UI pane above named Parameters, which is used to save changes to the parameter configuration.

14) Export Fusion HCM Meta Data

Once the ESS process has been triggered, it will call an ODI-based export of these specific Business Objects. The resulting files will be stored in the ODI staging area and compressed into a zip file. As a next step, the SOA composite HcmCommonBatchLoaderCoreFtpComposite is triggered by the ESS job to move the zip file to the destination directory on the FTP server. This SOA process is started via the web service http://<hcm_internal_server>:<soa_port>/soa-infra/services/default/HcmCommonBatchLoaderCoreFtpComposite/batchloaderftpmoveprocess_client_ep.
The screenshot below shows the outbound composite including some instances.

15) Outbound FTP Composite with instances

Screenshot #16 shows the process flow of the composite from event notification to FTP file transfer. After a successful run the zip file is located in the outbound directory on the FTP server.

16) Outbound FTP Composite Flow

In ODI Console the number of extracted records can be tracked per HCM Business Object Extraction run as shown in screenshot below.

17) Check in ODI for number of exported references

Load/Import HCM Data from ODI Staging Area

Once the metadata have been transferred to the FTP server, they can be imported into the HCM source system to be used as references for data mapping. As said in the beginning of this post, this is a task for an implementation consultant who is familiar with the technical and functional details of the HCM Business Objects.
At any time, once mapping and data extraction from the source system are done, these data can be transferred back for data load and import into Fusion HCM. The screenshot below shows the HCM Data Load page with a list of previously attempted data loads and their status. The task to show this page is also named HCM Data Load and must be assigned to a privileged HCM user.

18) Scheduler Page to Load/Import HCM Data

In case of failure it's possible to restart a run for data import, as shown in the following screenshot. It is possible to choose single business objects or all of them for import. Running the import for single objects makes sense to gain better control over the import process or to perform incremental runs for some parts of the loader file.

19) Schedule a Loader Process Run

However: restarting the data load/import requires that the loader data were registered in the system when importing into the ODI staging area. It doesn't trigger an FTP transfer of files from the server into the staging area. That task is shown in the section below.

Initiate a data transfer from FTP and invoke Load/Import process

The HCM source system will copy the loader file onto the FTP server once the extract is ready to share. The import process, starting with the FTP inbound transfer to Fusion HCM, is started via a web service invocation that triggers the SOA composite named HcmCommonBatchLoaderCoreInboundLoaderComposite. The URL of this web service is https://<hcm external host>:<port>/soa-infra/services/default/HcmCommonBatchLoaderCoreInboundLoaderComposite/inboundloaderprocess_client_ep and a sample of the XML payload can be found below. Please take a look at the inline comments for a better understanding of the values to be entered.

<soap:Envelope xmlns:soap="">
  <soap:Header>
    <wsse:Security soap:mustUnderstand="1" xmlns:wsse="">
      <wsse:UsernameToken>
        <wsse:Username>user_name</wsse:Username> <!-- Fusion Apps user assigned for data exchange -->
        <wsse:Password Type="">Welcome1</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <ns1:FT_PSFT_FUSION_MSG xmlns:ns1="">
      <ns1:RunControlID>99</ns1:RunControlID> <!-- sequential number -->
      <ns1:ProcessInstance>1</ns1:ProcessInstance> <!-- random number -->
      <ns1:ZipFileName></ns1:ZipFileName> <!-- File name containing HCM Load Data, must exist in FTP inbox directory -->
      <ns1:FileLocation>ftp_inbox</ns1:FileLocation> <!-- Directory name under <ftp root directory>/E_1 where inbound docs are expected -->
      <ns1:TimeStamp>2013-05-11T15:46:13.032</ns1:TimeStamp> <!-- Timestamp for inbound data transfer -->
      <ns1:LBOList> <!-- List of Business Objects to be imported -->
        <ns1:LBOName>Location</ns1:LBOName> <!-- First Business Object -->
        <ns1:LBOName>Person</ns1:LBOName> <!-- Second Business Object -->
      </ns1:LBOList>
    </ns1:FT_PSFT_FUSION_MSG>
  </soap:Body>
</soap:Envelope>
Once the web service has been invoked, an instance of HcmCommonBatchLoaderCoreInboundLoaderComposite will be created. In the screenshot below you will find multiple versions of this composite. This may occur when updates were deployed by patches. Usually the composite with the highest version number will be active while all others are retired. In the UI pane on the right the existing instances are listed. Clicking on an Instance ID will open a new window with details.

20) HCM SOA Instances for Load Composite

An instance details sample can be found in the screenshot below. It shows the process flow and status information. The final activity is called LoaderParameterService, an invocation of the ESS job to run the data load from the ODI staging area.

21) HCM Loader Composite – Activities Flow

Once the InboundLoaderJob has started, a process tree will be spawned with several jobs to import into the ODI staging area.

22) ESS HCM Loader Tree

The screenshot below shows the status page of a successfully finished loader process. Once the data have been loaded into the staging area, this scheduled job can be repeated as often as needed.

23) HCM Loader Process Details

As a summary we can say that the sample runs shown in this chapter did not finish with a load of real data, as they were mainly intended as tests for connectivity and process flows. As stated at the top of this document, a successful end-to-end integration includes an exchange of mapped data and requires more effort than a pure technical integration.

The Future: File Based Loader

In the sections above, the current implementation of HR2HR Integration has been explained as the solution our customers are using today. In April 2013 a successor of HR2HR Integration was announced, named File Based Loader (FBL). It will bring some improvements and new features like:

  • Full Support for Date Effective transactions
  • Improved Incremental loads
  • Better Business Object coverage with new objects coming up frequently
  • Higher flexibility in terms of source systems (1 or many / Oracle or non-Oracle)
24) File Based Loader – Process Flow

With introduction of FBL more goals are in focus:

  • Increased Flexibility of implementations
  • Start with a clean slate for Talent Projects
  • Create simplified mappings and extracts from source systems
  • More control of data being sent across
  • Define Talent Rapid Start Packages without worrying about HR2HR data.
  • Pre Package Rapid Start Talent Packages with mapping and extract templates
  • Facilitates Turn Key Talent Projects
  • Fast implementations
  • Single Tool for all inbound interfaces to Fusion HCM
  • Talent Integrations
  • Full HCM Data conversion
  • Any other data import to Fusion
25) FBL – Functional Overview


The implementation of HR2HR Integration above is used at many customer sites, and this post is intended as a wrap-up of activities for setup and testing. File Based Loader is just at the beginning, and new implementations will make use of it. Some principles of the solution as described above will change, others will not. With the upcoming Fusion Apps release the adoption of FBL will become common, and we will post another updated integration setup sample then. Stay tuned!

