The Oracle Data Sync tool can extract data from both on-premises and cloud data sources, and load that data into Oracle Analytics Cloud Service (OAC), Oracle Autonomous Analytics Cloud Service (OAAC), other Oracle relational databases, or Essbase.
In the most recent release, Data Sync added the ability to source files from Universal Content Management (UCM). Any text file created in UCM, whether by the Fusion BI Cloud Connector tool or by other Fusion extract tools, can be consumed by Data Sync and loaded into the target database.
Another article will be published shortly detailing how to set up Data Sync to load files created by other Fusion extract tools; a link will be placed here once it is available. In the meantime, check the Data Sync documentation on OTN, or the pillar-specific extract tool documentation, for further information.
The remainder of this article will concentrate on the process of using Data Sync to extract data files created by the Fusion BI Cloud Connector (BICC) tool, and to load that data into a target database. This will provide customers with the ability to schedule BICC extracts from their Fusion Application Product(s), and load that data into a target database, without the need to download the files from UCM before they are consumed.
Licensing for the Data Sync tool is intended for customers of BICS, OACS, or OAAC.
For more details, check the OTN Licensing Agreement link on this download page.
Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.
Data Sync requires JDK8. You can download that through this link.
(Note - customers still using BI Cloud Service (BICS) and loading into the schema service database cannot use this version of Data Sync. The version for BICS is available to download here, but does not have all the functionality of the OAC / OAAC version.)
For further instructions on configuring Data Sync, see this article. If a previous version of Data Sync is being upgraded, use the documentation on OTN.
In Fusion applications, it is possible to delete data. Perhaps a transaction was entered in error and subsequently deleted, or a new product was added but a decision was later made to remove it. If Fusion data is being extracted to populate a downstream warehouse, it's important to be able to identify new, changed, and deleted data.
The BI Cloud Connector (BICC) offers two types of extract: a Data Extract and a Deleted Record Extract.
The Data Extract, as the name suggests, contains the data from the selected view object(s). It can be either a full extract of all data in the view object(s), or an incremental extract containing only the changes in the data since the last extract.
The Deleted Record Extract lists all the primary keys that still exist in the view object. This can then be cross-referenced with previously extracted data to identify which primary keys no longer exist, allowing the corresponding warehouse rows to be marked as deleted for accurate downstream reporting.
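The cross-referencing idea can be sketched in a few lines. This is a hypothetical illustration using plain Python sets, not part of either tool; in practice the comparison would run in the target database, but the logic is the same.

```python
# Hypothetical sketch: detecting deleted rows by comparing the primary keys
# previously loaded into the warehouse with the keys in the latest
# Deleted Record Extract (which lists the keys that still exist in Fusion).

def find_deleted_keys(warehouse_keys, extract_keys):
    """Return the keys present in the warehouse but absent from the extract."""
    return set(warehouse_keys) - set(extract_keys)

# Example: key 102 was loaded earlier but no longer exists in Fusion.
warehouse_keys = [100, 101, 102, 103]
extract_keys = [100, 101, 103]  # keys still present, per the extract

print(sorted(find_deleted_keys(warehouse_keys, extract_keys)))  # [102]
```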
More information on these concepts can be found in the BICC documentation.
This article will walk through the steps to create, schedule, and run both a Data Extract, and a Deleted Record Extract, for a single view object in the BICC. The article will then show how to configure Data Sync to source the files created by BICC from UCM and to load those into a target database.
Depending on the version of Fusion Applications, the screens may look a little different, but the concepts will be the same.
For more detailed instructions on how to set up and configure the BI Cloud Connector, see this documentation.
BI Cloud Connector can extract Fusion data to either UCM or Cloud Storage Service. Make sure the target is set to UCM.
1. Within the BICC Cloud Extract Configuration GUI, select the 'Configure External Storage' option, and confirm that the storage type is set to UCM:
2. Create a new extract by selecting 'Configure Cloud Extract', and then the '+' button.
3. Enter a suitable Offering Code and Name, and then select the data to be extracted. In this case we filter for all view objects containing the word 'business', select the one called 'FscmTopModelAM.FinFunBusinessUnitsAM.BusinessUnitPVO', and move it over to the right-hand panel.
You can select as many view objects as you need.
4. Once the extract has been configured, you need to schedule it. Do this by going to the 'Manage Extract Schedules' option, then clicking the '+' to start a new schedule.
In this example we selected the 'Data and Deleted Record Extract' and set up a simple recurrence to run the schedule once. Select the options that make sense for your use case.
5. Allow enough time for the job to complete, then go to the Webcenter Content page. The URL will be in this format:
Once in the webpage, filter to find the files that were created. In this example we used the 'Quick Search' option to search for all files with 'business' in the name.
Depending on the amount of data being extracted, and whether you chose a data extract, a primary key extract, or both, you may see many files. Make a note of the common part of the file name; we will need it later.
1. Within the Data Sync tool, create a new 'Connection' for the UCM source, give it a suitable name, and choose the 'Oracle UCM Connector' type.
2. Identify the URL for the Webcenter UCM portal. You can take this from the URL of the Webcenter screen where you previously confirmed the files had been created. Copy the URL up to the .../cs/idcplg part.
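The trimming of the URL can be expressed as a small sketch. The host name and query string below are made up for illustration; only the '/cs/idcplg' marker matters.

```python
# Hypothetical sketch: trimming a Webcenter Content page URL down to the
# .../cs/idcplg endpoint that the Data Sync connection expects.
# The host name below is invented for illustration.

def ucm_endpoint(page_url):
    """Cut everything after the '/cs/idcplg' segment of a Webcenter URL."""
    marker = "/cs/idcplg"
    idx = page_url.find(marker)
    if idx == -1:
        raise ValueError("URL does not contain '/cs/idcplg'")
    return page_url[: idx + len(marker)]

url = "https://myfusionhost.example.com/cs/idcplg?IdcService=GET_DOC_PAGE"
print(ucm_endpoint(url))  # https://myfusionhost.example.com/cs/idcplg
```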
3. Within the properties of the newly created UCM connection, select the URL and paste in this UCM URL.
4. Complete the connection by entering the username and password for a user with access to UCM. Then save the connection and test it to confirm it was configured correctly.
5. Note - Data Sync can automatically delete the files from UCM once the extract has been completed. This can be configured in the 'Advanced Properties' tab of the connection configuration. It defaults to 'FALSE'.
1. Within Data Sync, select 'Project', 'Source Data' and 'Manual Entry' and select the connection created in the previous step.
Note - it's not possible for Data Sync to connect and list the files available in UCM, so each extract will need to be set up manually. If the extract is run again, the first part of the file name will be the same, so this process only needs to be done once for each view object.
2. Give the mapping a suitable name, and a target. In this case we will create a new table in the target database called 'BU_Data'.
3. The following screen defines the type of UCM extract. This first mapping will load the Data Extract, so we select that option.
4. In the next screen we need to define the file attributes.
The 'File Name Pattern' identifies the shared portion of the batch file names that Data Sync will load. In our example:
5. Data Sync offers the ability to reduce the number of files processed by date. If there are multiple versions of the files in UCM, and you only want to load those created since a particular date, enter that date in the 'Extract Files Created Since' box, using the same syntax as this example, which uses midnight on May 31st, 2018. The field is optional; if left blank, all matching files in UCM will be loaded.
6. Finally we need to provide Data Sync with a file definition, giving it details of the fields and data types in the extract file. You could do this manually, but fortunately for files created by the BICC, there's an easy way to set up this definition.
Within the BICC tool, select 'Configure Cloud Extract', and the 'Offering' that you had set up. In the details screen, find the Data Store for the view object you are defining in Data Sync. Highlight the Data Store, then click the 'Export Columns' option, choose 'Metadata Definition', and click 'OK'. That will save an mdcsv definition file to your local computer.
Locate this file and open it in a text editor. If you examine it, you will see it provides details of the fields and their data types.
Select all the text in the file, copy it, and then paste it into the 'Optional Data Definition' box in Data Sync.
Click 'OK' to close the definition wizard and finish the mapping.
If you also created a Primary Key extract to identify deleted data, you will need to set up a separate mapping to handle that.
1. Within the Project, select Source Data and Manual Entry, and give a suitable logical name and details on the target. In this case we will create a new table called 'BU_Deletes' in the target database.
2. Select the 'Primary Key Extract' option.
3. In the configuration properties box, you just need to enter the file name pattern. Data Sync is able to identify the Primary Key file from its extracted name suffix, so you can enter the same file name format as before. Then click OK to close the properties entry window.
4. Repeat this process for all view object files created.
As with any Data Sync mapping, you have the ability to change attributes such as the target table's field names, the load strategy, how to handle duplicate records, and so on. For more information on this functionality, see previous A-Team articles on Data Sync or the documentation on OTN.
While this article only covers loading the Data and Primary Key files into the target database, you will likely need a downstream process in the database that uses the Primary Key file to identify rows in the target database that have since been deleted in Fusion, so those rows can be marked as deleted.
This could be done in a stored procedure in the target database. Data Sync can call this stored procedure immediately after the primary key data is loaded. For more information on this functionality, see this previous article.
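As a rough illustration of what such a procedure might do, the following sketch uses an in-memory SQLite database in place of the real target. The table and column names (BU_DATA, BU_DELETES, BU_ID, DELETED_FLAG) are assumptions made up for this example, not names created by Data Sync.

```python
# Hypothetical sketch of the downstream delete-marking step, using SQLite
# in place of the real target database. BU_DATA holds the loaded Data
# Extract; BU_DELETES holds the Primary Key Extract (keys still in Fusion).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BU_DATA (BU_ID INTEGER, BU_NAME TEXT, DELETED_FLAG TEXT DEFAULT 'N');
CREATE TABLE BU_DELETES (BU_ID INTEGER);
INSERT INTO BU_DATA (BU_ID, BU_NAME) VALUES (1, 'US'), (2, 'UK'), (3, 'DE');
INSERT INTO BU_DELETES (BU_ID) VALUES (1), (3);  -- BU_ID 2 was deleted in Fusion
""")

# Mark warehouse rows whose keys no longer appear in the primary key extract.
con.execute("""
UPDATE BU_DATA SET DELETED_FLAG = 'Y'
WHERE BU_ID NOT IN (SELECT BU_ID FROM BU_DELETES)
""")
con.commit()

print(con.execute(
    "SELECT BU_ID, DELETED_FLAG FROM BU_DATA ORDER BY BU_ID").fetchall())
# [(1, 'N'), (2, 'Y'), (3, 'N')]
```

The same UPDATE, wrapped in a stored procedure in the real target database, is what Data Sync would invoke after the primary key data is loaded.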
Run the Data Sync job and confirm the load was successful.
This article walked through the steps to set up the Data Sync tool to load data extracted to UCM by the BI Cloud Connector.
Please note - there is currently no connection between the two processes, so you will need to schedule the Data Sync load to run at a time when you know the BICC extract has finished. It is not currently possible for Data Sync to be triggered automatically when BICC completes its extract.