
Best Practices from Oracle Development's A‑Team

Loading Oracle Service Cloud (RightNow) Data into ADW with Data Sync

* This blog was last tested on: Oracle Analytics Cloud Data Sync (2.6.1) + Oracle Service Cloud 20B (Build 266) + ADW 18c by Jay Pearson (Oracle A-Team) *

For other A-Team articles about OAC and Data Sync, click here

Introduction

This functionality has been available since Data Sync v2.2, released in September 2016, when the ability to connect to a number of different data sources via JDBC was added.

Setting up these data sources in Data Sync varies by source.  Rather than cover them all in a single article, 3 have been created.  Select the appropriate article below to learn the steps to set up that data source.

This article covers the steps to extract data from Oracle Service Cloud (RightNow) and load that into Oracle Autonomous Data Warehouse (ADW).

 

Data Source | Article With Details
Service Cloud / RightNow | this article
Salesforce | link to article
Greenplum, Hive, Impala, Informix, MongoDB, PostgreSQL, Redshift, Sybase, and instructions for other JDBC data sources where a JDBC driver is available | link to article


Main Article

Downloading Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from Oracle Analytics Downloads.

For instructions on how to configure ADW as the target, see this article.

For general instructions on configuring Data Sync, see this article.

Further information on the Data Sync Tool, upgrade instructions, the OAC Data Sync readme (Data Sync release notes and known issues), and the OAC Data Sync documentation (Data Sync install and user guide) can be found here.

 

Data Load Methods

The Data Sync tool offers 3 different methods of extracting data from Oracle Service Cloud:

1. Loading the data from a report created in the Oracle Service Cloud Report Explorer desktop tool

2. Loading the data directly from individual objects available in Oracle Service Cloud

3. Loading the data directly from a ROQL query

Each method is discussed in this article.

Setting up the Oracle Service Cloud Connection in Data Sync

1. In the Data Sync tool, create a new Connection:


2. In the 'Connection Type' selection, use 'Oracle Service Cloud (RightNow)', enter a suitable user name and password, and the URL for the RightNow environment.


3. 'Test' the connection to confirm connectivity:


Data Loading

As mentioned, there are 3 different ways that data can be loaded into ADW from Oracle Service Cloud using the Data Sync Tool.  Each will be discussed in detail in this section.

In each method discussed, filters can be used to restrict the data set size.

 

Method 1: Loading data from a report created in the Oracle Service Cloud 'Report Explorer' tool

This method provides the ability to access all available objects in Oracle Service Cloud.  It goes against the database tables in Service Cloud, and not against the semantic layer that the other methods work against.  This means that only the fields that are needed can be extracted, and only the tables needed are traversed by the query.  This makes the query against Service Cloud more efficient and possibly better performing.

The process involves creating 2 reports in Oracle Service Cloud.

The first contains the data that the extract will use and that will be loaded into ADW (the 'Data Report'); the second provides metadata about the data to help Data Sync run incremental loads, and to 'chunk' the data into more manageable extracts (the 'Helper Report').

Prerequisite: Install the Oracle Service Cloud runtime app (Report Designer is not available in the browser app). It can only be installed from the Internet Explorer or Microsoft Edge web browser.

1. In the Oracle Service Cloud 'Report Explorer', create the 'Data Report', pulling in the fields that will be required in ADW.  In this example, some fields are pulled from the 'incidents' table.

Home Ribbon -> New Report

Select Grid Report


2. To enable the Data Sync tool to chunk data into more manageable extracts, and to filter data on dates for incremental updates, two filters MUST be defined in each 'Data Report'.

One filter, which should be based on an ID, is used by Data Sync to count how many rows will be returned.  From this it can decide how to break the extract down into smaller chunks.  For instance, a query returning 2 million rows may take some time; going across network and cloud environments, the connection may be interrupted, or a time-out or memory-usage threshold may be reached.

Data Sync will, by default, break those 2 million rows into 200 'chunks' of 10,000 rows of data (this value can be changed).  The first connection will request the first 10,000 rows based on this filter ID field.  Once loaded, the next 10,000 rows will be requested, and so on.  This puts less strain on Service Cloud, and reduces the possibility of interruptions or time-outs.
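As an illustrative sketch (plain Python, not Data Sync internals), the chunking idea works like this: the ID range is split into fixed-size windows, and each window is requested separately.

# Illustrative sketch only - not Data Sync internals.
# Split an ID range into fixed-size chunks so each request stays small.

def id_chunks(min_id, max_id, chunk_size=10_000):
    """Yield inclusive (low, high) ID ranges covering min_id..max_id."""
    low = min_id
    while low <= max_id:
        high = min(low + chunk_size - 1, max_id)
        yield low, high
        low = high + 1

for low, high in id_chunks(1, 35_000):
    print(f"fetch rows where id between {low} and {high}")
# -> four requests: 1-10000, 10001-20000, 20001-30000, 30001-35000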

The second filter should be based on a date field that can be used to help identify new or changed records since the previous load.

The names of these 2 filters are important.  They must be named:

INCREMENTAL_LOAD - for the date based filter, and

RANGE_COUNT - for the ID based filter.

In the Filter Expression editor in Service Cloud, define both filters against the chosen fields.

In this example, the INCREMENTAL_LOAD filter references the 'incidents.updated' field.  Confirm that the 'Make this filter selectable at run time' option remains checked.


For the RANGE_COUNT filter, the 'incidents.i_id' field from the 'incidents' table is used.

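Conceptually, the two filters in this example can be summarized as follows (an illustrative sketch in pseudo-expression form, not the exact Report Explorer filter syntax):

# Illustrative pseudo-expressions only - build the real filters in the
# Report Explorer filter editor, with 'selectable at run time' checked.
data_report_filters = {
    "INCREMENTAL_LOAD": "incidents.updated > <run-time date value>",
    "RANGE_COUNT": "incidents.i_id between <run-time low> and <run-time high>",
}
for name, expression in data_report_filters.items():
    print(f"{name}: {expression}")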

3. Save the main 'Data' Report, and then create the 'Helper' Report


This needs to have 3 fields.

One calculates the minimum value of the ID that will be extracted by the report, another calculates the maximum value, and a third calculates the total number of rows that will be returned - the count.

The names of these fields are important.  Use 'MIN_VALUE', 'MAX_VALUE', and 'COUNT'.

The 3 field definitions in this example use the 'Min', 'Max' and 'Count' functions available in the Service Cloud Report Explorer.
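As an illustrative sketch (not Report Explorer syntax), the three expressions are conceptually equivalent to these aggregates over the incident IDs the 'Data' report will return:

# Illustrative sketch only - what the three helper-report fields compute,
# shown over a stand-in list of incidents.i_id values.
ids = [12278, 13010, 14502, 15750]

helper_fields = {
    "MIN_VALUE": min(ids),   # Min(incidents.i_id)
    "MAX_VALUE": max(ids),   # Max(incidents.i_id)
    "COUNT": len(ids),       # Count(incidents.i_id)
}
print(helper_fields)  # -> {'MIN_VALUE': 12278, 'MAX_VALUE': 15750, 'COUNT': 4}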

As before, the two filters are required.  Add the same two filters to the 'Helper' report, using the SAME expression fields as before, so that the Data Sync tool can request the metadata only for the date range it needs to extract.

Run the helper report to confirm it works.  In this example it shows that the data to be returned has 1,494 rows (COUNT), with the minimum ID in the data being 12278 (MIN_VALUE) and the maximum ID being 15750 (MAX_VALUE).


Save the report.

4. Service Cloud creates a unique ID for each report.  That ID will be needed in Data Sync to reference the 2 reports.

To find the ID, right click on the report and select 'View Definition':


In this example, the 'Data' report, 'Incidents_Data', has an internal ID of 100804.


Do the same for the 'helper' report.  In this case, the internal ID is 100805.

These IDs will be needed in the next step.

5. In the Data Sync tool, under 'Project' -> New -> 'Add a New Source Object' -> Source=RightNow -> select 'Enter object's properties manually'.

Choose a Logical Name for the data source, and a Target Name.  Note that the Target Name will be the new table created in ADW.  If the table already exists, be sure to enter its name correctly.  Make sure the DB_Connection is set to the Service Cloud / RightNow connection created earlier.

In the 'Message' box that appears next, make sure 'Analytics Reports' is selected in the 'Data from:' selector.  The message will be updated to display additional information about this import method.


In the final screen, the 'Data' report ID from step (4) needs to be entered as the 'Analytics Report ID', and the 'Helper' report ID as the 'Helper Analytics Report ID'.  The 'Numeric ID' needs to be the logical name of the field in the 'Data' report that contains the main ID field for the report.  In this case, that field is 'Incident ID'.  Be aware that this field is case sensitive and needs to exactly match the name of the report field.  The final field, 'Maximum number of rows to read at a time', is the 'chunking' size.  By default this is 50,000.  This can be changed if needed.
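To recap the values entered in this example (shown as an illustrative Python mapping; the keys are the Data Sync field labels):

# Example values from this article - the report IDs come from step (4).
report_source = {
    "Analytics Report ID": 100804,         # the 'Data' report
    "Helper Analytics Report ID": 100805,  # the 'Helper' report
    "Numeric ID": "Incident ID",           # case sensitive; must match the report field
    "Maximum number of rows to read at a time": 50_000,  # chunk size (default)
}
print(report_source)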

6. To set up 'Incremental' loads, select the data source that was just created, and in the attributes section, select the value in the 'Load Strategy' box.  This will bring up the available load strategies.

For the User Key, select the ID that uniquely identifies records for updating.  In this example 'Incident ID' is used.


For the Filter, use the date column that will identify changed data.  In this example 'Date Last Updated' is used.

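A minimal sketch of what the incremental 'Update table' strategy achieves, assuming a hypothetical in-memory target: rows matched on the user key are updated, and new rows are inserted.

# Illustrative sketch only - not how Data Sync writes to ADW.
def upsert(target, rows, key="incident_id"):
    """Merge changed rows into target, matching on the unique user key."""
    for row in rows:
        target[row[key]] = row   # overwrite a changed record or insert a new one
    return target

warehouse = {1: {"incident_id": 1, "status": "open"}}
changed = [{"incident_id": 1, "status": "closed"},   # updated since last load
           {"incident_id": 2, "status": "open"}]     # new since last load
upsert(warehouse, changed)
print(sorted(warehouse))  # -> [1, 2]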

7. Run the job and confirm it works.

Method 2 - Loading the data directly from individual objects available in Oracle Service Cloud

For cases where an object exposed in the semantic layer of Service Cloud contains all the data that is needed, this may be the best approach.

1. In Data Sync, select Project -> New -> Add New source object -> Source=RightNow -> Discover object(s) from catalog -> OK.

2. In the next screen, make sure 'RightNow' is selected as the Source, and then hit the 'Search' button to pull back a list of all the objects available.

Select the Object(s) that are to be included (in this case 'Billing Payments'), and then hit the 'Import' button.

A message will be displayed providing more details of this method.  Click 'OK'.

3. Select the Source Object that was created, and select the 'Source Attributes' tab.

4. Five options are shown.  The 'Numeric Column' and 'Maximum number of rows to read at a time' are mandatory.

The 'ROQL Query Condition' field is optional.  It can be used to filter the data returned.  For instance, if the Billing.Payments object contains many years of history, but for ADW we are only interested in data changed from 2014 onwards, then a ROQL condition of updatedtime > '2014-01-01T00:00:00Z' may be used to restrict the data returned.  This has nothing to do with incremental loading.  This filter will be applied every time a job is run, so no data from before this date will ever be extracted from Service Cloud.

The 'Numeric Column' needs to be an ID field from the Billing.Payments object.  In this case there is a field called 'id'.  This is case sensitive.

The final field is the 'chunking' size to be used.  This defaults to 50,000, but can be changed if required.
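As an illustrative recap (values from this example; the keys are the Data Sync field labels):

# Example values from this article - the ROQL condition is optional.
source_attributes = {
    "ROQL Query Condition": "updatedtime > '2014-01-01T00:00:00Z'",
    "Numeric Column": "id",   # case sensitive ID field on Billing.Payments
    "Maximum number of rows to read at a time": 50_000,  # chunk size (default)
}
print(source_attributes)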

5. As in the previous load example, to set up incremental updates, go to the 'Edit' tab, select 'Update table' as the Load Strategy, and select the appropriate value for the unique 'User Keys' and a date value for the 'Filter' to allow Data Sync to identify rows changed since the last extract.

6. Run the job to confirm it works.

 

Method 3 - Loading the data directly from a ROQL query

ROQL stands for 'RightNow Object Query Language'.  It has some similarities to SQL and is the query language used against the semantic reporting layer in Service Cloud.

In this example the following ROQL query will be used.

SELECT * FROM incidents WHERE updatedtime > '2014-01-01T00:00:00Z'

 

1. In Data Sync, select Project -> New -> Add New source object -> Source=RightNow -> Enter object's properties manually -> OK.

Enter a logical name for the data source, and a target name.  This should be the existing ADW table that will be loaded, or the name of the new table that will be created.

Make sure the 'Data From' box is set to 'ROQL', then hit 'OK'.


2. In addition to the ROQL query (ROQL Tabular Query), a statement is required to calculate the MAX, MIN, and COUNT of the identity field (in this case ID), as well as the name of the Query Object - in this case 'incidents', and the Numeric Column, in this case 'id'.  NOTE - these last two are case sensitive.
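For this example, the two statements might look like the following (shown as Python strings; the aggregate syntax is an illustrative assumption - verify the exact ROQL form against your Service Cloud release):

# The tabular query from this example, plus an assumed metadata query.
roql_tabular_query = (
    "SELECT * FROM incidents WHERE updatedtime > '2014-01-01T00:00:00Z'"
)

# Assumed aggregate form for the MAX/MIN/COUNT statement.
roql_metadata_query = (
    "SELECT max(id), min(id), count() FROM incidents "
    "WHERE updatedtime > '2014-01-01T00:00:00Z'"
)
print(roql_tabular_query)
print(roql_metadata_query)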

The chunking size ('Maximum number of rows to read at a time') can be adjusted if necessary.

Click 'OK', and the Pluggable Source Data object is created.

3. As before, to set up incremental loads, select the data source, update the load strategy, and select an appropriate key (an ID) and a filter (an update date / time).

4. Run the job to confirm it works.

Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data from Oracle Service Cloud / RightNow.  It covered 3 different approaches.

NOTE - Service Cloud has inbuilt restrictions for extracting data.  These restrictions are intended to protect the underlying database by preventing a single query from using up too many database resources.  The Data Sync tool has built-in automatic error handling to accommodate this.  If the error is encountered while requesting data, the Data Sync tool will recursively retry the data request, adding further filters to reduce the data set being returned.  At the time of writing, this recursive error handling is built into methods (2) and (3) outlined in the article.  It will shortly (within a few weeks) be added for Method (1) as well.
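Conceptually, that error handling resembles the following sketch (plain Python, assuming a hypothetical fetch call that rejects over-large requests):

# Illustrative sketch only - not Data Sync internals.
def fetch_range(low, high):
    """Hypothetical extract call; raises when the range is too large."""
    if high - low > 500:
        raise RuntimeError("too many rows requested")
    return list(range(low, high + 1))

def fetch_with_split(low, high):
    """On failure, halve the ID range and retry each half recursively."""
    try:
        return fetch_range(low, high)
    except RuntimeError:
        mid = (low + high) // 2
        return fetch_with_split(low, mid) + fetch_with_split(mid + 1, high)

print(len(fetch_with_split(1, 2_000)))  # -> 2000 rows, fetched in smaller pieces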

For other A-Team articles about OAC and Data Sync, click here
