Extending Analytics for Integration Cloud Using the Elastic Stack

March 10, 2018 | 7 minute read

Introduction

Oracle Integration Cloud (OIC) offers industry-leading SaaS integration capabilities. It provides extensive monitoring, tracking, and reporting features out of the box. Occasionally, enterprises have reporting and analysis needs that are better met by additional reporting and analytics products. This article discusses a couple of such use cases and describes how to implement one of them using the Elastic stack. The information in this article applies to release 18.1.3 of Integration Cloud.

Main article

Let’s consider these scenarios:

  • A customer's integrations are deployed to multiple instances of OIC, and the customer wants a consolidated view of all integrations on a single dashboard.
  • A customer needs to customize several aspects of reporting, such as the types of charts and data retention.
  • A customer wants an end-to-end view of transactions across multiple applications, including those deployed to OIC.

The use cases represented by these scenarios can be met by externalizing integration metrics from OIC into another platform that specializes in analytics. Let's look at some recommended ways to extract metrics from OIC and import them into ELK (Elasticsearch-LogStash-Kibana), a widely used open-source stack for analytics and dashboards.

Why the Elastic stack?


Elasticsearch is among the products that scale out horizontally and support map-reduce for efficient distributed queries. Note that other products, such as Oracle Big Data Cloud Service or Oracle Log Analytics, can also meet the aforementioned requirements; the Elastic stack is used in this blog for its simplicity, for demonstration purposes.

For the sake of simplicity, this post does not address deployment of the ELK stack; refer to the Elastic website for instructions. A simple installation can run on a laptop. More complex, distributed deployments require careful planning of compute and storage resources and of indexes.

Patterns


With the scenarios established, here are patterns that help meet the requirements.

Consolidated Reporting is achieved by collecting monitoring metrics from multiple OIC instances and feeding them into one analytics application instance.

[Figure: Consolidated reporting, with multiple OIC instances feeding a single analytics platform]

With the ELK stack, LogStash is the agent/aggregator, Elasticsearch is the indexer, and Kibana is the analytics and reporting client. This pattern can help build reports for billing and historical analysis, or correlate traffic patterns across multiple integration platforms.

End-to-end transaction monitoring can be achieved by collecting the start time, end time, tracking ID, and completion status for each part of an end-to-end transaction from every participating application, feeding them into an analytics application, and running map-reduce queries that correlate the parts of a transaction by tracking ID.

[Figure: End-to-end transaction monitoring, with participating applications feeding correlated events into one analytics platform]

Note that the tracking ID is the essential common denominator that correlates the parts of a transaction. This pattern can enable end-to-end tracking of critical, non-repeatable transactions. An example would be an incident reported by a utility customer that results in an order for field dispatch, flowing through one or more integration platforms. A sketch of such a correlation query follows.
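
As a minimal illustration in Python, the query below buckets indexed documents by tracking ID so that all parts of one transaction fall into the same bucket. It assumes the instance documents have been flattened so that each carries a trackingId field and a LogStash-supplied @timestamp; the index name icsmon matches the LogStash configurations later in this post, and the local Elasticsearch endpoint is a placeholder.

import requests

# Bucket indexed documents by tracking ID so that every part of one
# end-to-end transaction lands in the same bucket.
# "trackingId" is an assumed flattened field; adjust to your schema.
ES_URL = "http://localhost:9200/icsmon/_search"

query = {
    "size": 0,
    "aggs": {
        "by_tracking_id": {
            "terms": {"field": "trackingId.keyword"},
            "aggs": {
                "first_seen": {"min": {"field": "@timestamp"}},
                "last_seen": {"max": {"field": "@timestamp"}},
            },
        }
    },
}

resp = requests.post(ES_URL, json=query)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_tracking_id"]["buckets"]:
    # Each bucket spans every document sharing the tracking ID, regardless
    # of which application or OIC instance reported it.
    print(bucket["key"], bucket["doc_count"],
          bucket["first_seen"]["value_as_string"],
          bucket["last_seen"]["value_as_string"])

The min/max sub-aggregations approximate the start and end of each transaction; an unusually large gap between them, or a missing participant in a bucket, points to a stalled transaction.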

Extracting integration metrics


OIC provides a REST API to extract metrics from the integration platform. See the examples below.

Retrieve metrics for all integrations in an OIC instance for the past hour:

https://{host:port}/icsapis/v2/monitoring/integrations?q={timewindow: '1h'}&onlyData=true

Output:

{  "code": "NSLF_LOGFIRE_PURCHASE_ORDER",  "id": "NSLF_LOGFIRE_PURCHASE_ORDER|01.00.0000",  "links": [  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/monitoring/integrations/NSLF_LOGFIRE_PURCHASE_ORDER%7C01.00.0000",  "rel": "self"  }, {  "code": "NSLF_LOGFIRE_PURCHASE_ORDER",  "id": "NSLF_LOGFIRE_PURCHASE_ORDER|02.01.0000",  "links": [  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/monitoring/integrations/NSLF_LOGFIRE_PURCHASE_ORDER%7C02.01.0000",  "rel": "self"  },  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/monitoring/integrations/NSLF_LOGFIRE_PURCHASE_ORDER%7C02.01.0000",  "rel": "canonical"  },  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/integrations/NSLF_LOGFIRE_PURCHASE_ORDER%7C02.01.0000",  "rel": "integration"  }  ],  "name": "NSLF LogFire Purchase Order",  "noOfErrors": 0,  "noOfMsgsProcessed": 0,  "noOfMsgsReceived": 0,  "noOfSuccess": 0,  "successRate": 0,  "version": "02.01.0000"  }

Retrieve metrics for a particular integration in an OIC instance during a 5-minute span:

https://{host:port}/icsapis/v2/monitoring/integrations/CONTACTS%7C01.00.0000?q={startdate : '2018-03-08 01:00:00', enddate : '2018-03-08 01:05:00'}

Output:

{  "code": "PUBLISHCONTACTS",  "endPointURI": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/integration/flowsvc/salesforce/PUBLISHCONTACTS/v01/",  "flowId": "cf522509-8156-4212-ae32-38d45854923c",  "id": "PUBLISHCONTACTS|01.00.0000",  "lastUpdatedBy": "ian.milne@oracle.com",  "links": [  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/monitoring/integrations/PUBLISHCONTACTS%7C01.00.0000",  "rel": "self"  },  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/monitoring/integrations/PUBLISHCONTACTS%7C01.00.0000",  "rel": "canonical"  },  {  "href": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/icsapis/v2/integrations/PUBLISHCONTACTS%7C01.00.0000",  "rel": "integration"  }  ],  "mepType": "MEP03",  "name": "publishContacts",  "noOfErrors": 0,  "noOfMsgsProcessed": 0,  "noOfMsgsReceived": 0,  "noOfSuccess": 0,  "optimizedVersion": "1.0",  "proxyWSDL": "https://icsshared-a144767.integration.us2.oraclecloud.com:443/integration/flowsvc/salesforce/PUBLISHCONTACTS/v01/?wsdl",  "scheduleApplicableFlag": false,  "scheduleDefinedFlag": false,  "successRate": 0,  "version": "01.00.0000" }

Extracting information for tracking


OIC provides an API to fetch information about instances of integrations, including tracking variables. Tracking variables are typically meaningful transaction identifiers, such as an order ID or an employee ID. ICS mandates one tracking ID and allows up to three tracking IDs per integration. The sample endpoint below retrieves the completed instances of an integration from the past hour.

https://{host:port}/icsapis/v2/monitoring/instances?q={code : 'ORDERS', status : 'COMPLETED', timewindow : '1h'}

Output:

"id": "2000002",  "integrationId": "GLOBALFAULT_01",  "integrationName": "globalfault_01",  "integrationVersion": "01.00.0000",  "links": [  {  "href": "https://den02bir.us.oracle.com:7002/icsapis/v2/monitoring/instances/2000002",  "rel": "self"  },  {  "href": "https://den02bir.us.oracle.com:7002/icsapis/v2/monitoring/instances/2000002",  "rel": "canonical"  }  ],  "status": "COMPLETED",  "trackings": [  {  "name": "ZIP",  "value": "560035"  },  {  "name": "tracking_var_2"  },  {  "name": "tracking_var_3"  }  ]

Indexing data using LogStash


There are a couple of approaches to indexing metrics using LogStash, which provides plug-ins that address specific log collection use cases.

The file input plug-in can observe updates to a specific log file, collect new entries, and index them into Elasticsearch. It works much like the Linux "tail -0f" command.

[Figure: File input pattern, where a script polls the OIC monitoring API and writes to a log file watched by LogStash]

A custom program or script invokes the OIC monitoring endpoint at regular intervals and writes the resulting JSON documents to a file; the file input monitors the file and indexes new documents. Here is a sample file input configuration, followed by a sketch of the polling script:

input {
  file {
    path => "/tmp/icsmonitoring_log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "icsmon"
    document_type => "fileinput"
  }
  stdout { codec => rubydebug }
}
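
The custom polling program mentioned above can be as small as the following Python sketch. The host and credentials are placeholders; the script appends each poll as one JSON document per line to the file watched by the file input.

import json
import time
import requests

OIC_HOST = "https://host:port"   # placeholder
AUTH = ("icsuser", "********")   # placeholder
LOG_PATH = "/tmp/icsmonitoring_log"
INTERVAL_SECONDS = 300

while True:
    resp = requests.get(
        OIC_HOST + "/icsapis/v2/monitoring/integrations",
        params={"q": "{timewindow: '1h'}", "onlyData": "true"},
        auth=AUTH,
    )
    resp.raise_for_status()
    with open(LOG_PATH, "a") as log:
        # One document per line, so each poll becomes one LogStash event.
        log.write(json.dumps(resp.json()) + "\n")
    time.sleep(INTERVAL_SECONDS)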

The HTTP poller plug-in is another way to collect metrics from OIC. It reduces complexity by eliminating the external script and log file. This approach suits simpler security use cases, since the credentials must be specified in the plug-in configuration file.

[Figure: HTTP poller pattern, where LogStash polls the OIC monitoring API directly and indexes into Elasticsearch]

Here is a sample HTTP poller configuration for ICS.

input {
  http_poller {
    urls => {
      ics => {
        url => 'https://{host:port}/icsapis/v2/monitoring/integrations/ORDERS%7C01.00.0000?q=%7Btimewindow%3A%20%271h%27%7D'
        method => get
        user => "icsuser"
        password => "******"
      }
    }
    connect_timeout => 5
    request_timeout => 5
    socket_timeout => 5
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
    metadata_target => "http_poller_metadata"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "icsmon"
    document_type => "httppoller"
  }
  stdout { codec => rubydebug }
}

While indexing documents into Elasticsearch, it is possible to define data types for the fields of a JSON document by defining a "mapping". Each index can have one associated mapping; if no mapping is created, Elasticsearch maps the fields automatically. The default mapping suffices for the JSON returned by the ICS monitoring API, but an explicit mapping can be defined as sketched below.
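
For finer control, an explicit mapping can be created before any documents are indexed. The sketch below uses current Elasticsearch mapping syntax; releases that still use mapping types (such as those matching the document_type settings above) nest the properties under a type name. The field names mirror the monitoring API output.

import requests

# Create the "icsmon" index with an explicit mapping. This must run
# before documents are indexed; an existing field mapping cannot be changed.
ES_URL = "http://localhost:9200/icsmon"

mapping = {
    "mappings": {
        "properties": {
            "code": {"type": "keyword"},
            "noOfMsgsReceived": {"type": "integer"},
            "noOfMsgsProcessed": {"type": "integer"},
            "noOfSuccess": {"type": "integer"},
            "noOfErrors": {"type": "integer"},
            "successRate": {"type": "float"},
        }
    }
}

resp = requests.put(ES_URL, json=mapping)
resp.raise_for_status()
print(resp.json())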

Visualizing data using Kibana


We have looked at how to collect metrics as JSON documents from ICS and how to feed them into Elasticsearch. The next step is to query the collected data, design visualizations, and build a dashboard using Kibana. In a local deployment, Kibana is available at http://localhost:5601/app/kibana.

Building a dashboard in Kibana involves these steps:

  • Define an index pattern
  • Create a saved search using the index pattern
  • Create visualization(s) using saved search(es)
  • Create a dashboard, add the visualizations, and share the dashboard

Before a chart can be built in Kibana, an index pattern and a search that uses it must be created.

Index patterns identify one or more indexes to be searched, and each search is tied to an index pattern. Once an index pattern is created, Kibana analyzes the documents in the index and lists their fields. See the graphic below for an index pattern, and note the highlighted sections.

[Figure: Kibana index pattern definition matching the "icsmon" index]

The index "icsmon" is selected by the pattern; it is the same index used by the LogStash plug-ins.

The next step is to create a search using the "icsmon*" index pattern, as shown in the next graphic. Note the highlighted areas and annotations.

[Figure: Kibana saved search based on the "icsmon*" index pattern]

Next, create one or more visualizations using the saved search. This is a more involved task that requires some knowledge of Kibana. The visualization shown in the graphic below selects monitoring documents for the integration "ORDERS_COMMERCE_TO_OM" and lays out successful and failed requests in a stacked bar chart. There are two fields on the y-axis, noOfSuccess and noOfErrors, and the x-axis is a date histogram. See the graphic below and note the highlighted areas.

[Figure: Kibana stacked bar chart of successful and failed requests for ORDERS_COMMERCE_TO_OM]

Finally, create a dashboard and add the visualizations created in the previous steps. The graphic below contains two visualizations: "ICS-TRAFFIC" displays all ICS traffic over time, showing successes and failures for each time slice, and "ICS-TRAFFIC-ORDERS" shows traffic for a single integration.

[Figure: Kibana dashboard with the ICS-TRAFFIC and ICS-TRAFFIC-ORDERS visualizations]

Conclusion

This blog highlighted the need for customized monitoring, explained the basics of externalizing OIC monitoring information, and showed how to build a simple dashboard from the collected metrics. More complex tracking dashboards and a variety of custom ad-hoc visualizations can be created in Kibana using the same information. The tasks described in this post should translate to other analytics products.

 

References:

https://www.elastic.co/guide/en/elastic-stack/current/elastic-stack.html

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http_poller.html

https://docs.oracle.com/en/cloud/paas/integration-cloud-service/icsrb/index.html

 

Mani Krishnan

