Best Practices from Oracle Development's A‑Team

Improving the performance of Oracle Integration flows that use REST calls

Nick Montoya
Principal Solutions Architect

Oracle Integration connects disparate SaaS and on-premises applications to help businesses move faster. This interaction between Oracle Integration Cloud (OIC) and Oracle SaaS (or on-premises) applications needs to be carefully designed and planned for performance and scalability. Overloaded integration flows may run within acceptable timeframes during the design phase. They may even pass User Acceptance Testing, especially if testers are only exercising functional scenarios. However, when testing for performance and scalability, problems may arise. It is imperative to design solutions that meet expected peak volumes at runtime. This blog provides some helpful pointers you can use to make your solutions achieve better performance and scalability.

Identify Peak Volume Profile and test downstream systems

When designing an Oracle Integration solution for Performance and Scalability, it is very important to identify the peak-hour and peak-day volume. Knowing how many integrations will be running at peak hour will not only help you size your Production OIC environment accordingly, but also, it will help in the testing of your downstream (from OIC point of view) systems.

Testing of downstream systems with expected volumes early in the implementation cycle will help you validate a scalable implementation and/or identify performance bottlenecks in your design.

If these outbound calls take longer than a few seconds, then there is plenty of design work to do. Long synchronous calls to outbound systems may cause OIC to wait for these calls to finish and may block other processes from running in OIC. As a result, end users may experience longer response times.

Below are several areas you can explore to improve the performance of your integration flows that use REST calls.

GET REST requests and SQL queries

When making a REST GET request, just like when making a SQL query, the list of fields (attributes in SQL) to be retrieved and the filters (the predicate in SQL) applied to the response payload (the result set in SQL) will heavily impact how long the call takes to complete.

A typical SQL query has the form:

SELECT A1, A2, …, An

FROM R1, R2, …, Rm

WHERE P

where:

  • Ai represent attributes
  • Ri represent relations
  • P represents the predicate

In the REST GET request sample below, there are 7 fields/attributes being requested, and the predicate applies a filter of MobileNumber=<your-mobile-number>.
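The analogy can be made concrete: a fields parameter plays the role of the SELECT list, and a q parameter plays the role of the WHERE predicate. Here is a minimal sketch; the endpoint and field names are hypothetical, not a real API:

```python
# Build a REST GET URL that mirrors SELECT <fields> ... WHERE <predicate>.
# The base URL and field names below are illustrative only.
base = "https://example.com/api/members"

fields = ["FirstName", "LastName", "EmailAddress", "MobileNumber",
          "City", "State", "PostalCode"]           # 7 fields, like A1..An
predicate = "MobileNumber=<your-mobile-number>"    # like WHERE P

url = f"{base}?fields={','.join(fields)}&q={predicate}"
print(url)
```

In a real call the predicate value would be URL-encoded; the point is that both the field list and the filter travel with the request, so the server does the narrowing.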


Similar strategies used in SQL query design may be helpful when designing solutions that involve REST calls.


Get only the fields/attributes that you need

When looking for a few attributes on a given object, it is very tempting to issue a GET REST call to retrieve the object, and then extract the needed attributes out of the response payload.

However, the API call most readily found in the documentation will most likely return the entire object.

For example, the following REST call will return the entire Member object as no attributes/fields were specified in the request. 


For better performance, include ONLY the fields that are needed further down in the integration flow. In most cases, these are the fields used in the next OIC Mapper in the flow. In the image above, you can see several OOTB fields being retrieved from the SaaS system that are NOT used in the OIC Mapper.
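As a sketch, the request can be trimmed down to only the fields the next Mapper actually consumes. The object URL and field names below are hypothetical; onlyData=true is a query parameter Oracle Fusion Applications REST APIs accept to suppress link metadata in the response:

```python
# Hypothetical Member resource; field names are illustrative.
base = "https://example.com/api/members/12345"

# Full-object call: no fields parameter, so the entire Member object comes back.
full_url = base

# Trimmed call: request only what the next OIC Mapper uses.
used_in_mapper = ["MemberId", "Status", "PointsBalance"]
trimmed_url = f"{base}?fields={','.join(used_in_mapper)}&onlyData=true"
print(trimmed_url)
```

The trimmed request moves the narrowing to the server, so the payload OIC has to receive, parse, and map shrinks from the whole object to three attributes.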



Inspect business events usage with performance-tinted glasses 

When an integration flow subscribes to a SaaS business event, Oracle SaaS will diligently send the object payload every time the event is raised, as it should. However, processing this OIC flow at the frequency at which the event is raised could be a challenge. Furthermore, not all business objects are created equal; some are heavier than others in terms of payload size.

We had a customer who was using an OOTB object with about 80 attributes. Customizations to this object bumped the attribute count up to 200+. Furthermore, trigger customizations were added on some of these fields; that is, these triggers executed logic every time those attributes were accessed.

If you have thousands of transactions taking place every hour, and each transaction generates an update on an object, this means that thousands of update business events will be raised every hour. The customer came to the conclusion that they could process these updates asynchronously, so we helped them use a different mechanism to handle their update processes.

Should a requirement arise for real-time processing of business events, a different approach would be needed. In that case, a lightweight payload and a simple integration would perform best.


Cache interim constant attribute values

Attributes unique to an object cannot be cached, as they vary from object to object.

However, some values are temporarily constant: they will not change for a given period of time.

For example: 

  • Exchange values for award points, e.g. $10 in purchases will award 100 points
  • 25,000 miles in a year to qualify for Elite status in an airline rewards program
  • Daily purchase incentive e.g. Double reward days at your favorite coffee shop


If we are retrieving data for thousands of customers, and we need 7 attributes of which 2 will have the same value at the current time, then make two kinds of REST calls instead of one:

  1. Make ONE REST call on 7 attributes for 1 object.
  2. Cache the 2 interim constant attributes in OIC variables (CurrencyCode and TerritoryRegion)
  3. Make ONE REST call on 5 attributes for the remaining thousands of objects.
  4. Use OIC Mapper to combine 5 attributes from response payload and 2 cached constant attributes.

This can add up to significant savings in response time if these fields (CurrencyCode and TerritoryRegion) trigger runtime lookup processing every time they are accessed.
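The four steps above can be sketched as follows. The fetch_fields helper is a stand-in for the real REST call, and the object data, endpoint shape, and non-constant field names are made up for illustration:

```python
# In-memory stand-in for the SaaS system; field names are hypothetical
# except CurrencyCode and TerritoryRegion from the steps above.
DATA = {
    1: {"Name": "Ann", "Points": 120, "Tier": "Gold", "Miles": 900,
        "City": "Austin", "CurrencyCode": "USD", "TerritoryRegion": "AMER"},
    2: {"Name": "Bob", "Points": 80, "Tier": "Silver", "Miles": 400,
        "City": "Boston", "CurrencyCode": "USD", "TerritoryRegion": "AMER"},
}

def fetch_fields(object_id, fields):
    # Placeholder for: GET /objects/{object_id}?fields=<comma-separated list>
    record = DATA[object_id]
    return {f: record[f] for f in fields}

ALL_FIELDS = ["Name", "Points", "Tier", "Miles", "City",
              "CurrencyCode", "TerritoryRegion"]
CONSTANT_FIELDS = ["CurrencyCode", "TerritoryRegion"]
VARIABLE_FIELDS = [f for f in ALL_FIELDS if f not in CONSTANT_FIELDS]

# 1. One call requesting all 7 attributes for a single object.
first = fetch_fields(1, ALL_FIELDS)

# 2. Cache the 2 interim-constant attributes (OIC variables in a real flow).
cache = {f: first[f] for f in CONSTANT_FIELDS}

# 3. Request only the 5 variable attributes for each remaining object,
# 4. then merge the cached constants back in (the OIC Mapper's job).
results = [first]
for object_id in [2]:
    record = fetch_fields(object_id, VARIABLE_FIELDS)
    record.update(cache)
    results.append(record)
```

Every object after the first travels over the wire with 5 attributes instead of 7, and the expensive constant fields are computed exactly once.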


Design effective filters

Filters are like WHERE predicates in the SQL Query realm.

You know a filter is effective when the data that you get on the response payload is consumed as is without need for further filtering.

We were working with a customer that needed to run some processes based on the status of a given integration. For this purpose, they were using the OIC Monitoring REST API.


However, this endpoint returns a list of ALL running instances in OIC.

In their integration, they were iterating over each of the instances to identify the ones that they were interested in.

To apply a filter in this use case, a ‘q’ filter on ‘code’ identifies the integration they were looking for.

q (optional): string

Filter parameters:

  • timewindow: 1h, 6h, 1d, 2d, 3d, RETENTIONPERIOD. Default value is 1h.
  • code: Integration identifier.

By using a q-based filter parameter, only the needed information is returned. The result set was reduced from thousands of instances to a dozen or two.
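With the filter pushed into the request, the flow no longer iterates over every instance. A sketch of the difference is below; the host is a placeholder, the integration code and instance data are made up, and the q syntax follows the parameter excerpt above:

```python
from urllib.parse import quote

# Placeholder OIC host; the path is the Monitoring API instances resource.
base = "https://<oic-host>/ic/api/integration/v1/monitoring/instances"

# Filter server-side on the integration identifier (hypothetical code).
q = "{timewindow: '1d', code: 'ORDER_SYNC'}"
filtered_url = f"{base}?q={quote(q)}"

# Without the filter, the flow must scan every running instance itself:
instances = [
    {"code": "ORDER_SYNC", "status": "RUNNING"},
    {"code": "INVOICE_LOAD", "status": "RUNNING"},
    {"code": "ORDER_SYNC", "status": "RUNNING"},
]
matches = [i for i in instances if i["code"] == "ORDER_SYNC"]
```

The client-side loop over instances is exactly the work the q filter eliminates: with filtered_url, only the matching instances ever leave the server.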


Use indexed fields as predicate filters

Most OOTB fields that could be used as filters already have an index. However, when objects are customized and new fields added, index creation might need to be explicitly configured.

When adding indexed fields to objects through Application Composer, index only the most frequently searched custom fields: to preserve search performance, the number of fields you can index is limited.

It is important to identify these fields early in the implementation cycle. Creating indexes late may require large data re-loads to populate the newly indexed fields.

Make sure that the columns being used as filters in your predicate have indexes on them.



The most important performance and scalability tip I can share with you is START EARLY! Most implementations leave performance testing until the very end. When a project is running late, this important step is rushed and not thoroughly completed. Identifying your peak volume profile and testing against it will help uncover problems that you may otherwise only face at runtime. Retrieve only the data that you need, in as-small-as-possible data sets. This will pay off BIG time, especially during PRIME time.

