DB Adapter - Distributed Polling (SKIP LOCKED) Demystified

July 12, 2013 | 4 minute read
Greg Mally
Consulting Solutions Architect

Introduction

When leveraging the Distributed Polling feature of the Oracle Database Adapter (DB Adapter), understanding what is going on under the covers can help with the tuning effort.  This blog will attempt to demystify the configuration and behavior of Distributed Polling with SKIP LOCKED.

Main Article

Oracle SOA Suite 11g offers a set of Java EE Connector Architecture (JCA) technology adapters for various integration scenarios.  One of the more common scenarios is integration with a database.  The DB Adapter provides the normal Create, Read, Update, and Delete (CRUD) operations as well as the ability to poll a database for events (e.g., new or changed records).  The polling feature of the DB Adapter is very powerful and useful, and it comes in several variations referred to as Polling Strategies.

A widely used polling strategy, especially in High Availability configurations, is the Logical Delete Polling Strategy because it performs better than a physical delete.  However, when you have a clustered environment where multiple nodes are polling for the same data, it is likely that the same record will be processed more than once.  To avoid this problem, the DB Adapter has a Distributed Polling technique that utilizes an Oracle Database feature: SELECT FOR UPDATE SKIP LOCKED.
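To see why SKIP LOCKED matters in a cluster, the following is a minimal JDBC sketch of two competing pollers.  The table EVENT_QUEUE, its columns, and the connection details are hypothetical and for illustration only; the DB Adapter generates and manages its own SQL, so this is not the adapter's actual code.

import java.sql.*;

public class SkipLockedSketch {

    // Hypothetical polling query: EVENT_QUEUE, ID, PAYLOAD, and STATUS are illustrative names.
    private static final String POLL_SQL =
        "SELECT id, payload FROM event_queue WHERE status = 0 FOR UPDATE SKIP LOCKED";

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//localhost:1521/ORCL";   // assumed connection details
        try (Connection pollerA = DriverManager.getConnection(url, "demo", "demo");
             Connection pollerB = DriverManager.getConnection(url, "demo", "demo")) {

            pollerA.setAutoCommit(false);   // each poller works inside its own transaction
            pollerB.setAutoCommit(false);

            // Poller A fetches (and thereby locks) a handful of unread rows.
            try (Statement stmtA = pollerA.createStatement()) {
                stmtA.setFetchSize(4);                            // roughly analogous to MaxTransactionSize
                try (ResultSet rsA = stmtA.executeQuery(POLL_SQL)) {
                    for (int i = 0; i < 4 && rsA.next(); i++) {
                        System.out.println("Poller A locked row " + rsA.getLong("id"));
                    }

                    // While A's transaction is still open, B runs the same query.
                    // SKIP LOCKED means B silently skips A's rows instead of blocking on them.
                    try (Statement stmtB = pollerB.createStatement()) {
                        stmtB.setFetchSize(4);
                        try (ResultSet rsB = stmtB.executeQuery(POLL_SQL)) {
                            for (int i = 0; i < 4 && rsB.next(); i++) {
                                System.out.println("Poller B locked row " + rsB.getLong("id"));
                            }
                        }
                    }
                    pollerB.commit();
                }
            }
            pollerA.commit();
        }
    }
}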

What Does It All Mean?

Although the standard Oracle documentation goes into a fair amount of detail on distributed polling with SKIP LOCKED, it is difficult to fully understand all the moving parts, especially when you factor in the DB Adapter properties of PollingInterval (PI), MaxTransactionSize (MTS), MaxRaiseSize (MRS), RowsPerPollingInterval (RPPI), and NumberOfThreads (NOT).  The old adage of "A picture is worth a thousand words" really is applicable for this situation.  To set the stage for "the pictures", the following .jca will be used as a cross reference:

<adapter-config name="ReadRow" adapter="Database Adapter" wsdlLocation="ReadRow.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/DB/ATeamDemos" UIConnectionName="soabpm-vm" adapterRef=""/>
  <endpoint-activation portType="ReadRow_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="ReadRow.DBPollingSample"/>
      <property name="QueryName" value="ReadRowSelect"/>
      <property name="MappingsMetaDataURL" value="ReadRow-or-mappings.xml"/>
      <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
      <property name="MarkReadColumn" value="STATUS"/>
      <property name="MarkReadValue" value="100"/>
      <property name="MarkUnreadValue" value="0"/>
      <property name="PollingInterval" value="5"/>
      <property name="MaxRaiseSize" value="2"/>
      <property name="MaxTransactionSize" value="4"/>
      <property name="NumberOfThreads" value="2"/>
      <property name="ReturnSingleResultSet" value="false"/>
      <property name="RowsPerPollingInterval" value="32"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>

Deployment Time (.jca)

When a DB Adapter .jca is deployed, the first thing that happens is the polling threads are created based on the NumberOfThreads property.  These threads will poll independently of each other, each starting a transaction and issuing a SELECT FOR UPDATE SKIP LOCKED.  A database cursor is returned for all available rows; if there are no records matching the SELECT, the thread will release the transaction and sleep for the duration specified by the PollingInterval property.  The following represents what happens at deployment time:

Database Adapter .jca Deployment Time

Distributed Polling (SKIP LOCKED)

Once records appear in the database that match the polling SELECT statement, the real fun begins.  Each thread will wake up after sleeping, start a transaction, and issue the SELECT FOR UPDATE SKIP LOCKED.  Again, a database cursor is returned, but this time there are rows that match the SELECT criteria.  Each thread will now issue a FETCH for the number of rows defined by the MaxTransactionSize property.  It is the FETCH that locks the rows in the database (SKIP LOCKED), preventing any other FETCH from retrieving those rows.  This allows each polling thread to concentrate only on SELECT, FETCH, and process without concern for duplicate processing.

Now that each thread has its own set of rows to work with, it will loop over the fetched rows and group them based on the MaxRaiseSize property.  Each grouping will be delivered to the configured destination and, once all rows have been delivered successfully, the transaction is committed.  Each thread will then compare how many rows it has delivered to the value specified by the RowsPerPollingInterval property.  If the rows delivered are equal to or greater than the RPPI property value, the thread will sleep.  Otherwise, it will repeat the whole process over again.

Database Adapter .jca Distributed Polling
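Putting the pieces together, the following is a simplified, Java-flavored sketch of what each of the NumberOfThreads polling threads does, using the property values from the .jca above.  The helper methods (beginTransaction, selectForUpdateSkipLocked, fetch, partition, deliverToEndpoint, markAsRead, and so on) are hypothetical placeholders for adapter internals; this is illustrative pseudocode, not the DB Adapter's actual implementation.

// Illustrative pseudocode only -- helper methods and types are hypothetical placeholders.
int pollingIntervalSeconds = 5;   // PI   - sleep time when idle or throttled
int maxTransactionSize     = 4;   // MTS  - rows fetched (and locked) per transaction
int maxRaiseSize           = 2;   // MRS  - rows delivered to the endpoint per message
int rowsPerPollingInterval = 32;  // RPPI - rows to process per thread before sleeping

while (true) {
    int rowsThisInterval = 0;
    while (rowsThisInterval < rowsPerPollingInterval) {
        beginTransaction();                                      // XA transaction
        Cursor cursor = selectForUpdateSkipLocked();             // cursor over unread rows
        List<Row> batch = fetch(cursor, maxTransactionSize);     // the FETCH locks these rows
        if (batch.isEmpty()) {
            rollback();                                          // release the transaction
            break;                                               // nothing to do; go to sleep
        }
        for (List<Row> group : partition(batch, maxRaiseSize)) {
            deliverToEndpoint(group);                            // raise MRS rows at a time
        }
        markAsRead(batch);                                       // STATUS = MarkReadValue (100)
        commit();                                                // all-or-nothing for the MTS batch
        rowsThisInterval += batch.size();
    }
    sleepSeconds(pollingIntervalSeconds);                        // wait for the next polling interval
}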

Tuning/Behavior Considerations

It's important to understand that the MaxTransactionSize represents the scope of the XA transaction for row processing.  That is, delivery of the fetched MTS rows is all or nothing: if an error is encountered, the entire batch is rolled back.  This could introduce complexity for the destination with regard to error handling.

The RowsPerPollingInterval property is used to throttle the polling threads.  If the RPPI property is not set, the polling threads will continue to FETCH and process rows until there are no more available.  If the RPPI property is set, the smaller the value, the slower the processing.  Another way to view the RPPI value is with the following scenarios:

  • MTS = 10 and RPPI = 10, then each thread will only process one MTS batch before sleeping
  • MTS = 10 and RPPI = 20, then each thread will process two MTS batches before sleeping
  • MTS = 10 and RPPI = 30, then each thread will process three MTS batches before sleeping

If the equation is not clear by now, it is RPPI / MTS = number of MTS fetches before the polling-interval sleep.  For the sample .jca above (RPPI = 32, MTS = 4), each thread will process 32 / 4 = 8 MTS batches before sleeping.
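As a quick sanity check, here is a small, self-contained snippet (the class and method names are just for illustration) that reproduces the scenarios above along with the values from the sample .jca, rounding up when RPPI is not an exact multiple of MTS:

public class RppiMath {

    // Number of MTS batches a thread processes before it sleeps for the polling interval,
    // assuming full MTS batches are available on each FETCH.
    static int batchesBeforeSleep(int rowsPerPollingInterval, int maxTransactionSize) {
        return (int) Math.ceil((double) rowsPerPollingInterval / maxTransactionSize);
    }

    public static void main(String[] args) {
        System.out.println(batchesBeforeSleep(10, 10)); // 1 batch
        System.out.println(batchesBeforeSleep(20, 10)); // 2 batches
        System.out.println(batchesBeforeSleep(30, 10)); // 3 batches
        System.out.println(batchesBeforeSleep(32, 4));  // 8 batches (sample .jca)
    }
}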
