Oracle GoldenGate: Transactional Data Delivery to WebLogic Java Messaging Service


Oracle GoldenGate (OGG) includes an optional set of adapters that add functionality to apply transactional data to targets other than a relational database. This article presents basic configuration options and discusses best practices as they relate to Oracle GoldenGate for Java and the delivery of data to the Oracle WebLogic JMS Server (WebLogic).

Installation and configuration of WebLogic and OGG database capture are outside the scope of this article. However, to understand the concepts presented, readers should have an understanding of these products and their use.

Main Article

The OGG Java Adapter

OGG for Java provides the ability to execute code written in Java from the OGG Extract program. OGG for Java consists of two components:

  • A dynamically linked or shared library, implemented in C/C++, integrating as a user exit (UE) with the OGG Extract process through a C API.
  • A set of Java libraries (jars), which comprise the OGG Java API. This Java framework communicates with the user exit through the Java Native Interface (JNI).

Through the OGG Java API, transactional data captured by OGG can be delivered to targets other than a relational database, such as Java Message Service (JMS), disk files, or a custom application’s Java API.

Core Product Specific

The OGG Java Adapter is core product specific, meaning the version 11.2 adapter is designed to function only with the OGG 11.2 core product. This is due to changes in the OGG Trail format and other core product improvements, which prevent the UE from being linked with other versions of the Extract program.

Implementation Process

The process for implementing OGG delivery to JMS includes:

  1. Download and install the OGG Java Adapter package.
  2. Modify the existing Primary Change Data Capture (Extract) to get before images for updates.
  3. Generate definitions for the source data.
  4. Set up an Extract Data Pump to move data from the database server to the target implementation server.
  5. Configure the WebLogic Queue or Topic.
  6. Configure the target integration server OGG Manager.
  7. Configure the OGG Java UE and JMS Handler.
  8. Create the OGG Java Extract Data Pump (Java Data Pump).

Figure 1 provides an overview of the application environment.

Figure 1. Environment Overview


Install the Adapter Package

Because the OGG Java Adapter requires links to the WebLogic Java classes, it must be installed on a server running WebLogic. The adapter package consists of two parts, the OGG Java Adapter and Oracle GoldenGate Generic.

OGG Java Adapter is comprised of the user exit, java libraries, and sample configuration files.

Oracle GoldenGate Generic is a version of the OGG application with no built-in database support.

To install the package:

  1. Create a directory on the WebLogic server to hold the OGG files.
  2. Unzip OGG Generic into this location.
  3. Unzip OGG Java Adapter into this location.
  4. From a Linux shell, start the GGSCI utility and execute the command CREATE SUBDIRS.

Best Practice: Do not install OGG for Java on your database server.

If WebLogic and OGG are already installed on your database server, you do have the option of installing only the OGG Java Adapter into your existing OGG environment. However, be aware that the formatting of XML messages by the Java Adapter is a CPU intensive task and could impact a busy database server.

Modify the Primary Extract

The primary Extract which reads data from the database transaction logs must be set to capture update before images. This is accomplished by adding the option GETUPDATEBEFORES to the parameter file. Stop and restart the Extract to activate the configuration change.

EXTRACT ecdc
USERID ggadmin, PASSWORD Oracle1
GETUPDATEBEFORES
EXTTRAIL ./dirdat/cd
TABLE east.*;

Generate Source Definitions

The OGG DEFGEN utility reads the data structure of all defined source database tables and creates a representation of the structure in a text file. This text file is used to identify and parse the data within the Extract Trail. Run DEFGEN from a Linux shell and transfer the output text file to the OGG dirdef directory on the target integration server.

DEFSFILE ./dirdef/east.def, purge
USERID ggadmin, PASSWORD Oracle1
TABLE east.*;

[oracle@oel5ora11 OGG_112]$ ./defgen paramfile ./dirdef/defgen.prm

Create the Extract Data Pump

An Extract Data Pump is a special OGG process that reads data from the Extract Trail and packages this information for transmission to the integration target server via TCP/IP.

Create the parameter file, and then execute GGSCI commands to have the Extract Data Pump read from the ./dirdat/cd trail, and write to the OGG Trail ./dirdat/jm on the target server.

EXTRACT p_jms
RMTHOST oel5wl, MGRPORT 7809
RMTTRAIL ./dirdat/jm
TABLE east.*;

GGSCI (oel5ora) 1> add extract p_jms, exttrailsource ./dirdat/cd
GGSCI (oel5ora) 2> add rmttrail ./dirdat/jm, extract p_jms, megabytes 100

By default the Extract Data Pump, P_JMS, will start reading from the trail ./dirdat/cd000000. If the Extract has been running, it is unlikely this file exists so we need to adjust the starting point of the Extract Data Pump.

To determine the starting trail file for the Extract Data Pump, query the Extract Group for its current write location. This is done via the GGSCI command info {extract} detail.

GGSCI (oel5ora) 3> info ecdc detail
EXTRACT    ECDC      Last Started 2013-05-06 10:59   Status RUNNING
Checkpoint Lag       00:00:00 (updated 65:10:46 ago)
Log Read Checkpoint  Oracle Redo Logs
2013-05-06 15:41:34  Seqno 82, RBA 27873280
SCN 0.10639797 (10639797)
Target Extract Trails:
Remote Trail Name                             Seqno        RBA           Max MB
./dirdat/cd                                    1          14107078        100

In the example above, we see that the Extract is writing to sequence number 1 of the file ./dirdat/cd. This corresponds to the disk location ./dirdat/cd000001. This output also shows us that the trail files are 100 MB in size, and the last data written was at byte position 14,107,078 within the file.

If the Extract is running, as in the example above, the last write position, or RBA, will increase as data is added to the trail file. So we will alter the new Extract Data Pump to read from the beginning of the file, and then set a time for the Extract Data Pump to begin processing data.

GGSCI (oel5ora) 4> alter p_jms, extseqno 1, extrba 0
GGSCI (oel5ora) 5> alter p_jms, begin now

When the P_JMS Extract Data Pump is started, it will read from the beginning of the file ./dirdat/cd000001. Transactions with a commit time equal to or after the process start time of the Extract Data Pump will be transmitted to the target integration server and written to the ./dirdat/jm OGG Trail.
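The mapping between a trail's sequence number and its on-disk file name, as seen above with sequence number 1 of ./dirdat/cd becoming ./dirdat/cd000001, is simply a six-digit, zero-padded suffix appended to the trail prefix. A small illustrative sketch (the class and method names are hypothetical, not part of any OGG API):

```java
// Illustrates how an OGG trail prefix plus sequence number maps to the
// on-disk trail file name: a six-digit, zero-padded numeric suffix.
public class TrailName {
    public static String trailFile(String prefix, int seqno) {
        return String.format("%s%06d", prefix, seqno);
    }

    public static void main(String[] args) {
        System.out.println(trailFile("./dirdat/cd", 1)); // ./dirdat/cd000001
    }
}
```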

WebLogic Best Practices for OGG

When preparing WebLogic to receive data from OGG, there are several points to consider which will affect OGG performance.

JMS Server

Best Practice: Define a Persistent Store.

The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services. The persistent store supports persistence to a file-based store or to a JDBC-enabled database.

The key features of the persistent store include:

  • Default file store for each server instance that requires no configuration.
  • One store is shareable by multiple subsystems, as long as they are all targeted to the same server instance or migratable target.
  • High-performance throughput and transactional support.
  • Modifiable parameters that let you create customized file stores and JDBC stores.
  • Monitoring capabilities for persistent store statistics and open store connections.
  • In a clustered environment, a customized store can be migrated from an unhealthy server to a backup server, either on the whole-server level or on the service-level.

Best Practice: Define a Message Buffer Size.

The Message Buffer Size option specifies the amount of memory that will be used to store message bodies in memory before they are paged out to disk. The default value of Message Buffer Size is approximately one-third of the maximum heap size for the JVM, or a maximum of 512 megabytes. The larger this parameter is set, the more memory JMS will consume when many messages are waiting on queues or topics. Once this threshold is crossed, JMS may write message bodies to the directory specified by the Paging Directory option in an effort to reduce memory usage below this threshold.

It is important to remember that this parameter is not a quota. If the number of messages on the server passes the threshold, the server writes the messages to disk and evicts the messages from memory as fast as it can to reduce memory usage, but it will not stop accepting new messages. It is still possible to run out of memory if messages are arriving faster than they can be paged out. Users with high messaging loads who wish to support the highest possible availability should consider setting a quota, or setting a threshold and enabling flow control to reduce memory usage on the server.
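The default sizing rule described above, roughly one-third of the JVM maximum heap capped at 512 megabytes, can be illustrated with a little arithmetic. The class below is a hypothetical helper for demonstration only, not part of any WebLogic API:

```java
// Illustrates the default Message Buffer Size rule described above:
// approximately one-third of the JVM maximum heap, capped at 512 MB.
// This class is an illustration, not a WebLogic API.
public class MessageBufferDefault {
    static final long CAP = 512L * 1024 * 1024; // 512 MB ceiling

    public static long defaultBufferBytes(long maxHeapBytes) {
        return Math.min(maxHeapBytes / 3, CAP);
    }

    public static void main(String[] args) {
        // 1 GB heap: one-third of the heap is below the 512 MB cap
        System.out.println(defaultBufferBytes(1024L * 1024 * 1024));
        // 4 GB heap: one-third exceeds the cap, so 512 MB applies
        System.out.println(defaultBufferBytes(4096L * 1024 * 1024));
    }
}
```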

JMS Server, ConnectionFactory, Queue, and Topic

Best Practice: Enable Flow Control.

When either a JMS server or one of its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second).

Producers will limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds). If at each interval, the server/destination is still armed, then the producer continues to move its rate down to its prescribed flow minimum amount.

As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.
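The throttling behavior described above can be sketched as a simple model. The class below is an illustrative simulation of the documented producer behavior, not WebLogic's implementation; the class name, linear step size, and interval handling are assumptions made for demonstration:

```java
// Illustrative model of WebLogic JMS producer flow control as described
// above: while the server/destination is armed, the producer steps its
// rate down toward its flow minimum; once unarmed, it recovers gradually
// toward its flow maximum. Step size and naming are assumptions.
public class FlowControlModel {
    private final int flowMaximum;  // messages/sec ceiling
    private final int flowMinimum;  // messages/sec floor while armed
    private int currentRate;
    private final int step;         // assumed linear adjustment per interval

    public FlowControlModel(int flowMaximum, int flowMinimum, int flowSteps) {
        this.flowMaximum = flowMaximum;
        this.flowMinimum = flowMinimum;
        this.currentRate = flowMaximum; // producers start at flow maximum
        this.step = (flowMaximum - flowMinimum) / flowSteps;
    }

    // Called once per evaluation interval (for example, every 10 seconds).
    public int onInterval(boolean armed) {
        if (armed) {
            // Still armed: continue moving the rate down toward the minimum.
            currentRate = Math.max(flowMinimum, currentRate - step);
        } else {
            // Unarmed: still flow-controlled until the maximum is reached.
            currentRate = Math.min(flowMaximum, currentRate + step);
        }
        return currentRate;
    }

    public static void main(String[] args) {
        FlowControlModel fc = new FlowControlModel(1000, 100, 6);
        System.out.println(fc.onInterval(true));  // 850: armed, rate drops
        System.out.println(fc.onInterval(true));  // 700: still armed
        System.out.println(fc.onInterval(false)); // 850: unarmed, recovering
    }
}
```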

JMS Queue or Topic?

A JMS Queue provides point-to-point message communications. Queues have a one-to-one message relationship: each message written to a queue is delivered to, and consumed by, a single consumer. Queues retain all messages sent to them until the messages are consumed or until the messages expire.

A JMS Topic provides publish/subscribe communications. Topics have a many-to-many relationship. Multiple publishers can write messages to a topic, and many consumers can read the messages. Topics retain messages only as long as it takes to distribute them to current subscribers.

Typically, the point-to-point model, or JMS Queue, is used when implementing GoldenGate data delivery to JMS. However, the publish/subscribe method, or JMS Topic, may be used in complex environments.

Configure the Integration Server OGG Environment

On the target integration server, configure and create the OGG Manager and Java Data Pump. A Java Data Pump functions the same as a normal Extract Data Pump except it uses the special Java User Exit shared object. The Java User Exit contains instructions for converting OGG Trail data for submission to the JMS Handler.

Define Java Virtual Machine Library Location

The OGG Java User Exit needs a system environment variable that defines the Java Virtual Machine (JVM) Library. This environment variable is Operating System specific:

  • Windows: set PATH to find jvm.dll
  • Linux/Solaris: set LD_LIBRARY_PATH to find libjvm.so
  • AIX, z/OS: set LIBPATH to find libjvm.so

My test environment is running Oracle Enterprise Linux 64-bit; therefore, the environment variable setting is similar to:

export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH

If there are multiple entries in the library path environment variable, as shown above, the JVM Library must be specified first.

OGG Manager

As with any OGG implementation, a Manager process must be configured and running. A sample Manager parameter file is shown below.

Best Practice: Use the Manager parameter DYNAMICPORTLIST to define valid OGG communication ports.

Best Practice: Use the Manager parameter PURGEOLDEXTRACTS to define OGG Trail retention policy.

PORT 7809
DYNAMICPORTLIST 7810-7820
PURGEOLDEXTRACTS ./dirdat/*, usecheckpoints, minkeepdays 3

If a firewall exists between the Source Database Server and the Target Integration Server, the OGG ports must be open for two-way communication.

OGG Java Data Pump

Create the parameter file for the Java Data Pump, and set it to read from the OGG Trail produced by the Extract Data Pump on the source database server.

EXTRACT p_oggq
SOURCEDEFS ./dirdef/east.def
CUserExit ./libggjava_ue.so CUSEREXIT PassThru IncludeUpdateBefores
Table EAST.*;

GGSCI (oel5wl) 1> add extract p_oggq, exttrailsource ./dirdat/jm

When the P_OGGQ Java Data Pump is started, the Java User Exit will check for a file named p_oggq.properties in the OGG dirprm directory. This {group}.properties file must exist before the Java Data Pump can be started.

Create the {group}.properties File

The {group}.properties file contains information required by the JMS Handler portion of the Java User Exit to:

  • establish communication with the JMS Queue or Topic.
  • format the data for delivery.

Using our example for the P_OGGQ Java Data Pump, we create a text file in the OGG dirprm directory that contains the runtime options shown below.

### ###
### Path to WebLogic jars ###
### JNDI properties
### JMS Handler
### native library config ###

Lines denoted by the character “#” are comments.

The Oracle GoldenGate Adapters Administration Guide for Java, available for download from the Oracle Technology Network, contains details about all of the shown properties. We will discuss several to provide greater insight into the configuration requirements.


The gg.classpath property sets the path to the Java Archive (jar files) required by WebLogic.

JNDI Properties

Here we set the details for connecting to the desired WebLogic server:

  • java.naming.provider.url=t3://{server}:{port}
  • java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
  • java.naming.security.principal={WebLogic user login}
  • java.naming.security.credentials={WebLogic user password}

JMS Handler

In the JMS Handler section, set a name to identify the handler and provide information for the delivery and format of the data:

  • gg.handlerlist sets a unique name for identifying active handlers. A property file may contain definitions for more than one handler.
  • gg.handler.{name}.type is either a predefined value for a built-in handler, or a fully qualified Java class name. The type 'jms' defines this handler as one that sends messages to a JMS Queue or Topic.
  • gg.handler.{name}.format specifies the format used to transform operations and transactions into messages sent to JMS. 'minxml' is an internal format used for generating XML output.
  • gg.handler.{name}.mode specifies whether to output one operation per message (op) or one transaction per message (tx). For environments where database transactions can be very large, op is the recommended setting.
  • gg.handler.{name}.destinationType specifies the type of destination (queue or topic) where data is being written.
  • gg.handler.{name}.destination sets the name of the destination as configured in WebLogic.
  • gg.handler.{name}.connectionFactory sets the name of the ConnectionFactory as configured in WebLogic.
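Putting these pieces together, a minimal {group}.properties file might look like the sketch below. This is an illustrative sketch only: the handler name (oggq), JNDI URL, credentials, queue and ConnectionFactory JNDI names, and jar paths are placeholders that must be replaced with values from your own WebLogic environment.

```properties
### p_oggq.properties -- illustrative sketch; all names and paths below
### are placeholders for your environment, not a working configuration.

### Path to WebLogic jars ###
gg.classpath=/u01/app/oracle/middleware/wlserver/server/lib/weblogic.jar

### JNDI properties
java.naming.provider.url=t3://wlshost:7001
java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.security.principal=weblogic
java.naming.security.credentials=welcome1

### JMS Handler
gg.handlerlist=oggq
gg.handler.oggq.type=jms
gg.handler.oggq.format=minxml
gg.handler.oggq.mode=op
gg.handler.oggq.destinationType=queue
gg.handler.oggq.destination=jms/oggQueue
gg.handler.oggq.connectionFactory=jms/oggConnectionFactory

### native library config ###
goldengate.userexit.writers=jvm
jvm.bootoptions=-Djava.class.path=ggjava/ggjava.jar -Xmx512m
```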

XML, XML2, and MINXML Formats

The JMS handler can be set to generate output in either XML, XML2, or MINXML format. Each has its own advantages and usage; however, MINXML consumes fewer resources and is the recommended format. The table below illustrates this point by showing the same record written to a JMS Queue in each format.

Update operation, XML Format:

<operation table='EAST.NEXT_CUST' type='UPDATE_FIELDCOMP_PK' ts='2013-05-10 11:34:21.001009' pos='00000000000000001143' numCols='1'>
<col name='CUSTOMERS_ID' index='0'>

Update operation, XML2 Format:

<operation table='EAST.NEXT_CUST' type='UPDATE' opType='UPDATE_FIELDCOMP_PK' txInd='WHOLE' ts='2013-05-10 11:34:21.001009' numCols='1' position='00000000000000001143'>
<column name='CUSTOMERS_ID' index='0'>

Update operation, MINXML Format:

<t><o t='EAST.NEXT_CUST' s='U' d='2014-02-26 15:40:12.982034' p='00000000000000482945'><c i='0'><b><![CDATA[31]]></b><a><![CDATA[32]]></a></c></o></t>

Insert operation, XML Format:

<operation table='EAST.CUSTOMERS' type='INSERT' ts='2013-05-10 11:34:21.001009' pos='00000000000000001399' numCols='10'>
<col name='CUSTOMERS_ID' index='0'>
<before missing='true'/>
<col name='CUSTOMERS_GENDER' index='1'>
<before missing='true'/>
<col name='CUSTOMERS_FIRSTNAME' index='2'>
<before missing='true'/>
<col name='CUSTOMERS_LASTNAME' index='3'>
<before missing='true'/>
<col name='CUSTOMERS_EMAIL_ADDRESS' index='4'>
<before missing='true'/>
<col name='CUSTOMERS_DEFAULT_ADDRESS_ID' index='5'>
<before missing='true'/>
<col name='CUSTOMERS_TELEPHONE' index='6'>
<before missing='true'/>
<col name='CUSTOMERS_FAX' index='7'>
<before missing='true'/>
<after isNull='true'/>
<col name='CUSTOMERS_PASSWORD' index='8'>
<before missing='true'/>
<col name='CUSTOMERS_NEWSLETTER' index='9'>
<before missing='true'/>

Insert operation, XML2 Format:

<operation table='EAST.CUSTOMERS' type='INSERT' opType='INSERT' txInd='BEGIN' ts='2013-05-10 11:34:21.001009' numCols='10' position='00000000000000001399'>
<column name='CUSTOMERS_ID' index='0'>
<column name='CUSTOMERS_GENDER' index='1'>
<column name='CUSTOMERS_FIRSTNAME' index='2'>
<column name='CUSTOMERS_LASTNAME' index='3'>
<column name='CUSTOMERS_EMAIL_ADDRESS' index='4'>
<column name='CUSTOMERS_DEFAULT_ADDRESS_ID' index='5'>
<column name='CUSTOMERS_TELEPHONE' index='6'>
<column name='CUSTOMERS_FAX' index='7'>
<after-value isNull='true'/>
<column name='CUSTOMERS_PASSWORD' index='8'>
<column name='CUSTOMERS_NEWSLETTER' index='9'>

Insert operation, MINXML Format:

<t><o t='EAST.CUSTOMERS' s='I' d='2014-02-26 15:40:10.983445' p='00000000000000062767'><c i='0'><a><![CDATA[7]]></a></c><c i='1'><a><![CDATA[m]]></a></c><c i='2'><a><![CDATA[Yvonne]]></a></c><c i='3'><a><![CDATA[Notimberlake]]></a></c><c i='4'><a><![CDATA[]]></a></c><c i='5'><a><![CDATA[1]]></a></c><c i='6'><a><![CDATA[3075734334]]></a></c><c i='7'><an/></c><c i='8'><a><![CDATA[Password]]></a></c><c i='9'><a><![CDATA[0]]></a></c></o><o t='EAST.CUSTOMERS_INFO' s='I' d='2014-02-26 15:40:10.983445' p='00000000000000063047'><c i='0'><a><![CDATA[7]]></a></c><c i='1'><a><![CDATA[2014-02-26:10:40:11.912917000]]></a></c><c i='2'><a><![CDATA[1]]></a></c><c i='3'><a><![CDATA[2014-02-26:10:40:11.912917000]]></a></c><c i='4'><an/></c><c i='5'><a><![CDATA[0]]></a></c></o></t>
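Because MINXML is compact but terse, consumers typically parse it with a standard XML parser. The sketch below is a hypothetical example, using only the JDK's built-in DOM parser, that extracts the table name, operation type, and before/after values from a MINXML update message like the one shown above; the class and method names are illustrations, not part of any OGG or WebLogic API.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class MinXmlDemo {
    // Parse one MINXML message and summarize the first operation as
    // "table opType before->after" using the first column's values.
    public static String summarize(String msg) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(msg.getBytes(StandardCharsets.UTF_8)));
        Element op = (Element) doc.getElementsByTagName("o").item(0);  // operation
        Element col = (Element) op.getElementsByTagName("c").item(0);  // first column
        String before = text(col, "b");  // <b> holds the before image
        String after = text(col, "a");   // <a> holds the after image
        return op.getAttribute("t") + " " + op.getAttribute("s")
                + " " + before + "->" + after;
    }

    private static String text(Element col, String tag) {
        NodeList n = col.getElementsByTagName(tag);
        return n.getLength() == 0 ? "" : n.item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String msg = "<t><o t='EAST.NEXT_CUST' s='U' d='2014-02-26 15:40:12.982034'"
                + " p='00000000000000482945'><c i='0'><b><![CDATA[31]]></b>"
                + "<a><![CDATA[32]]></a></c></o></t>";
        System.out.println(summarize(msg)); // EAST.NEXT_CUST U 31->32
    }
}
```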

Troubleshooting Tips

When troubleshooting Java Data Pump issues, sometimes the root cause of the failure is not written to the logs or OGG Report file. Best practice is to run the Java Data Pump from a terminal shell. When run from a terminal shell, all output is written to stdout. For example, to run the P_OGGQ Java Data Pump from a terminal shell:

[oracle@oel5wl OGGJMS_112]$ ./extract pf ./dirprm/p_oggq.prm


Summary

Oracle GoldenGate may be used to apply data to the Oracle WebLogic JMS Server via a Java Adapter add-on. This article presented information to help users quickly install and configure this delivery mechanism.
