Oracle GoldenGate: Transactional Data Capture from WebLogic Java Messaging Service


Oracle GoldenGate (OGG) includes an optional set of adapters that adds functionality to capture transactional data from a JMS Queue or Topic and format this data for apply to a relational database. This article presents basic configuration options and discusses best practices as they relate to Oracle GoldenGate for Java and the capture of data from the Oracle WebLogic JMS Server (WebLogic).

Installation and configuration of WebLogic and OGG database apply are outside the scope of this article. However, to understand the concepts presented, readers should have a working knowledge of these products and their use.

Main Article

The OGG Message Capture Adapter is implemented as a Vendor Access Module (VAM) plug-in to a generic Extract process. The generic Extract is contained in a special Oracle GoldenGate build that has no database functionality. The VAM is used to interact with WebLogic and read messages from a JMS Queue or Topic. An external set of files defines the properties, rules, and definitions for how messages are to be parsed and mapped to records in the OGG Trail.

Figure 1 depicts an overview of the JMS message capture environment.

Key components of the OGG message capture environment include:

  1. Extract Parameter File: Contains parameters that identify the VAM shared object library and property file location.
  2. Properties File: Contains values that set the connection properties and parsing rules for reading the XML messages.
  3. GENDEF: A separate utility that uses the properties file and parser-specific data definitions to create an OGG source definitions file.
  4. Data Pump Extract: Reads the OGG Trail created by the Message Capture Extract and delivers the records to the target database server.

Functional Overview

When the Extract is started a connection is made to the message system through a generic JMS interface. If a connection cannot be established, the Extract will abend.

If a connection is established, the JMS Handler portion of the VAM shared object requests the next message from the JMS Queue. To consume messages, a local JMS transaction is started. A read is executed to retrieve the next message from the Queue, and the contents of the message are returned. If the read fails because no message exists, an end-of-file indication is returned and the Extract sleeps for the amount of time specified by the EOFDELAY parameter setting (default: 1 second).

When all of the messages that make up a transaction have been read, the VAM shared object parses the messages into the OGG Trail atomic data format and the Extract writes the transaction records to the OGG Trail. The local JMS transaction is then committed and the messages are removed from the Queue. If an error occurs during the message read, parse, or write, the current JMS transaction is rolled back, leaving the messages in the Queue, and the Extract abends.
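The read/commit/rollback cycle described above can be sketched in a few lines of Python. This is a simplified, hypothetical model (one message per transaction) purely to illustrate the transactional semantics; the actual adapter is native code driving a Java JMS client.

```python
import time

class LocalTxQueue:
    """Toy stand-in for a JMS queue consumed under a local transaction:
    receive() only marks messages as pending; commit() removes them,
    and rollback() makes them visible again."""
    def __init__(self, messages):
        self._messages = list(messages)
        self._pending = 0

    def receive(self):
        if self._pending < len(self._messages):
            msg = self._messages[self._pending]
            self._pending += 1
            return msg
        return None                      # EOF: no message available

    def commit(self):
        del self._messages[:self._pending]
        self._pending = 0

    def rollback(self):
        self._pending = 0

def capture_once(q, trail, parse, eof_delay=1.0):
    """One pass of the Extract loop described above."""
    try:
        msg = q.receive()
        if msg is None:
            time.sleep(eof_delay)        # EOFDELAY behavior on end-of-file
            return False
        trail.append(parse(msg))         # parse into trail format and write
        q.commit()                       # commit removes the message from the queue
        return True
    except Exception:
        q.rollback()                     # on error the message stays on the queue
        raise                            # and the Extract abends
```

A parse or write failure leaves the queue untouched, which is exactly why the Extract can be safely restarted after an abend.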

Core Product Specific

The OGG Message Adapter VAM is core-product specific, meaning the version 11.2 adapter is designed to function only with the OGG 11.2 core product. This is due to changes in the core product that prevent the VAM from being linked with other versions of the Extract program.

Implementation Process

The process for implementing OGG capture from JMS includes:

  1. Download and install the OGG Java Adapter package.
  2. Configure the OGG JMS VAM Extract.
  3. Generate definitions for the JMS message data.
  4. Create the OGG Java Extract Data Pump (Java Data Pump).

Install the Adapter Package

Because the OGG Java Adapter requires links to the WebLogic Java classes, it must be installed on a server running WebLogic. The adapter package consists of two parts: the OGG Java Adapter and Oracle GoldenGate Generic.

The OGG Java Adapter comprises the user exit, Java libraries, and sample configuration files.

Oracle GoldenGate Generic is a version of the OGG application with no built-in database support.

To install the package:

  1. Create a directory on the WebLogic server to hold the OGG files.
  2. Unzip OGG Generic into this location.
  3. Unzip OGG Java Adapter into this location.
  4. From a Linux shell, start the GGSCI utility and execute the command CREATE SUBDIRS.

Configure the VAM Extract

The VAM Extract consumes data from the JMS Queue and formats it to OGG Trail specifications.

EXTRACT v_oggq
VAM PARAMS(dirprm/
TRANLOGOPTIONS VAMCOMPATIBILITY 1
TRANLOGOPTIONS GETMETADATAFROMVAM
REPORTCOUNT EVERY 5 MINUTES, RATE
EXTTRAIL ./dirdat/vj

In the Extract parameter file, the VAM PARAMS setting specifies the name of the VAM library and the location of the properties file. TRANLOGOPTIONS VAMCOMPATIBILITY 1 specifies that the original implementation of the VAM is to be used, and TRANLOGOPTIONS GETMETADATAFROMVAM specifies that metadata will be sent by the VAM.

Configure the VAM Properties File

Message capture is configured by the settings in the VAM properties file. The file name and location is set via the PARAMS option of the Extract VAM parameter. The properties file will contain configuration settings that specify logging characteristics, parser mappings, and JMS connection properties.

 Best Practice: The VAM properties file should reside in the dirprm directory of the OGG installation location.

Configure Logging

Define the VAM log location, logging level for the individual VAM modules, and report generation interval.


goldengate.log.logname identifies the location and name of the log file.

goldengate.log.level sets the logging level (here, warning and error messages only); valid options are INFO, WARN, ERROR, or DEBUG.

goldengate.log.tostdout specifies if log data should be written to stdout (a terminal window).

goldengate.log.tofile specifies if log data should be written to the file location identified by goldengate.log.logname.

goldengate.log.modules identifies the program modules for which log data should be generated. A companion property sets the time interval for VAM report generation.
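Assembled from the property names above, a logging section might look like the following. The values shown are illustrative choices, not the article's original listing:

```properties
# VAM logging (illustrative values)
goldengate.log.logname=dirrpt/jmsvam
goldengate.log.level=WARN
goldengate.log.tostdout=false
goldengate.log.tofile=true
# goldengate.log.modules would list the specific VAM modules to trace
```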

Set the Data Source and Message Identifier Overrides

Define the message data source, whether local transactions are to be used, and the starting point for the message sequence id.


Setting gg.jms.localtx=false disables local JMS transactions and instead uses the client-acknowledge features of the VAM. A second setting specifies that the system timestamp is used as each message's sequence id.

Connect to JMS and Retrieve Messages

### JNDI properties


### Java and WebLogic classpath env settings
jvm.bootoptions=-Xmx512m -Xms256m -Djava.class.path=.:dirprm:ggjava/ggjava.jar:/u01/app/oracle/Middleware/patch_wls1

As shown in the sample properties file above, the JMS interface must be configured with specific characteristics:

  1. Java Naming and Directory Interface (JNDI) connection properties: JMS server URL, Initial Context properties, Connection Factory Name, and Destination Name.
  2. JMS Queue or Topic name.
  3. Security information: JNDI authentication credentials and JMS user name and password.
  4. The java.class.path for the JMS client.
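For reference, the JNDI portion of such a properties file carries the standard java.naming environment keys. The sketch below uses hypothetical host, user, and password values; the adapter-specific property names for the Queue, Connection Factory, and JMS credentials should be confirmed against the Adapters Administrator's Guide:

```properties
# Standard JNDI environment keys (values are hypothetical)
java.naming.provider.url=t3://wlshost:7001
java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.security.principal=weblogic
java.naming.security.credentials=welcome1
```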

Message Parsing

The parser settings in the properties file define how the JMS text message data and header properties will be translated into database transactions and operations.

## Parser settings

### transactions

### operations

### subrules.columns

To translate the JMS text message, the parser must find required information from the JMS header, system-generated values, or static values. This required data includes the transaction identifier, sequence identifier, timestamp, table name, operation type, and column data specific to a particular table name and operation type.

A sample transaction from our WebLogic JMS Queue is shown below:

<t><o t='EAST.CATEGORIES' s='I' d='2014-02-26 15:34:28.322846' p='00000000000000001159'>
<c i='0'><a><![CDATA[1]]></a></c><c i='1'><a><![CDATA[category_hardware.gif]]></a></c>
<c i='2'><a><![CDATA[0]]></a></c><c i='3'><a><![CDATA[1]]></a></c>
<c i='4'><a><![CDATA[2011-04-04:08:31:00.000000000]]></a></c><c i='5'><an/></c></o>
<o t='EAST.CATEGORIES' s='I' d='2014-02-26 15:34:28.322846' p='00000000000000001419'>
<c i='0'><a><![CDATA[2]]></a></c><c i='1'><a><![CDATA[category_software.gif]]></a></c>
<c i='2'><a><![CDATA[0]]></a></c><c i='3'><a><![CDATA[2]]></a></c>
<c i='4'><a><![CDATA[2011-04-04:08:31:00.000000000]]></a></c><c i='5'><an/></c></o>
<o t='EAST.CATEGORIES' s='I' d='2014-02-26 15:34:28.322846' p='00000000000000001654'>
<c i='0'><a><![CDATA[3]]></a></c><c i='1'><a><![CDATA[category_dvd_movies.gif]]></a></c>
<c i='2'><a><![CDATA[0]]></a></c><c i='3'><a><![CDATA[3]]></a></c>
<c i='4'><a><![CDATA[2011-04-04:08:31:00.000000000]]></a></c><c i='5'><an/></c></o>
<o t='EAST.CATEGORIES' s='I' d='2014-02-26 15:34:28.322846' p='00000000000000002610'>
<c i='0'><a><![CDATA[7]]></a></c><c i='1'><a><![CDATA[subcategory_speakers.gif]]></a></c>
<c i='2'><a><![CDATA[1]]></a></c><c i='3'><a><![CDATA[0]]></a></c>
<c i='4'><a><![CDATA[2011-04-04:08:31:00.000000000]]></a></c><c i='5'><an/></c></o></t>

The message is in MINXML format, so the VAM properties file must be configured accordingly. In the sample message, we can see a single transaction, identified by the <t> and </t> tags, consisting of four insert operations performed on the EAST.CATEGORIES source table. Our properties file transaction rule will be set to recognize the root element for a transaction, tx_rule.match=/t.

Operation specific data is contained within the <o> and </o> tags. Within the operation tag, the source schema and table names are identified by the tag t= and the operation type by s=. The operation type I denotes an Insert. Our properties file is configured accordingly:

To identify the root element for an operation: op_rule.match=./o

To identify the schema and table name within the operation element: op_rule.schemaandtable=@t

To identify the operation type: op_rule.optype=@s

To identify an insert operation: op_rule.optype.insertval=I

To identify an update operation: op_rule.optype.updateval=U

To identify a delete operation: op_rule.optype.deleteval=D

Use the current system time as the transaction timestamp: op_rule.timestamp=*ts

Generate a unique sequence id for each transaction record: op_rule.seqid=*seqid

Because each record contains a transaction timestamp and sequence id, we could have used the message data by specifying op_rule.timestamp=@d and op_rule.seqid=@p.

Column data is contained within the <c> and </c> tags. Column index numbers are identified by the i= attribute. Column values are contained within the <a> and </a> tags, while NULL columns are identified by the empty element <an/>. To properly parse our sample message, the properties column settings will be:

To identify the root element of a column: col_rule.match=./c
To identify the column index: col_rule.index=@i. The column index will be verified against the source definitions file.
To specify how to obtain before values (used for updates and deletes): col_rule.before.value=./b/text()
To identify if a column before value is null: col_rule.before.isnull=./bn/exists()
To specify how to obtain after values (used for inserts and updates): col_rule.after.value=./a/text()
To identify if a column after value is null: col_rule.after.isnull=./an/exists()
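As a sanity check on these rules, the same selections can be reproduced with any XML parser. The Python walk below is illustration only (the adapter does its own parsing); it uses a one-operation excerpt of the sample message, with the closing </t> included:

```python
import xml.etree.ElementTree as ET

# One-operation excerpt of the sample transaction from the article.
message = ("<t>"
           "<o t='EAST.CATEGORIES' s='I' d='2014-02-26 15:34:28.322846' "
           "p='00000000000000001159'>"
           "<c i='0'><a><![CDATA[1]]></a></c>"
           "<c i='5'><an/></c>"
           "</o>"
           "</t>")

tx = ET.fromstring(message)                  # tx_rule.match=/t
rows = []
for op in tx.findall('o'):                   # op_rule.match=./o
    table = op.get('t')                      # op_rule.schemaandtable=@t
    optype = op.get('s')                     # op_rule.optype=@s ('I' = insert)
    for col in op.findall('c'):              # col_rule.match=./c
        idx = col.get('i')                   # col_rule.index=@i
        isnull = col.find('an') is not None  # col_rule.after.isnull=./an/exists()
        value = None if isnull else col.find('a').text  # col_rule.after.value=./a/text()
        rows.append((table, optype, idx, value))

print(rows)
# rows holds ('EAST.CATEGORIES', 'I', ...) tuples: column 0 has value '1',
# column 5 is NULL, matching the <an/> element in the message.
```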

XML Sourcedefs

The properties file setting, xml.sourcedefs=dirdef/east.def, specifies the location of the source definitions file that will be used for column index verification. This file contains information about the source table, column names, and data types. The definition for the EAST.CATEGORIES table shown in our sample JMS transaction is below:

*+- Defgen version 2.0, Encoding UTF-8
* Definitions created/modified  2013-05-06 10:23
*  Field descriptions for each column entry:
*     1    Name
*     2    Data Type
*     3    External Length
*     4    Fetch Offset
*     5    Scale
*     6    Level
*     7    Null
*     8    Bump if Odd
*     9    Internal Length
*    10    Binary Length
*    11    Table Length
*    12    Most Significant DT
*    13    Least Significant DT
*    14    High Precision
*    15    Low Precision
*    16    Elementary Item
*    17    Occurs
*    18    Key Column
*    19    Sub Data Type
Database type: ORACLE
Character set ID: windows-1252
National character set ID: UTF-16
Locale: neutral
Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
Definition for table EAST.CATEGORIES
Record length: 170
Syskey: 0
Columns: 6
CATEGORIES_ID     134     12        0  0  0 1 0      8      8      8 0 0 0 0 1    0 1 3
CATEGORIES_IMAGE   64     64       12  0  0 1 0     64     64      0 0 0 0 0 1    0 0 0
PARENT_ID         134     12       82  0  0 1 0      8      8      8 0 0 0 0 1    0 0 3
SORT_ORDER        134      8       94  0  0 1 0      8      8      8 0 0 0 0 1    0 0 3
DATE_ADDED        192     29      106  0  0 1 0     29     29     29 0 6 0 0 1    0 0 0
LAST_MODIFIED     192     29      138  0  0 1 0     29     29     29 0 6 0 0 1    0 0 0
End of definition

This data is generated via the OGG DEFGEN utility. If the message data originated from a relational database supported by Oracle GoldenGate, run DEFGEN against the source database tables to create the definitions. If the message data did not originate in a relational database, DEFGEN may be run against the target database tables.

If using the target tables to generate the definitions, the table column data types must match the data in the message. For example, by reviewing the sample message data we can make conclusions about how the target table EAST.CATEGORIES must be created:

  1. The table has six columns.
  2. Columns 0, 2, and 3 appear to be numeric.
  3. Column 1 appears to be character data.
  4. Column 4 is a timestamp.
  5. Column 5 cannot be determined, as it is NULL in every sample operation.
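Based on those observations and the column list in the sourcedefs above, a compatible target table might be created as follows. This is a hypothetical sketch, with types inferred from the sample data rather than taken from the original article:

```sql
-- Hypothetical target DDL; column types inferred from the sample message
-- and the sourcedefs column list (CATEGORIES_ID is flagged as the key).
CREATE TABLE EAST.CATEGORIES (
  CATEGORIES_ID     NUMBER        NOT NULL,
  CATEGORIES_IMAGE  VARCHAR2(64),
  PARENT_ID         NUMBER,
  SORT_ORDER        NUMBER,
  DATE_ADDED        TIMESTAMP(6),
  LAST_MODIFIED     TIMESTAMP(6),
  PRIMARY KEY (CATEGORIES_ID)
);
```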

Generate JMS Message Definitions

OGG for Java includes a utility, GENDEF, that is used to generate a data definitions file from the properties file settings and other parser specific data definition values. The GENDEF output file is then specified in the Extract Data Pump or Replicat SOURCEDEFS parameter so the process can interpret the data contained in the OGG Trail created by the VAM.

The syntax for GENDEF is gendef [-prop {prop-file}] [-out {out-file}]. To generate a definitions file based upon the values in our sample properties file:

[oracle@oel5wl OGGJMS_112]$ ./gendef -prop ./dirprm/ -out ./dirdef/voggq.def
Using property file: ./dirprm/
Outputting definition to: ./dirdef/voggq.def
Source_file = dirdef/east.def
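The GENDEF output file is then referenced by the process reading the VAM trail. A minimal data pump parameter sketch (hypothetical process, host, and trail names) might look like:

```
EXTRACT p_voggq
SOURCEDEFS ./dirdef/voggq.def
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE EAST.*;
```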


Summary

Oracle GoldenGate Adapters for Java provides for data capture from JMS Queues and Topics. This data is parsed and formatted for storage in an OGG Trail, which can then be read by an Extract Data Pump for transmission to a downstream server running Oracle GoldenGate for apply to a relational database. This article presented information detailing the installation and configuration of the modules required for JMS capture.

Reference: Oracle GoldenGate Adapters Administrator’s Guide for Java 11g Release 2.


Comments

  1. Bibin John says:

    I have below property file, could you please let me know what needs to be added to capture tokens from message?


    ### transactions

    ### operations
    op_rule.timestamp.format=YYYY-MM-DD HH:MI:SS.FFF

    ### subrules.columns

    • John,

      I do not understand what you mean by “capture tokens from message”. In Oracle GoldenGate a “token” is user defined data that is stored in the header of the GoldenGate Trail that is used for downstream delivery customization.

      In your prior message you sent the following as an example of the data in your WebLogic Queue:

      orclREALTIME1405378970451113187956319197310419456180048912015-02-12:17:39:46.000000000qay_slid_214252PLENTI2015-02-12:17:39:46.000000000 +00:002015-02-12:17:39:46.000000000 +00:00MYWORLD

      I am not a WebLogic expert and do not recognize the format of this data as being XML; so setting the VAM Extract for XML parsing will not work.

      The supported parser settings are fixed width, comma delimited, and XML. The sample data you provided could be fixed width, but without knowing the details of your application data I have no way of determining the proper settings.

      If this is fixed width data, you will need to refer to the Oracle Fusion Middleware Administering GoldenGate Application Adapters documentation, section 11.2.2, Fixed Parser properties and review the files and copybook.cpy in the AdapterExamples directory of your Oracle GoldenGate application for sample configurations.

      Loren Penton

      • Bibin John says:

        My message format is xml. and it looks like xml format lost after i submit the previous comment. in below example i converted “<" to "[" to avoid any issue with formatting.

        [transaction Txts="2016-07-18T22:02:28.389Z" Txid="00000000000000000014053789704511.00000000" startPos="SeqNo=3, RBA=345807" endPos="SeqNo=3, RBA=345807" readts="2015-02-12T22:18:47.813Z" xmlns=""]
        [operation table="TEST.TABLE_TEST" type="INSERT" readts="2015-02-12T22:18:47.814Z" txts="2016-07-18T22:02:28.000Z" pos="SeqNo=3, RBA=345807" numCols="6" transaction="00000000000000000014053789704511.00000000"]
        [token name="SOURCE_SYSTEM"]test[/token]
        [token name="MESSAGE_TYPE"]REALTIME[/token]
        [token name="xid_scn"]14053789704511[/token]
        [col name="SORCODE" type="DOUBLE" index="0" keyIndex="0"]
        [col name="ACCTID" type="VARCHAR" index="1" keyIndex="1"]
        [col name="LOYALTY_CARD_NUMBER" type="VARCHAR" index="2" keyIndex="2"]
        [col name="LOYALTY_PROGRAM_ID" type="DOUBLE" index="3" keyIndex="3"]
        [col name="START_DATE_GMT" type="TIMESTAMP" index="4" keyIndex="4"]
        [col name="LOYALTY_PROGRAM" type="VARCHAR" index="5"]

        above message has few tags for token.

        [token name="SOURCE_SYSTEM"]test[/token]
        [token name="MESSAGE_TYPE"]REALTIME[/token]
        [token name="xid_scn"]14053789704511[/token]

        i need to capture these attributes from message and add into trails as user tokens.

        • John,

          My apologies for the confusion, at this time the JMS VAM does not support what you are requesting. You will need to open an enhancement request case with Oracle Support.

          Loren Penton

  2. JOHN BIBIN says:

    could you tell me if we can create user tokens in trails using VAM extract? so an example consider i have below message which has few attributes which i need to capture as tokens. is that possible? from below message i need to capture i need to get “orclREALTIME14053789704511” as tokens in my trail. please help.

    orclREALTIME1405378970451113187956319197310419456180048912015-02-12:17:39:46.000000000qay_slid_214252PLENTI2015-02-12:17:39:46.000000000 +00:002015-02-12:17:39:46.000000000 +00:00MYWORLD

    • Hi John,

      I typically run a test to verify any results before answering end user questions. Regretfully at this time we do not have a Weblogic / GoldenGate environment to use to verify a solution for your question.

      In theory the answer to your question is yes, you can create a trail token with that data. When you ran GENDEF, you created a parser specific data definition file that can be specified in your extract data pump. This data definition file will contain the column name for this data. You should be able to use that in your extract data pump to set the token by including the gendef output file as a sourcedefs. You would then define a token via:

      TABLE schema.table, tokens (token_name = column_name);

      We will put this on our list to validate once we get a test environment built and I’ll post a detailed update.

      Loren Penton
