Oracle GoldenGate: Testing the Extract’s maximum read performance in extreme environments

MOS Doc ID 2193584.1

Version 1.2 10/14/16

If you have a requirement to process over a terabyte of redo per hour, you may want to run a simple test first to determine whether your system is capable of reading that much data, before you spend a large amount of time tuning your Oracle GoldenGate (OGG) configuration.

This method is a simple way to test your system's ability to read a very large amount of data. The test covers only the read, not the rest of the processing. Once you determine that the Extract can keep up with the read rate, you can move down the chain to the next step in the process; that will be covered in another paper.
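Before testing, it can help to establish how much redo the source database already generates. As a rough sketch (assuming archived logs are retained and using the standard BLOCKS and BLOCK_SIZE columns of v$archived_log; the 24-hour window is an arbitrary choice), a query along these lines reports redo volume per hour:

```sql
-- Approximate redo generated per hour over the last day,
-- based on archived log sizes (BLOCKS * BLOCK_SIZE).
SELECT TRUNC(completion_time, 'HH24')                     AS hour,
       ROUND(SUM(blocks * block_size)/1024/1024/1024, 2)  AS redo_gb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TRUNC(completion_time, 'HH24')
ORDER  BY 1;
```

Comparing this baseline against the Extract read rate measured later gives a quick sense of how much headroom the system has.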

To test the Extract's maximum read rate, all you need to do is create the built-in heartbeat table and create an Extract with no MAP statements in it. In the latest version of OGG, when you create the heartbeat table in GGSCI, OGG creates the table and adds a job to update the heartbeat every minute. The Extract process picks up the heartbeat automatically; no MAP statement is required.

First step: Install OGG.  The install process is detailed in the OGG documentation, and this paper assumes the OGG software has been installed accordingly.

Second step: Check that you have the minimal setup configured.  You need to verify that the database privileges, supplemental logging, and Streams pool size are set correctly.  If any of these values are incorrect or below the required minimum, review the OGG install guide for the correct settings.  To check them, issue the following commands in SQL*Plus –

Check Supplemental Logging –

col supplemental_log_data_min format A15 heading 'Minimum|supplemental|log data'
col force_logging format A10 heading 'force|logging'
SQL> SELECT supplemental_log_data_min, force_logging FROM v$database;

Minimum
supplemental    force
log data        logging
--------------- ----------
YES             NO

If the result is NO for either or both properties, refer to the OGG install guide to set them.  Note: you can also enable force logging at the tablespace level.  See the OGG install documentation for more details.
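For reference, both properties can be enabled with the following standard commands (the tablespace-level variant is shown as an alternative; USERS is just an example tablespace name):

```sql
-- Enable minimal supplemental logging and database-wide force logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE FORCE LOGGING;

-- Alternatively, force logging can be enabled per tablespace, e.g.:
-- ALTER TABLESPACE users FORCE LOGGING;
```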

Check OGG user privileges –

col USERNAME format A10 heading 'User Name'
col PRIVILEGE_TYPE format A10 heading 'PRIVILEGE|TYPE'
col GRANT_SELECT_PRIVILEGES format A10 heading 'GRANT|SELECT|PRIVILEGES'
col CREATE_TIME format A30 heading 'CREATE TIME'
SQL>
SQL> select * from dba_goldengate_privileges;
                      GRANT
           PRIVILEGE  SELECT
User Name  TYPE       PRIVILEGES CREATE TIME
---------- ---------- ---------- ------------------------------
GGADMIN    *          YES        08-JUL-16 11.56.51.792606 AM
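If the OGG user is missing from this view, the privileges can be granted with the DBMS_GOLDENGATE_AUTH package; for example, for the ggadmin user used throughout this paper:

```sql
-- Grant capture and apply privileges to the OGG admin user.
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
```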

Check to make sure Streams Pool size and GoldenGate replication parameters are set –

set linesize 130
col name format a30
col value format a10
col description format a40

select name, value, description, ISSYS_MODIFIABLE
from v$parameter
where
        name like 'enable_goldengate_replication'
        or name like 'streams_pool_size';

NAME                           VALUE      DESCRIPTION                              ISSYS_MOD
------------------------------ ---------- ---------------------------------------- ---------
streams_pool_size              2147483648 size in bytes of the streams pool        IMMEDIATE
enable_goldengate_replication  TRUE       goldengate replication enabled           IMMEDIATE

If enable_goldengate_replication is not set to TRUE, you will not be able to start OGG.  If the Streams pool is not sized to at least the minimum recommended size, it can cause performance issues.  Please check the OGG documentation for the recommended streams_pool_size.
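If either parameter needs changing, note that both are dynamically modifiable (ISSYS_MODIFIABLE shows IMMEDIATE in the output above). The 2G Streams pool below is only an illustrative value; size it per the OGG documentation:

```sql
-- Both parameters can be changed without a restart.
ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE=BOTH;
ALTER SYSTEM SET streams_pool_size = 2G SCOPE=BOTH;
```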

 

Switch the log files.

SQL> ALTER SYSTEM SWITCH LOGFILE;

Third step: Add the heartbeat table.  This feature is part of OGG 12.2 functionality.

Verify that GGSCHEMA name is in the GLOBALS file –

$ (slc09ujv)[a11204s] /scratch/oracle/OGG12.2\> more GLOBALS
GGSCHEMA ggadmin

If GGSCHEMA is not in the GLOBALS file, please add it.

Enable the Heartbeat functionality by executing the GGSCI command ‘ADD HEARTBEATTABLE’.

GGSCI (slc09ujv) 1> dblogin userid ggadmin password ggs
Successfully logged into database.

GGSCI (slc09ujv as ggadmin@a11204s) 2> add heartbeattable
2016-10-04 12:05:02  INFO    OGG-14001  Successfully created heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-10-04 12:05:02  INFO    OGG-14032  Successfully added supplemental logging for heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-10-04 12:05:02  INFO    OGG-14000  Successfully created heartbeat table ["GG_HEARTBEAT"].
2016-10-04 12:05:02  INFO    OGG-14033  Successfully added supplemental logging for heartbeat table ["GG_HEARTBEAT"].
2016-10-04 12:05:02  INFO    OGG-14016  Successfully created heartbeat history table ["GG_HEARTBEAT_HISTORY"].
2016-10-04 12:05:02  INFO    OGG-14023  Successfully created heartbeat lag view ["GG_LAG"].
2016-10-04 12:05:02  INFO    OGG-14024  Successfully created heartbeat lag history view ["GG_LAG_HISTORY"].
2016-10-04 12:05:02  INFO    OGG-14003  Successfully populated heartbeat seed table with [A11204S].
2016-10-04 12:05:02  INFO    OGG-14004  Successfully created procedure ["GG_UPDATE_HB_TAB"] to update the heartbeat tables.
2016-10-04 12:05:02  INFO    OGG-14017  Successfully created procedure ["GG_PURGE_HB_TAB"] to purge the heartbeat history table.
2016-10-04 12:05:02  INFO    OGG-14005  Successfully created scheduler job ["GG_UPDATE_HEARTBEATS"] to update the heartbeat tables.
2016-10-04 12:05:02  INFO    OGG-14018  Successfully created scheduler job ["GG_PURGE_HEARTBEATS"] to purge the heartbeat history table.

Fourth step: Configure a Manager process.

If you don’t already have a Manager process set up, you will need to create one.  For the purposes of this test, a simple one-line parameter file is all that is needed –

GGSCI> edit params mgr
port 7809

Start the manager process –

GGSCI> start mgr

Fifth step: Create an Extract

Add the extract process –

./ggsci
ADD EXTRACT ext_test INTEGRATED TRANLOG, BEGIN NOW

Add the trail –

./ggsci
ADD EXTTRAIL ./dirdat/ET, EXTRACT ext_test

 

Register the Extract –

./ggsci
DBLOGIN USERID ggadmin PASSWORD ggs
register extract ext_test database

Create an extract parameter file –

GGSCI> edit params ext_test

extract ext_test
userid ggadmin, password ggs
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 200)
exttrail ./dirdat/ET

Start the extract process –

GGSCI> start extract ext_test

Sending START request to MANAGER ...
EXTRACT EXT_TEST starting

 

Monitoring performance

 

Once you have started the extract you can monitor the performance using the following query –

set verify off
set linesize 200
set pagesize 80
col extract_name format a8 heading 'Extract|Name'
col Run_time_HR format 99,999.99 heading 'Run Time'
col mined_GB format 999,999.99 heading 'Total GB|mined'
col sent_GB format 999,999.99 heading 'Total GB|sent'
col Sent_GB_Per_HR format 999,999.99 heading 'Total GB|Per HR'
col capture_lag Heading 'Capture|Lag|seconds'
col Current_time Heading 'Current|Time'
col GB_Per_HR format 999,999.99 heading 'GB Mined|Per HR'
alter session set nls_date_format='YYYY-MM-DD HH24:Mi:SS';

select
        EXTRACT_NAME,
        TO_CHAR(sysdate, 'HH24:MI:SS MM/DD/YY') Current_time,
        ((SYSDATE-STARTUP_TIME)*24) Run_time_HR ,
        (SYSDATE- capture_message_create_time)*86400 capture_lag,
        BYTES_OF_REDO_MINED/1024/1024/1024 mined_GB,
        (BYTES_OF_REDO_MINED/1024/1024/1024)/((SYSDATE-STARTUP_TIME)*24) GB_Per_HR,
        BYTES_SENT/1024/1024/1024 sent_GB,
        (BYTES_SENT/1024/1024/1024)/((SYSDATE-STARTUP_TIME)*24) Sent_GB_Per_HR
   from gv$goldengate_capture;
                                       Capture
Extract  Current                           Lag    Total GB    GB Mined    Total GB    Total GB
Name     Time                Run Time  seconds       mined      Per HR        sent      Per HR
-------- ----------------- ---------- -------- ----------- ----------- ----------- -----------
EXT_TEST 09:30:52 10/06/16        .56        2         .00         .01         .00         .00

The output is in GB per hour.  If the “SENT_GB” and “SENT_GB_PER_HR” columns are blank, the process is not running.
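Because the heartbeat table was created in the third step, the GG_LAG view that ADD HEARTBEATTABLE generated (see the GGSCI output above) can also be queried for a heartbeat-based view of lag, using the ggadmin schema from this paper:

```sql
-- Heartbeat-based lag, via the view created by ADD HEARTBEATTABLE.
SELECT * FROM ggadmin.gg_lag;
```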

Clean up of the heartbeat table and the heartbeat scheduler

To clean up your environment after you are done with the test, you may want to remove the heartbeat table and the DBMS Scheduler job that updates it.  Please note that it is a best practice to use the heartbeat table in an OGG environment.

To remove the Heartbeat table and the DBMS scheduler, issue the following commands in GGSCI –

GGSCI (slc09ujv) 1> dblogin userid ggadmin password ggs
Successfully logged into database.

GGSCI (slc09ujv as ggadmin@a11204s) 2> delete heartbeattable

2016-10-06 14:13:57  INFO    OGG-14007  Heartbeat seed table ["GG_HEARTBEAT_SEED"] dropped.
2016-10-06 14:13:57  INFO    OGG-14009  Heartbeat table ["GG_HEARTBEAT"] dropped.
2016-10-06 14:13:57  INFO    OGG-14011  Heartbeat history table ["GG_HEARTBEAT_HISTORY"] dropped.
2016-10-06 14:13:57  INFO    OGG-14026  Heartbeat lag view ["GG_LAG"] dropped.
2016-10-06 14:13:57  INFO    OGG-14028  Heartbeat lag history view ["GG_LAG_HISTORY"] dropped.
2016-10-06 14:13:57  INFO    OGG-14013  Procedure ["GG_UPDATE_HB_TAB"] dropped.
2016-10-06 14:13:57  INFO    OGG-14020  Procedure ["GG_PURGE_HB_TAB"] dropped.
2016-10-06 14:13:57  INFO    OGG-14015  Scheduler job ["GG_UPDATE_HEARTBEATS"] dropped.
2016-10-06 14:13:57  INFO    OGG-14022  Scheduler job ["GG_PURGE_HEARTBEATS"] dropped.

GGSCI (slc09ujv as ggadmin@a11204s) 3>

At this point the heartbeat table and DBMS scheduler job have been removed from the database.

 
