Best Practices from Oracle Development's A‑Team

Oracle GoldenGate: Oracle to Oracle Instantiation

 

Disclaimer: The information presented in this article is for educational purposes only, applies to Oracle GoldenGate versions 19c and above, and does not constitute any contractual agreement, either implied or explicit. Any scripts or database query examples are presented as-is and are unsupported by Oracle Support and Development. Always refer to the official Oracle documentation when planning changes to your Oracle GoldenGate installations.

Target database instantiation is the single most important aspect of a replication implementation. In this article we present the steps to instantiate a target Oracle Database 19c from an Oracle Database 19c source as part of an Oracle GoldenGate implementation.

Oracle to Oracle Instantiation

For the purpose of this discussion, we have already installed Oracle GoldenGate Microservices 19c to perform remote capture and apply against two different Oracle Database 19c installations. The Oracle GoldenGate installation has been set up following the best practices detailed by Oracle Maximum Availability Architecture. (For Oracle GoldenGate 19c best practices, refer to: https://www.oracle.com/a/tech/docs/maa-goldengate-hub.pdf.)

To ensure our target database is instantiated properly we perform the following steps:

  1. Patches

The latest Oracle Database Bundle Patch must be applied to the source and target databases to ensure the instantiation and replication implementation does not incur problems from known issues. Review My Oracle Support Document 2193391.1 (Latest GoldenGate/Database (OGG/RDBMS) Patch Recommendations) at https://support.oracle.com/epmos/faces/DocumentDisplay?id=2193391.1 to ensure your systems are current.
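A quick way to see what is currently installed in each Oracle home is the OPatch utility; the output will vary by environment:

$ $ORACLE_HOME/OPatch/opatch lspatches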

  2. Data Type Support Validation

The Oracle GoldenGate documentation details the level of support for all data types within the Oracle Database. Manually validating every data type defined within every table being replicated can be cumbersome, so Oracle Database 19c provides a view, DBA_GOLDENGATE_SUPPORT_MODE, that may be queried to quickly determine whether a table is supported for replication.

This view contains three columns: OWNER, OBJECT_NAME, and SUPPORT_MODE.

The following query returns information about all of the tables in our schema PDBWEST.TPC.

set pagesize 50
alter session set container=pdbwest;
column object_name format a40
column support_mode format a8 heading 'Support|Mode'

select * from DBA_GOLDENGATE_SUPPORT_MODE where owner = 'TPC';

 

For our source, the output returned shows "FULL" support for all of the tables.

The SUPPORT_MODE column has three possible states:

FULL – Oracle GoldenGate captures source changes from Oracle Redo.

ID KEY – The changes cannot be captured from Oracle Redo, but the data can be fetched from the database.

PLSQL – The changes can be replicated via Procedural Replication, which requires additional configuration options in Integrated Extract.

Any table not listed in this output is not supported by Oracle GoldenGate.
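To flag problem tables quickly, queries along these lines list any tables with less-than-full support, as well as tables that do not appear in the view at all (a sketch; substitute your own schema owner):

select owner, object_name, support_mode
from dba_goldengate_support_mode
where owner = 'TPC'
and support_mode <> 'FULL';

select table_name from dba_tables where owner = 'TPC'
minus
select object_name from dba_goldengate_support_mode where owner = 'TPC';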

  3. Enable Supplemental Logging

Oracle GoldenGate supplemental logging must be enabled at either the schema or table level to ensure primary key and unique index data is recorded in Oracle Redo for update and delete operations. Best practice is to enable schema-level supplemental logging. This may be done via the Oracle GoldenGate Microservices WebUI, the RESTful API, or the Admin Client utility. For this article we'll use the WebUI.

NOTE

Refer to the Oracle GoldenGate reference materials, https://docs.oracle.com/en/middleware/goldengate/core/19.1/using/working-data-replications.html#GUID-D8B727DE-0197-42B3-9DA7-EB15D2BC758F, if you are unfamiliar with enabling replication using Oracle GoldenGate Microservices.

Connect to the Administration Server WebUI and select the Configuration menu, then connect to the source database.

Scroll down to the Transaction Information section and select the "+" icon to display the fields for adding schematrandata.

Provide the following information: (1) the pluggable database name and schema name, and (2) enable scheduling columns (the default). Enable all columns if you may implement active-active replication or conflict detection and resolution at some point in the future. (A-Team recommends enabling all columns as a good practice.)

Select the Submit button to complete the process. To validate schematrandata has been enabled, enter the schema name in the search box and select the magnifying glass.

The response shows the fourteen tables in our TPC schema are enabled and prepared for instantiation.
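For reference, the same operation can be scripted from the Admin Client after connecting to the deployment; a minimal sketch using the source credential alias from our environment (ALLCOLS reflects the all-columns recommendation above):

DBLOGIN USERIDALIAS ggext_oeldbora DOMAIN OracleGoldenGate
ADD SCHEMATRANDATA pdbwest.tpc ALLCOLS
INFO SCHEMATRANDATA pdbwest.tpc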

  4. Create the Integrated Extract

Go back to the Overview menu and select the “+” button in the Extracts tab to create an Integrated Extract. Runtime parameters we are using in the Integrated Extract are:

extract ieljp
useridalias ggext_oeldbora domain OracleGoldenGate
exttrail ab

-- Capture DDL operations for listed schema tables
ddl include mapped

-- Add step-by-step history of ddl operations captured
-- to the report file. Very useful when troubleshooting.
ddloptions report

-- Write capture stats per table to the report file daily.
report at 00:01

-- Rollover the report file weekly. Useful when IE runs
-- without being stopped/started for long periods of time to
-- keep the report files from becoming too large.
reportrollover at 00:01 on Sunday

-- Report total operations captured, and operations per second
-- every 10 minutes.
reportcount every 10 minutes, rate

-- Table list for capture
table pdbwest.tpc.*;

 

When finished entering the runtime parameters, select the “Create” button to complete the process. The WebUI will update and show the Integrated Extract.
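If you prefer scripting to the WebUI, the equivalent Admin Client commands are roughly as follows (a sketch; the group and trail names match the parameter file above):

DBLOGIN USERIDALIAS ggext_oeldbora DOMAIN OracleGoldenGate
ADD EXTRACT ieljp, INTEGRATED TRANLOG, BEGIN NOW
REGISTER EXTRACT ieljp DATABASE CONTAINER (pdbwest)
ADD EXTTRAIL ab, EXTRACT ieljp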

Start the Integrated Extract by selecting “Start” from the “Action” drop-down.

Once the Integrated Extract shows it is in the "Running" state, you can validate data is being captured from your live database by selecting "Details" in the "Action" drop-down and then opening the "Statistics" tab.

  5. Export the Source Data

Before starting the source data export, we must check for long-running transactions in the source database. These are open transactions that started before the Integrated Extract was registered with the database. Execute the following query on the source:

select start_scn, start_time
from gv$transaction
where start_scn < (select max(start_scn) from dba_capture);

 

In our environment, the query returned a transaction that started prior to the Integrated Extract being registered with the database. Such a transaction must either commit or be rolled back before the database export is started.
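To identify the session that owns such a transaction, a join between gv$transaction and gv$session can help (a sketch; the column list is illustrative):

select s.inst_id, s.sid, s.serial#, s.username, t.start_scn, t.start_time
from gv$transaction t
join gv$session s
  on s.taddr = t.addr
 and s.inst_id = t.inst_id
where t.start_scn < (select max(start_scn) from dba_capture);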

 

WARNING

Do not perform the database export while the above query returns rows. Doing so will result in missed data and an out-of-sync target database, and you will be required to perform the target instantiation again.

 

If the query returns “no rows selected”, use Oracle Data Pump Export (expdp) with the flashback_scn option to create a consistent source database export.

To prepare for the export, we need to create a location on disk to hold the export files.

$ mkdir -p /u01/oracle_tpc_export

Log in to the source database to create the export directory and grant the privileges required for our ggadmin PDB user to perform the export.

$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Aug 5 07:44:35 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter session set container=pdbwest;
Session altered.

SQL> create or replace directory EXP_SCHEMA as '/u01/oracle_tpc_export';
Directory created.

SQL> grant read,write on directory EXP_SCHEMA to ggadmin;
Grant succeeded.

SQL> grant datapump_exp_full_database to ggadmin;
Grant succeeded.

 

Get the current SCN from the database.
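One common way is to query V$DATABASE from within the source PDB; the value returned in our run, 65743429, is the SCN used in the export command below:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
   65743429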

Run the export.

$ expdp ggadmin/********@PDBWEST037 directory=EXP_SCHEMA dumpfile=exp_tpc_scn_65743429.dmp logfile=tpc_export.log flashback_scn=65743429 schemas=TPC

We included the flashback SCN value in the export file name as a reference and reminder that this is the SCN we'll need to specify when starting the Parallel Replicat.

When the export completes, check the log file for exported row counts and errors.

Copy the dump file to the target database server for import.
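For example, with scp (the target host name and destination path are placeholders for your environment):

$ scp /u01/oracle_tpc_export/exp_tpc_scn_65743429.dmp target-host:/u01/oracle_tpc_import/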

  6. Import the Data to Create the Target

Create the import directory in the database.
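A sketch mirroring the export-side setup; the directory path is an assumption for our environment, and pdb2 is our target pluggable database:

$ mkdir -p /u01/oracle_tpc_import
$ sqlplus / as sysdba

SQL> alter session set container=pdb2;
SQL> create or replace directory IMP_SCHEMA as '/u01/oracle_tpc_import';
SQL> grant read,write on directory IMP_SCHEMA to system;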

Use the Oracle Data Pump Import utility to import the data from the dump file and create the target schema.

$ impdp system/********@PDB2 directory=IMP_SCHEMA dumpfile=exp_tpc_scn_65743429.dmp logfile=tpc_import.log table_exists_action=REPLACE schemas=TPC

When the import completes, check the log file for imported row counts and errors. The number of imported rows must match the number of exported rows.

  7. Create the Parallel Replicat

We’ve already created the database credentials in the target PDB and an associated alias in Oracle GoldenGate Microservices that will be used by the Parallel Replicat. Prior to creating the Parallel Replicat, we need to create a checkpoint table in the target database.

In the WebUI Configuration menu connect to the target database. Select the “+” button next to the Checkpoint area and enter the schema and table name.

Select the “Submit” button to create the checkpoint table.
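The Admin Client equivalent, using a hypothetical checkpoint table name ggadmin.ggs_checkpoint:

DBLOGIN USERIDALIAS pdb2_oeldbora2 DOMAIN OracleGoldenGate
ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint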

Go back to the Overview menu and select the "+" button in the Replicats tab to create a Parallel Nonintegrated Replicat. Runtime parameters we are using in the Parallel Nonintegrated Replicat are:

replicat prljp
useridalias pdb2_oeldbora2 domain OracleGoldenGate
--
-- Replicate DDL operations for listed schema tables
--
ddl include mapped
--
-- Add step-by-step history of ddl operations applied
-- to the report file. Very useful when troubleshooting.
--
ddloptions report
--
-- Write apply stats per table to the report file daily.
--
report at 00:01
--
-- Rollover the report file weekly. Useful when PR runs
-- without being stopped/started for long periods of time to
-- keep the report files from becoming too large.
--
reportrollover at 00:01 on Sunday
--
-- Report total operations applied, and operations per second
-- every 10 minutes.
--
reportcount every 10 minutes, rate
--
-- Table map list for apply
--
MAP pdbwest.tpc.*, TARGET pdb2.tpc.*;

 

When finished entering the runtime parameters, select the “Create” button to complete the process. The WebUI will update and show the Parallel Replicat.
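The scripted equivalent from the Admin Client (a sketch; the checkpoint table name matches the hypothetical one created above):

DBLOGIN USERIDALIAS pdb2_oeldbora2 DOMAIN OracleGoldenGate
ADD REPLICAT prljp, PARALLEL, EXTTRAIL ab, CHECKPOINTTABLE ggadmin.ggs_checkpoint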

Start the Parallel Replicat by selecting “Start with Options” from the “Action” drop-down.

On the Start Replicat Options screen, select "After CSN" as the start point, and enter the flashback SCN from the export (65743429 in our example) as the "CSN".

Select the “Start” button to start the Parallel Replicat.
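From the Admin Client, the same start point can be specified in a single command:

START REPLICAT prljp, AFTERCSN 65743429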

  8. Validate Source and Target

Once the Parallel Replicat has processed all queued source data, the best way to validate the source and target databases is to use Oracle GoldenGate Veridata. The output of our Veridata compare run showed all fourteen tables in sync. (How to set up Veridata to compare source and target schemas is beyond the scope of this article.)
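If Veridata is not available, a quick (though far less rigorous) sanity check is to compare per-table row counts on both sides. NUM_ROWS reflects optimizer statistics, so gather statistics first or count each table directly:

select table_name, num_rows
from dba_tables
where owner = 'TPC'
order by table_name;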

 

Conclusion

In this article we went through the steps required to instantiate an Oracle Database 19c target from an Oracle Database 19c source as part of an Oracle GoldenGate implementation.
