Target database instantiation is the single most important aspect of a replication implementation. In this article we present the steps to instantiate a target Oracle Database 19c from an Oracle Database 19c source as part of an Oracle GoldenGate implementation.
For the purpose of this discussion we have already installed Oracle GoldenGate Microservices 19c to perform remote capture and apply against two different Oracle Database 19c installations. The Oracle GoldenGate installation has been set up per the best practices detailed by Oracle Maximum Availability Architecture. (For Oracle GoldenGate 19c best practices refer to: https://www.oracle.com/a/tech/docs/maa-goldengate-hub.pdf).
To ensure our target database is instantiated properly we perform the following steps:
The latest Oracle Database Bundle Patch must be applied to the source and target databases to ensure the instantiation and replication implementation does not incur problems from known issues. Review MOS Note 2193391.1 (Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations) at https://support.oracle.com/epmos/faces/DocumentDisplay?id=2193391.1 to ensure your systems are current.
The Oracle GoldenGate documentation details the level of support for all data types within the Oracle Database. Manually validating every data type defined within every table being replicated can be cumbersome, so Oracle Database 19c provides a view, DBA_GOLDENGATE_SUPPORT_MODE, that may be queried to quickly determine whether a table is supported for replication.
This view contains three columns:
The following query will return information about all of the tables in our schema PDBWEST.TPC.
set pagesize 50
select * from DBA_GOLDENGATE_SUPPORT_MODE where owner = 'TPC';
For our source the output returned shows “FULL” support for all of the tables:
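The output looks similar to the following sketch (the table names shown here are illustrative; your object names will differ):

```
OWNER   OBJECT_NAME       SUPPORT_MODE
------- ----------------- ------------
TPC     CUSTOMERS         FULL
TPC     ORDERS            FULL
TPC     ORDER_ITEMS       FULL
...
```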
The support_mode column has three possible states:
FULL – Oracle GoldenGate captures source changes from Oracle Redo.
ID_KEY – The changes cannot be captured from Oracle Redo, but the data can be fetched from the database.
PLSQL – The changes can be replicated via Procedural Replication, which requires additional configuration options in Integrated Extract.
Any table not listed in this output is not supported by Oracle GoldenGate.
Oracle GoldenGate supplemental logging must be enabled at either the schema or table level to ensure primary key and unique index data is recorded in Oracle Redo for update and delete operations. Best practice is to enable schema-level supplemental logging. This may be done via the Oracle GoldenGate Microservices WebUI, the RESTful API interface, or the Admin Client utility. For this article we’ll use the WebUI.
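If you prefer the Admin Client, the same operation can be sketched as follows (the deployment, credential alias ggwest, and schema pdbwest.tpc are assumptions from our example environment):

```
-- Admin Client session (names are illustrative)
CONNECT https://ogghub:443 DEPLOYMENT west AS oggadmin
DBLOGIN USERIDALIAS ggwest
ADD SCHEMATRANDATA pdbwest.tpc ALLCOLS
INFO SCHEMATRANDATA pdbwest.tpc
```

ALLCOLS logs all columns, matching the “enable all columns” recommendation discussed below for future active-active or conflict detection and resolution configurations.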
Refer to the Oracle GoldenGate reference materials, https://docs.oracle.com/en/middleware/goldengate/core/19.1/using/working-data-replications.html#GUID-D8B727DE-0197-42B3-9DA7-EB15D2BC758F, if you are unfamiliar with enabling replication using Oracle GoldenGate Microservices.
Connect to the Administration Server WebUI and select the Configuration menu, then connect to the source database.
Scroll down to the Transaction Information section, select the “+” icon to display the fields for adding schematrandata.
Provide the following information: (1) pluggable database name and schema name, and (2) enable scheduling columns (the default). Enable all columns if you may be doing active-active replication or conflict detection resolution at some point in the future. (A-Team recommends setting all columns as a good practice.)
Select the submit button to complete the process. To validate schematrandata has been enabled, enter the schema name in the search box and select the magnifying glass.
The response shows we have our fourteen tables in our TPC schema enabled and prepared for instantiation.
Go back to the Overview menu and select the “+” button in the Extracts tab to create an Integrated Extract. Runtime parameters we are using in the Integrated Extract are:
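As a sketch, an Integrated Extract parameter file for this configuration might look like the following (the Extract name EXTE, trail name ea, and credential alias ggwest are assumptions, not values mandated by the product):

```
EXTRACT exte
USERIDALIAS ggwest DOMAIN OracleGoldenGate
EXTTRAIL ea
SOURCECATALOG pdbwest
TABLE tpc.*;
```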
When finished entering the runtime parameters, select the “Create” button to complete the process. The WebUI will update and show the Integrated Extract.
Start the Integrated Extract by selecting “Start” from the “Action” drop-down.
Once the Integrated Extract shows it is in the “Running” state, you can validate data is being captured in your live database, by selecting “Details” in the “Action” drop-down, and then opening the “Statistics” tab.
Before starting the source data export, we must check for long-running transactions in the source database. These are open transactions that started before the Integrated Extract was registered with the database. Execute the following query in the source database.
select start_scn, start_time
from gv$transaction
where start_scn < (select max(start_scn) from dba_capture);
In the below output, there is a transaction running that was started prior to Integrated Extract being registered with the database. This transaction must either commit or be rolled back before the database export is started.
Do not perform the database export while the above query returns rows. Doing so will result in missed data, a target database out-of-sync condition, and you will be required to perform the target instantiation again.
If the query returns “no rows selected”, use Oracle Data Pump Export (expdp) with the flashback_scn option to create a consistent source database export.
To prepare for the export we need to create a disk location on which to place the files.
Log in to the source database to create the export directory and grant the privileges required for our ggadmin PDB user to perform the export.
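For example (the directory name dp_dir and file system path are assumptions for our environment):

```
sqlplus / as sysdba
SQL> alter session set container = PDBWEST;
SQL> create directory dp_dir as '/u01/app/oracle/dpdump';
SQL> grant read, write on directory dp_dir to ggadmin;
SQL> grant datapump_exp_full_database to ggadmin;
```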
Get the current SCN from the database.
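The current SCN may be retrieved with:

```
SQL> select current_scn from v$database;
```

Record this value; it is used as the FLASHBACK_SCN for the export and again later when starting the Parallel Replicat.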
Run the export.
We included the FLASHBACK_SCN value in the export file name. This serves as a reference and a reminder that this is the SCN we’ll need to specify when starting the Parallel Replicat.
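The export command takes a shape like the following (the connect string, directory name, and file names are illustrative; replace <SCN> with the value captured above):

```
expdp ggadmin@pdbwest schemas=tpc directory=dp_dir \
      flashback_scn=<SCN> \
      dumpfile=tpc_<SCN>.dmp logfile=tpc_export.log
```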
When the export completes, check the log file for exported row counts and errors.
Copy the dump file to the target database server for import.
Refer to the Oracle documentation, https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-export-utility.html#GUID-5F7380CE-A619-4042-8D13-1F7DDE429991, for more information on Oracle Data Pump Export.
Create the import directory in the database.
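For example (the target PDB name PDBEAST, directory name, and path are assumptions for our environment):

```
sqlplus / as sysdba
SQL> alter session set container = PDBEAST;
SQL> create directory dp_dir as '/u01/app/oracle/dpdump';
SQL> grant read, write on directory dp_dir to ggadmin;
SQL> grant datapump_imp_full_database to ggadmin;
```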
Use the Oracle Data Pump Import utility to import the data from the dump file and create the target schema.
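A sketch of the import command, matching the export above (connect string and file names are illustrative):

```
impdp ggadmin@pdbeast schemas=tpc directory=dp_dir \
      dumpfile=tpc_<SCN>.dmp logfile=tpc_import.log
```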
When the import completes, check the log file for imported row counts and errors. The number of imported rows must match the number of exported rows.
Refer to the Oracle documentation, https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/datapump-import-utility.html#GUID-D11E340E-14C6-43B8-AB09-6335F0C1F71B for more information on Oracle Data Pump Import.
We’ve already created the database credentials in the target PDB and an associated alias in Oracle GoldenGate Microservices that will be used by the Parallel Replicat. Prior to creating the Parallel Replicat, we need to create a checkpoint table in the target database.
In the WebUI Configuration menu connect to the target database. Select the “+” button next to the Checkpoint area and enter the schema and table name.
Select the “Submit” button to create the checkpoint table.
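Equivalently, the checkpoint table can be created from the Admin Client (the credential alias ggeast and table name are assumptions):

```
DBLOGIN USERIDALIAS ggeast
ADD CHECKPOINTTABLE ggadmin.ogg_checkpoint
```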
Go back to the Overview menu and select the “+” button in the Replicats tab to create a Parallel Nonintegrated Replicat. Runtime parameters we are using in the Parallel Nonintegrated Replicat are:
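As a sketch, a Parallel Replicat parameter file for this configuration might look like the following (the Replicat name REPE and credential alias ggeast are assumptions):

```
REPLICAT repe
USERIDALIAS ggeast DOMAIN OracleGoldenGate
MAP pdbwest.tpc.*, TARGET pdbeast.tpc.*;
```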
When finished entering the runtime parameters, select the “Create” button to complete the process. The WebUI will update and show the Parallel Replicat.
Start the Parallel Replicat by selecting “Start with Options” from the “Action” drop-down.
On the Start Replicat Options screen, select “After CSN” as the start point, and enter the flashback SCN from the export as the “CSN”.
Select the “Start” button to start the Parallel Replicat.
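The same start can be sketched from the Admin Client (the Replicat name repe is an assumption; replace <SCN> with the flashback SCN recorded during the export):

```
START REPLICAT repe, AFTERCSN <SCN>
```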
Once the Parallel Replicat has processed all queued source data, the best way to validate the source and target databases is to use Oracle GoldenGate Veridata. Below is the output of our Veridata compare run showing all 14 tables are in sync. (How to setup Veridata to compare source and target schemas is beyond the scope of this article.)
In this article we went through the steps required to instantiate an Oracle Database 19c target from an Oracle Database 19c source as part of an Oracle GoldenGate implementation.