Oracle GoldenGate Best Practices: GoldenGate Downstream Extract with Oracle Data Guard

Introduction

Oracle’s customers seek the highest levels of data protection and availability, and in doing so they have deployed complex configurations using the latest HA and DR solutions. The goal of this blog post is to discuss an approach to integrating Oracle GoldenGate Downstream Integrated Extract with Oracle Data Guard.

Main Article

 

Downstream Integrated Extract: Oracle GoldenGate release 11.2.1 introduced a feature that enables downstream capture of data from a single source or from multiple sources. The feature is specific to Oracle databases. It helps customers meet the common IT requirement of limiting the new processes installed on a production source system. It requires configuring redo log transport from the source system to the downstream database, and it requires that the downstream database, where the Integrated Extract is installed, be open.
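The source-side redo transport can be sketched with initialization parameters such as the following (here DBMSCAP is the downstream mining database, matching the alias used in the capture parameter file later in this post; the destination number, service names, and DB_UNIQUE_NAME values are illustrative assumptions, and the downstream database additionally needs standby redo logs for real-time mining):

```sql
-- On the source (primary) database: ship redo to the downstream
-- mining database via an additional archive destination.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=dbmscap ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=dbmscap';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

-- All participating databases (primary, standby, downstream) must
-- appear in LOG_ARCHIVE_CONFIG on each database:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(db1,db2,dbmscap)';
```

This is a sketch, not a complete procedure; the full parameter set is in the GoldenGate installation guide referenced below.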

Downstream Integrated Extract can be configured to process in either real-time mode or archive-log-only mode. Real-time mining can only be configured for one source database at a time on a given downstream database server. If the implementation requires that more than a single source database be replicated, the archive-log-only mode of downstream Integrated Extract must be configured.
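The choice between the two modes is made in the Extract parameter file; a minimal sketch:

```
-- Real-time mining (one source database per downstream database):
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

-- Archive-log-only mining (required when mining multiple sources):
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
```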

More details on the installation and configuration of Downstream Integrated Capture can be found in My Oracle Support document ID 1485620.1.

 

[Figure: downstream capture architecture]

 

In this scenario, Downstream Integrated Extract runs on a third database that is capable of running the LogMining server and storing LogMiner’s metadata. This database receives redo via an LGWR ASYNC connection from the primary and mines that redo using Downstream Integrated Extract with Real-Time Mine (RTM).

 

Pros:

1. This completely offloads the mining and extraction of redo from the source system. The only impact is the cost of shipping redo via ASYNC transport, which is extremely lightweight.

2. Extract need not be relocated following a failover to the physical standby. The connection to the source database can be failed over to the new primary via TAF/FAN.

3. This deployment also has the benefit that the GoldenGate files (checkpoint, trail, etc.) are accessible before and after a role transition, because the downstream database hosts these files. Therefore, no additional steps or configuration are required to ensure that the files are accessible after a role transition.

4. LGWR ASYNC will guarantee that the downstream database remains safely behind the target standby.

5. The branch traversal code is shared with logical standby and relies on the system.logmnr_evolve$ information, not on what is maintained in the new primary’s control file. This has been tested indirectly through the Data Guard broker tests, where logical standby bystanders properly follow the new primary after a failover to a physical standby database.

6. LogMiner also has stop_scn logic that should facilitate a proper thread merge of both DL and NDL terminal logs. Again, this has been tested indirectly through some Data Guard broker tests where logical standby bystanders follow the new primary after a failover to a physical standby database.

 

Cons:

1. This configuration requires the deployment and maintenance of another database. However, the downstream database need only be capable of running LogMiner and storing its metadata.

2. This introduces a single point of failure in the downstream database.  This could be mitigated by using RAC, DG, or a combination to provide added HA/DR for the downstream site.

To configure Downstream Integrated Capture to take advantage of the benefits listed above, perform the following steps. Again, note that this is not a comprehensive set of instructions that applies to all configurations, but rather a guideline for what should be considered when deploying such a configuration.

Assumptions:

 

  • Integrated Extract (IE) requires RDBMS 11.2.0.3.0_RELEASE with a minimum of bundle patch 3. We recommend bundle patch 14, since it contains all of the latest fixes.
  • FSFO is configured in SYNC MODE
  • User is familiar with DG broker setup
  • User is familiar with standard GoldenGate setup and downstream GoldenGate setup so those details are omitted
  • User is familiar with using srvctl
  • Both the primary and standby databases are RAC with CRS. If a single-instance primary or standby is used, then Oracle Restart is required to configure role-based services.

High level configuration steps:

 

  1. Setup a broker configuration with a primary and standby database.
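A minimal DGMGRL sketch of this step, using the db1/db2 database names from the srvctl examples in this post (the configuration name is an assumption, and fast-start failover additionally requires Flashback Database and a running observer, which are omitted here):

```
DGMGRL> CREATE CONFIGURATION dgconfig AS
          PRIMARY DATABASE IS db1 CONNECT IDENTIFIER IS db1;
DGMGRL> ADD DATABASE db2 AS CONNECT IDENTIFIER IS db2
          MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;

-- FSFO in SYNC mode, per the assumptions above:
DGMGRL> EDIT DATABASE db1 SET PROPERTY LogXptMode = 'SYNC';
DGMGRL> EDIT DATABASE db2 SET PROPERTY LogXptMode = 'SYNC';
DGMGRL> ENABLE FAST_START FAILOVER;
```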

 

2. Configure role-based services on both the primary and standby databases to publish a service name for use in accessing the primary database. This role-based service will ensure that IE connects to the proper primary database after role-change events.

Primary cluster:

 

srvctl add service -d db1 -s oggserv -l PRIMARY -q TRUE -e SESSION -m BASIC -w 60 -z 20 -P BASIC -r t1,t2,t3

-d    Unique name for the database
-s    Service name
-l    Service role
-q    Send Oracle Advanced Queuing notifications
-e    Failover type (NONE, SESSION, or SELECT)
-m    Failover method (NONE or BASIC)
-w    Failover delay: wait time between connect attempts
-z    Failover retries: number of times to attempt to connect after a failover
-P    TAF policy specification (NONE, BASIC, or PRECONNECT)
-r    List of preferred instances

 

For more details on these flags consult the Server Control Utility Reference.

 

Standby cluster:

 

srvctl add service -d db2 -s oggserv -l PRIMARY -q TRUE -e SESSION -m BASIC -w 60 -z 20 -P BASIC -r t4,t5

 

3. Make sure that your new role-based service has an appropriate address list, with the SCAN hostnames of both the primary and standby databases defined in tnsnames.ora, to facilitate access to all of the available instances.

 

GGCON =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = coe-01)(PORT = 1530))
      (ADDRESS = (PROTOCOL = TCP)(HOST = coe-02)(PORT = 1531))
      (LOAD_BALANCE = off)
      (FAILOVER = on)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = dg)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 60))
    )
  )

4. Create a third database and follow the instructions in Appendix 1 of the “Oracle GoldenGate Installation and Setup Guide for 11g Release 2” to configure IE in downstream mode. Note that in this case GoldenGate is installed locally on the file system of the downstream database. Access to the source database must use the designated primary service name, and the parameter file for Manager should include the AUTORESTART option. For downstream capture to function properly without the need for a restart, it must have the service name configured with client-side TAF enabled so that it can retry the service until it is republished by the new primary.
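Once the downstream database is prepared, the Extract is registered and added from GGSCI, logging in to the source through the role-based service and to the mining database locally (the credentials and trail name match the capture parameter file in this post; this is a sketch of the registration step, not the full procedure from the installation guide):

```
GGSCI> DBLOGIN USERID gg_admin@ggcon PASSWORD gg_admin
GGSCI> MININGDBLOGIN USERID ggadmcap@dbmscap PASSWORD ggadmcappw
GGSCI> REGISTER EXTRACT capture DATABASE
GGSCI> ADD EXTRACT capture, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT capture, MEGABYTES 50
```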

mgr.prm

###########################################################

PORT 7809

AUTOSTART ER *

AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 1, RESETMINUTES 60

AUTORESTART REPLICAT *, RETRIES 3, WAITMINUTES 1, RESETMINUTES 60

####################################################################

 

capture.prm

############################################################

EXTRACT capture

RMTHOST coe-02, MGRPORT 7809

RMTTRAIL ./dirdat/rt

USERID gg_admin@ggcon PASSWORD gg_admin

TRANLOGOPTIONS MININGUSER ggadmcap@dbmscap MININGPASSWORD ggadmcappw

TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

DISCARDFILE ./dirrpt/capture.dsc, APPEND

MEGABYTES 50

TABLE tk*.*;

#############################################################

Once all of the setup steps have been completed and the physical standby and target sites are healthy and replicating the necessary data, you may attempt a role transition. Here is a rough overview of the steps:

 

1. Crash the primary database.

2. The broker will initiate the FSFO once the preconfigured threshold has expired (default 30 seconds).

3. Extract’s connection with the database should initiate attempts to fail over the existing connection via client-side TAF.

4. Once the Data Guard failover completes and the new primary opens for the first time, CRS will publish the service on the new primary database.

5. Extract should then be able to reconnect to the new primary service, assuming that its retry window has not expired.

6. Once the connection to the new primary database is complete, Extract is capable of replicating data from the new primary database.

7. Mount the old primary and allow reinstatement to proceed. Upon completion, open the new standby database read-only and start MRP.

8. At this point the DR site is restored and replication is in sync with the new primary database.
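The end state can be verified from both layers; a sketch of the checks (exact output varies by version):

```
-- Data Guard side: confirm the roles and FSFO state
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;

-- GoldenGate side: Extract should be RUNNING and its lag converging
GGSCI> INFO EXTRACT capture
GGSCI> SEND EXTRACT capture, STATUS
GGSCI> LAG EXTRACT capture
```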

 
