Migration and upgrade of 11g database on-premise to 18c on Oracle Cloud Infrastructure with near zero downtime

August 30, 2019 | 12 minute read
Rishi Mahajan
Consulting Solutions Architect

Introduction

As customers migrate their mission-critical 24x7 databases to Oracle Cloud Infrastructure (OCI), those databases need to be moved to the cloud with near zero downtime. The migration becomes more complex if the source database is on a version that is not supported in the cloud. This forces customers to first upgrade their on-premise database to an OCI-supported version and then migrate to OCI. That is doable for non-critical databases which can afford the downtime of such an upgrade. But for mission-critical applications, it may not be possible to take downtime for an on-premise upgrade just to reach a version eligible for cloud migration. Customers may also have hardware constraints, or restrictions on the additional testing hours required to do the upgrade on-premise.

Cloud migration projects may also require moving the database to a more recent version (like 18c/19c). With 18c/19c in the cloud, it also becomes mandatory to move from the traditional (non-container) database architecture to the multitenant architecture. For high performance and consolidation, an additional requirement can be to move to ExaCS (Exadata Cloud Service) on Oracle Cloud Infrastructure.

This blog post describes a strategy to migrate very large databases (with all of the additional requirements discussed above) from on-premise to Oracle Cloud Infrastructure with minimal downtime using Oracle GoldenGate.

For our customer scenario, the source database is version 11.2.0.3 (11.2.0.4 being the only 11g version available on OCI) and has to be upgraded to 18c during the cloud migration. From 12c onwards, the Oracle Cloud Infrastructure DBaaS service supports the multitenant architecture, which means the database has to be converted into a PDB during migration to the cloud.

Migration Scenario

So, our scenario for the migration is:

  1. Migrate 11.2.0.3 source database to 18c on Oracle Cloud Infrastructure
  2. Move database from on-premise traditional hardware to ExaCS on Oracle Cloud Infrastructure.
  3. Move database to multitenant architecture i.e. convert non-container database to container database on cloud.
  4. Migrate with near zero downtime along with a fallback option for cutover.
  5. Faster movement of data from on-premise to cloud.

Let's talk about each of these requirements.

1. Migrate 11.2.0.3 source database to 18c on Oracle Cloud Infrastructure

For migrating an 11.2.0.3 database from on-premise to 18c on OCI ExaCS, there are two high-level steps:

  1. Upgrade 11.2.0.3 non-container database to 18c non-container database
  2. Convert upgraded 18c non-container database to container database.

The first step is important, and it needs a decision on whether the upgrade should be done on-premise or in the cloud.

A customer may want to do it on-premise to separate upgrade issues from cloud migration issues. But doing so may demand additional hardware and additional testing cycles, which the customer may or may not be willing to invest in.

For another customer, it may make more sense to perform this upgrade during the migration and run just one round of testing on the final upgraded database in the cloud. There may be no hardware available on-premise to test the upgrade, no bandwidth to perform multiple testing cycles both on-premise and in the cloud, and on top of that a desire to move to the cloud faster.

The second step, conversion to the multitenant architecture, can be done directly in the cloud.

2. Move database from on-premise traditional hardware to ExaCS on Oracle Cloud Infrastructure

The driver for moving to Exadata Cloud Service can be consolidation, performance, or database size. For migrating databases larger than what the DBaaS service supports, it is recommended to use Exadata Cloud Service.

3. Move database to multitenant architecture i.e. convert non-container database to container database on cloud

As the OCI DBaaS services support the multitenant architecture, any database from 12c onwards has to be converted to a multitenant (pluggable) database after migration to the cloud.

4. Migrate with near zero downtime along with a fallback option for cutover

GoldenGate can perform the migration with near zero downtime and provides a fallback option.

5. Faster movement of data from on-premise to cloud

Another important factor in a cloud migration is the method used to move the data from on-premise to the cloud.

The data can be moved from on-premise to OCI over the internet, over FastConnect, or by using the Data Transfer Appliance (DTA). Moving data over the internet raises security concerns and may work for smaller databases, but it is certainly not an option for multi-terabyte databases. So the transfer option for a large database is either DTA or FastConnect. With FastConnect, the data can be moved to the cloud much faster. DTA has a slightly higher turnaround time and can be used where time is not a constraint for the data movement. For our scenario of migration using GoldenGate, either DTA or transfer over FastConnect can be used.

Migration Procedure

We have the scenario, so let's see the end-to-end steps for the migration.

On Source (On-premise)

  1. Install and configure the GoldenGate instance.
  2. Start the capture on the source for change synchronization.
  3. Perform an RMAN backup of the source database for the GoldenGate initial load.
  4. Transfer the RMAN backup from the source to the cloud.

On Target (Cloud)

  1. Install and configure the GoldenGate instance.
  2. Restore the RMAN backup of the source 11.2.0.3 database under the 18c Oracle home and upgrade to 18c.
  3. Update the timezone of the upgraded database.
  4. Convert the non-container database to a container (pluggable) database.
  5. Start the replicat on the target database for change synchronization.

1. Install and Configure GoldenGate on source

There are three things to consider for the GoldenGate installation.

1. Where will the GoldenGate software be installed?

GoldenGate on the source can be installed on the database server itself or on a separate machine. Since the source is a live production system, the customer may not allow installation of the GoldenGate software on the source database server. So it makes more sense to install GoldenGate on a separate machine and perform remote capture of transactions from the source database. The remote capture process uses SQL*Net to connect to the database and capture transactions remotely. The remotely captured transactions are written into local trail files on the server where the GoldenGate software is installed. A GoldenGate user needs to be created on the source database to capture transactions.
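
A minimal sketch of creating such a user, assuming a database user named ggadmin (a hypothetical name, mapped later to the credential-store alias oci_migra) and classic-capture style privileges; the exact grants depend on the GoldenGate version and capture mode:

SQL> CREATE USER ggadmin IDENTIFIED BY "********";
SQL> GRANT CREATE SESSION, ALTER SESSION TO ggadmin;
SQL> GRANT SELECT ANY DICTIONARY, SELECT ANY TABLE TO ggadmin;
SQL> GRANT FLASHBACK ANY TABLE TO ggadmin;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;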

2. What version of the GoldenGate software will be used?

It is always best to use the latest GoldenGate version supported for the source database version, as per the certification matrix on My Oracle Support. The highest GoldenGate version that can capture from an 11.2.0.3 database is GoldenGate 12.3.

3. What will be the capture mode?

Running capture against an 11.2.0.3 database in integrated mode requires certain patches on the 11.2.0.3 database. If those patches are already on the database, integrated capture can be used; if not, we either need to apply the patches (which the customer will most probably not allow on a production system) or use classic capture.

For more information, check MOS Note - 11.2.0.3 Database Specific Bundle Patches for Integrated Extract 11.2.x (Doc ID 1411356.1)

The GoldenGate software relies on the Oracle Client libraries to connect to the database, so the Oracle Client software needs to be installed on the GoldenGate server.

Install Oracle 11.2.0.3 Client
$ ./runInstaller -silent -responseFile /tmp/client_install.rsp


Install GoldenGate 12.3.0.1.4
$ unzip 123014_fbo_ggs_Linux_x64_shiphome.zip
$ cd fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller -silent -responseFile /tmp/oggcore.rsp
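
The parameter files below reference database credentials through USERIDALIAS. A minimal sketch of setting up the GoldenGate credential store for that alias, assuming the hypothetical user ggadmin created earlier and a hypothetical TNS alias bildb_onprem for the source database (GGSCI prompts for the password):

GGSCI (mlib-gghub) 1> ADD CREDENTIALSTORE
GGSCI (mlib-gghub) 2> ALTER CREDENTIALSTORE ADD USER ggadmin@bildb_onprem ALIAS oci_migra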

 

Configure Manager

The manager has been configured on port 7809 with a standard set of parameters.

GGSCI (mlib-gghub) 3> view params mgr

PORT 7809
DYNAMICPORTLIST 7740-7760

AUTOSTART EXTRACT *
AUTOSTART REPLICAT *

AUTORESTART ER *, RETRIES 3, WAITMINUTES 10, RESETMINUTES 60
DOWNREPORTMINUTES 15
LAGCRITICALSECONDS 10
LAGINFOMINUTES 0
LAGREPORTMINUTES 15

GGSCI (mlib-gghub) 4>

 

Configure Extract

Assuming that there are no long-running transactions on the database, the extract can be started with the BEGIN NOW option. The parameter file for the extract captures transactions for two schemas, SCH_B01 and SCH_C01, along with sequences. Table-level supplemental logging for these schemas also has to be enabled, as sketched after the parameter file.

GGSCI (mlib-gghub) 3> add extract ext11g, tranlog, begin now

GGSCI (mlib-gghub) 3> add exttrail ./dirdat/et, extract ext11g

GGSCI (mlib-gghub) 3> view params ext11g

EXTRACT ext11g
EXTTRAIL ./dirdat/et
USERIDALIAS oci_migra
TRANLOGOPTIONS DBLOGREADER
DISCARDFILE ./dirrpt/ext11g.dsc, APPEND MEGABYTES 50

WARNLONGTRANS 2h CHECKINTERVAL 5m
GETUPDATEBEFORES
NOCOMPRESSDELETES
NOCOMPRESSUPDATES

DISCARDROLLOVER AT 01:00 ON SUNDAY
STATOPTIONS REPORTFETCH
REPORTCOUNT every 10 minutes, RATE
REPORTROLLOVER AT 01:00 ON SUNDAY

TABLE SCH_B01.* ;
TABLE SCH_C01.* ;

SEQUENCE SCH_B01.* ;
SEQUENCE SCH_C01.* ;
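
Supplemental logging also has to be enabled at the table level for the captured schemas. A minimal sketch, assuming the oci_migra alias from the credential store (ADD SCHEMATRANDATA at the schema level is an alternative on versions that support it):

GGSCI (mlib-gghub) 4> DBLOGIN USERIDALIAS oci_migra
GGSCI (mlib-gghub) 5> ADD TRANDATA SCH_B01.*
GGSCI (mlib-gghub) 6> ADD TRANDATA SCH_C01.*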

 

Configure Pump

The pump has been configured to read the trail files generated by the extract process and transfer the changes to the GoldenGate instance in the cloud. Here x.x.x.x is the IP address of the GoldenGate instance in the cloud. The pump transfers all captured changes to the target without any filtering.

GGSCI (mlib-gghub) 3> add extract pmp11g, EXTTRAILSOURCE ./dirdat/et

GGSCI (mlib-gghub) 3> add rmttrail ./dirdat/rt, extract pmp11g

GGSCI (mlib-gghub) 3> view params pmp11g

EXTRACT pmp11g
RMTHOST x.x.x.x MGRPORT 7809
PASSTHRU
RMTTRAIL ./dirdat/rt

TABLE SCH_B01.* ;
TABLE SCH_C01.* ;

SEQUENCE SCH_B01.* ;
SEQUENCE SCH_C01.* ;

 

2. Start capture on source for change synchronization

The extract process can now be started to begin capturing transactions from the source database. The pump is started at the same time to transfer the captured changes to the target GoldenGate instance in the cloud.

GGSCI (mlib-gghub) 2> start extract EXT11G

GGSCI (mlib-gghub) 2> start extract PMP11G
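
The status of the extract and pump can then be verified; both processes should report RUNNING:

GGSCI (mlib-gghub) 3> info all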

 

3. Perform RMAN backup of source database for GoldenGate Initial Load

Considering the size of the database and the complexity of the migration, an RMAN backup of the source database can be used for the initial load of the target database in the cloud.

The database backup can be taken directly to Oracle Cloud Infrastructure Object Storage using RMAN. However, some customers prefer not to configure the required libraries on their database server, or have bandwidth limitations for a direct transfer to Object Storage. Such customers can take the backup locally and move it directly to the ExaCS node.

An important factor to consider for the RMAN backup is the number of channels to use. The more channels, the faster the backup, but the number of channels is also limited by the performance of the underlying I/O subsystem. So it is suggested to run a few backup tests to arrive at the optimum number of channels that provides the maximum speed for the available I/O subsystem.

Backup compression also plays an important role. The smaller the compressed backup, the less time it takes to transfer the backup pieces to the cloud over FastConnect. The compression ratio that can be achieved depends on the underlying data in the database, but it is always good to use compression to reduce the size of the backup pieces.
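
As a minimal illustration, the compression algorithm can be set persistently in RMAN before the backup (BASIC is the default; MEDIUM and HIGH require the Advanced Compression Option):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';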

A lot of customers have restrictions on running a migration backup on the production system. In such cases, the backup can be taken from a standby database (if available).

$ cat /home/oracle/backup.sh
#!/bin/ksh
export ORACLE_SID=bildb
rman target / msglog /home/oracle/backup.log cmdfile=/home/oracle/backup.rcv

$ cat /home/oracle/backup.rcv
run
{
allocate channel ch1 type DISK ;
allocate channel ch24 type DISK ;
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/rman_backups/bildb/%d_D_%T_%u_s%s_p%p'
plus archivelog format '/rman_backups/bildb/arch_%U';
BACKUP CURRENT CONTROLFILE FORMAT '/rman_backups/bildb/cntrl_%s_%p_%t';
release channel ch1;
release channel ch24;
}

$ nohup sh /home/oracle/backup.sh &

RMAN> restore database preview summary;

The RESTORE DATABASE PREVIEW SUMMARY command reports the SCN of the RMAN backup. This is the SCN up to which data will be restored on the target for the GoldenGate initial load, and it will also be the starting point for change synchronization on the target database.

4. Transfer RMAN backup from source to cloud

The backup can be transferred using SCP/WinSCP directly to the ExaCS database server. As the ExaCS database system has limited space on its local filesystems, an ACFS filesystem can be created to temporarily hold the backup pieces for the restore. This ACFS filesystem can be created on the RECO diskgroup to ensure that the DATA diskgroup has sufficient space available for the restore.
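
A minimal sketch of creating such a filesystem, with asmcmd run as the grid user (the volume name backupvol, the 2 TB size, and the mount point are illustrative; volinfo reports the exact /dev/asm device name, shown here as a placeholder):

$ asmcmd volcreate -G RECOC1 -s 2T backupvol
$ asmcmd volinfo -G RECOC1 backupvol
# as root, format and mount the ADVM volume reported above
$ /sbin/mkfs -t acfs /dev/asm/backupvol-<id>
$ mkdir /rman_backups
$ mount -t acfs /dev/asm/backupvol-<id> /rman_backups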

5. Install and Configure GoldenGate on Target

Just like the source, a GoldenGate instance is required on the target. As our target is Exadata Cloud Service on Oracle Cloud Infrastructure, it is recommended to install GoldenGate on a separate machine. A compute instance can be created and GoldenGate installed manually on it. The installation steps are the same as those for the GoldenGate installation on the source.

The replicat created is a parallel replicat, although an integrated replicat can also be used. The checkpoint table it references must exist before the replicat is added, as sketched below.
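
A minimal sketch of creating the checkpoint table, assuming the target credential-store alias oci_migra_cld has already been set up the same way as on the source:

GGSCI (mlib-gghubtgt) 1> DBLOGIN USERIDALIAS oci_migra_cld
GGSCI (mlib-gghubtgt) 2> ADD CHECKPOINTTABLE oci_migra.ggchkpt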

Configure Replicat

GGSCI (mlib-gghubtgt) 2> add replicat rep18c, parallel, exttrail ./dirdat/rt, checkpointtable oci_migra.ggchkpt

GGSCI (mlib-gghubtgt) 2> view params rep18c

REPLICAT rep18c
useridalias oci_migra_cld
DISCARDFILE ./dirrpt/rep18c.dsc, append
MAP_PARALLELISM 12
APPLY_PARALLELISM 12
REPORTCOUNT EVERY 30 SECONDS, RATE
REPORTROLLOVER AT 01:00 ON SUNDAY
DISCARDROLLOVER AT 01:00 ON SUNDAY

MAP SCH_B01.*, TARGET BILDB.SCH_B01.*;
MAP SCH_C01.*, TARGET BILDB.SCH_C01.*;

 

6. Restore RMAN backup of source 11.2.0.3 database under the 18c Oracle home and upgrade to 18c

With Exadata Cloud Service, it is assumed that the target container database has already been created. The database restored from the on-premise backup will be upgraded to 18c and then plugged into this precreated container as a pluggable database.

Although the database backup we brought from on-premise is 11.2.0.3, it can be restored directly under the 18c home on ExaCS.

For more information on upgrade to 18c, check MOS Note - Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 18c (Doc ID 2418045.1)

A few important points for the restore:

1) SET UNTIL SCN uses the SCN captured from the source RMAN backup. This ensures that we restore the database up to a known point, which will also be the starting point for applying the change deltas coming via GoldenGate.

2) Only restore and recover the database. Do not open it afterwards, because the database still has to be opened in upgrade mode to be upgraded to 18c.

For the upgrade, open the database with the UPGRADE option and run dbupgrade.

RMAN> restore controlfile from '/rman_backups/cntrl_882_1_1005936032';


RMAN> alter database mount;
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo1_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo1_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo2_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo2_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo3_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo3_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo30_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo30_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo31_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo31_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo32_1.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo32_2.log'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo40_1.dbf'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo40_2.dbf'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb/bildb/onlinelog/thread1/redo41_1.dbf'' to ''+RECOC1''";
SQL "alter database rename file ''/oradata/bildb_mirr/bildb/onlinelog/thread1/redo41_2.dbf'' to ''+RECOC1''";

RMAN> run
{
allocate channel ch1 type DISK ;
allocate channel ch24 type DISK ;
set until scn 2579195632075;
set newname for database to '+DATAC1';
restore database;
switch datafile all;
switch tempfile all;
recover database;
release channel ch1;
release channel ch24;
}
SQL> alter database open resetlogs upgrade;
$ $ORACLE_HOME/bin/dbupgrade -n 20
SQL> @?/rdbms/admin/utlu122s.sql
SQL> @?/rdbms/admin/catuppst.sql 
SQL> @?/rdbms/admin/utlrp.sql

 

7. Update the timezone of the upgraded database

An important step after the upgrade is to update the timezone file of the migrated database. The timezone version of the source database may not be the latest and needs to be brought up to the version used by the cloud database. At the time of writing this blog, the latest timezone version on cloud is 31. So if the timezone version of the source database is, say, 14, it needs to be updated to 31. The process to update the timezone is simple and well documented in MOS Note "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST (Doc ID 1509653.1)". The timezone patch should already be present in the Oracle binaries, so only the data portion has to be executed as per the MOS Note; a condensed sketch follows the query below. A quick way to check the timezone version is to run this statement in the target container.

SQL> select * from v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_31.dat              31          0
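
A condensed sketch of the DBMS_DST upgrade window from the MOS note, assuming target version 31 (the note's prepare and check steps should be run first, and this flow restarts the database):

SQL> shutdown immediate
SQL> startup upgrade
SQL> exec DBMS_DST.BEGIN_UPGRADE(31);
SQL> shutdown immediate
SQL> startup
SQL> var numfail number
SQL> exec DBMS_DST.UPGRADE_DATABASE(:numfail, parallel => TRUE, log_errors => TRUE, log_errors_table => 'SYS.DST$ERROR_TABLE', log_triggers_table => 'SYS.DST$TRIGGER_TABLE', error_on_overlap_time => FALSE, error_on_nonexisting_time => FALSE);
SQL> exec DBMS_DST.END_UPGRADE(:numfail);
SQL> select * from v$timezone_file;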

 

8. Convert non-container database to container (pluggable) database

After the timezone update, the non-container database is ready to be plugged into the target container as a pluggable database. The non-container database has to be kept in READ ONLY mode for the plugin operation. Since the database is huge, the pluggable database can be created using the NOCOPY clause, which repoints the existing datafiles of the non-container database to the pluggable database instead of performing a copy operation. This saves both time and space for huge databases.

For more information on the conversion process, check MOS Note How to migrate an existing pre-12c database (non-CDB) to 12c CDB database? (Doc ID 1564657.1).

The final step after the pluggable database creation is to run the noncdb_to_pdb.sql script inside the new PDB to update the metadata.

Generate PDB descriptor file in non-CDB

SQL> alter database open read only;

SQL> BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/bildb.xml');
END;
/

SQL> shutdown immediate

Create PDB in CDB container

SQL> SET SERVEROUTPUT ON;
DECLARE
  compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/bildb.xml')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
  END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

SQL> CREATE PLUGGABLE DATABASE bildb USING '/tmp/bildb.xml' SOURCE_FILE_NAME_CONVERT=('+DATA/bildb/DATAFILE','+DATA/bildb/DATAFILE','+DATA/bildb/TEMPFILE','+DATA/bildb/TEMPFILE') NOCOPY TEMPFILE REUSE;

Pluggable database created.

SQL> alter session set container=bildb;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
BILDB
SQL>

SQL> @/u01/app/oracle/product/18.0.0.0/dbhome_1/rdbms/admin/noncdb_to_pdb.sql
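
Once noncdb_to_pdb.sql completes, the new PDB can be opened from the root container (SAVE STATE, available from 12.1.0.2 onwards, makes it open automatically on CDB restart):

SQL> alter session set container=CDB$ROOT;
SQL> alter pluggable database bildb open;
SQL> alter pluggable database bildb save state;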

 

9. Start replicat on target database for change synchronization

Now that the database has been restored and plugged in as a pluggable database in the target container, the initial load for GoldenGate is complete. The next step is to start change synchronization to apply the change deltas.

We need to start change synchronization from the same SCN that we captured for the RMAN backup taken on the source database.

GGSCI (mlib-gghubtgt) 2> START REPLICAT rep18c, ATCSN 2579195632075

After the replicat is started, changes from the on-premise database will start getting applied to the 18c database on Oracle Cloud Infrastructure.

For cutover, it is recommended to create a reverse replication path from the 18c database in the cloud to the 11.2.0.3 database on-premise. After cutover, the reverse replication path is enabled so that transactions flow from the cloud database back to the on-premise database, preserving it as a fallback. A sketch of the cloud-side capture follows.
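
A minimal sketch of the cloud-side capture for the reverse path, assuming a common user registered for the bildb container behind a hypothetical credential-store alias oci_migra_rev (the pump and the on-premise replicat then mirror the forward configuration):

GGSCI (mlib-gghubtgt) 1> DBLOGIN USERIDALIAS oci_migra_rev
GGSCI (mlib-gghubtgt) 2> REGISTER EXTRACT ext18c DATABASE CONTAINER (bildb)
GGSCI (mlib-gghubtgt) 3> ADD EXTRACT ext18c, INTEGRATED TRANLOG, BEGIN NOW
GGSCI (mlib-gghubtgt) 4> ADD EXTTRAIL ./dirdat/bt, EXTRACT ext18c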

Conclusion

1) Using this single-step direct migration procedure, on-premise databases can be easily migrated to Oracle Cloud Infrastructure with near zero downtime and, at the same time, upgraded to the latest version.

2) The procedure also shows that older database versions (like 11.2.0.3) which are not supported on the Oracle Cloud DBaaS service can be migrated to the latest version on Oracle Cloud Infrastructure in a single step.

3) This single-step migration procedure reduces the cost, the end-to-end migration time, and the testing effort required for such migrations.
