As customers migrate their mission-critical 24x7 databases to Oracle Cloud Infrastructure (OCI), the need arises to move such databases to cloud with near zero downtime. The migration becomes more complex if the source database is on a version that is not supported on cloud. This forces customers to first upgrade their on-premise database to an OCI-supported version and then migrate to OCI. That is feasible for non-critical databases that can afford downtime for such an upgrade. But for mission critical applications, it may not be possible to take downtime to perform an on-premise upgrade just to reach a version eligible for cloud migration. Customers may also face hardware constraints or restrictions on the additional testing hours required to do it on-premise.
Cloud migration projects may also require moving the database to a more recent version (like 18c/19c). With 18c/19c on cloud, it also becomes mandatory to move from the traditional (non-container) database architecture to the multitenant architecture. For high performance and consolidation, an additional requirement can be to move to Exadata Cloud Service (ExaCS) on Oracle Cloud Infrastructure.
This blog post describes a strategy to migrate very large databases (with all the additional requirements discussed above) from on-premise to Oracle Cloud Infrastructure with minimal downtime using Oracle Goldengate.
For our customer scenario, the source database is version 11.2.0.4 (11.2.0.4 being the only version of 11g available on OCI) and has to be upgraded to 18c during the cloud migration. From 12c onwards, the Oracle Cloud Infrastructure DBaaS service supports multitenant architecture, which means the database has to be converted into a PDB during migration to cloud.
So, our scenario for migration is
Let's talk about these requirements
For migrating an 11.2.0.4 database from on-premise to 18c on OCI ExaCS, there are 2 high level steps
The first step is important and needs a decision on whether this upgrade should be done on-premise or on cloud.
A customer may want to do it on-premise to separate upgrade issues from cloud migration issues. But doing it on-premise may demand additional hardware and additional testing cycles, which the customer may or may not be willing to invest in.
For another customer, it may make more sense to perform this upgrade during the migration and run just one round of testing on the final, upgraded version of the database in cloud. Such a customer may have no hardware available on-premise to test the upgrade, no bandwidth to perform multiple testing cycles both on-premise and on cloud, and on top of that wants to move to cloud faster.
The second step for conversion to multitenant architecture can be done directly on cloud.
2. Move database from on-premise traditional hardware to ExaCS on Oracle Cloud Infrastructure
The driver for moving to Exadata Cloud Service can be consolidation, performance, or database size. For migrating databases larger than what the DBaaS service supports, it is recommended to use Exadata Cloud Service.
3. Move database to multitenant architecture i.e. convert non-container database to container database on cloud
As the OCI DBaaS services support multitenant architecture, any database from 12c onwards is required to be converted to a multitenant (pluggable) database after migration to cloud.
4. Migrate with near zero downtime along with a fallback option for cutover
Goldengate can perform the migration with near zero downtime and also provides a fallback option.
5. Faster movement of data from on-premise to cloud
Another important factor in cloud migration is the method used to move the data from on-premise to cloud.
The data can be moved from on-premise to OCI over the internet, using FastConnect, or by using the Data Transfer Appliance (DTA). Data movement over the internet raises security concerns and can work for smaller databases, but it is certainly not an option for multi-terabyte databases. So the data transfer option for large databases is either DTA or FastConnect. With FastConnect, the data can be moved to cloud much faster. DTA has a slightly higher turnaround time and can be used where time is not a constraint for data movement. For our scenario of migration using Goldengate, either DTA or transfer over FastConnect can be used.
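As a rough illustration of why link speed dominates this choice, a back-of-envelope calculation can be sketched as below (the sizes and throughput figures are assumptions for illustration, not FastConnect guarantees):

```shell
# Rough transfer-time estimate: database size in GB, sustained link speed in Mbps.
# Assumes no protocol overhead, retransmission, or compression; real times will differ.
estimate_hours() {
  awk -v gb="$1" -v mbps="$2" 'BEGIN { printf "%.1f\n", (gb * 8 * 1024) / mbps / 3600 }'
}

estimate_hours 10240 1000   # 10 TB over a sustained 1 Gbps link -> ~23.3 hours
estimate_hours 10240 100    # the same 10 TB over 100 Mbps       -> ~233 hours
```

Even before compression, a multi-terabyte database over a slow link runs into days, which is where DTA or FastConnect becomes the only practical choice.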
We have the scenario, so let's see the end-to-end steps/process for the migration.
On Source (On-premise)
On Target (Cloud)
There are 3 things to consider for the Goldengate installation.
1. Where is Goldengate software going to be installed?
Goldengate on the source can be installed on the database server itself or on a separate machine. Considering the source system is a live production system, the customer may not allow installation of the Goldengate software on the source database server. So it makes more sense to install Goldengate on a separate machine and perform remote capture of transactions from the source database. The remote capture process uses SQL*Net to connect to the database and capture transactions remotely. The remotely captured transactions are written into local trail files on the server where the Goldengate software is installed. A Goldengate user needs to be created on the source database to capture transactions.
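The source-side database setup can be sketched as below (the user name ggadmin and its tablespace are assumptions for illustration; the exact grants should follow the Goldengate installation guide for the source version):

```sql
-- Enable minimal supplemental logging, required for Goldengate capture
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;

-- Create the Goldengate capture user (name and tablespace are illustrative)
CREATE USER ggadmin IDENTIFIED BY "********" DEFAULT TABLESPACE users;
GRANT CREATE SESSION, SELECT ANY DICTIONARY TO ggadmin;

-- Grant the capture privileges in one step
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
```

The extract then logs in as this user over SQL*Net from the separate Goldengate machine.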
2. What will be the version of Goldengate software?
It is always best to use the latest Goldengate version supported for the source database version as per the certification matrix on My Oracle Support. The highest version of Goldengate that can capture from an 11.2.0.4 database is Goldengate 12.3.
3. What would be the capture mode?
Running capture on an 11.2.0.4 database in integrated mode requires certain patches on the 11.2.0.4 database. If those patches are present on the database, integrated mode can be used for capture; if not, either the patches need to be applied (which the customer will most probably not allow on a production system) or the classic capture mode must be used.
For more information, check MOS Note - 11.2.0.4 Database Specific Bundle Patches for Integrated Extract 11.2.x (Doc ID 1411356.1)
The Goldengate software relies on Oracle Client libraries to connect to the database, so the Oracle Client software needs to be installed on the Goldengate server.
Install the Oracle Client
The manager has been configured on port 7809 with standard set of parameters.
GGSCI (mlib-gghub) 3> view params mgr
PORT 7809
AUTOSTART EXTRACT *
AUTORESTART ER *, RETRIES 3
GGSCI (mlib-gghub) 4>
Assuming that there are no long running transactions on the database, the extract can be started with the begin now option. The parameter file for the extract captures transactions for 2 schemas - SCH_B01 and SCH_C01 - along with sequences.
GGSCI (mlib-gghub) 3> add extract ext11g, tranlog, begin now
GGSCI (mlib-gghub) 3> add exttrail ./dirdat/et, extract ext11g
GGSCI (mlib-gghub) 3> view params ext11g
WARNLONGTRANS 2h CHECKINTERVAL 5m
DISCARDROLLOVER AT 01:00 ON SUNDAY
TABLE SCH_B01.* ;
TABLE SCH_C01.* ;
SEQUENCE SCH_B01.* ;
SEQUENCE SCH_C01.* ;
The pump has been configured to read the trail files generated by the extract process and transfer changes to the Goldengate instance on cloud. x.x.x.x is the IP address of the Goldengate instance on cloud. The pump transfers all captured changes to the target without any filtering.
GGSCI (mlib-gghub) 3> add extract pmp11g, EXTTRAILSOURCE ./dirdat/et
GGSCI (mlib-gghub) 3> add rmttrail ./dirdat/rt ,extract pmp11g
GGSCI (mlib-gghub) 3> view params pmp11g
TABLE SCH_B01.* ;
TABLE SCH_C01.* ;
SEQUENCE SCH_B01.* ;
SEQUENCE SCH_C01.* ;
The extract process can now be started on the source to begin capturing transactions from the source database. The pump is started at the same time to transfer the captured changes to the target Goldengate instance on cloud.
GGSCI (mlib-gghub) 2> start extract EXT11G
GGSCI (mlib-gghub) 2> start extract PMP11G
Considering size of database and complexity of migration, an RMAN backup of source database can be used for initial load of target database on cloud.
The database backup can be taken directly to Oracle Cloud Infrastructure Object Storage using RMAN. However, some customers do not prefer to configure the backup libraries on their database server, or have bandwidth limitations for a direct transfer to object storage. Such customers can take the backups locally and move them directly to the ExaCS node.
An important factor to consider for the RMAN backup is the number of channels to be used. The more channels, the faster the backup; however, the number of channels is also limited by the performance of the underlying I/O subsystem. So it is suggested to run a few backup tests to arrive at the optimum number of channels that provides maximum speed for the available I/O subsystem.
Backup compression also plays an important role. The smaller the compressed backup, the less time it takes to transfer the backup pieces to cloud over FastConnect. The compression ratio achievable depends on the underlying data in the database, but it is always good to use compression to reduce the size of the backup pieces.
A lot of customers have restrictions to execute backup for migration on the production system. For such cases, the backup can be taken from a standby database (if available).
$ cat /home/oracle/backup.sh
rman target / msglog /home/oracle/backup.log cmdfile=/home/oracle/backup.rcv
$ cat /home/oracle/backup.rcv
run {
allocate channel ch1 type DISK;
allocate channel ch24 type DISK;
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/rman_backups/bildb/%d_D_%T_%u_s%s_p%p'
plus archivelog format '/rman_backups/bildb/arch_%U';
BACKUP CURRENT CONTROLFILE FORMAT '/rman_backups/bildb/cntrl_%s_%p_%t';
release channel ch1;
release channel ch24;
}
$ nohup sh /home/oracle/backup.sh &
RMAN> restore database preview summary;
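The point up to which the restore should run is tied to an SCN noted from the source at backup completion. One simple way to capture it (a sketch, assuming the query is run on the source right after the archivelog backup finishes) is:

```sql
-- On the source database, immediately after the backup completes
SELECT current_scn FROM v$database;

-- Note this value: it is used for the point-in-time restore on target
-- and as the ATCSN starting point for the Goldengate replicat.
```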
The backup can be transferred using SCP/WinSCP directly to ExaCS database server. As ExaCS database system has limited space on local filesystems, an ACFS based filesystem can be created to hold the temporary backups for restore. The ACFS filesystem to hold backup pieces can be created on RECO diskgroup to ensure that DATA diskgroup has sufficient space available for restore.
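Creating the ACFS filesystem on the RECO diskgroup can be sketched as below (the volume name, size, device path, and mount point are assumptions for illustration):

```shell
# As the grid user: create an ADVM volume on the RECO diskgroup
asmcmd volcreate -G RECO -s 2T backupvol
asmcmd volinfo -G RECO backupvol        # note the reported volume device path

# As root: format the volume as ACFS and mount it
mkfs -t acfs /dev/asm/backupvol-123     # device path from volinfo (illustrative)
mkdir -p /backups
mount -t acfs /dev/asm/backupvol-123 /backups
```

The backup pieces copied via SCP/WinSCP can then land on this mount point for the restore.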
Just like the source, a Goldengate instance is required on the target too. As our target is Exadata Cloud Service on Oracle Cloud Infrastructure, it is recommended to install Goldengate on a separate machine. A compute instance can be created and Goldengate installed manually on it. The installation steps are the same as mentioned for the Goldengate installation on source.
The replicat created is a parallel replicat, although an integrated replicat can also be used.
GGSCI (mlib-gghubtgt) 2> add replicat rep18c parallel, exttrail ./dirdat/rt, checkpointtable oci_migra.ggchkpt
GGSCI (mlib-gghubtgt) 2> view params rep18c
DISCARDFILE ./dirrpt/rep18c.dsc, append
REPORTCOUNT EVERY 30 SECONDS, RATE
REPORTROLLOVER AT 01:00 ON SUNDAY
DISCARDROLLOVER AT 01:00 ON SUNDAY
MAP SCH_B01.*, TARGET BILDB.SCH_B01.*;
MAP SCH_C01.*, TARGET BILDB.SCH_C01.*;
With Exadata Cloud Service, it is assumed that the target container database is already created. The backup brought from the on-premise database will be upgraded to 18c and plugged in as a pluggable database in this precreated container.
Although the database backup we brought from on-premise is 11.2.0.4, it can be restored under the 18c home on ExaCS.
For more information on upgrade to 18c, check MOS Note - Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 18c (Doc ID 2418045.1)
A few important points for the restore -
1) Set until SCN to the SCN captured from the source RMAN backup. This ensures that we restore the database up to a known point, which will also be the starting point for applying the change deltas coming via Goldengate.
2) Only restore the database. Do not open the database after restore and recovery. This is because we want to upgrade our database to 18c.
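The two points above can be sketched as an RMAN block (the SCN shown is the one from this scenario; the channel allocation follows the earlier backup script):

```sql
run {
  set until scn 2579195632075;   # SCN noted from the source backup
  restore database;
  recover database;
  # do NOT open the database here; it is opened with the upgrade option next
}
```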
For upgrade, open the database with upgrade option and run dbupgrade.
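The upgrade step can be sketched as below (assuming the environment is already set to the 18c home):

```sql
SQL> startup upgrade

-- Then, from the operating system, run the parallel upgrade utility:
-- $ $ORACLE_HOME/bin/dbupgrade

-- After the upgrade completes, restart normally and recompile invalid objects
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql
```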
RMAN> restore controlfile from '/rman_backups/cntrl_882_1_1005936032';
An important step after the upgrade is to update the timezone file version of the database. The timezone version of the source database may not be the latest one and needs to be upgraded to match the cloud database. At the time of writing this blog, the latest timezone version on cloud is 31. So if the timezone version of the source database is, say, 14, it needs to be updated to 31. The process is simple and well documented in MOS Note "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST (Doc ID 1509653.1)". The timezone patch should already be present in the Oracle binaries, so only the data portion has to be executed as per the MOS Note. A quick way to check the timezone version is to run this statement in the target container.
SQL> select * from v$timezone_file;
FILENAME VERSION CON_ID
After the timezone update, the non-container database is ready to be plugged in as a pluggable database in the target container. The non-container database has to be kept in READ ONLY mode for the plugin operation. Since the database size is huge, the pluggable database can be created using the NOCOPY clause, which means the same set of datafiles that belong to the non-container database are repointed to the pluggable database instead of being copied. This saves time as well as space for huge databases.
For more information on conversion process, check MOS Note How to migrate an existing pre-12c database (non-CDB) to 12c CDB database? (Doc ID 1564657.1).
The final step after pluggable database creation is to run the noncdb_to_pdb.sql script to update the metadata.
Generate PDB descriptor file in non-CDB
SQL> alter database open read only;
SQL> exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/bildb.xml');
SQL> shutdown immediate
Create PDB in CDB container
SQL> SET SERVEROUTPUT ON;
SQL> DECLARE
       compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/bildb.xml')
         WHEN TRUE THEN 'YES'
         ELSE 'NO'
       END;
     BEGIN
       DBMS_OUTPUT.PUT_LINE(compatible);
     END;
     /
SQL> CREATE PLUGGABLE DATABASE bildb USING '/tmp/bildb.xml' SOURCE_FILE_NAME_CONVERT=('+DATA/bildb/DATAFILE','+DATA/bildb/DATAFILE','+DATA/bildb/TEMPFILE','+DATA/bildb/TEMPFILE') NOCOPY TEMPFILE REUSE;
Pluggable database created.
SQL> alter session set container=bildb;
SQL> show con_name
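With the session set to the new PDB as above, the conversion script can be run and the PDB opened (the save state command is optional and ensures the PDB reopens automatically after a CDB restart):

```sql
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database bildb open;
SQL> alter pluggable database bildb save state;
```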
Now that the database has been restored and plugged in as a pluggable database in the target container, the initial load for Goldengate is complete. The next step is to start change synchronization to apply the change deltas.
We need to start change synchronization from the same SCN that we captured during the RMAN backup of the source database.
GGSCI (mlib-gghubtgt) 2> START REPLICAT rep18c, ATCSN 2579195632075
After starting the replication, the changes from on-premise database will start getting applied to 18c database on Oracle Cloud Infrastructure.
For cutover, it is recommended to create a reverse replication path from the 18c database on cloud to the 11.2.0.4 database on-premise. After cutover, the reverse replication path has to be enabled for transactions to flow from the cloud database to the on-premise 11.2.0.4 database.
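The reverse path mirrors the forward one; a sketch of the cloud-side capture (the group names, alias, and trail paths are illustrative, and the extract should be started only at cutover so that only post-cutover transactions flow back):

```text
GGSCI> dblogin useridalias ggadmin_tgt
GGSCI> add extract ext18c, integrated tranlog, begin now
GGSCI> register extract ext18c database container (bildb)
GGSCI> add exttrail ./dirdat/ra, extract ext18c
```

A matching pump and replicat on the on-premise side then apply these changes back to the 11.2.0.4 database as the fallback path.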
1) Using this single step direct migration procedure, on-premise databases can be easily migrated to Oracle Cloud Infrastructure with near zero downtime and, at the same time, upgraded to the latest version.
2) The procedure also shows that older database versions (like 11.2.0.4) which are not supported on the Oracle Cloud DBaaS service can be migrated to the latest version on Oracle Cloud Infrastructure in a single step.
3) This single step migration procedure reduces cost, end-to-end migration time, and the testing effort required for such migrations.