Leveraging a Migration to Modernize a Multitenant Environment

October 28, 2020 | 5 minute read

Migrations are a necessary evil. If one can't be avoided, you should at least take advantage of the opportunity and piggyback some modernization onto the move. This short article tells the story of such a migration. Here's the setting:

The workload to migrate consisted of several dozen instances of Weblogic, each with its own database for application code and data. Each of these databases was a PDB, hosted on one of a few container databases. These databases were still on 12c (12.1.0.2), so in addition to migrating the applications (with minimal disruption to the users), the mandate was to also upgrade the databases to 19c within the same downtime window. To make things more interesting, each instance pair (Weblogic and PDB) was to be migrated at a different time than all the other pairs. With these requirements, there were three separate tasks to deal with:

  1. Migrate the Weblogic instances.
    Since the actual application and its data live in the corresponding database, there wasn't really anything to migrate here. All that was needed was a set of new Weblogic instances in the target location. To bootstrap them, they needed a system schema in an existing database. Providing a new PDB (shared by all Weblogic instances) took care of that. Then, once the application database for a particular instance was migrated and upgraded to 19c, a quick reconfiguration of the instance was sufficient to bring up the application.
  2. Migrate the application database (for each Weblogic instance).
    The application use case allowed for a switchover downtime of around one hour. This made a rather traditional approach with incremental RMAN backups feasible, where the final backup and restore only brings in the small set of last-minute updates to the data. After considering several alternatives, we decided to go with transportable tablespaces. This was easy, as each application instance usually lived in a single, dedicated tablespace. It was also the method that allowed an elegant upgrade to 19c. Note that we had to migrate one PDB at a time, so doing the full CDB with all its PDBs was not an option.
  3. Upgrading the database from 12c to 19c
    Usually, a database upgrade is a rather involved process. If you upgrade the CDB, all the PDBs need to be upgraded as well, and at the same time. You would also have to do it before migrating, which implies downtime for the upgrade and then more downtime for the migration. We were looking for a way to avoid both by circumventing an actual upgrade altogether. In particular, we needed to migrate individual PDBs one by one. So instead of upgrading the source, we made use of how the actual data was being moved: we "plugged" the newly populated tablespace into a virgin 19c PDB on the target side and pulled in all the database artifacts that were not included in the tablespace via a database link to the source (see the sketch after this list).

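As a minimal sketch of what such a virgin target PDB could look like: assuming the 19c CDB uses Oracle Managed Files (so no FILE_NAME_CONVERT clause is needed) and matching character set and time zone settings, an empty PDB can be created up front and simply wait for its tablespace. The PDB and admin user names below are illustrative only.

-- On the 19c target CDB (names are illustrative, OMF assumed)
CREATE PLUGGABLE DATABASE pdbt1 ADMIN USER pdbadmin IDENTIFIED BY "ChangeMe_1";
ALTER PLUGGABLE DATABASE pdbt1 OPEN;
ALTER PLUGGABLE DATABASE pdbt1 SAVE STATE;
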
These last two steps, which are closely related, will be discussed in detail here. They allowed for a seamless migration of individual instances of the application with only one minimal outage to the application, while bringing the database from 12c to 19c at the same time.

There is a nifty little tool to use with transportable tablespace migrations called "xtts". It is described in great detail in MOS Note 2471245.1. It takes away the burden of configuring RMAN and dealing with full or incremental backups and restores. You configure it with some basic information about the source and target databases and which tablespaces to migrate. It then takes care of the rest - including the copying of the data to the target system, if desired. Once configured, you can run it as often as you like, and it will sync the target with the latest updates without disrupting the source.
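
To give an idea of what "some basic information" means in practice: the tool is driven by a small properties file and a perl driver script. The excerpt below is only an illustration; parameter names and available options vary between versions of the script, so the MOS note remains the authoritative reference.

# illustrative xtt.properties excerpt (check MOS Note 2471245.1 for the exact parameters of your version)
tablespaces=APPDATA_TS1
platformid=13
src_scratch_location=/u01/app/oracle/xtts_stage
dest_datafile_location=+DATA
dest_scratch_location=/u01/app/oracle/xtts_stage
parallel=2

# run on the source to take the initial and incremental backups ...
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
# ... and on the target to roll them into the copied datafiles
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Repeating the backup/restore pair keeps the target copy close to the source until the final switchover.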

In our case, we had some additional challenges dealing with compression and encryption of the backups. xtts assumes a few things about the RMAN configuration, and these assumptions didn't match our environment. A few slight modifications of the code templates used by xtts took care of that. For example, we had to replace

backup for transport allow inconsistent

with

set encryption off for all tablespaces;
set compression algorithm "basic";
backup as backupset for transport allow inconsistent

Our tablespaces were already encrypted using TDE, so additional RMAN encryption was unnecessary and only introduced issues with encryption passwords for the restore. Once these issues were resolved, restoring the tablespaces on a 19c database system running in OCI DBaaS worked like a charm.
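
If you hit the same mismatch, it may also help to look at (and, where appropriate, persistently adjust) the RMAN defaults instead of patching every template. The commands below are standard RMAN configuration commands, shown only as a sketch; whether changing the persistent defaults is acceptable depends on what else in the environment relies on them.

# list the current persistent settings, including encryption and compression defaults
SHOW ALL;
# tablespaces already protected by TDE don't need extra backup-set encryption
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE COMPRESSION ALGORITHM 'BASIC';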

It is important to remember that at this stage, all we did was move a tablespace. We didn't create a database, and we didn't import the data into a database. We only moved the data stored in the tablespace by copying the tablespace. As a reminder, tablespaces on their own only contain data (tables, table data, indexes and types). They do not contain any of the database logic. All of that resides in the data dictionary, which lives in the SYSTEM tablespace. Since the SYSTEM tablespace can't be set to read-only, it can't be transported. So we need a different way to move all of this "logic" (triggers, synonyms, views, grants, roles, sequences, global temporary tables, database links etc.) into the new PDB. And finally, we also need to attach the transported tablespace to the new PDB. All of this is done with a feature called "Full Transportable Export Import" using Data Pump. Again, the necessary steps are part of the overall XTTS process:
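
One detail worth spelling out: while SYSTEM stays untouched, the application tablespace itself does have to go read-only on the source, but only for the final backup and import window at switchover time, which is what keeps the outage short. As a sketch (the tablespace name is made up):

-- on the source PDB, immediately before the final incremental backup
ALTER TABLESPACE appdata_ts1 READ ONLY;
-- if a fallback to the source were ever needed, it could simply be reopened
-- ALTER TABLESPACE appdata_ts1 READ WRITE;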

As described in DocID 2471245.1, the last phase in XTTS transfers is a final incremental backup and restore, followed by an import of the object metadata into the target database as described above. In our case, this target database was a virgin 19c PDB (created using the same character set and timezone settings as the source database). To achieve this (following option 2.C of phase 5), we created a database link to the source database

create public database link pdb1.<target database> connect to system identified by <password> 
using '//<source db IP>:1521/pdb1.<source database>';

and ran impdp over that database link, pointing it to the already restored data files for our migrated tablespace:

impdp system/<password>@pdbt1 network_link=pdb1 full=y transportable=always metrics=y 
exclude=statistics directory=LOGDIR logfile=pdb1.log 
transport_datafiles='<list of migrated datafiles>' ENCRYPTION_PASSWORD=<password>

During this process, Data Pump first creates the users, then exports the tablespace metadata from the source and imports it into the new PDB. At this point, the data files are known to the database, and the user's tables and indexes (as well as table functions and a few other pieces) are in place. Finally, Data Pump rebuilds all the remaining pieces such as views, triggers, roles, grants, user quotas, sequences, synonyms etc. This behavior is triggered by using FULL=YES and TRANSPORTABLE=ALWAYS.
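
While the import runs, its progress can be watched from a second session on the target using the standard dictionary view for Data Pump jobs; for example:

-- monitor the running network import from another session
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;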

Once done, we have a fresh 19c database with all the details of our application schema present, ready to be connected to the Weblogic instance. The import usually took only a few minutes.
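
Before handing a freshly imported PDB back to its Weblogic instance, a couple of generic sanity checks are worthwhile. These queries are not part of the xtts tooling, and the tablespace name is again illustrative:

-- transported tablespaces typically arrive read-only; open them for writes once everything checks out
ALTER TABLESPACE appdata_ts1 READ WRITE;

-- confirm the tablespace is plugged in and available
SELECT tablespace_name, status, plugged_in FROM dba_tablespaces;

-- look for invalid objects left over from the import
SELECT owner, object_type, COUNT(*)
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner, object_type;

After that, the "quick reconfiguration" from step 1 is essentially pointing the Weblogic data sources at the new PDB's connect string.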

Overall, we successfully migrated and upgraded 48 test/dev databases and another 17 production databases using this approach. This solution, while not "zero downtime", caused very little disruption to application availability and eliminated the need for any additional effort for the 12c to 19c upgrade, which would otherwise have required a separate project.

Resources and Acknowledgements:

  • Image courtesy of Mike Dietrich. If you're looking for database migration resources, check out his site!
  • Many thanks to Robert Pastijn for his invaluable help understanding xtts.
  • Thanks also go to Mike Dietrich for valuable suggestions to this blog.
  • Latest updates to XTTS can always be found in MOS Note 2471245.1.

 

Stefan Hinker

After around 20 years working on SPARC and Solaris, I am now a member of A-Team, focusing on infrastructure on Oracle Cloud.

