Migrations are a necessary evil. But if they can't be avoided, one should at least take advantage of the opportunity and piggyback some modernization onto the move. This short article tells the story of such a migration. Here's the setting:
The workload to migrate consisted of several dozen Weblogic instances, each with its own database for application code and data. These databases were hosted as PDBs on a few container databases, one PDB per Weblogic instance. The databases were still on 12c, so in addition to migrating the applications (with minimal disruption to the users), the mandate was to upgrade the databases to 19c within the same downtime window. To make things more interesting, each instance pair (Weblogic and PDB) was to be migrated at a different time than all the other pairs. With these requirements, there were three separate tasks to deal with: migrating the Weblogic instances themselves, moving the application data to the new 19c containers with transportable tablespaces, and transporting the remaining database metadata into the new PDBs.
These last two steps, which are closely related, will be discussed in detail here. They allowed for a seamless migration of individual instances of the application with only one minimal outage to the application, while bringing the database from 12c to 19c at the same time.
There is a nifty little tool to use with transportable tablespace migrations called "xtts". It is described in great detail in MOS Note 2471245.1. It takes away the burden of configuring RMAN and dealing with full or incremental backups and restores. You configure it with some basic information about the source and target databases and which tablespaces to migrate. It then takes care of the rest - including the copying of the data to the target system, if desired. Once configured, you can run it as often as you like, and it will sync the target with the latest updates without disrupting the source.
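For orientation, the setup from the note is compact: a small properties file describing source and target, plus a driver script that you rerun for each incremental sync. The snippet below is only a sketch; the property names follow version 4 of the scripts from MOS Note 2471245.1, while the tablespace name, paths and platform id (13 = Linux x86-64) are placeholders:

# xtt.properties (sketch)
tablespaces=APPDATA
platformid=13
src_scratch_location=/u01/app/xtts/bkp
dest_datafile_location=+DATA
dest_scratch_location=/u01/app/xtts/bkp

# on the source host - repeat as often as needed; each run produces
# a new incremental backup to roll the target copy forward
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

# on the target host, after transferring res.txt and the backup pieces
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore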
In our case, we had some additional challenges dealing with compression and encryption of the backups. xtts assumes a few things about the RMAN configuration and these assumptions didn't match our environment. A few slight modifications of the code templates used by xtts took care of that. For example, we had to replace
backup for transport allow inconsistent

with

set encryption off for all tablespaces;
set compression algorithm "basic";
backup as backupset for transport allow inconsistent
Our tablespaces were already encrypted using TDE, so additional RMAN encryption was unnecessary and only introduced issues with encryption passwords for the restore. Once these issues were resolved, restoring the tablespaces on a 19c database system running in OCI DBaaS worked like a charm.
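One pitfall worth checking up front in such a setup: since the restore lands in a TDE-enabled target, the keystore in the target PDB must be open and must hold the source master key before the encrypted tablespaces can be restored. A quick sanity check with a standard dictionary view (nothing xtts-specific) looks like this:

-- run in the target PDB; STATUS should report OPEN
select wrl_type, status, wallet_type from v$encryption_wallet;

The source master key itself is brought over with ADMINISTER KEY MANAGEMENT IMPORT KEYS; see the TDE documentation for the exact syntax matching your keystore setup.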
It is important to remember that at this stage, all we did was move a tablespace. We didn't create a database, and we didn't import the data into one. We only moved the data stored in the tablespace by copying its data files. As a reminder, tablespaces on their own only contain data (tables, table data, indexes and types). They do not contain any of the database logic. All of that resides in the data dictionary, which lives in the SYSTEM tablespace. Since SYSTEM can't be set to read-only, it can't be transported. So we need a different way to move all of this "logic" (triggers, synonyms, views, grants, roles, sequences, global temporary tables, database links etc.) into the new PDB. And finally, we also need to attach the transported tablespace to the new PDB. All of this is done with a feature called "Full Transportable Export/Import" using Data Pump. Again, the necessary steps are part of the overall XTTS process.
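Before the final sync, it is also worth confirming that the tablespace set is self-contained, i.e. that nothing in it references objects outside the set; otherwise the transport fails at import time. The standard check (APPDATA being a placeholder for the transported tablespace) is:

exec dbms_tts.transport_set_check('APPDATA', true);
select violations from transport_set_violations;

An empty result from the second query means the set can be transported as-is.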
As described in MOS Note 2471245.1, the last phase in XTTS transfers is a final incremental backup and restore, followed by an import of the object metadata into the target database as described above. In our case, this target database was a brand-new, empty 19c PDB (created with the same character set and timezone settings as the source database). To achieve this (following option 2.C of phase 5), we created a database link to the source database
create public database link pdb1.<target database> connect to system identified by <password> using '//<source db IP>:1521/pdb1.<source database>';
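Before kicking off the import, it pays to verify that the link actually resolves and connects; a one-row query against the source over the link is enough (pdb1 here being the link name from above):

select banner from v$version@pdb1;

If global names are enforced, the full link name including the database domain may be required instead of the short form.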
and ran impdp of the source over that database link, pointing to the already restored data files for our migrated tablespace:
impdp system/<password>@pdbt1 network_link=pdb1 full=y transportable=always metrics=y exclude=statistics directory=LOGDIR logfile=pdb1.log transport_datafiles='<list of migrated datafiles>' ENCRYPTION_PASSWORD=<password>
During this process, Data Pump first creates the users, then exports the tablespace metadata from the source and imports it into the new PDB. At this stage, the data files are known to the database, and the users have their table and index objects; table functions and some other pieces are there, too. Finally, Data Pump rebuilds all the remaining pieces such as views, triggers, roles, grants, user quotas, sequences, synonyms etc. All of this is triggered by the combination of FULL=YES and TRANSPORTABLE=ALWAYS.
Once done, we have a fresh 19c database with all the details of our application schema present, ready to be connected to the Weblogic instance. The import usually took only a few minutes.
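Before handing the PDB over to Weblogic, a few quick sanity checks with standard dictionary views are cheap insurance; something along these lines (a sketch, adjust to taste):

-- transported tablespaces should be back to READ WRITE
select tablespace_name, status from dba_tablespaces;

-- recompile and count any leftover invalid objects
exec utl_recomp.recomp_serial();
select count(*) from dba_objects where status = 'INVALID';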
Overall, we successfully migrated and upgraded 48 test/dev databases and another 17 production databases using this approach. This solution, while not "zero downtime", caused very little disruption to application availability and eliminated any additional effort for the 12c to 19c upgrade, which would otherwise have required a separate project.