Best Practices from Oracle Development's A‑Team

Oracle GoldenGate: Microsoft SQL Server to Oracle Migrations (Part 1)


This article is applicable to Oracle GoldenGate Classic installations, version 19c and prior.

Cross-database replication and migrations pose a challenge to IT organizations due to the complexity involved in converting data types and with target database instantiation. In this series of articles we shall explore three options for migrating a Microsoft SQL Server Database to an Oracle Database.

In Part 1, we'll use the Microsoft Import and Export Data Wizard to export the source SQL Server Database, create and instantiate the target Oracle Database Schema, and Oracle GoldenGate to perform a zero-downtime migration from Microsoft SQL Server to an Oracle 12c Pluggable Database running on Linux.

In Part 2, we'll use Oracle SQL Developer to capture and convert the source SQL Server Database to an Oracle 12c Pluggable Database, instantiate the Oracle target, and Oracle GoldenGate to perform a zero-downtime migration.

In Part 3, we'll demonstrate Oracle GoldenGate functionality to read direct from the source SQL Server database, convert the data to Oracle data formats, and instantiate the Oracle schema as part of a zero-downtime migration.

Main Article

The SQL Server Import and Export Wizard provides a way to copy data from a SQL Server source to a destination database. This feature may also be leveraged to provide connectivity to an Oracle database. To connect to Oracle via the SQL Server Import and Export Wizard, we must install the Oracle Data Access Client software and configure the network layer (Oracle*Net) on the machine where the SQL Server Import and Export Wizard resides.

Since my target database is Oracle 12c, I downloaded the appropriate Oracle Data Access Client (ODAC) software from the Oracle Technology Network.

Install Oracle Data Access Client

After decompressing the downloaded software package, start the Oracle Universal Installer by running the setup.exe file.


When the Oracle Universal Installer starts, select your desired language.


Define the Windows user that will be used to run Oracle services. In most cases, the Windows built-in account (the default) will be acceptable.


Specify the location to install the software.


Select the components to install. Accepting the default selections will suffice.


Accept the default to set the ODP.NET configuration to system wide.


Provide the settings required to create the Oracle tnsnames.ora file. The tnsnames.ora file contains client-side network configuration parameters. In my test environment, my Oracle database resides on the server slc08ggk, listens for incoming database connections via TCP/IP on port 1521, and I will be using the Oracle Pluggable Database (PDB) OGGPDB to store my data. I am using the PDB name as the alias for referencing this connection.
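For reference, the generated tnsnames.ora entry for these settings would look roughly like this (the alias, host, port, and service name reflect my test environment):

```
OGGPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = slc08ggk)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = oggpdb)
    )
  )
```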

The installer will perform various checks of the server to ensure it meets the minimum requirements for this software package.


After the prerequisite checks complete, select Install.


The client software will be installed and configured.


Select Close to terminate the installer.


Test the Oracle connection via Microsoft Data Link by doing the following:

Create an empty text file on the Windows Desktop named TestOraLink.udl


Double-click on the file to start the test. In the Data Link Properties Provider tab, select Oracle Provider for OLE DB.


In the Data Link Properties Connection tab provide: (1) the alias name we specified for our data source connection to Oracle (oggpdb), (2) the Oracle database user name, and (3) the Oracle database user password.


Select Test Connection. A pop-up window will present either success or failure status.


Close the Data Link Properties window.

Before continuing, ensure the SQL Server database has been configured to allow replication by Oracle GoldenGate and Oracle GoldenGate has been installed as specified in the Installing and Configuring Oracle GoldenGate for Microsoft SQL Server reference guide.

Setup Real-Time Data Capture and Apply

Before executing the migration in the Import and Export Wizard, we need to set up a Classic Extract and Extract Data Pump to capture transactional data from the SQL Server 2008 source database and stage it at the target Oracle GoldenGate instance, and an Integrated Replicat to apply the captured data to my Oracle tables. My Oracle GoldenGate configuration looks like this:

Clear-text passwords are a bad practice; I show them here only for simplicity. For SQL Server, it is best practice to use Windows Authentication. If that is not possible, you should use the password encryption functionality provided with the Oracle GoldenGate product.
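If Windows Authentication is not an option, GGSCI can generate an encrypted password string to paste into the parameter file in place of the clear text. A sketch, assuming the default encryption key; GGSCI prints the encrypted string, which then replaces the clear-text password:

```
GGSCI (den01eln) 1> encrypt password Oracle1 aes128 encryptkey default

-- In the Extract parameter file, replace the clear-text entry with
-- the string GGSCI printed:
sourcedb mss08ljp, userid ggadmin, password <encrypted string>, aes128, encryptkey default
```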

Source Settings

Classic Extract
extract eljp
sourcedb mss08ljp, userid ggadmin, password Oracle1
tranlogoptions managesecondarytruncationpoint
exttrail ./dirdat/ep
reportcount every 10 minutes, rate
table dbo.*;
Extract Data Pump
extract pljp
rmthost slc08ggk.us.oracle.com, mgrport 7898, compress
rmttrail ./dirdat/rp
reportcount every 10 minutes, rate
table dbo.*;

Create the source Oracle GoldenGate instance by executing the following GGSCI commands:

GGSCI (den01eln) 4> add extract eljp, tranlog, begin now
EXTRACT added.
GGSCI (den01eln) 5> add exttrail ./dirdat/ep, extract eljp, megabytes 500
GGSCI (den01eln) 6> add extract pljp, exttrailsource ./dirdat/ep
EXTRACT added.
GGSCI (den01eln) 7> add rmttrail ./dirdat/rp, extract pljp

On the target (Oracle) side, modify the Oracle GoldenGate Manager parameter file.

port 7898
purgeoldextracts ./dirdat/*, usecheckpoints
dynamicportlist 15000-15100
autorestart er *, retries 3, waitseconds 60, resetminutes 60

The parameter to include is autorestart. The Oracle GoldenGate Manager is responsible for spawning new processes when the GGSCI start command is executed. By default, should a running process fail, that is, go into the ABEND state, it will not be restarted by Manager. By specifying this autorestart parameter, Manager will attempt to restart any failed Oracle GoldenGate processes. Manager will make three attempts to restart the process, wait sixty seconds after a failure before issuing the restart, and reset its retry counter after sixty minutes.

We will be using this functionality to restart the Integrated Replicat as part of our post-migration automation.

Create an Integrated Replicat parameter file.

Integrated Replicat
replicat rmssljp
userid lpenton@oggpdb, password Oracle1
reportcount every 60 seconds, rate
sqlexec 'alter table lpenton."categories" add primary key ("categories_id")'
sqlexec 'alter table lpenton."categories_description" add primary key ("categories_id", "language_id")'
sqlexec 'alter table lpenton."customers" add primary key ("customers_id")'
sqlexec 'alter table lpenton."customers_info" add primary key ("customers_info_id")'
sqlexec 'alter table lpenton."customers_lookup" add primary key ("lookup_id")'
sqlexec 'alter table lpenton."next_cust" add primary key ("customers_id")'
sqlexec 'alter table lpenton."next_order" add primary key ("orders_id")'
sqlexec 'alter table lpenton."orders" add primary key ("orders_id")'
sqlexec 'alter table lpenton."orders_products" add primary key ("orders_products_id")'
sqlexec 'alter table lpenton."orders_status_history" add primary key ("orders_status_history_id", "orders_id", "date_added")'
sqlexec 'alter table lpenton."orders_total" add primary key ("orders_total_id")'
sqlexec 'alter table lpenton."products" add primary key ("products_id")'
sqlexec 'alter table lpenton."products_description" add primary key ("products_id", "language_id")'
sqlexec 'alter table lpenton."products_to_categories" add primary key ("products_id", "categories_id")'
handlecollisions
map dbo.ogg_migrate, target lpenton."ogg_migrate",
eventactions (shell './dirsql/comp_migration.sh', abort);


map dbo.categories, target lpenton."categories";
map dbo.categories_description, target lpenton."categories_description";
map dbo.customers, target lpenton."customers";
map dbo.customers_info, target lpenton."customers_info";
map dbo.customers_lookup, target lpenton."customers_lookup";
map dbo.next_cust, target lpenton."next_cust";
map dbo.next_order, target lpenton."next_order";
map dbo.orders, target lpenton."orders";
map dbo.orders_products, target lpenton."orders_products";
map dbo.orders_status_history, target lpenton."orders_status_history";
map dbo.orders_totals, target lpenton."orders_totals";
map dbo.products, target lpenton."products";
map dbo.products_description, target lpenton."products_description";
map dbo.products_to_categories, target lpenton."products_to_categories";

In my Integrated Replicat configuration there is a special setting, HANDLECOLLISIONS. This allows the Integrated Replicat to process through the stream of data that was captured and queued up while the export was running, and then perform a normal shutdown once the queued data has been processed.

The SQL Server Import and Export Wizard does not create primary keys or unique indexes on the Oracle tables, so I "automated" the process via the use of SQLEXEC in the Integrated Replicat. When the Integrated Replicat is started, it will log in to the Oracle database and execute the alter statements before beginning to apply the queued data. (This could have been done manually before starting the Integrated Replicat; but, hey, this is an article about Oracle GoldenGate and I wanted to show a capability that is often overlooked.)

Add the Integrated Replicat to the Oracle GoldenGate instance by executing the following GGSCI commands:

GGSCI (slc08ggk) 3> dblogin userid lpenton@oggpdb, password Oracle1
Successfully logged into database OGGPDB.
GGSCI (slc08ggk as lpenton@a12101s/OGGPDB) 4> add replicat rmssljp, integrated, exttrail ./dirdat/rp
REPLICAT (Integrated) added.

SQL Server Import and Export Wizard

We are now ready to start the SQL Server Import and Export Wizard to create and instantiate the Oracle target.

Earlier you will note that I installed the 64-bit ODAC package, ODAC121024_x64. This package is not usable by SQL Server Management Studio, so I must run the Import and Export Wizard from a CMD or PowerShell window. To do this, I execute the command DTSWizard.exe.


SQL Server Import and Export Wizard will start.


Select the SQL Server data source for connecting to the database, the database server, SQL Server database authentication method, and database to export.


Select Oracle Provider for OLE DB as the destination and then select Properties.


In the Data Link Properties Connection tab provide: (1) the alias name we specified for our data source connection to Oracle (oggpdb), (2) the Oracle database user name, and (3) the Oracle database user password. Be sure to select Allow saving password so the password will be retained throughout the wizard session.


Select Test Connection to validate the connection settings.


Select OK, OK, and then Next to display the Specify Table Copy or Query screen. Select Copy data from one or more tables or views.


The Select Source Tables and Views screen will display.


Select the tables you want to export to Oracle; the destination table names will be displayed.

If you are not exporting all of the tables, CTRL-right-click each table being exported and then select Edit Mappings. Select the target Oracle schema where the tables are to be exported. If the tables already exist, choose to either Drop and recreate new destination tables or Delete rows in existing destination tables. For our migration, the target tables do not exist, so let the wizard create them.


Double-click on each target table to define column mappings and data type conversions.


In the example above, the column categories_id is defined as a bigint in SQL Server. Oracle Database has no understanding of the bigint data type, so we must convert it to a data type Oracle understands. In this case, the Oracle data type NUMBER(19) is equivalent to the SQL Server BIGINT. Selecting the Edit SQL button allows me to perform bulk modifications of the target table create script.

Perform the same data type conversion mappings for each table being exported. I am also copying the modified SQL create statements and pasting them into a text document for later use (I'll explain why later).
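The conversions applied in the Edit SQL step can be sketched as a simple lookup. The mappings below are my common-practice equivalents, not values mandated by either database; length-qualified types such as varchar(n) keep their lengths in the generated DDL:

```python
# Common-practice SQL Server -> Oracle base type equivalents (assumptions,
# matching the BIGINT -> NUMBER(19) conversion shown in the wizard above).
SQLSERVER_TO_ORACLE = {
    "bigint":   "NUMBER(19)",
    "int":      "NUMBER(10)",
    "smallint": "NUMBER(5)",
    "tinyint":  "NUMBER(3)",
    "bit":      "NUMBER(1)",
    "datetime": "TIMESTAMP",
    "varchar":  "VARCHAR2",
    "nvarchar": "NVARCHAR2",
}

def convert_type(sqlserver_type: str) -> str:
    """Return the Oracle equivalent of a SQL Server base type name.

    Types without a listed equivalent are returned unchanged so the
    DDL editor can flag them for manual review.
    """
    return SQLSERVER_TO_ORACLE.get(sqlserver_type.lower(), sqlserver_type)

assert convert_type("bigint") == "NUMBER(19)"
assert convert_type("datetime") == "TIMESTAMP"
```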

Oracle is case-insensitive when referencing unquoted table and column names; it stores unquoted identifiers in uppercase and automatically folds the names in DML statements to uppercase. The double-quoted table and column names used in the create statements will force Oracle to store the names with their exact case, lowercase in this case. This will require you to use double quotes when specifying table mappings for Oracle GoldenGate replication and in all application DML that queries these Oracle tables.
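Oracle's identifier rule can be illustrated with a small helper (the function and names are mine, for illustration only):

```python
def stored_name(identifier: str) -> str:
    """Name Oracle records in the data dictionary for a DDL identifier.

    Unquoted identifiers are case-insensitive and folded to uppercase;
    double-quoted identifiers are stored exactly as written.
    """
    if len(identifier) >= 2 and identifier[0] == '"' and identifier[-1] == '"':
        return identifier[1:-1]   # quoted: case preserved
    return identifier.upper()     # unquoted: folded to uppercase

# The wizard's DDL quoted every name, so the tables landed in lowercase...
assert stored_name('"categories"') == "categories"
# ...while an unquoted reference in DML resolves to CATEGORIES, which does
# not match, forcing double quotes in GoldenGate MAP statements as well.
assert stored_name("categories") == "CATEGORIES"
```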

Once the table modifications are complete, we're ready to move on. Select Next.


SSIS allows you to provide data type conversion mappings via an XML file instead of editing the SQL statements. XML is not my forte, so I select Next again.


At this point you can save and run the SSIS package created by the wizard, or just run the package. If you have shut down your application and taken the SQL Server database off-line, you can select Finish and perform the migration.

If you are doing a zero-downtime migration, do not select Finish.

I am doing a zero-downtime migration, so I need to set up Oracle GoldenGate to capture transactions from my SQL Server Database before continuing with the Import and Export Wizard.

Start Real-Time Data Capture

Setting up the export could take several hours, so I want to make sure I am not unnecessarily capturing data that the export will move anyway. In GGSCI, alter the starting checkpoint for the Classic Extract:

GGSCI (den01eln) 8> alter extract eljp, begin now
EXTRACT altered.

Before starting the Classic Extract, we need to check the database to ensure there are no active long running transactions. Transactions that were started prior to adding the Classic Extract must either be committed or aborted before starting export.

In GGSCI, execute the INFO command to get the VAM Read Checkpoint for the Classic Extract:

GGSCI (den01eln) 9> info eljp
EXTRACT    ELJP      Initialized   2017-04-20 09:42   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:04:27 ago)
VAM Read Checkpoint  2017-04-20 09:42:19.914000

In SQL Server Management Studio, use the DBCC OPENTRAN command to view running database transactions.
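DBCC OPENTRAN reports the oldest active transaction in the database. For this environment the output takes roughly the following form; the SPID, UID, name, and LSN values are illustrative, while the start time matches my test run:

```
DBCC OPENTRAN;

Transaction information for database 'mss08ljp'.
Oldest active transaction:
    SPID (server process ID): 52
    UID (user ID) : -1
    Name          : user_transaction
    LSN           : (91:222:1)
    Start time    : Apr 20 2017  9:41:58:850AM
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
```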


In the output shown above, an active transaction started at Apr 20 2017 9:41:58:850 AM is still running. Since the Classic Extract was added at Apr 20 2017 9:42:19:914 AM, the transaction will not be captured by Classic Extract. Therefore, the transaction must either (1) be committed so the data is available in the database for the export, or (2) aborted and restarted so Classic Extract will capture the data.

After aborting the transaction, there are now no transactions running that were started prior to adding the Classic Extract; all source data will be instantiated to the target either by the export, or by Oracle GoldenGate.


In the source Oracle GoldenGate instance, start the Classic Extract and Extract Data Pump. Real-time data will be captured and queued at the target Oracle GoldenGate instance while we run the export.

GGSCI (den01eln) 10> start er *
Sending START request to MANAGER ...
Sending START request to MANAGER ...

GGSCI (den01eln) 11> info er *
EXTRACT    ELJP      Last Started 2017-04-20 09:58   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           6300
VAM Read Checkpoint  2017-04-20 09:54:11.730000
LSN: 0x0000005b:000000de:0029, Tran: 0000:00002e80
EXTRACT    PLJP      Last Started 2017-04-20 09:58   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           4676
Log Read Checkpoint  File ./dirdat/ep000000000
First Record  RBA 1280

Execute The Export

Now that Oracle GoldenGate is capturing real-time data, we can run the export. Go back to the SQL Server Import and Export Wizard and select Finish.


Select Finish again to verify the choices made and begin the export.


The export will begin and report status in the wizard window.


When the export completes, select Close to exit the SQL Server Import and Export Wizard.


Target OGG Instance

When the export finishes, we are ready to complete the migration. In the target Oracle GoldenGate instance (Oracle Database side), start the Integrated Replicat.

GGSCI (slc08ggk as lpenton@a12101s/OGGPDB) 5> start rmssljp
Sending START request to MANAGER ...

Execute the GGSCI INFO command to check the status of the Integrated Replicat.

GGSCI (slc08ggk) 6> info rmssljp
REPLICAT   RMSSLJP   Last Started 2017-04-20 13:22   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Process ID           3072
Log Read Checkpoint  File ./dirdat/rp000000000
First Record  RBA 0

The Integrated Replicat will temporarily go into the ABEND state, showing that the data queued during the source database export has been processed.

GGSCI (slc08ggk) 19> info rmssljp
REPLICAT   RMSSLJP   Last Started 2017-04-20 13:22   Status ABEND
Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
Log Read Checkpoint  File ./dirdat/rp000000000
First Record  RBA 0

After sixty seconds, Manager will restart the Integrated Replicat, which will then continue processing the queued real-time data.

GGSCI (slc08ggk) 84> info rmssljp
REPLICAT   RMSSLJP   Last Started 2017-04-20 13:25   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           23950
Log Read Checkpoint  File ./dirdat/rp000000000
2017-04-20 13:27:17.977679  RBA 1340850

The SQL Server and Oracle databases are now in sync and the migration is complete. Oracle GoldenGate will continue to capture data from SQL Server and apply it to Oracle as long as the Classic Extract, Extract Data Pump, and Integrated Replicat remain running.

Things To Consider

There are a couple of things we must consider about the new Oracle schema created by the SQL Server Import and Export Wizard:

(1) Tables are created without primary keys or indexes.

(2) The Oracle table and column names are in lowercase. This required that I use double-quotes when mapping my source to target tables, and when I referenced columns. This may not be optimal for running an application against Oracle.

The solution to our first problem is actually quite simple: alter the Oracle tables and assign a primary key or unique index to them. I chose to do this via the Integrated Replicat.

The solution for the second problem is a bit trickier, and would typically require an Oracle-to-Oracle database migration or downtime to address. I dealt with this problem via some scripting and advanced Oracle GoldenGate functionality as part of the migration project, requiring neither manual intervention nor an outage.

"Fixing" My Oracle Database

Remember that text file I created earlier containing the table create statements executed by SQL Server Import and Export Wizard? I had Oracle GoldenGate Integrated Replicat use that file to create a second set of tables, populate them with data already applied to the lowercase named tables, and then switch over to applying data to these new tables for on-going replication.

I copied that file to my target server as a file named oradb.sql in the GoldenGate dirsql directory. I then edited the file to remove all double quotes and define primary keys for each table. Here's a sample of the file contents:

create table lpenton.categories (
categories_id NUMBER(19) NOT NULL,
categories_image VARCHAR2(64),
parent_id NUMBER(19) NOT NULL,
sort_order NUMBER(3),
date_added TIMESTAMP,
last_modified TIMESTAMP,
primary key (categories_id)
);

At the end of the file, after all of the table create statements, I added "insert into, select from" statements to load the new tables:

insert into lpenton.categories select * from lpenton."categories";
insert into lpenton.categories_description select * from lpenton."categories_description";
insert into lpenton.customers select * from lpenton."customers";

I then created a second Integrated Replicat param file named rmssljp_new.prm, with the following settings:

replicat rmssljp
userid lpenton@oggpdb, password Oracle1
reportcount every 10 minutes, rate
map dbo.categories, target lpenton.categories;
map dbo.categories_description, target lpenton.categories_description;
map dbo.customers, target lpenton.customers;
map dbo.customers_info, target lpenton.customers_info;
map dbo.customers_lookup, target lpenton.customers_lookup;
map dbo.next_cust, target lpenton.next_cust;
map dbo.next_order, target lpenton.next_order;
map dbo.orders, target lpenton.orders;
map dbo.orders_products, target lpenton.orders_products;
map dbo.orders_status_history, target lpenton.orders_status_history;
map dbo.orders_totals, target lpenton.orders_totals;
map dbo.products, target lpenton.products;
map dbo.products_description, target lpenton.products_description;
map dbo.products_to_categories, target lpenton.products_to_categories;

Then I defined a "trigger" in the Integrated Replicat I used to complete the migration. If you look at the configuration in the earlier section where we initially setup Oracle GoldenGate, you will see this MAP statement:

map dbo.ogg_migrate, target lpenton."ogg_migrate",
eventactions (shell './dirsql/comp_migration.sh', abort);

In my SQL Server database, I created a table named ogg_migrate that has two columns:

CREATE TABLE [dbo].[ogg_migrate](
[mig_ts] [datetime] NOT NULL,
[mig_txt] [varchar](100) NOT NULL,
PRIMARY KEY ([mig_ts],[mig_txt])
);
When the Import and Export Wizard completed, I added a row to this table:

insert into dbo.ogg_migrate values (CURRENT_TIMESTAMP, 'migrate complete')

This data was captured by Oracle GoldenGate and queued for apply by the Integrated Replicat. When the Integrated Replicat processed this record from the queue, a Linux shell process was spawned and the script ./dirsql/comp_migration.sh was executed.

The contents of this script file are:

cd /ogg_ateam/oracle/ogg_12.2_12c
sqlplus lpenton/Oracle1@oggpdb @./dirsql/oradb.sql > ora.out
mv ./dirprm/rmssljp.prm ./dirprm/rmssljp_migrate.prm
mv ./dirprm/rmssljp_new.prm ./dirprm/rmssljp.prm

The script does the following:

(1) Positions into my Oracle GoldenGate home directory.

(2) Executes the oradb.sql file to create the new set of tables with Oracle default table and column names.

(3) Renames the existing rmssljp.prm Integrated Replicat parameter file to rmssljp_migrate.prm

(4) Renames the secondary Integrated Replicat parameter file shown above (rmssljp_new.prm) to rmssljp.prm.

I can verify the tables were created in the database.

SQL> select table_name from dba_tables where owner = 'LPENTON';
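Until the lowercase-named tables are dropped, the query returns both sets of names. Abbreviated output would look along these lines (the rows shown are illustrative):

```
TABLE_NAME
------------------------------
CATEGORIES
categories
CUSTOMERS
customers
...
```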

I can verify data is being replicated to the uppercase table names by executing the GGSCI STATS command.

GGSCI (slc08ggk) 4> stats rmssljp total
Sending STATS request to REPLICAT RMSSLJP ...
Start of Statistics at 2017-04-21 11:23:10.
Integrated Replicat Statistics:
Total transactions                              2040.00
Redirected                                         0.00
DDL operations                                     0.00
Stored procedures                                  0.00
Datatype functionality                             0.00
Event actions                                      0.00
Direct transactions ratio                          0.00%
Replicating from dbo.customers_info to OGGPDB.LPENTON.CUSTOMERS_INFO:
*** Total statistics since 2017-04-21 09:25:11 ***
Total inserts                                   2000.00
Total updates                                     40.00
Total deletes                                      0.00
Total discards                                     0.00
Total operations                                2040.00

At a later date I can come back and drop the lowercase-named tables, or I could have done it as part of the oradb.sql script executed by the Integrated Replicat. The only thing left to do now is validate the target database with Oracle GoldenGate Veridata, which is beyond the scope of this article.


In this article we performed a zero-downtime migration from Microsoft SQL Server to an Oracle Pluggable Database via the Microsoft Import and Export Wizard and Oracle GoldenGate. We further demonstrated some advanced features of Oracle GoldenGate that allowed us to modify the target Oracle tables programmatically.

In the next part of this blog series, we'll use Oracle SQL Developer to capture and convert the source SQL Server Database to an Oracle 12c Pluggable Database, instantiate the Oracle target, and Oracle GoldenGate to perform a zero-downtime migration.

For more information on what other articles are available for Oracle GoldenGate please view our index page.

Join the discussion

Comments (2)
  • Robertson Monday, June 3, 2019
    Need to replicate 1000's of tables from one database in MS SQL Server 2016 SP2 Standard Edition (Source) to MYSQL 8.0 database (target) on Linux
    How do we copy the metadata and modify the datatypes for all these tables
  • Loren Penton Monday, June 3, 2019
    Hi Robertson,

Thank you for your comment. Currently, the conversion of table metadata between databases is a manual task. Please have your Oracle Account Rep contact the GoldenGate Product Manager for OGG on SQL Server and Oracle Consulting for assistance.

    Loren Penton
    Oracle GoldenGate A-Team