MFTCS: Production and DR in the Cloud

 

Introduction

Oracle Managed File Transfer (MFT) Cloud Service is a Platform as a Service (PaaS) solution for running a high-performance, standards-based, end-to-end managed file gateway in the cloud. It provides design, deployment, and monitoring of file transfers through a lightweight web-based design-time console, and includes transfer prioritization, file encryption, scheduling, and embedded FTP and SFTP servers.

The Oracle Cloud @ Customer platform uses Oracle Compute Cloud Service, Oracle Database Cloud Service, and Oracle Java Cloud Service as its basic infrastructure.

This document details the steps and best practices that need to be followed to set up a Cold Standby Disaster Recovery (DR) site for the MFT Cloud Service provisioned on Oracle Cloud @ Customer (OCC).

 

Assumptions:

This document is written with the following assumptions:

  • All required licenses and subscriptions are procured.
  • Oracle Cloud @ Customer is available on both the Production and DR sites.
  • High-speed network connectivity is already established between the Production and DR sites.
  • Firewall rules are set (for the identified ports and endpoints) to allow communication between the Production and DR sites.
  • Disaster Recovery capability is available for all upstream and downstream systems.
Pre-Requisites:
  • An MFTCS cluster is already provisioned on the Production site following the configuration in Table-2.
  • DBCS is provisioned on the DR site following the configuration in Table-1, and Data Guard setup is completed for the database provisioned on the DBCS.
  • DG Broker is configured and enabled on both the Primary and Standby databases.
  • Switchover of the database from Primary to Standby is already tested and validated successfully.
This document does not cover the following:
  • Setting up Data Guard between the Primary and DR databases.
  • Steps required for the DNS switchover.
  • Steps to be followed for Switchover/Failover operations.

Main Article

1. Configure Primary and DR database for DR.

Please refer to Table-1 before provisioning the DBCS on the Primary and DR domains. Once the DBCS is successfully provisioned, configure Data Guard following Oracle best practices. Also enable DG Broker to allow switchover between the Primary and Standby databases.

Please refer to the links below to configure the DBCS and to configure Data Guard manually after deploying the DBCS Primary and DR instances. For this demonstration, a single-node non-ASM Database Cloud Service instance is deployed.

Creating an Oracle Database Cloud Service Instance

Creating a Physical Standby Database
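
For reference, a minimal outline of the broker enablement with DGMGRL is sketched below, assuming the DB_UNIQUE_NAMEs from Table-1 (soadb1, soadr1) and TNS aliases of the same names. This is a sketch only, not a substitute for the full Data Guard setup in the links above:

-- On both the Primary and Standby databases, start the broker:
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=BOTH;

-- On the Primary host, register and enable the configuration:
$ dgmgrl sys/${passwd}@soadb1
DGMGRL> CREATE CONFIGURATION 'mftdg' AS PRIMARY DATABASE IS 'soadb1' CONNECT IDENTIFIER IS soadb1;
DGMGRL> ADD DATABASE 'soadr1' AS CONNECT IDENTIFIER IS soadr1 MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;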

Table-1: DBCS Configuration on Primary and DR Site

No | DBCS Configuration | Primary | DR
1 | Oracle Cloud Domain | usoraclePrimary | usoracleDR
2 | Oracle Cloud User | abc.xyz@oracle.com | abc.xyz@oracle.com
3 | Password | <password> | <password>
4 | DBCS Software Version | 12.1.0.2 | 12.1.0.2
5 | DBCS Software Edition | Enterprise Edition | Enterprise Edition
6 | DBCS Service Name | soapdb1.usoraclePrimary.oraclecloud.internal | soapdb1.usoracleDR.oraclecloud.internal
7 | DBCS Shape | OC4 | OC4
8 | Database Name | soadb1 | soadb1
9 | Pluggable Database | soapdb1 | soapdb1
10 | DB Instance Name | soadb1 | soadb1
11 | DB Backup Container | soadb1_container | soadb1_container
12 | Database System Password | <password> | <password>
13 | Usable Database Storage | 185GB | 185GB
14 | Administration Password | <password> | <password>
15 | Character Set | AL32UTF8 | AL32UTF8
16 | National Character Set | AL16UTF16 | AL16UTF16
17 | Database Clustering with RAC | Unchecked | Unchecked
18 | Standby Database with Data Guard | Unchecked | Unchecked
19 | Enable Oracle GoldenGate | Unchecked | Unchecked
20 | Include Demos PDB | Unchecked | Unchecked
21 | Backup Destination | Both Cloud Storage and Local Storage | Both Cloud Storage and Local Storage
22 | Cloud Storage Container | https://us2.storage.oraclecloud.com/v1/Storage-usoraclePrimary/soastorage1 | https://us3.storage.oraclecloud.com/v1/Storage-usoracleDR/soastorage1
23 | Cloud Storage User Name | abc.xyz@oracle.com | abc.xyz@oracle.com
24 | Cloud Storage Password | <password> | <password>
Data Guard Specific
25 | DB_UNIQUE_NAME | soadb1 | soadr1

 

2. Configure MFTCS Cluster on Primary Cloud Identity Domain.

Provision an MFTCS cluster on the Primary domain. Please refer to the information in the Table-2 'Primary Site' column to complete the provisioning wizard.

Please follow the links below to provision an MFTCS cluster using the SOACS provisioning wizard.

Provisioning Oracle Managed File Transfer Cloud Service

Using the SOACS Provisioning Wizard to Provision MFTCS

Nodes Provisioned:

MFTCS Primary Admin Node (Node 1): mftcluster01-wls-1
MFTCS Primary Node 2: mftcluster01-wls-2
MFTCS Primary LB Node (OTD): mftcluster01-lb-1

Table-2: MFTCS Configuration on Production and DR Site

No | MFT Configuration | Primary Site | DR Site
1 | Oracle Cloud Domain | usoraclePrimary | usoracleDR
2 | Oracle Cloud User | abc.xyz@oracle.com | abc.xyz@oracle.com
3 | Oracle Cloud Password | <password> | <password>
4 | Service Type | MFT Cluster | MFT Cluster
5 | Instance Names | mftcluster01-wls-1, mftcluster01-wls-2 | mftcluster01-wls-1, mftcluster01-wls-2
6 | DB Configuration | soacsdbinst1 | soacsdbinst2
7 | Cluster Size | 2 | 2
8 | Compute Shape | OC1m | OC1m
9 | WebLogic User | weblogic | weblogic
10 | WebLogic Password | <password> | <password>
11 | Load Balancer Provisioning | true | true
12 | Load Balancer Policy | Round Robin | Round Robin
13 | Load Balancer Compute Shape | OC3 | OC3
14 | MFT Backup Container | MFTClusterStorage | MFTClusterStorage

 

3. Configure DBFS Mount for Primary to DR Sync

During the provisioning of the MFT cluster, two DBFS mount points are already created; these are reserved for the MFT application's own use.

For the purpose of Primary to DR sync, a separate mount point is created. Please follow the steps below to mount an additional DBFS file system on the MFTCS cluster.

3.1 Create DBFS Tablespace and User On Primary MFT Database

This document uses an environment file, script.env, to source the environment variables and paths. Please refer to the Appendix section of this blog for the environment file.

3.1.1 Log in to SQL*Plus as SYSDBA on the Primary database:

$ . ~/script.env
$ sqlplus -s sys/${passwd}@${A_DBNM} as sysdba <<EOF
> ALTER SESSION SET container = SOAPDB1;
> show con_name;
> create tablespace tbsdbfs_mft datafile '/u02/app/oracle/oradata/tbsdbfsmft01.dbf' size 1G autoextend on next 100m;
> create user soadbfsmft identified by soadbfsmft default tablespace tbsdbfs_mft quota unlimited on tbsdbfs_mft CONTAINER=CURRENT;
> grant connect, create table, create procedure, dbfs_role to soadbfsmft;
> EOF

Session altered.
Tablespace created.
User created.
Grant succeeded.

3.1.2 Connect as the schema user created in Step 3.1.1 and create the DBFS file system.

$ sqlplus -s sys/${passwd}@${A_DBNM} as sysdba <<EOF
> ALTER SESSION SET container = SOAPDB1;
> connect soadbfsmft/soadbfsmft@soapdb1;
> @/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dbfs_create_filesystem.sql tbsdbfs_mft dbfs;
> EOF

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'dbfs', tbl_name => 'dbfs',
tbl_tbs => 'tbsdbfs_mft', lob_tbs => 'tbsdbfs_mft', do_partition => false,
partition_key => 1, do_compress => false, compression => '', do_dedup => false,
do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'dbfs', provider_name =>
'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'dbfs', store_mount=>'dbfs');
end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/dbfs', 16895); end;
No errors.
SQL>

4. Update the wallet on MFTCS Primary Node 1

Update the existing Oracle wallet available in the MFT domain location /u01/data/domains/MFTClust_domain/dbfs/wallet to add the credentials of the DBFS schema user created in Step 3.1.1.

Note: This step needs to be executed on MFTCS Primary Node 1.

4.1 List the Existing Credential in the Wallet

[oracle@mftcluster01-wls-1 ~]$ /u01/app/oracle/middleware/oracle_common/bin/mkstore -wrl /u01/data/domains/MFTClust_domain/dbfs/wallet -listCredential
Oracle Secret Store Tool : Version 12.2.1.2.0OWSM-BP
Copyright (c) 2004, 2018, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:

List credential (index: connect_string username)
1: ORCL SP570413460_DBFS

4.2 Update the Wallet with the new credential for DBFS Schema:

[oracle@mftcluster01-wls-1 bin]$ pwd
/u01/app/oracle/middleware/oracle_common/bin
[oracle@mftcluster01-wls-1 bin]$ /u01/app/oracle/middleware/oracle_common/bin/mkstore -wrl /u01/data/domains/MFTClust_domain/dbfs/wallet -createCredential SOAPDB1 soadbfsmft soadbfsmft
Oracle Secret Store Tool : Version 12.2.1.2.0OWSM-BP
Copyright (c) 2004, 2018, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:

4.3 List the Existing Credential in the Wallet:

[oracle@mftcluster01-wls-1 bin]$ /u01/app/oracle/middleware/oracle_common/bin/mkstore -wrl /u01/data/domains/MFTClust_domain/dbfs/wallet -listCredential
Oracle Secret Store Tool : Version 12.2.1.2.0OWSM-BP
Copyright (c) 2004, 2018, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:

List credential (index: connect_string username)
2: SOAPDB1 soadbfsmft
1: ORCL SP570413460_DBFS

5. Update the wallet on MFTCS Primary Node 2

Repeat the steps from 4.1 to 4.3 on Primary MFT Node 2.

6. Update the tnsnames.ora on Primary Node 1

Note: This step needs to be executed on MFTCS Primary Node 1.

Add the TNS entry for SOAPDB1 (the entry added to the Oracle wallet) to /u01/data/domains/MFTClust_domain/dbfs/tnsnames.ora:

[oracle@mftcluster01-wls-1 dbfs]$ more tnsnames.ora
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = soadbinst1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SOAPDB1.usoraclePrimary.oraclecloud.internal)
)
)
SOAPDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = soadbinst1.compute-usoraclePrimary.oraclecloud.internal)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soapdb1.usoraclePrimary.oraclecloud.internal)
)
)

Also add the following two entries to the tnsnames.ora. These are required for the DBFS sync script.

soadb1 =
(DESCRIPTION =
(SDU=65536)
(RECV_BUF_SIZE=10485760)
(SEND_BUF_SIZE=10485760)
(ADDRESS = (PROTOCOL = TCP)(HOST = 111.111.11.111)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soadb1.usoraclePrimary.oraclecloud.internal)
)
)
soadr1 =
(DESCRIPTION =
(SDU=65536)
(RECV_BUF_SIZE=10485760)
(SEND_BUF_SIZE=10485760)
(ADDRESS = (PROTOCOL = TCP)(HOST = 222.222.22.222)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soadr1.usoracleDR.oraclecloud.internal)
)
)

7. Update the tnsnames.ora on Primary Node 2

Repeat Step 6 on Primary MFT Node 2.

8. Create Required Directories for DBFS on MFTCS Primary Nodes

On Node 1:

[opc@mftcluster01-wls-1 ~]$ sudo -s
[root@mftcluster01-wls-1 opc]# mkdir /u02
[root@mftcluster01-wls-1 opc]# chown oracle:oracle /u02
[root@mftcluster01-wls-1 opc]#

On Node 2:

[opc@mftcluster01-wls-2 ~]$ sudo -s
[root@mftcluster01-wls-2 opc]# mkdir /u02
[root@mftcluster01-wls-2 opc]# chown oracle:oracle /u02
[root@mftcluster01-wls-2 opc]#

9. Update dbfsMount.sh to include the new DBFS Mount Point

On MFTCS Primary Node 1:

Add the entries for the new sync mount (the MOUNT_PATH_SYNC lines shown below) to the existing /u01/data/domains/MFTClust_domain/dbfs/dbfsMount.sh. This will ensure that the newly added DBFS mount is available during system startup.

[oracle@mftcluster01-wls-1 dbfs]$ cat dbfsMount.sh
#!/bin/sh
ORACLE_HOME=/u01/app/oracle/middleware/dbclient
MOUNT_PATH=/u01/soacs/dbfs
MOUNT_PATH_DIRECTIO=/u01/soacs/dbfs_directio
MOUNT_PATH_ADV=/u01/soacs/dbfs_adv
MOUNT_PATH_SYNC=/u02
mkdir -p $MOUNT_PATH
mkdir -p $MOUNT_PATH_DIRECTIO
mkdir -p $MOUNT_PATH_ADV
mkdir -p $MOUNT_PATH_SYNC
if mountpoint -q $MOUNT_PATH ;
then
echo "DBFS is already mounted"
exit
fi
if mountpoint -q $MOUNT_PATH_DIRECTIO ;
then
echo "DBFS DIRECTIO is already mounted"
exit
fi
if mountpoint -q $MOUNT_PATH_SYNC ;
  then
    echo "DBFS for SYNC is already mounted"
  exit
fi
##if ! grep -q "^fuse\b.*\b${USER}\b" /etc/group ; then
##  echo "${USER} user not a member of the fuse group"
##exit
##fi
newgrp fuse << END1
newgrp oracle << END2
ORACLE_HOME=/u01/app/oracle/middleware/dbclient
export ORACLE_HOME
TNS_ADMIN=/u01/data/domains/MFTClust_domain/dbfs
export TNS_ADMIN
LD_LIBRARY_PATH=/u01/app/oracle/middleware/dbclient/lib
export LD_LIBRARY_PATH
cd /u01/data/domains/MFTClust_domain/dbfs
$ORACLE_HOME/bin/dbfs_client -o wallet /@ORCL -o direct_io $MOUNT_PATH_DIRECTIO &>dbfs.log &
$ORACLE_HOME/bin/dbfs_client -o wallet /@ORCL $MOUNT_PATH &>dbfs.log &
$ORACLE_HOME/bin/dbfs_client -o wallet /@SOAPDB1 $MOUNT_PATH_SYNC &>dbfs.log &
cd $OLDPWD
END2
END1

On MFTCS Primary Node 2:

Repeat Step 9 on Node 2.

10. Start the MFT Primary Instances

Restart both Primary nodes (Node 1 and Node 2) and verify that the DBFS mount /u02 is available and accessible from both nodes; a manual mount alternative is sketched below.
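
If a full node reboot is not desired, the new file system can also be mounted by hand with dbfs_client, mirroring the environment set up inside dbfsMount.sh. A minimal sketch (note that dbfsMount.sh exits early when the existing mounts are already present, so mounting only the new file system directly avoids re-running it):

[oracle@mftcluster01-wls-1 ~]$ export ORACLE_HOME=/u01/app/oracle/middleware/dbclient
[oracle@mftcluster01-wls-1 ~]$ export TNS_ADMIN=/u01/data/domains/MFTClust_domain/dbfs
[oracle@mftcluster01-wls-1 ~]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
[oracle@mftcluster01-wls-1 ~]$ cd /u01/data/domains/MFTClust_domain/dbfs
[oracle@mftcluster01-wls-1 dbfs]$ $ORACLE_HOME/bin/dbfs_client -o wallet /@SOAPDB1 /u02 &> dbfs_sync.log &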

On MFTCS Primary Node 1:

[oracle@mftcluster01-wls-1 dbfs]$ df -h /u02
Filesystem           Size Used Avail Use% Mounted on

dbfs-@SOAPDB1:/      1023M  120K 1023M   1% /u02
[oracle@mftcluster01-wls-1 dbfs]$ ls -lrt /u02
total 0
drwxrwxrwx 3 root root 0 May  4 18:59 dbfs

On MFTCS Primary Node 2:

[oracle@mftcluster01-wls-2 dbfs]$ df -h /u02
Filesystem            Size Used Avail Use% Mounted on

dbfs-@SOAPDB1:/      1023M  120K 1023M   1% /u02
[oracle@mftcluster01-wls-2 ~]$ ls -lrt /u02
total 0
drwxrwxrwx 3 root root 0 May  4 18:59 dbfs

11. Convert the physical standby database to a snapshot standby

Pre-Requisites:
  • Data Guard is configured between the Primary and Standby DBCS.
  • Verification of the Data Guard configuration is completed.
  • DGMGRL switchover testing is completed.

11.1 Convert the physical standby database to a snapshot standby

Before provisioning the MFTCS on the DR site, convert the physical standby database to snapshot standby mode. This ensures that only one set of schemas is available for the MFTCS nodes provisioned on the Primary and DR sites: a snapshot standby is open read-write, so the DR provisioning can create its schemas in it, and once the snapshot standby is converted back to a physical standby, the new set of schemas created during the MFTCS configuration at the DR site will be discarded.

11.1.1 Convert the Standby Database to SnapShot Standby

Launch DGMGRL and connect as SYS.

$ . ~/script.env
$ dgmgrl sys/${passwd}@${A_DBNM}
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> CONVERT DATABASE soadr1 to SNAPSHOT STANDBY;

Converting database "soadr1" to a Snapshot Standby database, please wait...
Database "soadr1" converted successfully

11.1.2 Validate the Standby database

DGMGRL> validate database soadr1;

Database Role:     Snapshot standby database
Protection Mode:   MaxPerformance
Error: Switchover to a snapshot standby database is not possible
Primary Database:  soadb1
Ready for Switchover:  No
Ready for Failover:    Yes (Primary Running)
Temporary Tablespace File Information:
soadb1 TEMP Files:  5
soadr1 TEMP Files:  4
Standby Apply-Related Information:
Apply State:      Not Running
Apply Lag:        26 seconds (computed 3 seconds ago)
Apply Delay:      0 minutes

12. Provision the MFTCS DR instances using the MFTCS provisioning wizard

Provision an MFTCS cluster on the DR Cloud Identity Domain. Please refer to the information in the Table-2 'DR Site' column to complete the provisioning wizard.

MFTCS DR Admin Node (Node 1): mftcluster01-wls-1
MFTCS DR Node 2: mftcluster01-wls-2
MFTCS DR LB Node (OTD): mftcluster01-lb-1

Once the MFTCS provisioning wizard is completed, please verify the environment by accessing the WebLogic Console and MFT Console URLs; a quick check from the command line is sketched below.
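
A minimal sanity-check sketch using curl, assuming the consoles are reachable through the DR Load Balancer public IP (444.444.44.444, the frontend host used later in Step 14.2.3) and the standard /console and /mftconsole context paths; adjust the host and port if the consoles are exposed on the Admin node instead. An HTTP 200 or a redirect to the login page indicates the console is up:

$ curl -k -s -o /dev/null -w "%{http_code}\n" https://444.444.44.444/console
$ curl -k -s -o /dev/null -w "%{http_code}\n" https://444.444.44.444/mftconsole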

13. Copy Primary MFTCluster Domain Home to DR location

Pre-Requisites:

  • Stop the WebLogic services (Managed Servers, Admin Server, and Node Managers) running on the DR MFTCluster nodes before executing the following steps.

This step needs to be executed only once, during the initial setup of DR. The DBFS mount configuration, including the Oracle wallet used for the DR sync between the Primary and DR nodes, will be in place once this step is completed.

13.1 On Primary Domain Node 1 (Admin Node):

As root user on Node 1:

[root@mftcluster01-wls-1 domains]# cd /u01/data/domains
[root@mftcluster01-wls-1 domains]# tar -cvzf MFTClust_domain_Site1.tar.gz MFTClust_domain > Tar.log 2>tar.err

13.2 On DR Domain (Both Nodes)

Back up the existing MFTCS domain home on the DR nodes:

As oracle user on Node 1:

[oracle@mftcluster02-wls-1 ~]$ cd /u01/data/domains/
[oracle@mftcluster02-wls-1 domains]$ mv MFTClust_domain MFTClust_domain_Backup

As oracle User on Node 2:

[oracle@mftcluster02-wls-2 ~]$ cd /u01/data/domains/
[oracle@mftcluster02-wls-2 domains]$ mv MFTClust_domain MFTClust_domain_Backup

13.3 Transfer the MFTClust_domain_Site1.tar.gz to Both Nodes on DR Domain

Use sftp or scp to transfer the tar file created in Step 13.1 from the Primary node to both DR nodes; for example:
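
A minimal transfer sketch using scp, assuming the oracle user on the Primary Admin node has SSH connectivity to both DR nodes (the DR host names here follow the prompts shown in Step 13.2):

[oracle@mftcluster01-wls-1 domains]$ scp MFTClust_domain_Site1.tar.gz oracle@mftcluster02-wls-1:/u01/data/domains/
[oracle@mftcluster01-wls-1 domains]$ scp MFTClust_domain_Site1.tar.gz oracle@mftcluster02-wls-2:/u01/data/domains/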

13.4 Extract the MFTClust_domain_Site1.tar.gz on Both Nodes on DR Domain

As root user on both DR nodes:

[root@mftcluster02-wls-1 domains]# cd /u01/data/domains
[root@mftcluster02-wls-1 domains]# tar -xvzf MFTClust_domain_Site1.tar.gz > unTar.log 2>unTar.err

Ensure that the domain directories from Primary Node 1 are copied and extracted to DR Node 1 and Node 2 before continuing with the next steps.

13.5 Create Required Directories for DBFS on DR Nodes

On Node 1:

[opc@mftcluster02-wls-1 ~]$ sudo -s
[root@mftcluster02-wls-1 opc]# mkdir /u02
[root@mftcluster02-wls-1 opc]# chown oracle:oracle /u02
[root@mftcluster02-wls-1 opc]#

On Node 2:

[opc@mftcluster02-wls-2 ~]$ sudo -s
[root@mftcluster02-wls-2 opc]# mkdir /u02
[root@mftcluster02-wls-2 opc]# chown oracle:oracle /u02
[root@mftcluster02-wls-2 opc]#

14. Update the DR Domain Information

14.1 Verify the Schema Prefix on Primary and DR Nodes

After copying the domain home from the Primary node to the DR nodes, verify that the schema prefixes are the same on both the Primary and DR nodes.

Primary Node:

[oracle@mftcluster01-wls-1 jdbc]$ cat /u01/data/domains/MFTClust_domain/config/jdbc/mds-mft-jdbc.xml|grep MDS
<value>SP570413461_MDS</value>
<jndi-name>jdbc/mds/MFTMDSLocalTxDataSource</jndi-name>

DR Node:

[oracle@mftcluster01-wls-1 ~]$ cat /u01/data/domains/MFTClust_domain/config/jdbc/mds-mft-jdbc.xml|grep MDS
<value>SP570413461_MDS</value>
<jndi-name>jdbc/mds/MFTMDSLocalTxDataSource</jndi-name>

14.2 Replace the Database Configurations and Cloud Domain Name

As oracle user on DR Node 1:

14.2.1 Replace Database Configurations:

find /u01/data/domains/MFTClust_domain/config/fmwconfig -name '*.xml' | xargs sed -i 's/soadbinst1:1521\/SOAPDB1.<Primary_Domain_Name>/soadbinst2:1521\/SOAPDB1.<DR_Domain_Name>/g'
find /u01/data/domains/MFTClust_domain/config/jdbc -name '*.xml' | xargs sed -i 's/soadbinst1:1521\/SOAPDB1.<Primary_Domain_Name>/soadbinst2:1521\/SOAPDB1.<DR_Domain_Name>/g'

14.2.2 Replace Domain Name:

find /u01/data/domains/MFTClust_domain/config -name 'config.xml' | xargs sed -i 's/<Primary_Domain_Name>/<DR_Domain_Name>/g'

14.2.3 Replace FrontEnd Host:

find /u01/data/domains/MFTClust_domain/config -name 'config.xml' | xargs sed -i 's/333.333.33.333/444.444.44.444/g'

Note: <Primary_Domain_Name> and <DR_Domain_Name> are placeholders for the Oracle Cloud domain names (here usoraclePrimary.oraclecloud.internal and usoracleDR.oraclecloud.internal).

As oracle user on DR Node 2:

Repeat Steps 14.2.1 to 14.2.3 on DR Node 2.

14.2.4 Replace Node Manager Listen Address on Node 2:

find /u01/data/domains/MFTClust_domain/nodemanager -name 'nodemanager.properties' | xargs sed -i 's/mftcluster01-wls-1/mftcluster01-wls-2/g'

Ensure that Steps 14.2.1 to 14.2.4 are executed on DR Node 2 before continuing with the next steps.

15. Update the tnsnames.ora on the DR Nodes

Update the tnsnames.ora on the DR instances (Node 1 and Node 2) to point to the DR database instance. This step is required because the domain home was copied from the Primary node to the DR nodes; the ORCL and SOAPDB1 entries below differ from their Primary values.

[oracle@mftcluster02-wls-2 dbfs]$ more tnsnames.ora
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = soadbinst2)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SOAPDB1.usoracleDR.oraclecloud.internal)
)
)
SOAPDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = soadbinst2.compute-usoracleDR.oraclecloud.internal)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soapdb1.usoracleDR.oraclecloud.internal)
)
)
soadb1 =
(DESCRIPTION =
(SDU=65536)
(RECV_BUF_SIZE=10485760)
(SEND_BUF_SIZE=10485760)
(ADDRESS = (PROTOCOL = TCP)(HOST = 111.111.11.111)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soadb1.usoraclePrimary.oraclecloud.internal)
)
)
soadr1 =
(DESCRIPTION =
(SDU=65536)
(RECV_BUF_SIZE=10485760)
(SEND_BUF_SIZE=10485760)
(ADDRESS = (PROTOCOL = TCP)(HOST = 222.222.22.222)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = soadr1.usoracleDR.oraclecloud.internal)
)
)

16. Start the MFTCS DR Instances Connecting to the Snapshot Standby Database

Restart both DR nodes (Node 1 and Node 2) and verify that the DBFS mount /u02 is available and accessible on both nodes.

On MFTCS DR Node 1:

[oracle@mftcluster01-wls-1 dbfs]$ df -h /u02
dbfs-@SOAPDB1:/      1023M  120K 1023M   1% /u02
[oracle@mftcluster01-wls-1 dbfs]$ ls -lrt /u02
total 0
drwxrwxrwx 3 root root 0 May  4 18:59 dbfs

On MFTCS DR Node 2:

[oracle@mftcluster01-wls-2 dbfs]$ df -h /u02
dbfs-@SOAPDB1:/      1023M  120K 1023M   1% /u02
[oracle@mftcluster01-wls-2 ~]$ ls -lrt /u02
total 0
drwxrwxrwx 3 root root 0 May  4 18:59 dbfs

Once the DBFS mounts are verified, launch the Admin Console of the DR MFTCS Admin node and verify the following (a WLST sketch for checking server state follows this list):

  • Verify the Node Manager status on both DR nodes.
  • Verify the status of the Managed Servers.
  • If not running, start the Managed Servers from the console.
  • Verify the cluster frontend address.
  • Launch the sample application from the MFT cluster using the DR Load Balancer public IP.
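
As an alternative to clicking through the console, the managed server state can be queried with WLST. A minimal sketch, assuming the default Admin Server port 7001 and a managed server named MFT_server_1 (both are assumptions; substitute the names from your provisioned domain):

$ /u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh
wls:/offline> connect('weblogic','<password>','t3://mftcluster01-wls-1:7001')
wls:/MFTClust_domain/serverConfig> state('MFT_server_1','Server')
Current state of 'MFT_server_1' : RUNNING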

17. Convert the snapshot standby database back to the physical standby database

Pre-Requisites:

Stop the WLS services running on the DR MFTCS nodes; a shutdown sketch follows.

Ensure that the WLS services running on DR Node 1 and Node 2 are stopped before continuing with the next steps.
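
A minimal shutdown sketch using the standard WebLogic domain scripts (the managed server name MFT_server_1 and the admin port 7001 are assumptions; use the names from your domain):

# On each DR node, stop the local managed server first:
[oracle@mftcluster02-wls-1 ~]$ /u01/data/domains/MFTClust_domain/bin/stopManagedWebLogic.sh MFT_server_1 t3://mftcluster02-wls-1:7001
# On the DR Admin node, stop the Admin Server and then the Node Manager:
[oracle@mftcluster02-wls-1 ~]$ /u01/data/domains/MFTClust_domain/bin/stopWebLogic.sh
[oracle@mftcluster02-wls-1 ~]$ /u01/data/domains/MFTClust_domain/bin/stopNodeManager.sh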

17.1 Validate the Primary database

Launch DGMGRL and connect as SYS.

[oracle@soadbinst1 ~]$ . ~/script.env
[oracle@soadbinst1 ~]$ dgmgrl sys/${passwd}@${A_DBNM}
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> validate database soadb1;

Database Role:    Primary database
Ready for Switchover:  Yes

17.2 Validate the Standby database

DGMGRL> validate database soadr1;

Database Role:     Snapshot standby database
Protection Mode:   MaxPerformance
Error: Switchover to a snapshot standby database is not possible
Primary Database:  soadb1
Ready for Switchover:  No
Ready for Failover:    Yes (Primary Running)
Standby Apply-Related Information:
Apply State:      Not Running
Apply Lag:        2 days 3 hours 15 minutes 33 seconds (computed 1 second ago)
Apply Delay:      0 minutes

17.3 Convert Snapshot Standby to Physical Standby Database.

DGMGRL> CONVERT DATABASE soadr1 to PHYSICAL STANDBY;

Converting database "soadr1" to a Physical Standby database, please wait...
Oracle Clusterware is restarting database "soadr1" ...
Continuing to convert database "soadr1" ...
Database "soadr1" converted successfully

17.4 Validate the Standby database

DGMGRL> validate database soadr1;

Database Role:     Physical standby database
Primary Database:  soadb1
Ready for Switchover:  Yes
Ready for Failover:    Yes (Primary Running)
Temporary Tablespace File Information:
soadb1 TEMP Files:  5
soadr1 TEMP Files:  4
Standby Apply-Related Information:
Apply State:      Running
Apply Lag:        1 day(s) 20 hours 40 minutes 28 seconds (computed 1 second ago)
Apply Delay:      0 minutes

17.5 Validate the Primary database

DGMGRL> validate database soadb1;

Database Role:    Primary database
Ready for Switchover:  Yes

Give the redo apply some time to catch up on the standby, and verify the "Ready for Switchover" status before continuing with the switchover process.

Once the initial setup is completed (Steps 1 through 17), Data Guard replication will ensure that all database changes are propagated from the Primary database running at the Production site to the Standby database running at the DR site. To keep the MFTCS mid-tier artifacts in sync between the Production and DR sites, the WebLogic domain directories need to be copied from the Production site to the DR site through the DBFS mount /u02.

Periodically copy the WebLogic domain directory from the local mount to the DBFS mount /u02 on the Production nodes using scripts or a cron job; a sketch follows below. The Data Guard replication enabled at the database layer ensures that the WebLogic domain directory copied to the DBFS mount at the Production site is synced and available at the DR site. During switchover/failover scenarios, the latest version of the WebLogic domain directory can be accessed from the DBFS mount at the DR site even if the Production site is unavailable for some unforeseen reason.
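
A minimal sync sketch that could be scheduled from cron on each Production node; the script name, the target layout under /u02/dbfs, and the rsync exclusions are illustrative assumptions:

#!/bin/sh
# domain_sync.sh - copy the local WebLogic domain home to the DBFS sync mount.
# Data Guard then replicates the DBFS content to the DR database.
SRC=/u01/data/domains/MFTClust_domain
DEST=/u02/dbfs/MFTClust_domain_$(hostname -s)
mkdir -p $DEST
# Skip transient server tmp/log directories to keep the copy small.
rsync -a --delete --exclude 'servers/*/tmp' --exclude 'servers/*/logs' \
  $SRC/ $DEST/ >> /tmp/domain_sync.log 2>&1

A sample crontab entry for the oracle user to run the sync every four hours (the schedule and script location are assumptions):

0 */4 * * * /home/oracle/domain_sync.sh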

If Active Data Guard is enabled on the standby database, the WebLogic domain directories can be accessed and copied from the DBFS mount at the DR site to the local mount without a database switchover or role change. Using scripts, the copy of the WebLogic domain from the DBFS mount at the DR site to local disk can be automated, including the replacement of database connect strings, host domain names, frontend hosts, etc.; a restore sketch follows.
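
Along the same lines, a restore sketch for the DR nodes, mirroring the replacements from Steps 14.2.1 to 14.2.3. The source path matches the layout assumed in the sync sketch above, and the IPs and cloud domain names are the placeholders used throughout this article:

#!/bin/sh
# domain_restore.sh - copy the replicated domain home from the DBFS mount
# to local disk and re-point it at the DR database and hosts.
SRC=/u02/dbfs/MFTClust_domain_mftcluster01-wls-1
DEST=/u01/data/domains/MFTClust_domain
rsync -a --delete $SRC/ $DEST/
# Re-point data sources at the DR database (Step 14.2.1):
find $DEST/config/fmwconfig $DEST/config/jdbc -name '*.xml' | xargs sed -i \
  's/soadbinst1:1521\/SOAPDB1.usoraclePrimary/soadbinst2:1521\/SOAPDB1.usoracleDR/g'
# Replace the cloud domain name and the frontend host (Steps 14.2.2 and 14.2.3):
find $DEST/config -name 'config.xml' | xargs sed -i 's/usoraclePrimary/usoracleDR/g'
find $DEST/config -name 'config.xml' | xargs sed -i 's/333.333.33.333/444.444.44.444/g'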

18. Sequence of Steps for Switchover Testing

  1. Ensure that the WebLogic domain directory is periodically copied to the DBFS mount on the Production nodes.
  2. Validate the availability of the latest WebLogic domain directory on the DBFS mount /u02 at the DR site.
  3. Stop the MFTCS mid-tier running on the Production site (Managed Servers, Admin Server, Node Manager, and OTD instance).
  4. Switch over the database from the Primary to the DR site (the standby database running on the DR site becomes the new primary).
  5. Copy the WebLogic domain directory from the DBFS mount to the local mount at the DR site on Node 1 and Node 2.
  6. Replace the host domain name, database connect string, and frontend host on Node 1 at the DR site.
  7. Replace the host domain name, database connect string, frontend host, and Node Manager listen address on Node 2 at the DR site.
  8. Start the mid-tier on the DR site (Node Manager, Admin Server, Managed Servers).
  9. Start the OTD instance on the DR site.

Summary

In this article, we demonstrated one approach that can be followed to achieve Disaster Recovery for MFTCS PaaS on the Oracle Cloud @ Customer platform, using DBFS and Oracle Data Guard to synchronize the mid-tier and database-tier artifacts from the Primary to the DR site. Most of the processes described in this article can be automated using Oracle Enterprise Manager and Site Guard. The Disaster Recovery approach needs to be tested and validated in each customer environment, based on the business requirements, before adopting any of the processes demonstrated in this article.

Appendix

 

script.env

export passwd='<password>'
export DB_NAME='<DB_Name>'
# Site A: Primary (Production) database details
export A_DBNM='<Local_DB_Unique_Name>'
export A_PUB_IP='111.111.11.111'
export A_PRIV_IP='DG-DBCS-Prim1.compute-usoraclePrimary.oraclecloud.internal'
export A_PORT='1521'
export A_DB_DOMAIN='usoraclePrimary.oraclecloud.internal'
# Site B: Standby (DR) database details
export B_DBNM='<Remote_DB_Unique_Name>'
export B_PUB_IP='222.222.22.222'
export B_PRIV_IP='DG-DBCS-Stdby1.compute-usoracleDR.oraclecloud.internal'
export B_PORT='1521'
export B_DB_DOMAIN='usoracleDR.oraclecloud.internal'

 
