This write-up illustrates Oracle GoldenGate's behavior on a distributed file system that supports file locking. Oracle GoldenGate (OGG) is commonly used in an Oracle RAC environment. When OGG is installed on a distributed file system in a RAC environment, at any given time the OGG processes run on a single node of the cluster. Distributed file locking is essential to prevent an accidental startup on a second node while the OGG processes are already running elsewhere in the cluster.
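The startup guard that distributed file locking makes possible can be sketched as follows. This is an illustrative sketch only, not OGG code: it assumes the lock file lives on a shared file system (such as ACFS) that propagates POSIX locks across nodes, and the path and function name are invented for the demo.

```python
# Illustrative sketch of a cluster-wide startup guard via an exclusive
# file lock. On a shared file system with distributed locking, the same
# refusal would happen even if the second attempt came from another node.
import fcntl
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "ogg_startup_demo.lock")

def try_acquire_startup_lock(lock_path):
    """Return an open handle if we won the exclusive lock, else None."""
    fd = open(lock_path, "w")
    try:
        # Non-blocking exclusive lock: fails immediately if another
        # process (possibly on another node) already holds it.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except OSError:
        fd.close()
        return None

holder = try_acquire_startup_lock(LOCK_PATH)   # first starter wins
second = try_acquire_startup_lock(LOCK_PATH)   # second attempt is refused
print(holder is not None, second is None)      # → True True
```

Without distributed locking, each node would only see its own local locks, and the second attempt would wrongly succeed.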
The ACFS file system supports distributed file locking from Oracle 126.96.36.199+ onwards. An example illustrates the scenario. Suppose we have a two-node RAC cluster (NodeA and NodeB) running Oracle 188.8.131.52, with OGG installed on an ACFS file system shared between the nodes, and the OGG processes running on NodeA. If we now go to NodeB and check the status of the OGG processes (info all), it will show the processes as running. This can confuse users into thinking that OGG is running on both nodes simultaneously. It is not, which can be verified at the OS level ("ps -ef | grep <ogg process name>"). The "info all" command determines status by reading the shared "pcs" file. Because the OGG installation is shared in a distributed environment, "info all" always shows a consistent view regardless of which node (active or passive) it is issued from.
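The gap between the two views can be sketched as follows. This is not OGG code and the file format is invented for the demo (the real "pcs" files are internal): it only shows why a status read from a shared file can say RUNNING on a node where no local process exists, while "ps" reflects only the local node.

```python
# Illustrative sketch: shared-file status vs. local process check.
import os
import tempfile

def status_from_shared_file(path):
    """Read 'name pid' records, as a shared status file might hold them."""
    with open(path) as f:
        return dict(line.split() for line in f)

def running_locally(pid):
    """True only if the PID exists on *this* node (what 'ps' would show)."""
    try:
        os.kill(int(pid), 0)        # signal 0: existence check, sends nothing
        return True
    except ProcessLookupError:
        return False
    except PermissionError:         # process exists but is owned by another user
        return True

path = os.path.join(tempfile.gettempdir(), "pcs_demo.txt")
with open(path, "w") as f:
    f.write(f"EXTRACT1 {os.getpid()}\n")   # pretend this process is EXTRACT1

status = status_from_shared_file(path)
# On the owning node both views agree; on the other node the shared file
# would still say RUNNING while running_locally() would return False.
print(running_locally(status["EXTRACT1"]))  # → True
```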
In the above scenario, with OGG running on NodeA, if a user logs into NodeB and issues "start er *" or "stop er *", the command obtains the manager's IP address and port from the shared "pcs" file. Since the manager in this case is running on NodeA, the TCP start/stop message is sent to NodeA. To run the processes on NodeB instead, the OGG manager must first be stopped on NodeA and then restarted on NodeB.
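The routing described above can be sketched as a small client that reads the manager's endpoint from a shared file and sends the command over TCP, so the request lands on whichever node the manager runs on. The file layout, message format, and the stand-in "manager" thread are all invented for this demo; the real manager protocol is internal to OGG.

```python
# Illustrative sketch: a command issued on any node is routed to the
# manager's node via the endpoint recorded on the shared file system.
import os
import socket
import tempfile
import threading

def read_manager_endpoint(path):
    """Shared file holds 'host port' for the currently running manager."""
    with open(path) as f:
        host, port = f.read().split()
    return host, int(port)

def send_command(path, command):
    """Look up the manager's address, send the command, return the reply."""
    host, port = read_manager_endpoint(path)
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode())
        return s.recv(1024).decode()

# Stand-in "manager" on this host: accept one command and acknowledge it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def manager():
    conn, _ = srv.accept()
    conn.recv(1024)
    conn.sendall(b"OK")
    conn.close()

threading.Thread(target=manager, daemon=True).start()

# Record the manager's endpoint where every node can read it.
path = os.path.join(tempfile.gettempdir(), "mgr_endpoint_demo.txt")
with open(path, "w") as f:
    f.write(f"127.0.0.1 {srv.getsockname()[1]}")

reply = send_command(path, "STOP ER *")   # would reach NodeA from any node
print(reply)  # → OK
```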
In a distributed file system environment where file locking is supported (e.g., ACFS), the OGG processes can be managed from any node, but they always run on the same node as the OGG manager. To relocate the OGG processes to a different node of the cluster, the manager must first be stopped on the active node and then started on the new node. (Note that in a RAC environment, Oracle Clusterware is typically used to manage OGG for high availability purposes.)
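The relocation rule above follows directly from the locking behavior, and can be sketched with the same file-locking idea. Again this is illustrative only, with invented paths and names: a "manager" on the active node must release the shared lock (stop) before a manager on another node can acquire it (start).

```python
# Illustrative sketch: stop-then-start relocation under an exclusive lock.
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "ogg_relocate_demo.lock")

def start_manager(path):
    """Simulated start: succeeds only if the exclusive lock is free."""
    fd = open(path, "w")
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd                      # "manager running", holding the lock
    except OSError:
        fd.close()
        return None                    # refused: already running elsewhere

def stop_manager(fd):
    """Simulated stop: release the lock so another node can start."""
    fcntl.flock(fd, fcntl.LOCK_UN)
    fd.close()

node_a = start_manager(lock_path)      # manager starts on the first node
node_b = start_manager(lock_path)      # a second start is refused (None)
stop_manager(node_a)                   # stop on the active node first...
node_b = start_manager(lock_path)      # ...then the new node can start
print(node_b is not None)  # → True
```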