This document walks you through integrating Oracle GoldenGate Cloud Service (GGCS) with Oracle Event Hub Cloud Service (EHCS). The Enterprise Edition of GGCS supports the Oracle GoldenGate Adapter for Big Data, which can be used to stream real-time data into Confluent Kafka running on EHCS. This document covers provisioning of EHCS instance(s) and configuration of the replicat (apply) process of the Oracle GoldenGate Adapter for Big Data to stream real-time data changes to a Confluent Kafka topic.
Support for Confluent Kafka in the Oracle GoldenGate Adapter for Big Data is available only in version 12.3.1.1.0. The current version of GGCS supports Oracle GoldenGate Adapter for Big Data version 12.2.0.1. As part of this testing, the latest version of the Oracle GoldenGate Adapter for Big Data was installed on the GGCS instance, and the client files needed for connecting to Confluent Kafka were copied onto the GGCS instance.
The following topics are not included in the scope of this article:
It is assumed that an Enterprise Edition GGCS is already provisioned and available for use.
The information provided in this article is for educational purposes only. It is not supported by Oracle Development or Support, and comes with no guarantee or warranty of functionality in any environment other than the test system used to prepare this article.
The following are the high-level steps involved in the integration of GGCS with EHCS:
When you log in to Oracle Cloud Services (OCS), you will see the Cloud Services Dashboard as shown in Figure 1. This page shows the different subscriptions you have access to. If you have subscribed to EHCS, you should see the entry "Event Hub - Dedicated".
Figure 1: Screen Image showing Event Hub Service type
Click on "Event Hub - Dedicated" to go to the Event Hub Service Details screen as shown in Figure 2.
Figure 2: Screen Image showing Event Hub Service Detail Page
Click the Open Service Console tab on the right side to get to the "Oracle Event Hub Cloud Service - Platform" page as shown in Figure 3.
Figure 3: Screen Image showing Event Hub Cloud Service - Platform
Create a new EHCS instance by clicking the Create Service tab. You will see a screen as shown in Figure 4.
Figure 4: Screen for Entering details regarding the service
For Service Name, enter a name that is unique within the tenant domain; it will be used to identify this service. You may add a description in the Description field. Provisioning status updates will be sent to the email address you enter in the Notification Email field. For Region, select the name of the compute region, then click the Next button to go to the service details page, as shown in Figure 5.
Figure 5: Screen for Entering Service details
Available deployment types are Basic and Recommended. A Recommended deployment runs the Kafka brokers and ZooKeeper on different nodes, while a Basic deployment runs the Kafka broker and ZooKeeper on the same node(s). You can scale out to add dedicated Kafka brokers in both cases. For high availability, a minimum of 3 ZooKeeper nodes and 2 Kafka broker nodes is recommended.

Click the Edit button to add the RSA public key for SSH access. Enter the number of Kafka brokers needed; Kafka topics and partitions are distributed across brokers running on different nodes. Select the compute shape needed and enter a value for usable topic storage in GB. The actual allocated physical storage will be twice the value specified, as topic data is replicated.

To enable REST API access, check the box next to Enable REST Access. Enabling REST access configures one or more additional servers running the REST Proxy and other supported services, allowing you to produce and consume via REST APIs. If you select this, you will be prompted for additional information regarding the servers for REST access.

Enter the number of nodes for ZooKeeper and the compute shape to use for the ZooKeeper servers. The recommended number of ZooKeeper nodes is 3. ZooKeeper requires that a quorum (majority) of servers be up, where quorum is floor(N/2) + 1. For a 3-server ensemble, that means 2 servers must be up at any time; for a 5-server ensemble, 3 servers must be up at any time. After completing the entries, click the Next button, which will show the confirmation page with your selections as shown in Figure 6.
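The quorum sizes quoted above can be checked with a quick calculation; a minimal sketch:

```shell
# Majority quorum for an N-node ZooKeeper ensemble: floor(N/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2 -- a 3-node ensemble tolerates 1 failure
quorum 5   # prints 3 -- a 5-node ensemble tolerates 2 failures
```

Note that an even-sized ensemble buys no extra fault tolerance over the next-smaller odd size, which is why odd node counts are recommended.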
Figure 6: Screen showing Confirmation page
Click the Create button to create the service, which takes about 20 minutes. You will see the services page with an hourglass icon and a status of "Creating service" as shown in Figure 7. Once provisioning is complete, an email will be sent to the specified address and the "Creating service" status will disappear.
Figure 7: Screen showing Service being provisioned
Click on the service name to see the service details as shown in Figure 8.
Figure 8: Screen showing Service details
This page shows the Public IP address of the server and the Connect Descriptor, both of which are needed when configuring the replicat process for the Oracle GoldenGate Adapter for Big Data on the GGCS instance.
The next step is the creation of a Kafka topic, which is needed for streaming data into Kafka. By default, the Confluent Kafka broker is not configured to create topics dynamically, which means we have to create a topic before publishing data to Kafka. Figure 7 above shows the services screen. Click on the selection next to Oracle Event Hub Cloud Service - Platform, and you will see the page shown in Figure 9.
Figure 9: Screen showing all Platform Services
Click on Oracle Event Hub Cloud Service - Topics, which brings up the page showing a summary of the topics defined - Figure 10.
Figure 10: Screen showing Summary of Topic Defined
To create a new Kafka topic, click Create Service and enter the details of the topic as shown in Figure 11.
Figure 11: Screen showing Topic Details
For Service Name, choose a name that is unique within the tenant domain; it will be used to identify this new service. You may add an optional description. Provisioning status updates will be sent to the email address specified. Select the Oracle Event Hub Cloud Service - Platform instance to host this service. Select a value between 1 and 256 for the number of partitions, and a value between 24 and 168 hours (7 days) for the retention period. Click the Next button, which shows the selection confirmation page, then click the Create button to create the topic. The topic created is shown in Figure 12.
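For reference, the same topic could be created from the command line on a broker node; a sketch, assuming the Confluent CLI tools are on the PATH (partition and replication values are illustrative, and note that the identity-domain prefix the console adds automatically must be given explicitly here):

```shell
# Create the topic with 3 partitions and a replication factor of 2
kafka-topics --create \
  --zookeeper localhost:2181 \
  --topic usoracle86702-ateam-test-topic \
  --partitions 3 \
  --replication-factor 2

# Verify that the topic now exists
kafka-topics --list --zookeeper localhost:2181
```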
Figure 12: Screen showing Topics Defined
Click on Topic name to see the details as shown in Figure 13.
Figure 13: Screen showing Topics Details
Please note down the topic name as given on this screen - it has the Cloud identity domain (usoracle86702) as part of the topic name. The topic name needs to be configured as usoracle86702-ateam-test-topic in the Kafka properties file for GoldenGate.
The next step is the creation of an access rule so that GGCS can publish data to the Kafka topic running on EHCS. Port 6667 is closed to all traffic by default, so we need to open it for TCP traffic. On the Service Overview page shown in Figure 14, click on Access Rules.
Figure 14: Screen showing where to select Access Rules
This will open up the Access Rules screen as shown in Figure 15 where you can create new rules.
Figure 15: Screen showing Access Rules
Click on Create Rule and add a new rule as shown in Figure 16.
Figure 16: Screen showing Adding a new Access Rule
Enter a name for the new rule and an optional description. For Source, enter the hosts from which traffic should be allowed.
Valid values are:
For Destination, enter the service component to which traffic should be allowed. List the ports for this access rule in the Destination Ports field and select TCP as the protocol.
Click on Create, which will create the new rule.
It is assumed that you have access to an Enterprise Edition GGCS instance. The current version of GGCS supports GoldenGate Adapter for Big Data version 12.2.0.1. The connector for Confluent Kafka (used in EHCS) is available only in the latest version of the GoldenGate Adapter for Big Data (12.3.1.1.0), so we need to install the new version on GGCS in order to complete this exercise. This step becomes obsolete when a new version of GGCS is released with GoldenGate Adapter for Big Data version 12.3.1.1.0 or above.
The latest version of the GoldenGate Adapter for Big Data can be downloaded from edelivery.oracle.com as a zip file. The zip file then needs to be copied to the GGCS server using the scp (secure copy) command. On the GGCS server, unzip the file, which yields a tar file. The GoldenGate Adapter for Big Data is to be installed in the /u02/data/ggbigdata directory on the GGCS instance. With the help of the sudo command, make a backup of the current ggbigdata directory and then overlay the new version of the software into the directory.
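The download-and-overlay steps above might look like the following sketch; the zip file name and the IP placeholder are illustrative (the actual file name depends on the release downloaded from edelivery.oracle.com):

```shell
# 1. Copy the downloaded zip to the GGCS server (file name is illustrative)
scp ggs_Adapters_Linux_x64.zip opc@<GGCS_PUBLIC_IP>:/tmp

# 2. On the GGCS server, extract the zip to obtain the tar file
unzip /tmp/ggs_Adapters_Linux_x64.zip -d /tmp

# 3. Back up the existing installation, then overlay the new version
sudo cp -r /u02/data/ggbigdata /u02/data/ggbigdata.bak
sudo tar -xf /tmp/ggs_Adapters_Linux_x64.tar -C /u02/data/ggbigdata
```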
Next, we need to copy the client libraries needed for connecting to Confluent Kafka. This is not needed once GGCS supports GoldenGate Adapter for Big Data version 12.3.1.1.0 or above, as the libraries will already be available on the GGCS instance. You can use yum install or copy the required files from an existing installation of Confluent Kafka. Copy the required files to the /usr/share/java/kafka/ directory. The list of files needed is shown below (this could change depending on the format used for Kafka; this list is for the JSON format).
This step captures real-time data changes using the Oracle GoldenGate extract process and writes them out to a trail file on the GGCS instance. This trail will be used by the replicat process of the GoldenGate Adapter for Big Data to stream data to EHCS. There are different ways to do this. One way is to create a remote extract process that connects to an Oracle Database in DBCS or on-premises. Another way is to use an extract pump from on-premises to send the trail file to the GGCS instance using an SSH proxy. In this example we are using a remote extract process to capture data changes from an Oracle Database in DBCS.
Log in to the GGCS instance using SSH and use the sudo command to become the oracle user. The Enterprise Edition of GGCS supports multiple GoldenGate environments; by default, the environment variables are set for GoldenGate for Oracle Database 12c. Follow the steps shown below:
The extract parameter file is given below:
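For illustration, a minimal remote extract parameter file might look like the sketch below; the group name, schema, credentials, and connect string are all placeholders and depend on your DBCS setup:

```
EXTRACT ETEST
USERID ggadmin@<DBCS_CONNECT_STRING>, PASSWORD <password>
EXTTRAIL ./dirdat/et
TABLE hr.*;
```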
Start the extract process using the start command.
Log in to GGCS using SSH, use the sudo command to become the oracle user, and set the environment variables for the GoldenGate Adapter for Big Data. Using GGSCI, configure and start the replicat process as shown below (while adding the replicat, make sure you are pointing to the trail file created in the previous step).
The replicat parameter file is given below.
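As a sketch, a minimal replicat parameter file for the Big Data adapter might look like the following; the TARGETDB LIBFILE line points the replicat at the Java adapter and the kc.props file, and the schema names are placeholders:

```
REPLICAT RTEST
TARGETDB LIBFILE libggjava.so SET property=dirprm/kc.props
GROUPTRANSOPS 1000
MAP hr.*, TARGET hr.*;
```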
Next, we need to create the two properties files for the Confluent Kafka connection. They should be created in the dirprm directory, with the file names kc.props and kafkaconnect.properties. The file kc.props is referenced in the replicat parameter file, and the kafkaconnect.properties file is referenced in kc.props. The file kc.props contains information regarding the Kafka topic name and the data stream format, while kafkaconnect.properties is the properties file for the Kafka producer. The contents of the two files are shown below:
The kc.props file contains the topic name, usoracle86702-ateam-test-topic.
The kafkaconnect.properties file contains the Public IP address of the EHCS instance and the port number, 6667.
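For illustration, the two files might look like the sketches below. The property names follow the documented Kafka Connect handler of the GoldenGate Adapter for Big Data, but exact names can vary between versions, and all values here are placeholders:

```
# dirprm/kc.props
gg.handlerlist=kafkaconnect
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
gg.handler.kafkaconnect.topicName=usoracle86702-ateam-test-topic
gg.handler.kafkaconnect.mode=op
gg.classpath=dirprm/:/usr/share/java/kafka/*
```

```
# dirprm/kafkaconnect.properties
bootstrap.servers=<EHCS_PUBLIC_IP>:6667
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```

The JsonConverter settings match the JSON formatting mentioned earlier; a different format would use different converter classes.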
Next, start the replicat process using the GGSCI command "start replicat RTEST".
If you want to see the data streaming on EHCS server, then do the following.
You need to log in to the EHCS instance using SSH. By default, port 22 is not open on EHCS; you need to modify the access rule (the first rule shown in Figure 15) to provide access to port 22. Log in to the EHCS instance using SSH and follow the steps shown below:
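Once logged in to an EHCS broker node, a console consumer can be used to watch records arrive on the topic; a sketch, assuming the Confluent CLI tools are on the PATH:

```shell
# Consume from the topic created earlier and print records as they arrive
kafka-console-consumer \
  --bootstrap-server localhost:6667 \
  --topic usoracle86702-ateam-test-topic \
  --from-beginning
```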
Start some transactions in the Oracle Database connected to the GoldenGate extract process, and watch the data flowing through the extract and replicat processes using the "STATS" command in GGSCI. You should see data streaming in the terminal window opened on the EHCS instance (shown below).
This article walked through the steps to integrate GGCS with EHCS.
For more information on what other articles are available for Oracle GoldenGate please view our index page.