Network Separation of Production and Non-Production Environments Using the Oracle Private Cloud Appliance and Oracle VM

Overview

We are currently seeing rapid adoption of Fusion Applications in the SaaS@Customer model. Both these deployments and the strong Fusion Applications On-Premise installation base are commonly secured by establishing strict rules for the separation of Production and Non-Production systems. Usually the mandate is to separate the network traffic at the logical as well as the physical layer. Oracle VM technology offers a range of features to achieve different levels of separation, and the Oracle Private Cloud Appliance (PCA) offers a preconfigured environment ready for rapid deployment of such a model. A typical Fusion Applications On-Premise installation or Fusion SaaS@Customer solution has at least two instances: one for Production use and one or more for Non-Production use. This article is a guideline on how to configure the network infrastructure in this typical deployment.

This article focuses on network traffic separation and assumes that the Oracle VM Servers / Compute Nodes are already separated via separate Server Pools. This can easily be achieved on the Oracle Private Cloud Appliance (PCA) using the tenant group feature, which offers out-of-the-box hardware separation at the Server Pool layer: https://blogs.oracle.com/virtualization/private-cloud-appliance-pca-tenant-groups-v2.
On standard x86 hardware (e.g. Oracle Server X7-2), separate Server Pools have to be created and configured manually using Oracle VM Manager: https://docs.oracle.com/cd/E27300_01/E27312/html/vmgsg-svrpool-create.html. Alternatively, this step can be scripted, as sketched below.
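
As an alternative to the Oracle VM Manager UI, Server Pool creation can be scripted against the Oracle VM Manager REST API (WSAPI). The following Python sketch is illustrative only: the manager host and credentials are placeholders, and the exact endpoint and JSON attributes should be verified against the WSAPI documentation for your Oracle VM Manager version.

    # Minimal sketch: create an additional Server Pool through the
    # Oracle VM Manager 3.4 REST API (WSAPI). Host, credentials and the
    # exact JSON attributes are assumptions -- verify them against the
    # WSAPI documentation for your manager version.
    import requests

    BASE = "https://ovm-manager.example.com:7002/ovm/core/wsapi/rest"

    session = requests.Session()
    session.auth = ("admin", "password")   # replace with real credentials
    session.verify = False                 # labs often run self-signed certs
    session.headers.update({"Accept": "application/json",
                            "Content-Type": "application/json"})

    # Create an (initially empty) Server Pool for the non-production
    # nodes; the Oracle VM Servers are added to it in a separate step.
    resp = session.post(BASE + "/ServerPool", json={"name": "nonprod_1_pool"})
    resp.raise_for_status()
    print("Created Server Pool:", resp.json())

On the PCA itself this is not necessary, as the tenant group feature creates and manages the underlying Server Pools.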


Network Overview

The following diagram depicts the proposed architecture with all relevant logical network connections used in this article. The architecture is intentionally kept basic for simplicity, but it is fully sufficient to separate Production and Non-Production traffic across the entire environment. The network infrastructure is based on four bond interfaces per Compute Node, built from eight physical 10GbE network interfaces. This is the recommended minimum configuration for environments that run Real Application Clusters. The Oracle Private Cloud Appliance (PCA) offers four bonds, each with two 10GbE interfaces, out of the box (bond0, bond2, bond3, bond4) plus a bond based on the InfiniBand interfaces (bond1).


The first table shows the networks that are shared between the Server Pools. They do not need to be changed for additional Server Pools, and in PCA environments they should not be changed at all.

Name | Channel | VLAN | Bond
Private Internal Management Network (default 192.168.140.0 in PCA) | Server Management & Live Migration | - | bond0
Internal Storage Network (default 192.168.40.0 in PCA) | Storage & Cluster Interconnect | - | bond1 (InfiniBand in PCA)
External Management Access (default mgmt_public_eth in PCA) | Virtual Machine | 101 | bond2


The following networks are created as part of this exercise to allow end-user access to the VMs. The VLANs need to be configured on the switches as well. More details about VLANs can be found here: https://docs.oracle.com/cd/E50245_01/E50249/html/vmcon-network-vlans.html


Name | Environment | Traffic | Channel | VLAN | Bond
prod_1_pub_VM_1001 | prod_1 | Public | Virtual Machine | 1001 | bond2
nonprod_1_pub_VM_2001 | nonprod_1 | Public | Virtual Machine | 2001 | bond2
prod_1_priv_IC_1101 | prod_1 | Interconnect | Virtual Machine | 1101 | bond3
nonprod_1_priv_IC_2101 | nonprod_1 | Interconnect | Virtual Machine | 2101 | bond3
prod_1_pub_BKP_1201 | prod_1 | Backup (optional) | Virtual Machine | 1201 | bond4
nonprod_1_pub_BKP_2201 | nonprod_1 | Backup (optional) | Virtual Machine | 2201 | bond4


Should additional separation be required, additional networks can easily be created, with the names incremented based on the VLAN IDs. Note that VLAN IDs are limited to a maximum of 4094. The following table shows the configuration for a second, additional non-production Server Pool; a small helper that encodes this naming scheme follows the table.

Name | Environment | Traffic | Channel | VLAN | Bond
nonprod_2_pub_VM_2002 | nonprod_2 | Public | Virtual Machine | 2002 | bond2
nonprod_2_priv_IC_2102 | nonprod_2 | Interconnect | Virtual Machine | 2102 | bond3
nonprod_2_pub_BKP_2202 | nonprod_2 | Backup | Virtual Machine | 2202 | bond4

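For larger estates it can be convenient to generate the network names and VLAN IDs programmatically rather than by hand. The following Python snippet is purely illustrative; it simply encodes the naming convention from the tables above, including the 802.1Q limit of 4094 on VLAN IDs.

    # Hypothetical helper that encodes the naming scheme used in this
    # article: <environment>_<scope>_<tag>_<vlan>, mapped to a bond.
    TRAFFIC_TYPES = {
        "Public":       ("pub",  "VM",  "bond2"),
        "Interconnect": ("priv", "IC",  "bond3"),
        "Backup":       ("pub",  "BKP", "bond4"),
    }

    def network_name(environment: str, traffic: str, vlan: int):
        """Return the network name and bond, e.g. ('nonprod_2_pub_VM_2002', 'bond2')."""
        if not 1 <= vlan <= 4094:
            raise ValueError(f"VLAN ID {vlan} is outside the valid range 1-4094")
        scope, tag, bond = TRAFFIC_TYPES[traffic]
        return f"{environment}_{scope}_{tag}_{vlan}", bond

    for traffic, vlan in [("Public", 2002), ("Interconnect", 2102), ("Backup", 2202)]:
        print(network_name("nonprod_2", traffic, vlan))
    # ('nonprod_2_pub_VM_2002', 'bond2')
    # ('nonprod_2_priv_IC_2102', 'bond3')
    # ('nonprod_2_pub_BKP_2202', 'bond4')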

Virtual Machine Network Channel

These are the networks that are created individually for each Server Pool.

Public Access Network

This network is mainly used for communication between the outside world and the Virtual Machines; users access the hosted applications via this network. If further separation is required, an additional network of this type can be created and the VLAN number incremented. This network is also used if external shared storage mounts, e.g. NFS shares, have to be accessed.

In this example configuration, prod_1_pub_VM_1001 and nonprod_1_pub_VM_2001 are created on the bond2 interfaces of the corresponding Oracle VM Servers to make sure that no traffic from the Production Server Pool is visible in the Non-Production Server Pool. A simple way to verify this separation is sketched below.
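
Once the networks are in place, the separation can be spot-checked from inside the VMs. The following Python probe is a hypothetical example: the target IP address is a placeholder for a VM on the Production network, and it assumes some service (e.g. sshd on port 22) is listening there. Run from a VM attached only to the Non-Production network, the connection attempt should fail.

    # Quick connectivity probe to spot-check network separation.
    # The target address below is hypothetical; adapt host and port to
    # a service that is known to listen on the production network.
    import socket

    def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # From a nonprod_1 VM towards a prod_1 VM on prod_1_pub_VM_1001:
    print(can_reach("10.10.1.15", 22))   # expected: False if separation works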


Interconnect Network

This network is specifically used for communication between the VMs of an application or database cluster such as Oracle Real Application Clusters. It is very important to separate the traffic on this network, as clusters can react adversely to heavy load on their heartbeat networks.

In this example, prod_1_priv_IC_1101 and nonprod_1_priv_IC_2101 are created on the bond3 interfaces of the corresponding Oracle VM Servers to separate the Interconnect-related traffic from the public application traffic, avoiding unpredictable response times as well as cluster interference.

Backup Network

This network is specifically used for communication between the VMs and the backup solution. It is good practice to separate backup traffic from the Interconnect and Public Access networks to avoid performance degradation while backups are running.

In this example, prod_1_pub_BKP_1201 and nonprod_1_pub_BKP_2201 are created on the bond4 interfaces of the corresponding Oracle VM Servers to separate the backup-related traffic from the public application traffic.

Internal Management Networks

These networks come out of the box with the PCA. In Oracle VM on standard x86 hardware (e.g. X7-2), they need to be configured manually on top of the default configuration.

Server Management Network Channel

The Server Management Network is preconfigured at installation time and generally should not be changed. No user/VM data is transported via the Server Management Network, so separation here would give limited benefit in terms of isolation; communication on this network channel runs only between the Oracle VM Manager and the Oracle VM Servers. In the PCA this network defaults to 192.168.140.0/24 and is shared with the Live Migration channel. As this is a private network, the additional network mgmt_public_eth can be used to access the Oracle VM Servers directly from the data centre. A VLAN, e.g. 101, can be applied via the Network Settings tab in the PCA console; make sure that the relevant switches are configured accordingly.

Live Migrate Network Channel

This channel is used to migrate VMs between Oracle VM Servers. It can be a heavily used network when VMs with large amounts of RAM are migrated between servers. In the PCA this network defaults to 192.168.140.0/24 and is shared with the Private Management channel on bond0; this should not be changed.

Storage Network Channel

A common misconception concerns the usage of the Storage Network Channel. If assigned, it simply allows an additional IP address to be added to ports on the Oracle VM Server for NFS or iSCSI traffic; it does not automatically separate storage traffic and route it via this network. Networks carrying only the Storage Network Channel cannot be attached to VMs without the additional Virtual Machine Network Channel. In the PCA this network is used for the communication with the internal ZFS Storage Appliance via iSCSI over InfiniBand and defaults to 192.168.40.0/24. It is shared with the Cluster Heartbeat Network Channel.

Storage channels are used to present storage to the VMs as devices. They are not used to mount storage such as NFS shares directly inside the VMs; that traffic is handled through the Virtual Machine channel.

Cluster Heartbeat Network Channel

Oracle VM uses OCFS2 for the Server Pool cluster. OCFS2 uses this network channel for heartbeat communication between the Oracle VM Servers in a clustered Server Pool. In the PCA this network defaults to 192.168.40.0/24 and is shared with the Storage Network Channel; this should not be changed.

Important Note for PCA Users

If the PCA is being used, it is strongly recommended not to change the existing default networks (vm_private, vm_public_vlan, mgmt_public_eth, 192.168.40.0, 192.168.140.0) but rather to build additional networks on top of the out-of-the-box configuration.

Conclusion

A relatively simple design is sufficient to enable network separation for workloads in Oracle VM based environments.

The Oracle Private Cloud Appliance is the preferred solution to host platform services for Fusion SaaS@Customer and On-Premise installations of Fusion Applications due to its preconfigured rapid deployment options.


Reference / Further Reading

Looking "Under the Hood" at Networking in Oracle VM Server for x86: https://www.oracle.com/technetwork/articles/servers-storage-admin/networking-ovm-x86-1873548.html

PCA Tenant Groups: https://blogs.oracle.com/virtualization/private-cloud-appliance-pca-tenant-groups-v2

RAC Template Deployment on PCA: https://www.oracle.com/technetwork/server-storage/private-cloud-appliance/deployment-of-oracle-rac-on-pca-4013267.pdf

