Best Practices from Oracle Development's A‑Team

ISV Architecture Validated Design

Tal Altman
Sr. Manager

This is the second post in a series on the ISV Validated Design.

This blog series covers the following topics:

  1. ISV Home Page
  2. ISV Architecture Validated Design 
    • Requirements, Design, Solution
    • Life of a packet
    • High Availability (HA) Concepts
  3. Core Implementation
  4. Failover Implementation – you can choose between two implementation options
  5. Operations
    • How to add a customer to an existing POD
    • How to create a new POD
    • References, key files and commands

Oracle offers a wide array of software and services. Oracle Cloud is an Infrastructure as a Service (IaaS) offering used by Oracle's customers. Some of these customers, known as Independent Software Vendors (ISVs), want to leverage the Oracle Cloud to host a "managed service" as a turnkey cloud-based solution for their End Customers. This works well out of the box if the ISV has a small set of customers. What happens if the ISV wants to scale to 40, 60, 100, or more customers?

These ISVs want to host all of their end customers in a single Oracle Cloud Infrastructure (OCI) tenancy. Inside the tenancy they host multiple customers, and each End Customer requires a logically separate compartment containing their dedicated resources.

Note: please work with your Oracle account team to periodically review OCI platform limits. Resources such as Dynamic Routing Gateways (DRGs) will require a "Service Limit Increase" for this architecture to work.



Requirements

  • The ISV's management network must be able to reach every customer VCN, giving a single pane of glass for managing all environments
  • ISV design should be simple and easy to administer
  • ISV solution should work with or without NAT
  • ISV could have 100+ customers
  • ISV needs a small set of connections to OCI to access Management hosts and Customer VCNs
  • End Customers should have access to their resources via VPN or FastConnect
  • End Customers should not reach or have access to other End Customer VCNs


A typical End Customer deployment:



Design and Scaling concepts


We will use a combination of Local Peering Gateways (LPGs) and OCI VNIC attachments to achieve the scale the ISV is looking for.

To scale, we will use a "pod" design.

  • A POD contains a "management" network that can reach 10 to 20 End Customers.

    • PODs will leverage Local Peering Gateways (LPGs)  to connect to each End Customer network within the POD

    • A POD is a logical grouping of customers.

  • By default, supports 1 Management VCN peering with up to 10 customer VCNs

  • The ratio can be extended to 1:20 if needed

Management Servers from the ISV are deployed in the ISV’s management VCN. 

  • Customer VCNs are completely segregated and isolated from one another. 

  • Each Customer VCN peers only with the ISV management VCN (1:1 ratio)

  • Security Policies and Route tables should be inspected to limit traffic as needed 

  • The ISV subnet can be small (such as a /28 block)
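The POD grouping described above can be sketched in a few lines. This is a minimal illustration, assuming the default 1:10 management-to-customer peering ratio from this post (extendable to roughly 1:20); the customer names are placeholders.

```python
# Minimal sketch of the POD model: each POD is one management VCN that
# peers, via LPGs, with up to MAX_PEERS_PER_POD customer VCNs.

MAX_PEERS_PER_POD = 10  # default LPG peering ratio per management VCN

def assign_to_pods(customers, max_peers=MAX_PEERS_PER_POD):
    """Group customer names into PODs of at most `max_peers` customers."""
    return [customers[i:i + max_peers]
            for i in range(0, len(customers), max_peers)]

customers = [f"customer-{n:03d}" for n in range(1, 26)]  # 25 customers
pods = assign_to_pods(customers)
# 25 customers at a 1:10 ratio -> 3 PODs of sizes 10, 10, 5
```

With the extended 1:20 ratio, the same 25 customers would fit in 2 PODs instead of 3.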



Small Scale ISV Connectivity Overview

  • In this architecture, both the customer and the ISV use a VPN or FastConnect to reach their VCN resources.
  • Local Peering Gateways provide the connectivity between the VCNs.
  • This scales to only about 20 end customers before you run into OCI limitations.
  • The larger-scale designs add a virtual router to scale further.


Medium Scale Design

  • In the medium scale design the ISV would put management servers in each POD
  • The ISV would then use VPN or FastConnect to reach each POD.
  • This can scale up to about 15 PODs.
    • Each POD can host 10 to 20 customers
    • Total scale = 15 * 20 = 300 customer VCNs
  • This design has the following benefits
    • Easy to troubleshoot
    • Uses native OCI features and doesn't require any Unix/Linux/Windows host configuration
  • This design has the following challenges
    • The ISV has to set up multiple management servers and VPN/FastConnect circuits
    • No single pane of glass
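The medium-scale arithmetic above is worth a sanity check, because each POD's management VCN also counts toward the regional VCN limit discussed later in this post. A minimal sketch:

```python
# Sanity check of the medium-scale numbers. Each POD's management VCN
# also counts toward the 300-VCN-per-region limit, so 15 full PODs of
# 20 customers each slightly exceed it -- plan for headroom.

PODS = 15
CUSTOMERS_PER_POD = 20
REGION_VCN_LIMIT = 300

customer_vcns = PODS * CUSTOMERS_PER_POD   # 300 customer VCNs
total_vcns = customer_vcns + PODS          # plus one management VCN per POD
fits_in_region = total_vcns <= REGION_VCN_LIMIT
```

In practice, this means either running slightly fewer PODs, slightly smaller PODs, or requesting a service limit increase.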



Large Scale Design

  • In the large-scale design we add a set of virtual routers
  • The virtual routers are hosted in a dedicated subnet
  • OCI route rules forward traffic to the virtual router's private IP
  • Depending on the Compute instance shape you can have multiple VNICs.
  • If you need more VNICs you can add more compute instances or start with a larger compute instance shape.
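The shape-to-VNIC trade-off above can be turned into a simple sizing estimate. This is a hypothetical helper, not an official formula; VNIC counts per shape vary, so check the OCI compute shape documentation for real values.

```python
import math

# Hypothetical sizing helper for the large-scale design: given how many
# PODs the virtual-router tier must face and how many VNICs the chosen
# compute shape supports, estimate the number of router instances needed.

def routers_needed(pod_count, vnics_per_shape, reserved_vnics=1):
    """Reserve one VNIC for the primary (management-facing) interface."""
    usable = vnics_per_shape - reserved_vnics
    return math.ceil(pod_count / usable)

# e.g. 15 PODs on a shape with 8 VNICs (7 usable per router) -> 3 routers
```

Starting with a larger shape (more VNICs per instance) reduces the instance count but concentrates more PODs behind each router, which matters for the HA discussion below.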



Limitations with the design

 Key things to consider:

  • An OCI tenancy can have up to 300 VCNs per region (customer, management, and ISV VCNs combined)
  • If you need to scale into multiple regions, you will have to rebuild this architecture in each region.
    • If you only need one management network globally, you can use Remote Peering Connections to peer your management network with a remote region's infrastructure.
  • Forecast growth potential for resources such as DRGs, FastConnects, VPNs, and Load Balancers
  • Be careful with IP address overlap between the ISV and your End Customers
  • There is a limit to the number of virtual NICs (VNICs) a virtual machine or bare metal host can have. See the table below.
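The IP-overlap warning above can be checked mechanically before onboarding a customer, using the standard-library `ipaddress` module. The CIDR blocks in this sketch are made-up examples.

```python
import ipaddress

# Flag any proposed customer CIDR that overlaps the ISV management CIDR.
# All addresses below are illustrative placeholders.

def overlapping(mgmt_cidr, customer_cidrs):
    """Return the customer CIDRs that overlap the ISV management CIDR."""
    mgmt = ipaddress.ip_network(mgmt_cidr)
    return [c for c in customer_cidrs
            if ipaddress.ip_network(c).overlaps(mgmt)]

conflicts = overlapping("10.0.0.0/28", ["10.0.0.0/24", "172.16.0.0/16"])
# -> ["10.0.0.0/24"]
```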


HA Considerations

  • For high availability you will want more than one virtual router in case of a configuration or OS issue
  • You can use a secondary private IP that can "float" between the virtual routers
  • Using HA software, the floating IP can be moved via OCI CLI commands.
  • Route tables should target the floating IP address so they don't have to be updated each time a failure occurs.
  • There should be a floating IP facing the ISV network and one facing EACH POD.
  • The benefit is that no routing changes need to be made during a failover.
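As a sketch of the failover step above, the helper below builds (but does not run) the OCI CLI command that HA software would invoke to move the floating secondary private IP onto the surviving router's VNIC. The OCID and address are placeholders, and the exact flags should be verified against current OCI CLI documentation.

```python
# Builds the OCI CLI command that reassigns a floating secondary private
# IP to another VNIC during failover. The VNIC OCID and IP address are
# placeholders for illustration only.

def build_failover_cmd(vnic_ocid, floating_ip):
    return [
        "oci", "network", "vnic", "assign-private-ip",
        "--vnic-id", vnic_ocid,
        "--ip-address", floating_ip,
        "--unassign-if-already-assigned",  # detach the IP from the failed router
    ]

cmd = build_failover_cmd("ocid1.vnic.oc1..exampleuniqueid", "10.0.0.10")
```

In a real deployment, the HA software on each router would run this command (for example via `subprocess`) when it detects that its peer has failed.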


