ISV Implementation Details - Part 3A - Routing and Security

February 4, 2020 | 5 minute read
Tal Altman
Sr. Manager

This is the third entry (Part 3A) in a series of blogs about the ISV Validated Design.

This blog series covers the following topics:

  1. ISV Home Page
  2. ISV Architecture Validated Design 
    • Requirements, Design, Solution
    • Life of a packet
    • High Availability (HA) Concepts
  3. Core Implementation
  4. Failover Implementation – you can choose between two options for implementation
  5. Operations
    • How to add a customer to an existing POD
    • How to create a new POD
    • References, key files and commands

 

Introduction

 

In the last blog entry we reviewed a few different architectures that can help an ISV scale on Oracle Cloud Infrastructure. In this blog entry we will focus on how to implement and set up the OCI networking constructs that support the Large Scale design shown below. We will focus on the basics of building the networks for the management servers, the virtual routers, the first Customer POD, and two customers (Leaf customer #1 and Leaf customer #2). In a subsequent blog on operational steps I will go into further detail on how to add a new POD to an existing set of virtual routers.

 

Large Scale Architecture

 

Implementation Details

 

To validate this architecture we built the following Virtual Cloud Networks (VCNs), subnets, and instances; a diagram and table can be found below. In our validation we built a set of Linux instances with multiple network cards. We attach a secondary Virtual Network Interface Card (VNIC) to each virtual router so that each router has access to both networks. In practice you can substitute the Linux instances with an ecosystem partner's or a commercial virtual router if you're more comfortable with that. The concepts applied in this blog may vary from one vendor to the next, or from one Linux distribution to the next.
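If you prefer the CLI over the Console, the VCNs and subnets described above can be created with the OCI CLI. A minimal sketch follows; the compartment OCID, VCN OCID, CIDR blocks, and display names are illustrative assumptions, not values from our validation.

```shell
# Create a VCN for the first Customer POD (placeholder compartment OCID and
# CIDR; repeat with suitable CIDRs for the ISV and customer VCNs).
oci network vcn create \
  --compartment-id <compartment-ocid> \
  --cidr-block 10.1.0.0/16 \
  --display-name ISV-POD1

# Carve a subnet out of it for the POD-side virtual router VNICs.
oci network subnet create \
  --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid-from-previous-step> \
  --cidr-block 10.1.0.0/24 \
  --display-name pod1-vrouter-subnet
```

The same pattern applies to the ISV management and vRouter subnets; only the CIDRs and names change.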

Network diagram:

Network Diagram

 

Instances built to support this architecture:

Instances and their IP addresses

 

Routing

Routing is probably the most complex topic in this design. In summary, we have to add routes for networks that aren't locally attached. For example, the ISV's management servers must know that traffic destined for Customer #1 has to traverse the Virtual Router's floating IP address.

In this architecture we traverse the following networks:

  • ISV Management subnet
  • ISV vRouter subnet
  • Virtual routers (Linux instances)
  • Local Peering Gateway in ISV-POD1
  • Customer subnets

 

 

A few notes about routing in each network:

1. In the ISV Management and vRouter subnets

  • You'll need to add static routes for each customer, where the next hop is the floating IP address assigned to the Virtual Router pool.

2. Virtual routers should have routes for:

  • the ISV management servers, where the next hop is the default gateway of the virtual router subnet
  • the End Customer networks, where the next hop is the virtual router's VNIC attached to the POD that the customer belongs to

3. End Customer Networks must be configured as follows:

  • Add a route for the central ISV management VCN where the next hop is the LPG
  • The POD VCN should have an LPG route table
  • The LPG route table should forward traffic destined for the ISV VCN to the private IP of the virtual router
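Taken together, the three notes above amount to a handful of static routes and OCI route rules. A rough sketch follows; every OCID, CIDR, gateway address, and interface name below is an illustrative assumption, not a value from our validation.

```shell
# Note 1 (ISV management/vRouter subnets): a VCN route rule whose next hop
# is the floating private IP of the virtual router pair.
oci network route-table update --rt-id <mgmt-route-table-ocid> \
  --route-rules '[{"destination": "10.100.1.0/24",
                   "networkEntityId": "<floating-private-ip-ocid>"}]'

# Note 2 (on a Linux virtual router): a route toward the management servers
# via the vRouter subnet's default gateway (assumed 10.0.2.1), and a route
# toward a customer network out of the POD-facing secondary VNIC (assumed ens5).
sudo ip route add 10.0.1.0/24 via 10.0.2.1
sudo ip route add 10.100.1.0/24 via 10.1.0.1 dev ens5

# Note 3 (POD VCN's LPG route table): forward ISV-VCN-bound traffic to the
# virtual router's private IP on the POD side.
oci network route-table update --rt-id <pod-lpg-route-table-ocid> \
  --route-rules '[{"destination": "10.0.0.0/16",
                   "networkEntityId": "<vrouter-pod-private-ip-ocid>"}]'
```

Note that `--route-rules` replaces the table's full rule set, so in practice you would include any existing rules alongside the new one.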
     

End to End Routing and Security Details

In this section I show specific examples of the route table entries and security list modifications needed to provide end-to-end connectivity from the ISV management servers to the Customer VCNs. There are multiple places in this design that require updating, including but not limited to the ISV's VCN, the Linux virtual routers, the POD networks, and the Customer VCNs.

Routing Details

 

ISV Networks

Linux Virtual Router details*

  • We will deep dive on Linux configuration in another blog entry.

Linux routing configuration
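The full Linux configuration is covered in the follow-up blog, but at a minimum an instance acting as a virtual router needs IPv4 forwarding enabled (and, on the OCI side, the source/destination check disabled on its VNICs, which is set in the Console or API rather than on the host). A minimal sketch, where the persistence file path is an assumption that may vary by distribution:

```shell
# Enable IPv4 forwarding so the instance will route between its two VNICs.
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots (file name/path may vary by distro).
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-vrouter.conf
```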

POD, Customer Routing and Security Details

Pod networking details

 

 

 

Security Lists

  • The best practice is to have your security lists wide open while building, testing, and validating end to end connectivity from one network to the next. You can then start locking down the security lists as needed.
  • For the most part you'll need to make sure that ICMP is enabled for all codes and types. This way you can confirm if ICMP Echo and Echo Replies are returned. You may also have some ICMP redirect messages and other codes thrown depending on your network or host configuration.
  • You might want to enable other protocols, such as SSH, HTTP, SSL, etc., as appropriate for your applications to talk to one another.
  • If implementing clustering, make sure that the appropriate ports are open for the cluster software. A screenshot of the clustering security list can be found below. These ports are only needed on the vRouter subnet.
    • Corosync/Pacemaker uses TCP ports 2224, 3121, and 21064, and UDP port 5405.
    • The default transport is UDP on ports 5404, 5405, and 5406; usually port 5405 is used.
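Besides the OCI security list rules, the same ports must be open in the host firewall on the vRouter instances. A sketch for firewalld-based distributions (the port list mirrors the bullets above):

```shell
# Open the Pacemaker/Corosync ports on a vRouter instance (firewalld syntax).
sudo firewall-cmd --permanent --add-port=2224/tcp       # pcsd
sudo firewall-cmd --permanent --add-port=3121/tcp       # pacemaker_remote
sudo firewall-cmd --permanent --add-port=21064/tcp      # DLM
sudo firewall-cmd --permanent --add-port=5404-5406/udp  # corosync transport
sudo firewall-cmd --reload

# On RHEL-family distributions, firewalld's predefined service covers the
# same ports in one rule:
# sudo firewall-cmd --permanent --add-service=high-availability
```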

Security-list

 

Next Steps

Congratulations, you made it through the basics of building your networks and security lists. Since the Linux hosts are not yet deployed and configured, you can't test end-to-end connectivity. In our next blog, we will cover the implementation details of configuring Linux-based virtual routers.

 


