TCP or HTTP: Which one to use for Listener Protocol in OCI Load Balancer?

September 10, 2019 | 6 minute read

The OCI load balancing service distributes client traffic to a set of servers. The service supports only TCP and HTTP based traffic. So, what does it mean to support TCP and HTTP traffic? After all, doesn't HTTP use TCP as its transport protocol? How does the Load Balancer treat TCP traffic differently from HTTP traffic? How do you configure the Load Balancer to handle traffic of either type? And more importantly, what are the criteria for choosing the protocol type (TCP or HTTP) in the Load Balancer configuration? Read on to find out!

Listener

The primary entry point into a Load Balancer for incoming traffic is a Listener. A Listener is configured to “handle” either HTTP or TCP traffic. This is achieved by choosing either TCP or HTTP as the “Protocol” in the Listener configuration.

Although a Listener can handle a single protocol type, a Load Balancer can have multiple Listeners with different protocol types (HTTP, TCP), as long as the ports are unique, thus supporting both types simultaneously. The next two sections look into how client traffic is load balanced when the “Protocol” field is set to either TCP or HTTP in the Listener configuration.
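For reference, a Listener of each type could also be created programmatically. The sketch below uses the OCI Python SDK; the Load Balancer OCID, backend set name, listener names and ports are placeholder assumptions, and the exact model and field names should be checked against the SDK version you use.

    import oci

    config = oci.config.from_file()                  # reads credentials from ~/.oci/config
    lb_client = oci.load_balancer.LoadBalancerClient(config)

    LB_OCID = "ocid1.loadbalancer.oc1..example"      # placeholder OCID
    BACKEND_SET = "apache_backend_set"               # placeholder backend set name

    # A listener that tunnels raw TCP connections to the backend set
    lb_client.create_listener(
        create_listener_details=oci.load_balancer.models.CreateListenerDetails(
            name="tcp_listener",
            default_backend_set_name=BACKEND_SET,
            port=80,
            protocol="TCP",
        ),
        load_balancer_id=LB_OCID,
    )

    # A listener that terminates and re-issues HTTP requests
    lb_client.create_listener(
        create_listener_details=oci.load_balancer.models.CreateListenerDetails(
            name="http_listener",
            default_backend_set_name=BACKEND_SET,
            port=8080,                               # ports must be unique per listener
            protocol="HTTP",
        ),
        load_balancer_id=LB_OCID,
    )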

Load Balancing TCP Traffic

To demonstrate this scenario, I will be using the following setup:

To keep things simple, the Apache servers use HTTP over port 80.  Accessing the default page on Apache#1 returns “This is Apache httpd1 nossl running on OCI”.  Similarly, Apache#2 returns “This is Apache httpd2 nossl running on OCI”.  Also, the load balancing policy is round-robin with equal weight.
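For reference, a backend set matching this setup could be created with the OCI Python SDK roughly as sketched below. The backend IP addresses, names and OCID are placeholder assumptions; verify the model and field names against your SDK version.

    import oci

    config = oci.config.from_file()
    lb_client = oci.load_balancer.LoadBalancerClient(config)
    models = oci.load_balancer.models

    # Two Apache backends on port 80, equal weight, round-robin distribution
    lb_client.create_backend_set(
        create_backend_set_details=models.CreateBackendSetDetails(
            name="apache_backend_set",                # placeholder name
            policy="ROUND_ROBIN",
            backends=[
                models.BackendDetails(ip_address="10.0.1.11", port=80, weight=1),  # Apache#1 (placeholder IP)
                models.BackendDetails(ip_address="10.0.1.12", port=80, weight=1),  # Apache#2 (placeholder IP)
            ],
            health_checker=models.HealthCheckerDetails(protocol="HTTP", port=80, url_path="/"),
        ),
        load_balancer_id="ocid1.loadbalancer.oc1..example",   # placeholder OCID
    )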

To handle TCP traffic, choose “TCP” from the “Protocol” dropdown in the Listener configuration.

Next, I issue a request to the Load Balancer from the browser.

The response is from Apache #1. I reload the page two more times, but the same page is returned. Looking at the Wireshark trace, we see the following:

We can observe from the trace that after the TCP handshake (#65-67), the browser sent 3 requests to the Load Balancer (#68, 88, 98), and all 3 requests were sent within that connection. The corresponding responses all came from Apache#1 (#70, 92, 99): #70 was an HTTP 200 OK response with the content “This is Apache httpd1 …”, while the next two responses were HTTP 304 Not Modified. After the third request, the connection was closed (initiated by the server).
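The same behavior can be reproduced without a browser. The short sketch below uses the Python requests library against a placeholder Load Balancer address; because a Session keeps the underlying TCP connection alive, all requests travel over one tunneled connection and should be answered by the same backend.

    import requests

    LB_URL = "http://192.0.2.10/"    # placeholder for the Load Balancer's public IP

    # One Session = one kept-alive TCP connection reused for every request
    with requests.Session() as session:
        for _ in range(3):
            print(session.get(LB_URL).text.strip())

    # Expected output with a TCP listener: the same
    # "This is Apache httpd1 nossl running on OCI" line three times.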

After the connection closure, if I reload the page, this time I get a response from Apache #2.

Reloading once more still gets me the same content from Apache#2. The Wireshark trace reveals the following:

After a new connection has been opened (#92-94), two requests are sent to the server (#95, 112) and finally the connection gets closed (#132-138).  Both responses (#96, 113) are from Apache#2.

The takeaways from this section are as follows:

  1. The Load Balancer routes all requests within a single TCP connection to the same Backend (Apache#1 or #2).
  2. The Load Balancer alternates the distribution of incoming connections between the two Backends (because the traffic distribution policy here is round robin).
  3. The net effect is that the TCP connections are "tunneled" through the Load Balancer.  The application payload is opaque to the Load Balancer.

Load Balancing HTTP Traffic

To handle HTTP traffic, choose “HTTP” from the “Protocol” dropdown in the Listener configuration.

Issuing the same request as before returns the response from Apache#1.

Reloading the page, however, fetches the result from Apache#2.

Every subsequent request now alternates between the two Apache servers.  Let us look into the Wireshark traces.

We can see in the trace that after a TCP connection has been established (#146-148), two requests are made within that same TCP connection. The first response (#151) comes from Apache#1. The subsequent response (#205) is from Apache #2.
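Running the same kind of requests-based check (placeholder address again) makes the contrast with the TCP case clear: even though all requests share one kept-alive connection, an HTTP listener distributes them request by request.

    import requests

    LB_URL = "http://192.0.2.10/"    # placeholder for the Load Balancer's public IP

    with requests.Session() as session:        # still a single kept-alive TCP connection
        for _ in range(4):
            print(session.get(LB_URL).text.strip())

    # Expected output with an HTTP listener: responses alternating between
    # "This is Apache httpd1 ..." and "This is Apache httpd2 ..." on every request.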

The takeaways from this section are as follows:

  1. The Load Balancer distributes incoming requests within the same TCP connection alternately between the two Backends.
  2. Each request is handled independently of the connection.
  3. This also tells us that, unlike TCP tunneling, in this case the application payload is reconstructed at the Load Balancer before being transported further to the destination. This allows the Load Balancer to inspect HTTP requests and responses, apply filters as defined, route messages, and so on (see the sketch after this list).
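As an illustration of this request-level processing, the sketch below attaches a simple rule set that injects a request header before the request is forwarded to a backend. The names and OCID are placeholder assumptions, and the rule model names should be verified against your OCI Python SDK version.

    import oci

    config = oci.config.from_file()
    lb_client = oci.load_balancer.LoadBalancerClient(config)
    models = oci.load_balancer.models

    # A rule set that adds a header to every request passing through an HTTP listener
    lb_client.create_rule_set(
        load_balancer_id="ocid1.loadbalancer.oc1..example",   # placeholder OCID
        create_rule_set_details=models.CreateRuleSetDetails(
            name="add_request_header",
            items=[
                models.AddHttpRequestHeaderRule(header="X-Forwarded-By", value="oci-lb"),
            ],
        ),
    )

    # Rule sets only make sense for HTTP listeners; a TCP listener never sees the HTTP payload.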

TCP or HTTP?

Now that we have a clear understanding of what is happening under the hood, let us evaluate the criteria for choosing either TCP or HTTP for the Listener protocol.

  1. Start with the payload: for anything other than HTTP, TCP is the choice.
  2. For an HTTP payload:
    • If SSL is used and a fully encrypted channel is desired between the client (Browser) and the server (Apache), TCP is the choice. This is SSL tunneling mode: http://www.ateam-oracle.com/load-balancing-ssl-traffic-in-oci (a configuration sketch follows this list).
    • For other SSL use cases, HTTP is the choice.
    • If HTTP-based request and response processing is desired at the Load Balancer, then choose HTTP. This includes HTTP header handling, request routing, rule sets, and session persistence.
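As referenced above, the SSL tunneling choice amounts to a TCP listener on port 443 with no SSL configuration on the Load Balancer itself, so the encrypted bytes are passed through untouched to backends that terminate SSL. A minimal sketch with placeholder names and OCID:

    import oci

    config = oci.config.from_file()
    lb_client = oci.load_balancer.LoadBalancerClient(config)

    # TCP listener on 443: the Load Balancer never decrypts the traffic,
    # it simply tunnels the encrypted connection to the backend set.
    lb_client.create_listener(
        create_listener_details=oci.load_balancer.models.CreateListenerDetails(
            name="ssl_tunnel_listener",
            default_backend_set_name="apache_ssl_backend_set",  # backends listening on 443
            port=443,
            protocol="TCP",
        ),
        load_balancer_id="ocid1.loadbalancer.oc1..example",     # placeholder OCID
    )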

Summary

In this blog, we looked into the OCI Load Balancer's Listener configuration with respect to the Protocol field. We saw how the Load Balancer behaves with both TCP and HTTP as the Listener Protocol. Based on these observations, we made recommendations for choosing TCP or HTTP for the Protocol field.

Amit Chakraborty

Amit is a Solutions Architect focusing on Cloud Security, including Identity, Governance, Network Security and Architecture. Amit advises customers, from executives to architects and developers, and helps them design and implement security solutions on the Oracle Cloud Infrastructure platform. Before joining Oracle, Amit worked in software engineering as an architect and developer on mobile, security, cloud, web, internet and wireless technologies. With a strong background in software engineering and Computer Science, Amit brings a unique perspective to solving customer security needs in the cloud.

