In this blog I give an overview of load balancer configuration, followed by a practical demonstration that uses the definitions presented.
In the early days of the Internet, web services were hosted on individual servers. The service was accessible via a public IP address that was configured on the server. With the rapid expansion of the technology, one server no longer had enough resources to process all the requests coming from the clients. That was one of the first problems engineers needed to solve.
The quick solution was to expand the server and create a cluster. This is a short-term solution, because a lot of configuration is needed to build a cluster in which both members are active.
One solution to the problem is a load balancer. It is configured with the public IP address of the web service, and it will respond to the requests from the clients. In the background, there is a pool of servers that have the same configuration and are able to process the requests.
Load balancers distribute the requests from the clients to the pool of servers based on an algorithm (policy).
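The simplest such policy is round robin, where requests are handed to the backends in turn. The sketch below illustrates the idea; the IP addresses are made up for the example and the logic is a conceptual illustration, not the load balancer's actual implementation:

```python
from itertools import cycle

# Hypothetical backend pool; addresses are illustrative only.
backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
rr = cycle(backends)

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(rr)

# Six incoming requests cycle through the pool twice.
assignments = [pick_backend() for _ in range(6)]
print(assignments)
```

Each server receives every third request, so the load spreads evenly as long as the requests are roughly equal in cost.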
There are multiple load balancer "shapes" (S - 100 Mbps, M - 400 Mbps, L - 8000 Mbps). The shape is the total throughput that can be handled by the load balancer.
The public IP that holds the service is called the “Listener” and the servers from the pool are called “Backend Servers”. The load balancer periodically checks the status of each individual backend server; if a server is not responding, it will not send additional traffic to it and will exclude it from the pool.
The entity that is defined by a list of servers, a load balancing policy and a health check is called a “Backend set”. This instructs the load balancer on how to distribute the requests from the clients to the backend servers.
With the cloud, the load balancer sits at the center of an elastic infrastructure. The backend servers can be resized (autoscaled) dynamically based on the CPU and memory utilization of the individual servers in the pool.
For more information about OCI load balancer please consult the documentation.
Now with the basic definitions in place, let’s start building an LB.
First, we need to choose between a Public or a Private LB. Because in this blog I will create a public service accessible from the Internet, I will create a Public LB.
Now we need a public subnet where the LB will reside. Personally, I am in favor of a dedicated subnet for the LB. This ensures that the LB has its own routing and security. For this blog, I’ve created a VCN with 192.168.0.0/16 as CIDR space and, in it, two subnets: one for the backend servers (192.168.0.0/24) and one for the LB (192.168.1.0/29).
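The sizing can be sanity-checked with Python's standard ipaddress module; the CIDRs below are the ones from the demo VCN:

```python
import ipaddress

# The VCN and its two subnets, as described above.
vcn = ipaddress.ip_network("192.168.0.0/16")
backend_subnet = ipaddress.ip_network("192.168.0.0/24")
lb_subnet = ipaddress.ip_network("192.168.1.0/29")

# Both subnets must fall inside the VCN CIDR, and they must not overlap.
print(backend_subnet.subnet_of(vcn))        # True
print(lb_subnet.subnet_of(vcn))             # True
print(backend_subnet.overlaps(lb_subnet))   # False
print(lb_subnet.num_addresses)              # 8 - a /29 is plenty for one LB
```

A /29 yields only 8 addresses, which is enough here since the subnet hosts nothing but the load balancer itself.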
Start by setting the name of the LB, the shape, and the networking details:
Choose the load balancing policy (in this case Round Robin), the backend servers, and the health checks. Notice the usage of the HTTP health check on /index.html:
For the backends, create two Linux VMs with Nginx. This is a piece of software that, among other functionalities, can act as a web server.
Configure the listeners (HTTP) and the port (80).
The LB looks like this:
On the backend servers, Nginx uses the following folder as the web root: /usr/share/nginx/html
There are three files: index.html, dev.html, test.html. In order to test the basic LB functions, we will manually request the HTML files:
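The same check can be rehearsed locally with Python's built-in HTTP server standing in for an Nginx backend. The web root and page contents are illustrative; on the real backends the files live in /usr/share/nginx/html (writing there requires root):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Create the three test pages in a throwaway directory.
webroot = pathlib.Path(tempfile.mkdtemp())
for name in ("index.html", "dev.html", "test.html"):
    (webroot / name).write_text(f"<h1>{name}</h1>")

# Serve the directory on a random free local port.
handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=str(webroot)
)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Request one of the files the way we would curl the LB.
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/test.html").read().decode()
print(body)  # <h1>test.html</h1>
server.shutdown()
```

Against the real deployment, the equivalent step is simply requesting http://<LB-public-IP>/test.html from a browser or curl.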
At this point, the LB has a basic configuration in order to distribute requests to the backend servers.
For this test I created two free domains: caandrei-test.tk and caandrei-dev.tk. I use them to demonstrate more advanced LB configurations.
Requests coming to the LB for caandrei-test.tk should go to test.html, and requests for caandrei-dev.tk to dev.html. For this, the following configuration is needed:
Create "hostnames". These are applied to a listener in order to enhance the request routing. More information can be found in the documentation.
Create the path route sets. This feature allows different applications served by the same LB to be routed to the correct backend set. In our example, we have only one backend set.
Create the path route set for test. In total, there will be 2 path route sets:
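Conceptually, a path route set is an ordered list of path rules, each pointing at a backend set. The sketch below illustrates the matching; the rule paths and backend set name are made up for the example (in this demo both rules resolve to the same, single backend set):

```python
# Illustrative path route set: path prefix -> backend set name.
path_route_set = [
    {"path": "/test", "backend_set": "demo_backend_set"},
    {"path": "/dev",  "backend_set": "demo_backend_set"},
]

def route(path):
    """Return the backend set whose path rule matches the request path."""
    for rule in path_route_set:
        if path.startswith(rule["path"]):
            return rule["backend_set"]
    return "default_backend_set"  # fall through when no rule matches

print(route("/test.html"))  # demo_backend_set
print(route("/other"))      # default_backend_set
```

With several backend sets, this is what lets one LB front multiple applications behind different URL paths.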
Create the ruleset. This function will allow you to:
Restrict access to the LB from certain CIDR blocks;
Specify the allowed HTTP methods;
Specify URL redirects;
Specify request header rules;
Specify response header rules.
For this exercise, I use only the URL redirects. These redirect requests for the domain name to a specific HTML file. The rule set for dev looks like this:
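The effect of the two redirect rules can be sketched as a simple hostname-to-target mapping. The hostnames and targets mirror the demo setup; the matching logic itself is an illustration, not OCI code:

```python
# Redirect rules from the demo: bare requests to each domain are
# redirected to that domain's HTML file.
redirect_rules = {
    "caandrei-dev.tk": "/dev.html",
    "caandrei-test.tk": "/test.html",
}

def apply_redirect(host, path):
    """Return the path the client is redirected to, or the original path."""
    if path == "/" and host in redirect_rules:
        return redirect_rules[host]
    return path

print(apply_redirect("caandrei-dev.tk", "/"))    # /dev.html
print(apply_redirect("caandrei-test.tk", "/"))   # /test.html
```

Requests that already name a file pass through untouched; only the bare domain is rewritten.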
Create the rule for test. In total, there will be 2 rules:
Modify the listener to incorporate: Hostnames, Path Route Set and Ruleset.
Create a second listener for the test:
I will connect from a browser to http://caandrei-test.tk. Notice that there are no redirect rules yet, so the request goes to the default HTML file (index.html):
Edit the listener and add the ruleset:
Repeat the web browser test:
Notice that the request went to test.html.
This article was an introduction to configuring load balancers and presented a technical demonstration of the definitions learned.