In this post I want to test the performance of an OCI Load Balancer. Below you can see the topology used:
I will choose a 100 Mbps shape for the LB and use a Linux VM as the back-end. A client connects to the server from the Internet.
The LB will use a TCP port 80 listener, and the back-end server will also listen on TCP port 80.
In the first test I will install iperf on the back-end server and run it as a server on TCP port 80.
The client connects to the server through the Load Balancer, and iperf measures the network throughput.
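The setup above can be sketched with iperf3 (assuming a yum-based image on both ends; the Load Balancer address below is a placeholder, not the real test IP):

```shell
# On the back-end server: install iperf3 and listen on TCP port 80
# (root is needed to bind a port below 1024)
sudo yum -y install iperf3
sudo iperf3 -s -p 80

# On the client: run a 60-second throughput test through the LB's
# public IP (203.0.113.10 is a placeholder address)
iperf3 -c 203.0.113.10 -p 80 -t 60
```
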
Besides the client connections, we can also observe the health checks made by the load balancer.
With this iperf test I validated the shape of the LB.
In the second test I will use an HTTP benchmark tool in order to validate the number of connections per second.
On the server I will install nginx, a lightweight web server. On the client I will install wrk, an HTTP benchmark tool.
Nginx is easy to install via sudo yum install nginx. After the installation, the service is started with sudo systemctl start nginx.
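Putting those steps together, with a quick check that the default page is being served (the curl check is an addition, not from the original setup):

```shell
# Install and start nginx on the back-end server
sudo yum -y install nginx
sudo systemctl start nginx
sudo systemctl enable nginx   # optional: start nginx on boot

# Verify locally that the default page answers with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
```
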
On the client, wrk is downloaded and compiled using this tutorial: https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux
sudo yum -y groupinstall 'Development Tools'
sudo yum -y install openssl-devel git
git clone https://github.com/wg/wrk.git
cd wrk
make
sudo cp wrk /usr/local/bin/
From the client, the benchmark is run using 12 threads and 4000 connections for 300 seconds. Below is the result.
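With those parameters, the wrk invocation looks like this (again, the LB address is a placeholder):

```shell
# 12 threads, 4000 open connections, 300-second run against the LB listener
wrk -t12 -c4000 -d300s http://203.0.113.10/
```
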
Checking the metrics page of the Load Balancer, the number of connections averaged around 1000 and the throughput reached 705 MB/minute.
Multiplying the result by 8 (bytes to bits) and dividing by 60 (minutes to seconds) gives the LB throughput: 94 Mbps.
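The conversion can be checked directly in the shell:

```shell
# 705 MB/minute * 8 bits/byte / 60 seconds/minute ~= 94 Mbps
echo $(( 705 * 8 / 60 ))
```

This matches the 100 Mbps shape of the LB, minus some overhead.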
With these tests we validated the LB shape with two methods: iperf on TCP port 80, and HTTP connections per second. The number of connections in a production environment will vary; in my tests I used the default NGINX page, which is very light.