
Purpose

This article provides steps to test latency and throughput between servers and/or datacenters. The tests described here are not meant to measure application performance from an end-user perspective.

 

Overview

This post will provide an overview of some basic throughput and latency testing.

Before testing, check your route. Is it static? Is it dedicated? If you are using the open internet, results will be subject to more fluctuation. Something like FastConnect will give you a more reliable connection.

Are you planning to use a VPN? For example, if your end state is a FastConnect pipe with a VPN between sites, test with that setup in place. Testing without the VPN could skew your results.

Remember that tests should measure round-trip times, not one-way times. TCP requires packets to travel both ways.

Test both bandwidth and latency. If you just ping between sites with no real traffic, you may not be seeing a realistic measure of your performance. If your real application is consuming most of the available bandwidth, your latency may go up.

In order to utilize the tools and tests described in this document, you may need to adjust firewall settings.
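
For example, on a server using firewalld, opening the default iperf port might look like the following. The firewall tool and port number are assumptions; adjust them to match your environment.

sudo firewall-cmd --permanent --add-port=5001/tcp
sudo firewall-cmd --permanent --add-port=5001/udp
sudo firewall-cmd --reload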

These instructions assume you are on a Linux-based operating system.

IMPORTANT: Before testing any cloud-based service, check your hosting policy and any applicable rules to ensure your testing will not violate any policies. If you are unsure, check with your sales representative before executing any tests. Some tests can induce load on shared services.

 

Tools

iperf3

iperf - https://github.com/esnet/iperf

iperf can be used to collect latency and bandwidth statistics for both TCP and UDP. It uses a client-server model, and data can be analyzed from both ends. Among the statistics it collects are throughput, jitter, and packet loss. It is essentially a tool for measuring overall link quality; it does not measure application performance.

Output is text based, but plotting modules are also included for visualizing the data.

This document refers to using iperf3.x

iperf is often available via your Linux package manager, e.g., yum install iperf.
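
For example (package names vary by distribution, and some distributions package iperf version 2 as iperf and version 3 as iperf3):

sudo yum install iperf3
sudo apt-get install iperf3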

 

MTR

MTR is often already installed on Linux operating systems. If it is not, use your package manager to install it (yum install mtr, apt-get install mtr, apt-get install mtr-tiny, etc.).

MTR is useful for looking at general ping latency and lost packets. It performs a combination of ping and traceroute in one command. It does not measure application performance.

 

JMeter

http://jmeter.apache.org/

JMeter is a Java-based tool for running load tests. It can be used to measure the performance of applications.

 

Quick reference

The following is a simple list of steps to collect throughput and latency data.

  • Run MTR to see general latency and packet loss between servers.
  • Execute a multi-stream iperf test to see total throughput.
  • Execute UDP/jitter test if your setup will be using UDP between servers.
  • Execute jmeter tests against application/rest endpoint(s).

 

MTR command

mtr --no-dns --report --report-cycles 60 <ip_address>

--no-dns tells mtr not to resolve DNS names. Name resolution is not needed when you only want to test ping latency and packet loss.

--report generates a report instead of running the interactive display.

--report-cycles 60 tells mtr to send 60 pings to each hop along the route to your destination (at the default rate of one per second, this takes about 60 seconds).

Replace <ip_address> with the host you want to test.

The output of this command will help you see the latency and packet loss on all hops to your destination.

Note that you may need to run this command with sudo (as root).

MTR Output

Sample MTR output:

 

[root@localhost ~]# mtr --no-dns --report --report-cycles 60 192.168.1.128
HOST: localhost          Loss%   Snt   Last    Avg   Best   Wrst  StDev
  1. 10.0.2.2            0.00%    60    0.2    0.3    0.2    0.8    0.1
  2. 192.168.1.1         0.00%    60    1.1    1.1    0.9    3.3    0.4
  3. 192.168.1.128      20.00%    60  232.3  226.8  219.9  242.6    2.9
 

Hops 1 and 2 show zero percent packet loss and low average latencies (0.3 ms and 1.1 ms).

Hop 3 shows 20 percent packet loss, and latency jumps to an average of 226.8 ms.

Hop 3 shows a potential problem. If you see packet loss and latency spikes, this is something to investigate.

 

Sample iperf commands

Note you must run both a client and server to use iperf.

Run the iperf server

On the machine/VM you want to act as the server, run “iperf -s”

The default port iperf will bind to is 5001. You can change this with the -p option, and specify the port you wish to use.

If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.
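
For example, the following sketch starts the server on a non-default port bound to a specific interface; the port number and address are placeholders:

iperf -s -p 5002 -B 10.0.2.15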

iperf server output

You should see output similar to the following:

[root@localhost ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
 

Basic bandwidth measurement

From the machine you are using as the client, execute:

iperf -c 192.168.1.22

Replace the 192.x.x.x IP address with the IP address of the machine you are running the iperf server on.

The results of this command will show you the overall bandwidth stats from the client to the server.

If you are on Linux, consider using the -Z option, which uses the zero-copy method of sending data. It consumes fewer CPU resources.

If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.

Note that this command will execute a single stream – meaning one big pipe from the client to the server. You may not be able to saturate your network link with a single stream.
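
You can also extend the basic command with -t (test duration in seconds) and -i (seconds between interval reports) to get a longer measurement with periodic progress output. The address and values below are only examples:

iperf -c 192.168.1.22 -t 60 -i 10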

Sample output

Output will be similar to the following:

[oracle@localhost ~]$ iperf -c 10.0.2.15
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 10.0.2.15 port 54124 connected with 10.0.2.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  64.7 GBytes  55.5 Gbits/sec

 

This shows 64.7 GBytes of data was transferred in 10 seconds with an average bandwidth of 55.5 Gbits per second. This is a single stream test.

 

Test with multiple streams

From the machine you are using as the client, execute:

iperf -c <server_IP> -P 4

This will generate 4 streams instead of the default of 1. Note that while increasing the number of streams may improve your overall throughput, there is a point of diminishing returns. CPU resources and other system factors will eventually be a bottleneck. If you set the number of streams too high, you will see poorer results.

If you are on Linux, consider using the -Z option, which uses the zero-copy method of sending data. It consumes fewer CPU resources.

If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.
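
To find the point of diminishing returns on your own link, one rough approach is to sweep a few stream counts and compare the last line of each run (the SUM line when multiple streams are used). This is only a sketch, and <server_IP> is a placeholder:

for streams in 1 2 4 8 16; do
  # run each stream count and keep the last (summary) line of output
  echo "=== -P ${streams} ==="
  iperf -c <server_IP> -P ${streams} | tail -1
done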

Sample output

[oracle@localhost ~]$ iperf -c 10.0.2.15 -P 4
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  6] local 10.0.2.15 port 54128 connected with 10.0.2.15 port 5001
[  3] local 10.0.2.15 port 54125 connected with 10.0.2.15 port 5001
[  4] local 10.0.2.15 port 54126 connected with 10.0.2.15 port 5001
[  5] local 10.0.2.15 port 54127 connected with 10.0.2.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec  18.1 GBytes  15.6 Gbits/sec
[  3]  0.0-10.0 sec  28.6 GBytes  24.5 Gbits/sec
[  4]  0.0-10.0 sec  19.1 GBytes  16.4 Gbits/sec
[  5]  0.0-10.0 sec  19.1 GBytes  16.4 Gbits/sec
[SUM]  0.0-10.0 sec  84.8 GBytes  72.8 Gbits/sec

 

This is a test with 4 streams at once.

The output shows the bandwidth for each stream as 15.6, 24.5, 16.4 and 16.4 Gbits per second. The last line, the SUM, shows the total transfer and bandwidth for all 4 streams. This is the overall bandwidth achieved for this test.

 

Measure bidirectional bandwidth

From the machine you are using as the client, execute:

iperf -c <server_IP> -r

Replace <server_IP> with the IP address of the machine you are running the iperf server on.

The results of this command will show you the overall bandwidth stats from the client to the server, AND from the server to the client. This test is useful if you have a lot of bidirectional traffic occurring.

This command will execute a single stream in each direction, one direction at a time.
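
If you want both directions to run at the same time instead of one after the other, classic iperf (version 2) also offers a -d (dual test) option. Option sets differ between iperf versions, so check iperf --help on your system before relying on it:

iperf -c <server_IP> -d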

Sample output

 

[oracle@localhost ~]$ iperf -c 10.0.2.15 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.2.15 port 5001 connected with 10.0.2.15 port 54131
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  5] local 10.0.2.15 port 54131 connected with 10.0.2.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  67.4 GBytes  57.9 Gbits/sec
[  4]  0.0-10.0 sec  67.4 GBytes  57.8 Gbits/sec

 

This shows 2 connections, one going each way between client and server.

You can match up the connection with the output based on the [ID] at the start of each line.

ID [5] bandwidth was 57.9 Gbits/sec – this is the connection from the client to the server.

ID [4] bandwidth was 57.8 Gbits/sec – this is the connection from the server to the client.

 

UDP Jitter test

Restart your iperf server with the -u option to let it accept UDP packets.

iperf -s -u

You should see output similar to the following:

[root@localhost ~]# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:  256 KByte (default)
------------------------------------------------------------

 

On the client, execute

iperf -c <server_IP> -u

The report produced will show you something like the following:

[oracle@localhost ~]$ iperf -c 10.0.2.15 -u
------------------------------------------------------------
Client connecting to 10.0.2.15, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  256 KByte (default)
------------------------------------------------------------
[  3] local 10.0.2.15 port 25171 connected with 10.0.2.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 893 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.127 ms    43/  893 (0.05%)

 

Bandwidth was 1.05 Mbits/sec.

43 of the 893 datagrams sent were lost.

Jitter was 0.127 ms. Jitter is a measure of the variation in packet arrival intervals. In a perfect network, packets arrive at a consistent interval, for example one packet every 2 ms. Jitter can cause packet loss and network congestion, and in audio and video transmission a lot of jitter can degrade the quality of the stream.
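
Note that the client sent at only about 1 Mbit/sec, which is iperf's default target rate for UDP. To exercise the link at a higher rate, specify a target bandwidth with -b; the 100 Mbit/sec value below is only an example:

iperf -c <server_IP> -u -b 100M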

 

A note on window size

TCP window size is an important factor in overall network performance. The window size controls how much data can be sent before an acknowledgement (ACK) must be received. If the TCP window is too small, the sender has to slow down and wait for ACKs so it does not overwhelm the receiver, which limits throughput.

It is possible to tune the TCP window size with iperf. However, for purposes of this article, we are assuming your operating system will autotune TCP for you.

If you want to play with window sizes on your own, or are testing a link above 10Gbps, a good place to start is here: http://fasterdata.es.net/host-tuning/
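
If you do experiment, the -w option sets the window (socket buffer) size on both the client and the server. The 4 MB value here is only an example starting point:

iperf -s -w 4M
iperf -c <server_IP> -w 4M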

 

JMeter command line

You will need to first create a JMeter test for your specific application.

Once completed, place the test's corresponding .jmx file on the server you wish to execute your tests from.

Execute jmeter from the command line with the -n switch, which tells jmeter to run in non-GUI mode.

jmeter -n -t <path_to_jmx_script> -l <file_to_save_results_in>

The -l option will save test run results to a .jtl file. You can import this file into JMeter in GUI mode if you wish to review the results that way. There will also be text output in the console when you execute the test.

All of the test parameters are contained in the .jmx script. If you are comfortable doing so, you can edit this file by hand to change test parameters, for example the ramp-up time or the number of threads.
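
As an alternative to hand-editing the file, you can parameterize the test plan with the __P() property function (for example, setting the Thread Group thread count to ${__P(threads,10)}) and then override values on the command line with -J. The property names here are purely illustrative:

jmeter -n -t <path_to_jmx_script> -l results.jtl -Jthreads=50 -Jrampup=60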

 

 

Michael Shanley

