Transfer rates using IPSec over the Internet

March 3, 2020 | 4 minute read
Andrei Stoian
Master Principal Cloud Architect | North America Cloud Engineering

In this blog post we discuss and analyze the relative performance we can obtain by running IPSec over the Internet, a common scenario we implement in OCI to achieve bidirectional connectivity between customer premises and OCI. I use the term "relative" because, when the Internet is the underlying IP transport infrastructure, transfer rates depend on many factors: distance between the endpoints, delay, latency, jitter, and the type of data that needs to be sent and received. Most of these parameters cannot be controlled by either party.

For more information about the IPSec service offered by OCI, called VPN Connect, see: https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/managingIPsec.htm

The IPSec configuration itself is not covered in this post. The CPE used is a Linux system running LibreSwan. The LibreSwan configuration to peer with OCI is documented at: https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Reference/libreswanCPE.htm

In our tests we use iperf3 (https://iperf.fr/) to measure the maximum achievable bandwidth between the endpoints. The tests are performed with a single stream and with multiple streams. In IP networks, using multiple streams increases bandwidth utilization and can greatly improve the amount of data sent and received.

We could also test with rsync (https://linux.die.net/man/1/rsync) file transfers run as parallel copies to observe network performance; however, we will use only iperf3 for the current tests.
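For reference, one common way to run parallel rsync copies is to fan out per-file transfers with xargs. This is a hypothetical sketch only: the source directory, remote user and host, and degree of parallelism are placeholders, not values from these tests.

# Run up to 4 rsync processes in parallel, one per top-level entry in /data.
ls /data | xargs -P 4 -I {} rsync -az /data/{} user@remote-host:/backup/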

For the results to be as accurate as possible, the setup consists of:

- On-premises location: Ohio

- OCI DC1: US-East (Ashburn)

- OCI DC2: US-West (Phoenix)

Case 1: Ohio to US-East (Ashburn)

Network Topology

The on-premises CPE uses the default MTU of 1500 bytes.
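Whether a full 1500-byte packet actually traverses the path without fragmentation can be checked with a don't-fragment ping (1472 bytes of ICMP payload plus 28 bytes of IP/ICMP headers equals 1500); the target below is the OCI test instance from this setup:

# Linux iputils ping: -M do sets the don't-fragment bit, -s sets the ICMP payload size.
ping -M do -s 1472 172.31.0.2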

[root@ip-10-0-1-139 ec2-user]# ping 172.31.0.2
PING 172.31.0.2 (172.31.0.2) 56(84) bytes of data.
64 bytes from 172.31.0.2: icmp_seq=1 ttl=62 time=20.2 ms
64 bytes from 172.31.0.2: icmp_seq=2 ttl=62 time=19.9 ms
64 bytes from 172.31.0.2: icmp_seq=3 ttl=62 time=19.3 ms
^C
--- 172.31.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 19.337/19.815/20.208/0.378 ms

We have an average ICMP RTT of 19.815 ms.

We start the iperf3 server on 172.31.0.2 and the client on 10.0.1.139, with default parameters and one stream of data:
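The commands for this test take the following form:

# On the OCI instance (172.31.0.2), start iperf3 in server mode.
iperf3 -s

# On the on-premises client (10.0.1.139), run a single-stream test with default parameters.
iperf3 -c 172.31.0.2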

As we can observe, we get a very good transfer rate in this direction: about 1.13 Gbps with a single stream.

Please note that transfer rates might vary between attempts.

Let's try with 15 parallel iperf3 streams:
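One way to run 15 parallel streams is the client-side -P option (this assumes parallel streams within a single iperf3 process rather than 15 separate client invocations):

# On the client, open 15 parallel streams to the same server.
iperf3 -c 172.31.0.2 -P 15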

At the end of the test, bandwidth utilization increased from 1.13 Gbps with a single stream to about 1.7 Gbps.

Please note that transfer rates might vary between attempts.

Case 2: Ohio to US-West (Phoenix)

Network Topology

[root@ip-10-0-1-139 ec2-user]# ping 172.29.1.3
PING 172.29.1.3 (172.29.1.3) 56(84) bytes of data.
64 bytes from 172.29.1.3: icmp_seq=1 ttl=62 time=62.0 ms
64 bytes from 172.29.1.3: icmp_seq=2 ttl=62 time=61.9 ms
64 bytes from 172.29.1.3: icmp_seq=3 ttl=62 time=61.9 ms
64 bytes from 172.29.1.3: icmp_seq=4 ttl=62 time=61.9 ms
^C
--- 172.29.1.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 61.874/61.929/62.006/0.308 ms

We have an average ICMP RTT of 61.92 ms. This is roughly 3 times higher than in the previous case, when we tested from Ohio to Ashburn.

We start the iperf3 server on 172.29.1.3 and the client on 10.0.1.139, with default parameters and one stream of data:
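The commands mirror Case 1, pointed at the Phoenix endpoint:

# On the OCI Phoenix instance (172.29.1.3), start the server.
iperf3 -s

# On the on-premises client (10.0.1.139), run a single-stream test with default parameters.
iperf3 -c 172.29.1.3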

There is a big difference between the Ohio-Ashburn and Ohio-Phoenix single-stream results. This is expected: the throughput of a single TCP stream is roughly bounded by the TCP window size divided by the RTT, so tripling the RTT cuts the achievable single-stream rate to about a third. As an illustrative example, a 1 MB window at a 62 ms RTT limits one stream to roughly (1 MB x 8 bits/byte) / 0.062 s, or about 135 Mbps.

Please note that transfer rates might vary between attempts.

Let's try with 15 parallel iperf3 streams:
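Again, the client side requests the parallel streams:

# 15 parallel streams toward the Phoenix endpoint.
iperf3 -c 172.29.1.3 -P 15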

At the end of the test we observed that, with multiple streams, we can still obtain substantial aggregate bandwidth on this connection.

Please note that transfer rates might vary between attempts.

In conclusion, the distance between the endpoints is an important factor to consider. The results you obtain can range from similar to very different, depending on the distance, the service providers in between, and other factors.


