
The DTN (Data Transfer Node) Focus Group has performed several tests on GTS (GÉANT Testbed Service) in order to obtain useful results that the NRENs can compare and replicate in their own tests. The following matrix summarises the set-up of each test, the parameters tuned, the software installed and the performance achieved, as well as the links to the information on how to install each software package in GTS. Finally, some comments related to the set-up and the tests are included.




Setting up DTN tests on GTS 

The GÉANT Testbeds Service (GTS) provides the network research community with dedicated experimental networks. The aim of GTS is the testing of novel networking and telecommunications concepts, at scale, and across a geographically realistic European footprint. In terms of its users, GTS is intended to aid research teams exploring novel SDN-based solutions that require a high-performance distributed infrastructure. GTS can also be used by application and software development teams needing an isolated testbed to demonstrate their designs without affecting live internet traffic. GTS is logically isolated from the production GÉANT network to guarantee the integrity of live applications, and it can support multiple isolated networks concurrently, allowing teams to work without affecting each other [GTS].

The following figure shows the GTS nodes setup map in Europe:

[Figure: GTS topology and DTN test locations in GTS]

[Figure: GTS GUI with two DTN hosts and a link]

The tests run in the GTS testbed were:

  1. Virtual machines, short distance (AMS-AMS):
    1. 1 CPU
    2. 2 CPU
    3. 4 CPU
  2. Virtual machines, long distance (AMS-LON):
    1. 1 CPU
    2. 2 CPU
    3. 4 CPU
  3. Bare metal servers, short distance (HAM-PAR).
  4. Bare metal servers, long distance (LON-PRA):
    1. R430
    2. R520
  5. Dockerised environment on bare metal servers, short distance (HAM-PAR).
  6. Dockerised environment on bare metal servers, long distance (LON-PRA).




The tests used up to four Bare Metal Servers (BMS) with two different setups; both BMS setups were connected directly with 10 Gbps links. The Focus Group has also produced examples of tests and set-ups on Bare Metal Servers and in virtualised environments, using both VMs (provided as a set-up option from the GTS testbed administration page) and Docker containers.
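
For the Dockerised tests, one straightforward set-up is a container that shares the host's network stack, so the transfer tools see the BMS 10 Gbps interface directly. The commands below are a minimal sketch assuming an Ubuntu 18.04 base image and a host directory /data holding the test files; the image, paths and container name are illustrative rather than the exact configuration used in GTS.

# Minimal sketch of a Dockerised DTN host (illustrative names and paths):
# --network host avoids NAT/bridge overhead; /data is bind-mounted for the test files
docker run -d --name dtn-test --network host -v /data:/data ubuntu:18.04 sleep infinity

# Enter the container to install and run the transfer tools
docker exec -it dtn-test bash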


Simplified tables:



Virtual machines   1 CPU                   2 CPU                   4 CPU
Nodes/Tools        AMS-AMS     AMS-LON     AMS-AMS     AMS-LON     AMS-AMS     AMS-LON
iPerf              9.90 Gb/s   9.90 Gb/s   9.90 Gb/s   9.90 Gb/s   9.90 Gb/s   9.90 Gb/s
gridFTP            8.30 Gb/s   8.36 Gb/s   8.86 Gb/s   8.47 Gb/s   8.50 Gb/s   7.51 Gb/s
FDT                9.32 Gb/s   7.90 Gb/s   9.19 Gb/s   8.49 Gb/s   8.98 Gb/s   7.77 Gb/s
Xrootd             2.60 Gb/s   2.60 Gb/s   2.60 Gb/s   2.60 Gb/s   2.60 Gb/s   2.60 Gb/s

Hardware testing   Docker
Nodes/Tools        HAM-PAR (R430)   LON-PRA (R430)
iPerf              9.2 Gb/s         9.0 Gb/s
gridFTP            8.53 Gb/s        8.50 Gb/s
FDT                8.87 Gb/s        8.70 Gb/s
Xrootd             8.00 Gb/s        8.00 Gb/s

Hardware testing   Bare Metal Servers (BMSs)
Nodes/Tools        HAM-PAR (R430)   LON-PRA (R520)   LON-PRA (R430)
iPerf              9.41 Gb/s        9.32 Gb/s        9.43 Gb/s
gridFTP            8.58 Gb/s        3.30 Gb/s        8.52 Gb/s
FDT                9.39 Gb/s        4.12 Gb/s        9.39 Gb/s
Xrootd             8.00 Gb/s        3.13 Gb/s        7.99 Gb/s




More test details:

The tests started with DTNs using several software tools installed in virtual machines (VMs) with 1, 2 or 4 CPUs, and continued with the same software tools on Bare Metal Servers (BMSs).
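
For reference, the following client-side commands illustrate how a single file transfer can be launched with each of the transfer tools evaluated, assuming the corresponding server-side services (a gridFTP server, an FDT server started with java -jar fdt.jar, and an xrootd server) are already running on the remote DTN. The hostname, paths and stream counts are placeholders for illustration, not the exact parameters used in these tests.

# gridFTP client: copy a local file to the remote DTN using 8 parallel streams
globus-url-copy -vb -fast -p 8 file:///data/testfile gsiftp://dtn-server.example.org/data/testfile

# FDT client: send the file to the remote FDT server, writing into /data
java -jar fdt.jar -c dtn-server.example.org -d /data /data/testfile

# XRootD client: copy the file to the remote xrootd server
xrdcp /data/testfile root://dtn-server.example.org//data/testfile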

Hardware: two types of Bare Metal Servers (BMS) were used:

  • Dell R430 (2x Intel® Xeon® E5-2660 v3 @ 2.6 GHz, 25 MB cache, 20C/40T in total, 128 GB ECC DDR4 2133 MHz RAM, 6x 372 GB SSD @ 6.0 Gb/s)
  • Dell R520 (1x Intel® Xeon® E5-2450 v2 @ 2.5 GHz, 10 MB cache, 8C/16T, 32 GB ECC DDR3 1600 MHz RAM, 2x 372 GB SSD @ 6.0 Gb/s)

OS used for client and server: Ubuntu 18.04 Server

Performance metric: performance was measured as the effective transfer rate (data file size / elapsed real time). For each transfer, the time taken to move a large file (e.g. 250 GB or 420 GB) from the client to the server was recorded, and the bandwidth utilization was calculated from the file size and the elapsed time.
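
As a worked example of this metric (the numbers are purely illustrative, not measured values): a 250 GB file transferred in 240 seconds corresponds to an effective rate of roughly 8.3 Gb/s.

# Illustrative calculation of the effective transfer rate
SIZE_GB=250      # size of the transferred file in GB
ELAPSED_S=240    # elapsed real time of the transfer in seconds
# effective rate in Gb/s = (size in GB * 8 bits per byte) / elapsed seconds
echo "scale=2; $SIZE_GB * 8 / $ELAPSED_S" | bc    # prints 8.33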

Parameters tuned:

sudo sysctl -w net.ipv4.tcp_rmem="4096 16777216 2147483647"

sudo sysctl -w net.ipv4.tcp_wmem="4096 16777216 2147483647"
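
These sysctl settings only last until the next reboot. One way to make them persistent, assuming a standard Ubuntu 18.04 host (the file name 99-dtn.conf is an arbitrary choice), is:

echo 'net.ipv4.tcp_rmem = 4096 16777216 2147483647' | sudo tee -a /etc/sysctl.d/99-dtn.conf
echo 'net.ipv4.tcp_wmem = 4096 16777216 2147483647' | sudo tee -a /etc/sysctl.d/99-dtn.conf
sudo sysctl --system        # reload all sysctl configuration files
sysctl net.ipv4.tcp_rmem    # verify the values currently in effect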

In the first investigation, DTNs in virtualised environments using VMs were tested. For the direct measurement of the links between the client site and the server, iperf was used. The results in virtualised environments showed that all the programs evaluated achieved high bandwidth utilization except XRootd. The reason was the virtualisation itself: XRootd is highly dependent on hardware resources, and the virtualised environment, with the hypervisor sitting between XRootd and the physical system (i.e. no direct access to the physical link, the disks or the buffer memory), resulted in low bandwidth utilization. The other DTN services and client programs investigated (i.e. FDT, gridFTP) achieved higher bandwidth utilization.

This first experiment was executed on virtual servers with different distances between the client and the server: both VMs were in Amsterdam for the short-distance tests, while one VM was in Amsterdam and the other in London for the long-distance tests. Additionally, the investigation showed that in the VM tests better results were achieved with 2 CPUs than with 4 CPUs. These differences can have several causes, such as the server architecture, Hyper-V VM load balancing, or the DTN tools not making good use of the additional CPU resources. It is important to note that this investigation did not have access to change the VM parameters.
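
As an illustration of such a direct link measurement, an iperf3 test between the two hosts could look like the following (the hostname, duration and number of parallel streams are assumed values, not the exact options used in these tests):

# On the server-side VM/BMS:
iperf3 -s

# On the client-side VM/BMS: 30-second TCP test with 4 parallel streams
iperf3 -c dtn-server.example.org -t 30 -P 4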

As mentioned before, the results may be affected by the fact that the links are not dedicated and other tests could have been running in GTS simultaneously. This could explain why the tests run between Amsterdam and London with 4 CPUs gave worse results than all the rest, as they were run on different days. The tests were repeated at different time slots to see whether a pattern could be found regarding performance, but no significant changes were observed.


