
There are many factors that affect the performance of data transfers. To achieve a high level of performance, it is of key importance that the DTN elements of a Science DMZ deployment are well configured and well tuned.

Each use case should be examined in terms of its specific requirements. It is important to keep in mind that changes that improve performance in one data transfer scenario might degrade another, and to understand what effect these changes have on end-to-end transfer performance as a whole. For example, TCP tuning parameters for long-distance transfers generally differ from those for short-distance transfers. Specific values for such parameters are available from various sources. ESnet provides general performance advice [FASTERDATA], including settings suitable for certain DTN setups, e.g. Linux tuning information [ESnet]. Here, areas of interest are discussed, with references to further information.


DTNs usually mount a connected file system, whether a Storage Area Network (SAN) or a High Performance Computing (HPC) file system, and use a network interface to transmit or receive data files. Dedicated transfer tools such as GridFTP [GridFTP], XRootd [XRootd], XDD [XDD], FDT [FDT], BBCP [BBCP], and FTS [FTS-CERN] are best installed on a DTN instance to achieve high input/output performance for data transfers.

Since DTNs are placed in a high-performance network "on-ramp" at sites, as shown in Figure 1.1 above, for security reasons only software for dedicated data transfers is installed on the servers, with access allowed only to the endpoint sites (not open to normal Internet traffic). Any in-network filtering is typically performed by efficient ACLs rather than by full, heavyweight stateful firewalls of the type that would protect the rest of the site network for its day-to-day business traffic.

Please note that while following the practices below for tuning the DTN endpoints of a data transfer, it is also important that the local network architecture is appropriately designed.

In the following sections, we describe examples of DTN tuning for Linux DTNs. More information and detail, for example on pacing and MTU settings, can be found in [FASTERDATA2].









Networking

Various kernel parameters affect network settings. These parameters can be inspected and modified using the sysctl tool or via the files under /proc/sys/. Below, they are referred to by their sysctl names.
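As an illustration, parameters can be made persistent in an /etc/sysctl.conf-style fragment. The values below are placeholders only, not recommended settings; suitable values depend on the deployment (see [FASTERDATA]):

```shell
# Example /etc/sysctl.conf fragment (placeholder values, not recommendations)
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864

# Apply the file at runtime with:     sysctl -p
# Inspect a single parameter with:    sysctl net.core.rmem_max
# Equivalent read via /proc/sys
# (dots become slashes):              cat /proc/sys/net/core/rmem_max
# Change a value at runtime (root):   sysctl -w net.core.rmem_max=67108864
```

Settings applied with `sysctl -w` do not survive a reboot; only values in /etc/sysctl.conf (or a file under /etc/sysctl.d/) are reapplied at boot.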

Socket Parameters

The following are some of the settings that affect socket networking parameters for all protocols.

net.core.rmem_default

net.core.wmem_default

net.core.rmem_max

net.core.wmem_max

The rmem parameters refer to the socket receive buffer size and the wmem parameters to the send buffer size. For TCP, a sending host typically needs to buffer enough data to cover the bandwidth-delay product (BDP) of the link, i.e. enough memory for all bits in flight; the higher the RTT of the link, the more memory is required (see the TCP Throughput Calculator [TTC] for an example of a BDP and buffer size calculator). As the names imply, the parameters ending in default set the default value used, and those ending in max set the maximum value that can be requested by a program. A privileged program can set a buffer size beyond the maximum value [socket].
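The BDP rule above can be illustrated with a short calculation. The path figures used here (10 Gbit/s over a 100 ms RTT) are hypothetical examples, not recommendations:

```python
def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> int:
    """Bandwidth-delay product: the number of bytes 'in flight' needed
    to keep a path of the given bandwidth and round-trip time full."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# Hypothetical example: a 10 Gbit/s path with a 100 ms round-trip time
bdp = bdp_bytes(10e9, 0.100)
print(bdp)  # 125000000 bytes, i.e. roughly 119 MiB

# For TCP to request a buffer of this size, net.core.rmem_max and
# net.core.wmem_max must be at least this large.
```

Note how the result scales linearly with RTT: the same 10 Gbit/s path at 10 ms RTT needs only a tenth of the buffer, which is why long-distance and short-distance transfers call for different tuning.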



