This page focuses on the implementation of a dockerised environment to support specific file transfer services, namely XRootD, GridFTP and FDT.
In parallel, the participating NRENs who are interested in the development of the data transfer service can deploy their own DTN servers following some general specifications. Further specifications and guidelines are available on our main page, and a good reference is ESnet's DTN reference implementation: http://fasterdata.es.net/science-dmz/DTN/reference-implementation/.
Once this code is ready for testing, the participating NRENs with servers will be able to carry out the initial testing and evaluation of the DTN software against criteria such as:
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualisation to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and therefore use fewer resources than virtual machines. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
The Docker architecture is a straightforward client-server architecture. The client communicates with the Docker daemon (which can run on the same machine or on a separate remote machine), and the daemon does the building, running and distributing of the Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
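As an illustration, when the daemon listens on its default UNIX socket, the same REST API that the `docker` CLI client uses can be queried directly. This is a sketch assuming a Linux host with a running Docker daemon, the default socket path `/var/run/docker.sock`, and suitable permissions:

```shell
# Query the Docker daemon's REST API over its default UNIX socket.
# This is the same channel the `docker` CLI client uses internally.
curl --unix-socket /var/run/docker.sock http://localhost/version

# Equivalent high-level client command:
docker version
```

Both commands return version information for the client and the daemon, which is a quick way to confirm that the client can reach the daemon over the socket.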
Figure 1.1 Docker architecture [from https://docs.docker.com/get-started/overview/]
The architecture components of Docker are the following:
Figure 1.2 Containers vs VMs [from https://www.weave.works/assets/images/bltb6200bc085503718/containers-vs-virtual-machines.jpg]
The main differences are in terms of scale and portability:
Virtual machines and containers differ in many ways, but the principal difference is that containers provide a way to virtualise the operating system so that multiple workloads can run on a single OS instance, whereas with VMs the hardware is virtualised to run multiple OS instances. Containers' speed, agility and portability make them a key building block of modern software delivery.
Docker allows the memory and CPU consumption of each container to be managed. In our case we do not restrict the containers' memory or CPU: each container has a pre-set CPU and memory allocation, but if it needs more it can request it directly from the host server. In our containers we also configure the GridFTP, XRootD and FDT services on non-default ports. We used the "ctop" tool to obtain per-container statistics (bandwidth, CPU and memory usage). In addition, a client script was created for transferring a very large file; the transfer is monitored with the "bmon" and "iftop" tools to measure the bandwidth usage of the links.
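The client-side transfer test can be sketched as a small shell script. This is a simplified sketch: the local `cp` stands in for the actual FDT/XRootD/GridFTP client invocation, and the file size is illustrative; only the timing and throughput logic is shown.

```shell
#!/bin/sh
# Sketch of a client-side large-file transfer test.
# Assumption: `cp` stands in for the real transfer client
# (e.g. an FDT, XRootD or GridFTP client command).
run_transfer_test() {
    size_mb=$1
    src=$(mktemp)
    dst=$(mktemp)
    # Create a test file of the requested size.
    dd if=/dev/zero of="$src" bs=1M count="$size_mb" 2>/dev/null
    start=$(date +%s)
    cp "$src" "$dst"                    # real setup: transfer to the remote DTN
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -eq 0 ] && elapsed=1   # avoid division by zero on fast copies
    echo "Transferred ${size_mb} MB in ${elapsed}s (~$((size_mb / elapsed)) MB/s)"
    rm -f "$src" "$dst"
}

run_transfer_test 100
```

While such a script runs, `bmon` or `iftop` on the client host shows the bandwidth actually used on the links, which is how the transfers described above are monitored.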