This page focuses on the implementation of a dockerised environment to support specific file transfer services, namely XRootD, GridFTP and FDT.

In parallel, the participating NRENs interested in the development of the data transfer service can deploy their own DTN servers following some general specifications. Further specifications and guidelines are provided on our main page; a good external reference is http://fasterdata.es.net/science-dmz/DTN/reference-implementation/.

Once this code is ready for testing, the participating NRENs with servers will be able to carry out the initial testing and evaluation of the DTN software against criteria such as:

  • Performance of data transfers for different data workflow characteristics: large bulk transfers, lots-of-small-files transfers, and streaming data transfers (an illustrative bulk-transfer test follows this list).
  • Ease of use for end-users.
  • Trust negotiation between the end hosts and security of the transfer.
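
As a rough illustration of the first criterion, a bulk-transfer test between two DTNs could look like the sketch below. This is only a sketch: the host name, paths and test file are placeholders, and it assumes FDT's fdt.jar is available on both ends.

    # On the receiving DTN: start FDT in server mode, listening on TCP port 54321
    java -jar fdt.jar -p 54321

    # On the sending DTN: push a large test file to the receiver
    # (dtn-receiver.example.org and both paths are placeholders)
    java -jar fdt.jar -c dtn-receiver.example.org -p 54321 \
         -d /data/incoming /data/testfiles/100GB.bin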

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualisation to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.




Docker architecture

The Docker architecture is a straightforward client-server architecture. The client communicates with the Docker daemon (which can be installed on the same machine or on a separate remote machine), which does the building, running, and distributing of the Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
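
For example, the daemon's REST API can be queried directly over the default UNIX socket (assuming your user has permission to access /var/run/docker.sock); the same information is returned by the docker version command:

    # Ask the local Docker daemon for its version over the UNIX socket
    curl --unix-socket /var/run/docker.sock http://localhost/version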


Figure 1.1 Docker architecture [from https://docs.docker.com/get-started/overview/]





Docker architecture components

The architecture components of Docker are the following:

  • Docker daemon (the system service “dockerd”) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes, either alone or in coordination with other daemons installed on other machines.
  • Docker client (docker) is the command-line tool for communicating with the Docker daemon. The docker command uses the Docker API.
  • Docker registry stores Docker images (anyone can set up their own registry on premises). An example is Docker Hub, a public registry that anyone can use (Docker is configured to look for images on Docker Hub by default). The registry is accessed through a few standard commands, i.e. docker pull, docker push and docker run, which cause the required images to be pulled from, or pushed to, your configured registry (see the sketch after this list).
  • Docker objects: Docker creates and uses images, containers, networks, volumes, plugins, and more; all of these are handled as objects.
    • Image: a read-only template with instructions for creating a Docker container. An image may be based on another image. You can create your own images using a Dockerfile.
    • Container: a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or the CLI via the Docker client. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
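
A short sketch ties these objects together below. The image name, the registry host registry.example.org and the container name are placeholders for illustration, not an actual deployment:

    # Build an image from a Dockerfile in the current directory
    docker build -t registry.example.org/dtn/fdt:latest .

    # Run a container from that image, then list it
    docker run -d --name fdt-test registry.example.org/dtn/fdt:latest
    docker ps --filter name=fdt-test

    # Push the image to / pull it from the configured registry
    docker push registry.example.org/dtn/fdt:latest
    docker pull registry.example.org/dtn/fdt:latest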


Figure 1.2 Containers vs VMs [from https://www.weave.works/assets/images/bltb6200bc085503718/containers-vs-virtual-machines.jpg]





Containers vs. VMs

The main differences are in scale and portability:

  • Containers are small in disk size. They package nothing more than an app and the libraries and files it needs to run. Their lightweight nature and use of a shared operating system make moving them across multiple environments effortless.
  • VMs are large in disk size. They include their own OS, allowing them to perform numerous resource-intensive functions at once. The greater resources available to VMs allow them to abstract, split, duplicate, and emulate entire servers, OSs, desktops, databases, and networks.

Virtual machines and containers differ in numerous ways, but the principal difference is that containers provide a technique to virtualise an OS so that different workloads can run on a single OS instance, whereas with VMs the hardware is virtualised to run multiple OS instances. The speed, agility, and portability of containers make them a major pillar of modern software deployment.

Within Docker containers you can manage memory and CPU consumption. In our case we do not restrict the containers' memory or CPU: each container has a pre-set CPU and memory value, but if it needs more, it can request it directly from the host server. In our containers we also configure the GridFTP, XRootD and FDT services on ports other than the defaults. We used “ctop” to obtain statistics from our containers (bandwidth, CPU and memory used). In addition, a client script was created for transferring a very large file; it is monitored with the “bmon” and “iftop” tools to measure the bandwidth usage of the links. A sketch of this setup follows.
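
The commands below sketch that setup. The image names, the host port and the interface name are placeholders, not our actual configuration:

    # Start the GridFTP container on a non-default port
    # (host port 2812 mapped to GridFTP's default 2811 inside the container)
    docker run -d --name gridftp -p 2812:2811 example/gridftp

    # We leave containers unrestricted; limits could be imposed like this:
    # docker run -d --memory=8g --cpus=4 example/xrootd

    # Live per-container CPU, memory and network statistics
    ctop

    # Bandwidth usage of the host's links during a transfer
    bmon
    iftop -i eth0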