Thursday, November 26, 2020

Docker in a Nutshell


Docker is a tool that lets you build, distribute, and run software with great ease almost anywhere. It is based on the creation of elements called "containers", which resemble virtual machines but are much more lightweight. Containers work directly with the host system's kernel and manage the resources and dependencies needed to run the software inside them. In this post we will explore the basic concepts and usage of Docker to easily build, distribute, and run a piece of software.

Refer to the official documentation at https://docs.docker.com/


Containers


Docker's containers comprise the software application to be executed, its dependencies, and the resources it needs. We can understand them as "lightweight virtual machines", although they actually run as isolated processes natively on the host system.

Containers help standardize the way software products are delivered. They are flexible enough for any type of application, portable so the software runs the same way on every machine, scalable for managing resources or multiple instances, and secure because they only get access to the parts of the system they need.


Images


Images correspond to the executable artifacts of software that can be materialized as a running container. We can understand images as blueprints for containers, meaning that we can create as many containers as we want based on an image.

An image will determine a set of operations to create a container based on source code, CLI commands, and other images.


DockerHub


Just like GitHub lets you store and share code repositories, DockerHub lets you store and share images, facilitating the distribution of the building blocks of containers.

To get an image it is enough to run the following command:

    docker pull name-of-image
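For example, to download version 12 of the official Node.js image (the part after the colon is the tag, which selects a specific version):

    docker pull node:12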

To upload an image to DockerHub you need an account; then run the following commands:

    docker login

    docker push name-of-image


Creating an Image


To create an image based on a project we need a file called "Dockerfile". This file contains several instructions, one per line, that will give form to the image. Each of these instructions produces a layer; Docker interprets them efficiently to identify other required images, avoid repeated work through a cache, and allow a sequential flow of execution.

This is an example of a Dockerfile:

# Build on top of the official Node.js 12 image
FROM node:12

# Copy the dependency manifests into the image
COPY ["package.json", "package-lock.json", "/usr/src/"]

# Set a folder as working directory for the following instructions and the container
WORKDIR /usr/src

# Run CLI commands during the build
RUN npm install

# Copy the rest of the source code
COPY [".", "/usr/src"]

# Document the port the application listens on
EXPOSE 3000

# Command to execute as the container's main process
CMD ["npx", "nodemon", "index.js"]

Then, to build the image, from the directory containing the Dockerfile:

    docker build -t image-name .

The last argument of the above command is the Docker build context, that is, the folder whose contents are available to the build and where Docker looks for the Dockerfile by default. To use a Dockerfile located elsewhere, specify it with the -f option.

The image name can follow this convention:   dockerhub-user-name/image-name:tag-name

Although the user name and tag are optional, the user name is required when pushing to DockerHub (the tag defaults to latest).
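For example, assuming a hypothetical DockerHub account called myuser, an already-built local image can be renamed to follow this convention before pushing:

    docker tag image-name myuser/image-name:1.0

    docker push myuser/image-name:1.0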


Running an Image


This command runs an Image, creating an instance of a running Container:

    docker run [options] image-name

Where options can be:

    -d          run in detached mode (in the background)
    --name      specify a unique name for the container
    -p          specify a port mapping (local-port:container-port)
    --env       set an environment variable
    -it         interactive mode, used for images like Ubuntu, shells, etc.
    --rm        automatically remove the container once it stops

For instance:

    docker run -d --name myapp -p 3000:3000 --env MONGO_URL=mongodb://mydb:27017/test myimage

Will run a container named myapp from the image myimage, mapping the host machine's port 3000 to the container's port 3000 and setting the MONGO_URL environment variable. Notice that this variable uses mydb as the host in the connection string; for that name to resolve, a container called mydb must share a network with myapp (see Connecting Containers below).


Managing Containers


Running containers can be listed using  docker ps  (use the -a option to show all containers)

To gracefully stop a container use the  docker stop container-name-or-id  command. This sends the container's main process a SIGTERM signal first; if the process does not respond, a SIGKILL signal follows after some time.
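The waiting time before the SIGKILL (10 seconds by default) can be adjusted with the -t option:

    docker stop -t 30 container-name-or-id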

To delete a container use   docker rm container-name-or-id (use the -f option to force)

Some running containers can execute commands/programs. To do this:

    docker exec [options] container-name-or-id command
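For example, to open an interactive shell inside the myapp container created earlier (assuming its image includes bash):

    docker exec -it myapp bash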


Connecting Containers


Containers can be connected through virtual networks in order to communicate and interact with each other. We can create a virtual network that containers can attach to using the command

    docker network create --attachable network-name

To connect running containers to this network:

    docker network connect network-name container-name-or-id

Virtual networks can be inspected using

    docker network inspect network-name
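Putting this together for the earlier myapp/mydb example, a shared network is what makes the hostname mydb resolvable from myapp (a sketch, assuming both containers are already running):

    docker network create --attachable my-network

    docker network connect my-network mydb

    docker network connect my-network myapp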



Data Access


There are several ways in which Docker provides access to files in a system: Bind Mounts, Volumes, direct file copy, and tmpfs mounts (for temporary data).


Bind Mounts

A Bind Mount, or "directory mirroring", is an operation that links a host system's directory with a container's directory, so that whatever happens in one is reflected on the other.

To achieve this, just include the -v option followed by  host-directory-path:container-directory-path, right before the name of the image in a docker run command. For example:

    docker run -d --name my-mongo-db -v /home/username/somefolder:/data/db mongo

Warning: As this method will allow for full read/write access to a directory, it could mean a security risk if misused.
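When the container only needs to read the shared files, this risk can be reduced by appending the :ro (read-only) flag to the mapping. For example, serving a static site with the official nginx image (the paths are illustrative):

    docker run -d --name my-web -v /home/username/site:/usr/share/nginx/html:ro nginx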


Volumes

These are an evolved, more secure alternative to Bind Mounts. Volumes are data spaces for containers that are persisted and managed by Docker itself, and can only be manipulated by users with the right privileges.

To create a volume use

    docker volume create volume-name

Then, to use it for the execution of an image, use the --mount option:

    docker run -d --name my-mongo-db --mount src=volume-name,dst=path-to-directory mongo

This way, when another container accesses the created volume, it will find all the changes persisted from the first container inside the directory it points to.
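Existing volumes can be listed and inspected with:

    docker volume ls

    docker volume inspect volume-name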


Docker Compose


There is an incredible complement to Docker called Docker Compose that greatly simplifies the creation, communication, and administration of multiple containers. To install it, refer to https://docs.docker.com/compose/install/

Docker Compose requires a docker-compose.yml file structured as follows:

- The version of the Compose file (for support of different features according to the version)
- A list of Services (which will be translated into container instances)
- Volume management, among other features

As an example:

version: "3.8"

services:
  myapp:
    build: . # Specify the folder of the Docker build context
    environment: # Define environment variables
      MONGO_URL: "mongodb://mydb:27017/test"
    depends_on: # What other services are needed to run this one
      - mydb
    ports: # Port mapping (host machine:container)
      - "3000:3000"
    volumes: # Define data access through volumes (automatically created)
      - .:/usr/src # Mount everything from here to /usr/src
      - /usr/src/node_modules # Except this folder (kept in an anonymous volume)

  mydb:
    image: mongo

The above Compose file, working on version 3.8, will create two containers: myapp and mydb.

myapp needs its image to be built before running. This can be done using

    docker-compose build 

The above command will build an image using the Dockerfile in the specified directory, named with the convention:  project-folder_service-name

To run, after building the necessary images, execute  docker-compose up  which will start all the services as containers and automatically connect them through a virtual network. It also supports a detached mode with a -d option.

To shut down gracefully and remove all containers, execute  docker-compose down.

It is also possible to specify the number of instances of containers from a specific service using

    docker-compose up -d --scale service-name=number-of-instances 

(If multiple instances require different host ports, it is possible to specify a port range in the Compose file, for example 3000-3001, so that each instance is assigned a free port from the range.)
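For instance, to run two instances of the myapp service from the example above, its port mapping could be written with a host port range (a sketch, assuming your Compose version supports the range syntax):

    ports:
      - "3000-3001:3000"

Then:  docker-compose up -d --scale myapp=2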

