
Containers - A Brief History of Virtualization from Bare Metal to Docker Containers

Standardized, efficient, highly-portable boxes within which to run software.

  1. Overview
  2. A Brief History of Virtualization
  3. Containers in Detail
  4. Docker Setup
  5. Docker Commands
  6. Try Docker
  7. Docker Networking
  8. Docker Volumes
  9. Communications Among Containers
  10. Docker Devices
  11. Building Docker Images
  12. Deploy A Containerized App
  13. Starting/Stopping Multiple Containers Together
  14. Conclusions

Overview

Concept

A container is a small, standardized, portable, self-contained virtual environment within which application code can run.

A Brief History of Virtualization

Concept

Virtualization is a collection of software technologies that enable software applications to run on virtual hardware and/or virtual operating systems. Examples include virtual machines and containers.

Bare metal machines

In ancient times, a server was a physical device affectionately called a “box” or a “machine”.

Bare metal machines (continued)

Bare metal machines give their users complete control over the machine

Bare metal machines (continued again)

A bare metal machine

Bare metal machines (continued once more)

These boxes had (and still have) some inconvenient properties.

Blade and rack servers

Computer hardware, loaded with the operating system and applications, was eventually reduced to a single hardware card.

Blade and rack servers (continued)

A blade server

Blade and rack servers (continued again)

Blade servers and rack servers are not without their complications.

Virtual machines

With increases in computing power, an entire machine could be virtualized

Virtual machines (again)

Virtual machine

Virtual machines (continued again)

For most use-cases, virtual machines have significant benefits over traditional bare metal, rack, and blade servers.

Virtual machines (continued once more)

But as with all technology, virtual machines have their downsides.

Containers

Containers are a refinement of the concept of a virtual machine.

Containers (continued)

Virtualization evolution

Containers in Detail

Advantages

Containers are virtual environments that contain the bare minimum necessary to deploy an application on any machine - physical or virtual.

Reality check

While containers do not need to encapsulate an entire operating system, they can include all or parts of one (excluding the kernel) if desired.

Players

Docker is the leader in the container field. However, the field is still new, open, and competitive:

The main cloud service providers all support containers:

False Dichotomy

“Kubernetes vs. Docker” is a false dichotomy. Docker and Kubernetes are not exactly competitors…

Image

A container’s configuration is specified in its “image”

Registry

A container registry is a central server that hosts images

Security

Containers do not run in total isolation from one-another, since they share the kernel and at least some of the host operating system with other containers on the same host machine.


Automation

Containers can be integrated with automation tools, such as Jenkins, Travis CI, CircleCI, GitHub Actions, or via settings within container registries such as Docker Hub.
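
Whatever tool is used, the automation ultimately runs the same Docker commands you would run by hand. A typical continuous integration job, sketched here with a hypothetical image name, might execute something like:

# log in to the registry, build the image, and push it - the steps a CI job typically automates
docker login
docker build -t yourusername/yourapp:latest .
docker push yourusername/yourapp:latest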

Docker Setup

Install and run it

In order to build and run Docker images and containers, you will need to install Docker.
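
Once Docker is installed and running, a quick sanity check is to print the version and run Docker’s own test image:

docker --version        # confirm the Docker CLI is installed
docker run hello-world  # download and run Docker's minimal test image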

Docker Commands

Manage containers

A few commands to help manage running containers:
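
For example (a representative, not exhaustive, selection):

docker ps                               # list running containers
docker ps -a                            # list all containers, including stopped ones
docker stop <container_id or name>      # stop a running container
docker start <container_id or name>     # restart a stopped container
docker rm <container_id or name>        # delete a stopped container
docker logs <container_id or name>      # view a container's output
docker exec -ti <container_id or name> bash   # open a shell inside a running container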

Manage images

And… a few more commands to help manage images:
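
Likewise, a representative selection:

docker images                           # list images available on this machine
docker pull <image_name>:<tag>          # download an image from a registry
docker rmi <image_name or id>           # delete an image from this machine
docker build -t <image_name> .          # build an image from the Dockerfile in the current directory
docker tag <image_id> <new_name>        # give an existing image an additional name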

Try Docker

Download and run an image

Download and run a container built from an image stored on the Docker Hub registry.

docker run -ti --rm bloombar/se_welcome:latest

Download and run an image (continued)

The previous command instantiates a container, but your command shell (i.e. terminal) remains focused on your host machine. You can navigate into the container and poke around its files by opening up a [bash](../bash-scripting) shell within the container. Quit the container, if it is already running, and then run the following command - you will notice the container looks and acts much like a virtual machine:

docker run -ti bloombar/se_welcome:latest bash

Rather than stopping a container and then running it again, you can also open up a bash shell within the container while it is already running:

docker exec -ti <container_id or container_name> bash

Dockerfile

The configuration of each image is written in a Dockerfile build script - this is what I used to create the example image:

# in Docker, it is common to base a new image on a previously-created image
# Use an official Python runtime image as a parent image to base this image on
FROM python:2.7-slim
# Set the working directory within the image to /app
WORKDIR /app
# the ADD command is how you add files from your local machine into a Docker image
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
# in Python, a requirements.txt file is a way of indicating dependencies in a way that the package manager, pip, can understand
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# by default Docker containers are closed off to the external world
# Make port 80 available to the world outside this container
EXPOSE 80
# Define an environment variable... this will be available to programs running inside the container
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]

Dockerfile (continued)

Notice that the Dockerfile for this example image references another image named python:2.7-slim.

Example Dockerfile for Flask-based web app

# in Docker, it is common to base a new image on a previously-created image
FROM python:3.8-slim-buster

# Set the working directory in the image
WORKDIR /app

# install dependencies into the image - doing this first will speed up subsequent builds, as Docker will cache this step
COPY requirements.txt ./
RUN pip3 install -r requirements.txt

# the ADD command is how you add files from your local machine into a Docker image
# Copy the current directory contents into the container at /app
ADD . .

# expose the port that the Flask app is running on... by default 5000
EXPOSE 5000

# Run app.py when the container launches
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]

Note that it is currently recommended to not use pipenv within a Docker container.
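
To try out a Dockerfile like this one, you would build an image from it and then run a container from that image, publishing Flask’s default port - the image name flask-example below is just a placeholder:

docker build -t flask-example .           # build the image from the Dockerfile in the current directory
docker run -p 5000:5000 flask-example     # run a container, mapping host port 5000 to container port 5000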

Example Dockerfile for React.js front-end

# start from the node v16 base image
FROM node:16

# optionally, create an app directory within the image and use it as the working directory...
# WORKDIR /app

# install dependencies into the image - doing this first will speed up subsequent builds, as Docker will cache this step
COPY package*.json ./
RUN npm install

# copy the remaining app source code into the default directory within the image
COPY . .

# expose port 4000 to make it available to the docker daemon
EXPOSE 4000

# define the runtime command that will be executed when a container is made from the image
CMD [ "npm", "start" ]

Note that, for optimization of the image build process, it is common practice to install dependencies first before copying custom source code into an image. See a discussion about this.

Example Dockerfile for Express.js back-end

# start from the node v16 base image
FROM node:16

# optionally, create an app directory within the image and use it as the working directory...
# WORKDIR /app

# install dependencies into the image - doing this first will speed up subsequent builds, as Docker will cache this step
COPY package*.json ./
RUN npm install

# copy the remaining app source code into the default directory within the image
COPY . .

# expose port 3000 to make it available to the docker daemon
EXPOSE 3000

# define the runtime command that will be executed when a container is made from the image
CMD [ "node", "server.js" ]

Try running a containerized React.js / Express.js app

We have containerized a simple React.js / Express.js application.

Launch the React.js front-end as a background process on port 4000:

docker run -p 4000:4000 -d bloombar/data-storage-example-app-front-end

Launch the Express.js back-end as a background process on port 3000:

docker run -p 3000:3000 --restart unless-stopped -d bloombar/data-storage-example-app-back-end

Open your favorite web browser and navigate to http://localhost:4000
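
Since both containers were started in the background with the -d flag, you can stop them when finished by looking up their ids and stopping each one:

docker ps                      # find the ids of the two running containers
docker stop <container_id>     # stop each container by its id (or name)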

Docker Networking

Overview

By default, containers do not have access to the host machine’s standard input/output and networking ports. To map a port on the host machine to a port within the container, use the -p (publish) option:

docker run --rm -ti -p 45678:45678 ubuntu:latest

Now incoming connections to port 45678 of the host machine will be forwarded to port 45678 of the container. The host and container port numbers do not need to match, as the next example shows.

Multiple ports

To open several ports, simply repeat the -p option:

docker run --rm -ti -p 45678:45678 -p 3000:80 ubuntu:latest

So now incoming connections to port 45678 of the host machine will forward to port 45678 of the container and incoming requests to port 3000 of the host machine will forward to port 80 of the container.

Create Docker Network

Multiple Docker containers can be interconnected within a single Docker network. This simplifies the process of having one container communicate with another.
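
For example, the following sketch (with made-up network and container names) creates a network and attaches two containers to it; Docker’s built-in DNS then lets containers on the same user-defined network find each other by name:

docker network create my_network                              # create a new user-defined network
docker run -d --network my_network --name web nginx:latest    # run a container attached to that network
docker run -ti --network my_network ubuntu:latest bash        # run a second container on the same network
# inside the second container, the hostname 'web' now resolves to the first container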

Inspect a Docker network

A few useful commands for managing the networks created by Docker:
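
For example:

docker network ls                         # list all networks Docker knows about
docker network inspect <network_name>     # show details, including which containers are attached
docker network rm <network_name>          # delete a network that is no longer needed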

Docker Volumes

Overview

By default, Docker containers have no persistent storage and do not have access to the host machine’s file system. Sometimes it is desirable to have a container that has access to the host machine’s file system, for example so data can persist beyond the lifetime of one container.

To attach a storage volume to a container, use the -v option:

docker run -ti -v /path/on/host:/path/in/container ubuntu:latest

Example

For example, MongoDB databases usually store data in a /data/db directory. To map a directory on the host machine’s file system, e.g. /Users/foobarstein/Desktop/mongodb_data, to this container directory, use the following command:

docker run -ti -d -p 27017:27017 -v /Users/foobarstein/Desktop/mongodb_data:/data/db mongo:latest

Communications Among Containers

Overview

When running a system composed of multiple sub-systems, each running in their own container, there are a few simple mechanisms for communicating among containers:

Standard network communication protocols

If one container must send messages to another container, it is simple to set up an API on the recipient container where requests and responses can be handled.

Standard network communication protocols (continued)

For containers running on different host machines, more complex container orchestration tools, like Docker Swarm or Kubernetes, can create an “overlay network” that simplifies communication among them, similar to how Docker Network does for containers on the same host machine.

Shared files

If two or more Docker containers share access to the same file storage volume, they can communicate with one-another indirectly by placing data into those files, the way spies share secrets in a dead drop.
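
A minimal sketch of this pattern, using a named Docker volume and placeholder image names: both containers mount the same volume, so files written by one are readable by the other.

docker volume create shared_data                    # create a named volume managed by Docker
docker run -d -v shared_data:/data <image_one>      # the first container mounts the volume at /data
docker run -d -v shared_data:/data <image_two>      # the second container mounts the same volume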

Shared database

An extension of the shared files solution to inter-container communication would be to have multiple containers sharing access to the same database, where shared data is stored.

Docker Devices

Overview

By default, Docker containers do not have access to devices attached to the host machine.

Access all devices

Docker provides a privileged mode that allows a container to access all devices attached to the host machine.

docker run -ti --privileged ubuntu:latest

Applications running within the container can now access all host machine devices, which on *NIX machines are located in the /dev directory, e.g. /dev/video0, /dev/snd, etc.

… at least on Linux…

Access specific devices

To give a container access to only specific devices on the host machine, use the --device option:

docker run -ti --device /dev/video0 ubuntu:latest

If you need to map a host machine device, e.g. /dev/sda to a different device name in the container, e.g. /dev/xvdc, use a colon separating host device name from container device name, e.g.:

docker run -ti --device /dev/sda:/dev/xvdc ubuntu:latest

To support multiple devices, just repeat the --device flag, e.g.:

docker run -ti --device /dev/video0 --device /dev/sda:/dev/xvdc ubuntu:latest

Applications running within the container can now access the specified host machine devices…

at least on Linux…

Attaching Windows or Mac devices to containers

It turns out that the default virtual machine used by Docker, within which containers are run, is unable to provide access to devices attached to the host machine on Mac and Windows! To get around this, a different virtual machine must be used.

Attaching a Raspberry Pi’s devices to containers

Why yes, I’m glad you asked!

You can run Docker containers on the Raspberry Pi!

Building Docker Images

Overview

To build a Docker image and share it on Docker Hub, you must write a Dockerfile that configures the image, build and tag the image, and push it to the registry.
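
The build-and-tag step, assuming the Dockerfile is in the current directory and using placeholder account and repository names, looks like:

docker build -t <username>/<repository_name> .    # build an image and tag it with your Docker Hub account name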

Best practices

Use the Dockerfile to outline all steps necessary to configure the image so that containers created from it are set up correctly to run the target application.

Build for multiple processor types

Docker images are by default built for the same processor architecture as the machine on which the docker build command is run, typically processors that follow the x86 instruction set.

This works fine when building images that will run on common desktop/laptop machines. However, many mobile and embedded computing devices, such as the Raspberry Pi, use ARM processors, whose instruction sets are not compatible with those of x86 processors.

When building Docker images that must be able to run on multiple processor types, Docker’s buildx tool allows building images for multiple processor types in a single command.

e.g.

docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <username>/<repository_name> .

Push and pull images from Docker Hub

Built images can be distributed to/from the Docker Hub registry with commands that are designed to emulate git’s interactions with GitHub.

Recall that docker pull is not necessary if you simply want to run a container from an image, as docker run ... will automatically pull the image if necessary.
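
For example, with placeholder account and repository names:

docker push <username>/<repository_name>    # upload a locally-built image to Docker Hub (like git push)
docker pull <username>/<repository_name>    # download an image from Docker Hub (like git pull/clone)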

Deploy A Containerized App

Moving containers to production

Because of their small footprint and high portability, containers are simple to deploy on cloud compute providers, such as Amazon AWS, Microsoft Azure, Heroku, or Digital Ocean.

Moving containers to production (continued)

Docker Compose

Scenario

In many projects, a system is composed of multiple sub-systems, each isolated in their own Docker container, but running on the same host. In such cases, we typically want to start and stop all containers together, and make sure they can communicate with one another, as they are interdependent.

The solution

In these situations, we can use Docker Compose – a simple container orchestrator – to start and stop multiple containers together and treat them as a single unit.

Settings files

The services necessary to start all containers are defined in Docker Compose’s configuration file, docker-compose.yml. Once defined, a single command can be used to create and start all services necessary to run the application.

Example: MERN-stack web app

An example docker-compose.yaml for a basic MERN-stack web app:

version: "3.8"

services:
  frontend:
    build: ./front-end # build the Docker image from the Dockerfile in the front-end directory
    ports:
      - 3000:3000 # map port 3000 of host machine to port 3000 of container
    depends_on:
      - backend
    command: npm start # command to start up the front-end once the container is up and running
# ... continued on next slide...

Example: MERN-stack web app (continued)

# ... continued from previous slide

backend:
  build: ./back-end # build the Docker image from the Dockerfile in the back-end directory
  ports:
    - 5000:5000 # map port 5000 of host machine to port 5000 of container
  environment:
    DB_HOST: mongodb://db/foobar # set the DB_HOST environment variable to refer to a 'foobar' db in the 'db' service defined later in this docker-compose file
  depends_on:
    - db
  volumes:
    - ./uploads:/uploads # a directory on the host machine where we can store any files uploaded to the back-end container
  command: npm start # command to start the back-end once the container is up and running
# ... continued on next slide...

Example: MERN-stack web app (continued again)

# ... continued from previous slide

db:
  image: mongo:4.0-xenial # use a recent version of the official MongoDB image on Docker Hub
  ports:
    - 27017:27017 # map port 27017 of host machine to port 27017 of container
  volumes:
    - ./db:/data/db # a directory on the host machine where the data in the container's database will be stored

That’s it…. all three services are now configured in the docker-compose.yaml and ready to run.

Starting all containers

Now start up the application - all containers will be booted up with their ports and volumes set.

docker compose up

If you need to rebuild the components before starting them, use the --build flag:

docker compose up --build

To run the containers in “detached mode”, i.e. in the background:

docker compose up -d

Viewing a list of running containers

As you know, Docker allows you to view a list of containers with docker ps or docker ps -a.

To view only those containers started with docker-compose, run:

docker-compose ps

Or, to see all, including stopped containers:

docker-compose ps -a

Networking among containers

All containers started with docker-compose are automatically connected to a single network named after the directory in which the docker-compose.yaml file is located. Each service is named after the key used to define it in the docker-compose.yaml file.

To verify this for yourself, jump into the command shell of one of the running containers started by docker-compose, for example our earlier frontend container (run docker-compose ps first if you need to confirm the exact container name, since Docker Compose typically prefixes container names with the project directory name):

docker exec -it frontend bash

Following our earlier example, the backend service defined in the docker-compose.yaml example above is now accessible from within the frontend container by referencing the name backend:

curl http://backend:5000/your-favorite-backend-api-route/

Stopping all containers

To stop containers started with docker-compose up, run the opposite command:

docker-compose down

Conclusions

You have now had a brief introduction to how container technology fits into the evolution of computer virtualization.