Container-based virtualization
We will be using a virtual machine in the faculty's cloud.

When creating a virtual machine in the Launch Instance window:
- Name your VM using the following convention: `scgc_lab<no>_<username>`, where `<no>` is the lab number and `<username>` is your institutional account.
- Select Boot from image in the Instance Boot Source section.
- Select SCGC Template in the Image Name section.
- Select the m1.large flavor.
In the base virtual machine:
- Download the laboratory archive into the `work` directory. Use `wget https://repository.grid.pub.ro/cs/scgc/laboratoare/lab-docker.zip` to download the archive.
- Extract the archive.
- Download the `runvm.sh` script. The `.qcow2` files will be used to start virtual machines using the `runvm.sh` script.
- Start the virtual machines using `bash runvm.sh`.
- The username for connecting to the nested VMs is `student` and the password is `student`.
$ # change the working dir
$ cd ~/work
$ # download the archive
$ wget https://repository.grid.pub.ro/cs/scgc/laboratoare/lab-docker.zip
$ unzip lab-docker.zip
$ # start VMs; it may take a while
$ bash runvm.sh
$ # check if the VMs booted
$ virsh net-dhcp-leases labvms
Needs / use-cases
- easy service install
- isolated test environments
- local replicas of production environments
Objectives
- container management (start, stop, build)
- service management
- container configuration and generation
What are containers?
Containers are an environment in which we can run applications isolated from the host system.
On Linux-based operating systems, a container runs like a regular application that has access to the resources of the host machine, but cannot interact with processes outside its isolated environment.
The advantage of using a container for running applications is that it can be easily started, stopped and modified. Thus, we can install applications in a container, configure them and run them without affecting the other components of the system.
A real use case for containers is setting up a server that depends on fixed, old versions of certain libraries. We do not want to run that server directly on our physical system, as conflicts with other applications may occur. By containerizing the server, we can have one version of a library installed on the physical machine and another version installed in the container, without any conflict between them.
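As a minimal sketch of this idea, we can compare a library version on the host with the one shipped in a container (here the C library, via `ldd`; the same applies to any library a legacy server might need):

```shell
# The host and the container each ship their own copy of libc;
# neither installation interferes with the other.
ldd --version | head -n 1                               # libc version on the host
sudo docker run --rm gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 \
    sh -c 'ldd --version | head -n 1'                   # libc version in the container
```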
Containers versus virtual machines?
Both containers and virtual machines allow us to run applications in an isolated environment. However, there are fundamental differences between the two mechanisms. A container runs directly on top of the host operating system's kernel, whereas a virtual machine boots its own kernel and runs applications on top of it. This extra abstraction layer adds overhead, which slows down the applications.
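A quick way to see that a container shares the host's kernel (a sketch; the exact version string depends on your host):

```shell
# Both commands print the same kernel release, because the container
# does not boot a kernel of its own.
uname -r
sudo docker run --rm gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 uname -r
```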
Another advantage of containers is the ability to build and pack them iteratively. We can easily download a container image from a public repository, modify it, and upload the result without transferring the entire image. We can do this because changes to an image are stored incrementally, as the differences between the original and the modified version of the image.
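These increments can be inspected directly: `docker history` lists the layers an image is made of, and only changed layers need to be transferred when pushing or pulling a modified image.

```shell
# Each output line is one layer of the image; unchanged layers are
# reused and never re-uploaded.
sudo docker history gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04
```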
There are also cases where we want to run applications inside a virtual machine. For example, if we want to run an application compiled for an operating system other than Linux, we cannot do so in a container, because containers can only run applications compiled for the host operating system. Virtual machines, on the other hand, can run operating systems different from the host's.
Docker
Starting a container
To start an application inside a Docker container use the following command:
student@lab-docker:~$ sudo docker run -it gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 bash
Unable to find image 'ubuntu:18.04' locally
18.04: Pulling from library/ubuntu
11323ed2c653: Already exists
Digest: sha256:d8ac28b7bec51664c6b71a9dd1d8f788127ff310b8af30820560973bcfc605a0
Status: Downloaded newer image for ubuntu:18.04
root@3ec334aece37:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"
root@3ec334aece37:/#
The `docker` command was run using the following parameters:
- `run`, starts a container;
- `-i`, starts an "interactive" container, which accepts keyboard input;
- `-t`, associates a terminal to the run command;
- `ubuntu:18.04`, the name of the image we want to use. [Docker Hub](https://hub.docker.com/) is a public image repository from which we can download already built images;
- `bash`, the command we want to run in the container.
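Other commonly used `docker run` options can be combined with the ones above; a sketch (the container name `my-ubuntu` is arbitrary):

```shell
# --name gives the container a fixed name instead of a generated one;
# --rm removes the container automatically when the command exits.
sudo docker run -it --rm --name my-ubuntu \
    gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 bash
```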
We can also run a non-interactive command in a container as follows:
student@lab-docker:~$ sudo docker run gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 12:01 ? 00:00:00 ps -ef
The `ps -ef` command shows all active processes in the system. We notice that only one process appears in the output above, because we are running in an isolated environment. We will return to this in the "Container Security" section.
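The command passed to `docker run` becomes PID 1 in the container's own PID namespace, which is why nothing else is visible. Contrasting the host with the container makes this concrete (a sketch; the host count varies):

```shell
# On the host, ps -ef lists many processes; inside the container,
# only the processes started in it are visible.
ps -ef | wc -l
sudo docker run --rm gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 \
    sh -c 'ps -ef | wc -l'
```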
However, we do not always want to run containers in the foreground. If we want to run a long-running script that cannot run in the host environment, we prefer to run it in the background.
To start a container in the background, use the `-d` option of the `docker run` command as follows:
student@lab-docker:~$ sudo docker run -d gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04 sleep 10000
a63ee06826a33c0dfab825a0cb2032eee2459e0721517777ee019f59e69ebc02
student@lab-docker:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a63ee06826a3 ubuntu:18.04 "sleep 10000" 7 seconds ago Up 5 seconds wonderful_lewin
We can see that the container we started is still running by using the `docker ps` command.
Relevant columns:
- `CONTAINER ID`
- `NAMES`
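If we only care about these columns, the output can be restricted with the `--format` option (a sketch using Go template syntax):

```shell
# Print only the ID and name of each running container.
sudo docker ps --format '{{.ID}}\t{{.Names}}'
```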
To connect to a container running in the background, use the `docker exec` command along with the container ID or name, obtained using the `docker ps` command:
student@lab-docker:~$ sudo docker exec -it a63ee06826a3 /bin/bash
root@a63ee06826a3:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 02:19 ? 00:00:00 sleep 10000
root 7 0 2 02:19 pts/0 00:00:00 /bin/bash
root 19 7 0 02:20 pts/0 00:00:00 ps -ef
root@a63ee06826a3:/# exit
To stop a container running in the background, use the `docker stop` command along with the container ID or name, as follows:
student@lab-docker:~$ sudo docker stop a63ee06826a3
a63ee06826a3
student@lab-docker:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
student@lab-docker:~$
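Note that stopped containers are not deleted automatically; they can still be listed and then removed (a sketch, reusing the container ID from above):

```shell
# -a also shows containers that have exited.
sudo docker ps -a
# Remove a stopped container by ID or name.
sudo docker rm a63ee06826a3
```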
Exercise: Starting a container
- Start a container in the background based on the `quay.io/rockylinux/rockylinux:8` image.
- Connect to the container just started and run the `yum install bind-utils` command.
- Disconnect from the container.
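One possible solution sketch (the container name `rocky` is arbitrary; `sleep` merely keeps the container alive):

```shell
# Start the container in the background.
sudo docker run -d --name rocky quay.io/rockylinux/rockylinux:8 sleep 10000
# Run the installation inside the running container.
sudo docker exec -it rocky yum install -y bind-utils
# An interactive shell opened with docker exec is left with `exit` or Ctrl-D.
```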
Context: Container separation
Most of the time when we use containers we do not use them interactively. They have a well-defined purpose, to run a service, an application, or to do a set of fixed operations.
A constructive approach to using containers is "do one thing and do it well". For this reason, we recommend that each container be built with a single purpose in mind.
For example, for a web application we might have the following approach:
- a container running an http server;
- a container running a database.
This architecture allows us to change one component, such as the type of database used, without rebuilding the rest of the setup.
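Such a setup could be sketched with a user-defined network, so the two containers can reach each other by name (the image names, container names and password below are illustrative, not part of the lab):

```shell
# Create a private network for the application.
sudo docker network create webapp-net
# Database container; the root password is a placeholder.
sudo docker run -d --name db --network webapp-net \
    -e MYSQL_ROOT_PASSWORD=example mysql:8
# HTTP server container; it can reach the database at the hostname "db".
sudo docker run -d --name web --network webapp-net -p 8080:80 nginx
```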
Building a container
Most times, just running a container interactively and connecting to it when needed is not enough. We want a way to automatically build and distribute single-purpose containers. For example, we may want purpose-built containers in a CI/CD system that builds a website and publishes it to the web. Each website has its own setup requirements, and we would like to automate this. We could add the automation by running a script, but then we would lose one of the advantages of containers, the iterative nature of images, because the resulting Docker images would be monolithic.
In order to create a container image we need to define a `Dockerfile` as follows:
FROM gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y firefox
Each line contains commands that will be interpreted by Docker when building the image:
- `FROM`, specifies the base container image;
- `RUN`, runs a command in the container.
This Dockerfile will then be used to build a container image in which Firefox can run.
Note that when building container images we have to use non-interactive commands: we do not have access to a terminal while the image is being built, so we cannot answer interactive prompts from the keyboard.
To build the container we will use the following command:
student@lab-docker:~$ docker build -t firefox-container .
When we run the command, we assume that the `Dockerfile` is in the current directory (`~`). The `-t` option tags the resulting container image with the name `firefox-container`.
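Once the build finishes, the image can be run like any other (a sketch; running the Firefox GUI itself would additionally require access to an X server):

```shell
# Check that Firefox was installed in the image; --version does not need a display.
sudo docker run --rm firefox-container firefox --version
```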
To list container images on the machine use the following command:
student@lab-docker:~$ docker image list
This list contains both downloaded images and locally built ones.