Docker
Docker is an application that uses containerization technology to encapsulate an application along with its entire runtime environment into a container.
Docker Architecture
- Docker Engine: The runtime that runs and manages Docker containers. It is installed on a host machine and communicates with the Docker APIs to execute commands. It is accessed by running the docker command-line app.
- Container: A running instance deployed from a Docker image.
- Image: Docker images serve as the basis (the blueprint) for Docker containers.
- Docker Compose: A tool used to define and run multi-container Docker applications.
Tutorials and Docs
- https://docs.docker.com/
- Docker Guides
- What next after the Docker workshop
- Build with Docker Guide
- Building best practices
- Language-specific guides
- Docker Beginner to Expert Tutorial
Containers
A Docker container is a running instance of a Docker image. A Docker image is a read-only file containing instructions for creating a Docker container.
Docker containers can run on any system that has Docker installed. They are lightweight and share the host system's OS kernel, making them much more resource-efficient than traditional virtual machines, but run in isolation from each other.
A container comprises the application and all its dependencies, bundled together, including the code, system tools, libraries, and settings. This ensures that the application runs the same way, regardless of the environment. Each container runs in isolation, ensuring that its execution does not interfere with other containers or the host system. This means that processes running in one container cannot affect those running in another, enhancing security and reducing conflicts.
Multiple containers can run at the same time and talk with each other over a Docker network.
Applications can be scaled by running multiple instances of a container for the same application. This is particularly useful in a web hosting context, where demand can vary greatly.
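For example, Docker Compose can start several replicas of a service with the --scale flag. A minimal sketch, assuming a service named web that does not pin a single fixed host port (fixed host ports would conflict across replicas):
docker compose up -d --scale web=3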
Running Multiple Processes
Docker is NOT a fancy VM. What it is instead is a process manager, where the processes just so happen to have a pre-built disk image and a security layer around them. Docker is designed to run one single, isolated process (service/concern) per container. This approach allows for better resource management, as well as easier management of the process running inside the container.
The Docker CMD instruction in a Dockerfile defines the command the container starts with. Docker will always run only a single CMD in a container, not more, so in your Dockerfile you can only specify one command to run. A container runs as long as the command runs; as soon as the command finishes, the container stops. Each container therefore represents one single (running) command. The process in the container must run in the foreground in order to keep the container alive.
Because the container CMD must run in the foreground, no other commands that follow it will start. Only that first service will start, and all commands after it will be ignored.
You can execute two commands in one CMD line:
CMD service sshd start && /opt/mq/sbin/rabbitmq-server start
Or put multiple commands into a script file:
CMD sh startscript.sh
However, in either example the && workaround only works when the first command starts in the background (as a daemon) or executes quickly without interaction and releases the prompt.
If you need to start an interactive (foreground) service first that keeps the prompt and the container alive, but you also need to start a secondary service afterwards (for example, first starting nginx and then running spawn-fcgi), a supervisor (process manager) must be used and started with the CMD to run both services.
Dockerfile:
RUN apk add supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
The local supervisord.conf file:
[supervisord]
nodaemon=true
[program:nginx]
command=nginx
[program:spawn-fcgi]
command=/usr/bin/spawn-fcgi -n -u nginx -g www-data -a 127.0.0.1 -p 9000 -- /opt/rt5/sbin/rt-server.fcgi
Run Images as Containers
- Docker run docs: https://docs.docker.com/reference/cli/docker/container/run/
Create and run a Docker container using the docker run command.
docker run [options] [the-image-name]
Example: Run an Nginx Image in a Container
docker run -dp 127.0.0.1:3001:80 nginx:latest
If you have not used docker pull, Docker will automatically search for and download the nginx:latest image.
- -d or --detach: Run the container detached from the terminal, i.e. the container runs as a background process that doesn't occupy your terminal window. If you omit this flag, the container launches in the terminal window, and exiting the terminal (Ctrl+C or exit) stops the container.
- -p: Maps port 3001 on the host to port 80 in the container. You can now access the nginx server by navigating to http://127.0.0.1:3001 in your web browser.
If you omit the IP address in the port mapping, Docker binds the port on all host interfaces, so the server is also reachable at http://localhost.
docker run -dp 3002:80 nginx:latest
You can now access the nginx server by navigating to http://localhost:3002 in your web browser.
Example: Run an Interactive Ubuntu Image in a Container
This container will not be accessible via a port because no port is defined.
docker run -itd ubuntu
- -i: Interactive mode (keep STDIN open even if not attached).
- -t: Allocate a pseudo-TTY.
- -d or --detach: Run the container detached from the terminal, i.e. the container runs as a background process that doesn't occupy your terminal window. If you omit this flag, the container launches in the terminal window, and exiting the terminal (Ctrl+C or exit) stops the container.
Get Container IDs
Run docker ps to list all running containers and their IDs.
docker ps
Stop a Running Container
To stop a running container, run docker stop with the container ID.
docker stop [the-container-id]
Remove a Stopped Container
To remove a stopped container, run docker rm with the container ID.
docker rm [the-container-id]
Stop and Remove a Running Container
To stop and then remove a running container, run docker rm -f with the container ID.
docker rm -f [the-container-id]
Watch Container Logs
docker logs [the-container-id]
Images
Docker images are read-only files containing instructions for creating a Docker container. An image has everything needed to run a container (file system, dependencies, configurations, scripts, binaries, etc.). You can think of an image as the blueprint that Docker uses to create a container. Docker images are built from a file called a Dockerfile using the docker build command.
Images can be based on other images. Base images are downloaded to your local machine and used during the build process to create a new image. Images can be downloaded manually using the pull command, or they will be downloaded automatically when they are required by another image and not available locally.
By default, Docker images are downloaded to your local machine from the Docker Hub, a public Docker repository. Images you build are stored on your local machine but can be shared to other Docker repositories.
Each Docker image is composed of layers piled on top of the base image (a base image has no parent image specified). The final Docker image is like an onion with the base image (like an OS distribution) at the core and a number of layers on top of it. For example, an app image can be built by starting with a base image like an Ubuntu distribution, then adding layers of dependencies, and finally adding the application layer. Layers have a parent-child hierarchical relationship with each other. Layers are essentially a snapshot of the system at a particular step.
All Docker layers are stored by default on your local machine under /var/lib/docker (older Docker versions used /var/lib/docker/graph and called this the graph database; modern versions store layers in storage-driver directories such as overlay2, but you will not need to care about that). Using the docker images command to see all the images on your local machine, you will observe that the number of images grows quickly with the number of images that are downloaded.
An image may require several other images in order to be built. When an image is downloaded, you may see the download of many "layers". This is because you may have instructed the builder that you wanted to start with a particular image that you didn't have on your machine, and Docker needed to download the dependent images for that image. An image is downloaded one layer at a time.
Dependent images that are not complete are labeled <none>; these are individual layers (system snapshots) of a complete image. Docker calls these "intermediate" images. Using docker images you will see image sizes on the right, but these numbers are deceptive: the size shown in the list is the sum of all layers in the image. Because layers can be shared between images, this number never reflects the actual size on your disk.
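To see how much disk space Docker is actually using across images, containers, and volumes, you can run:
docker system df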
See All Available Images
To see all the images on your local machine:
docker images
Named images are complete images. Images labeled <none> are intermediate image layers used to build complete images. Layers can be shared between images.
Pull Images
- Docker pull docs: https://docs.docker.com/reference/cli/docker/image/pull/
To use a Docker image, you can pull it from a Docker repository like Docker Hub with the docker pull command. You do not need to pull images before using them, as required images will be automatically downloaded during a build.
docker pull [the-image-name:the-image-version-tag or the-image-name]
To pull the latest official image for nginx, you would use:
docker pull nginx:latest
If omitted, Docker uses the :latest tag by default. This example pulls the nginx:latest image:
docker pull nginx
While it's tempting to use the latest tag for Docker images, it's best to use version tags. This makes your Docker setup more reproducible and helps avoid issues when the latest tag gets updated.
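For example, pulling a pinned version instead of latest (the tag shown is illustrative; check the repository for current tags):
docker pull nginx:1.25-alpine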
View Image History
See the commands that were used to create each layer within an image.
docker image history [options] [the-image-name]
- Each line represents a layer in the image. The display shows the base at the bottom with the newest layer at the top.
- Several lines may be truncated. If you add the --no-trunc flag, you'll get the full output.
Building Images
All image builds require a Dockerfile. A Dockerfile is simply a text-based file with no file extension that contains a script of instructions. Docker uses this script to build images.
BuildKit is the build engine packaged with Docker that actually builds images. BuildKit runs when the build command is used; it reads the content of the Dockerfile as instructions for building the image.
When build is invoked, BuildKit starts at the top of the Dockerfile and executes each line in order, from top to bottom. Each FROM line in the Dockerfile begins a distinct Build Stage (stage). A stage is the creation of an image: each stage is a distinct image based on its own base image. FROM can be used multiple times in one Dockerfile; this is called a multi-stage build, with the build process creating more than one image/stage during its runtime. One stage doesn't inherit anything done to a previous stage, but each stage can be named and used as a dependency for a subsequent stage. A build stage starts at a FROM statement and ends at the step before the next FROM statement, or at the end of the Dockerfile. By default, only the final stage in the Dockerfile is tagged as the final image produced by the build process.
The value of multi-stage builds is separating the build environment from the runtime environment while performing the entire build inside one Dockerfile. The advantage is that the resulting final image doesn't include the compilers or other build-time tooling that isn't needed at runtime, resulting in a smaller and more secure image. A typical example is a Java app with one stage containing Maven and a full JDK to build, and the final runtime stage having just the JRE and a copy of the jar file from the first stage.
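A minimal sketch of that Java example, assuming a standard Maven project layout (the image tags and jar name are illustrative):
# Build stage: full JDK and Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests
# Runtime stage: just the JRE and the jar copied from the build stage
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app.jar ./app.jar
CMD ["java", "-jar", "app.jar"]
Only the final stage is tagged as the build output, so Maven and the full JDK never appear in the runtime image.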
During the build process each stage runs as an anonymous (invisible) intermediate build container, a running system if you will, in which all the build instructions are executed. The result of each line in the Dockerfile is saved (committed as a filesystem snapshot) as a layer (just another image). Each layer is available to the next layer.
At the end of the build process, all layers are combined into one named final image that is saved to your local machine. The image can then be run, viewed with the docker images command, or shared to an image repository.
It is good to regularly rebuild your images to receive the latest security updates and bug fixes. Be aware, though, that updates can sometimes introduce breaking changes.
Dockerfile
The Dockerfile contains all the commands a user can call to build images. The Dockerfile is used to define the app environment so it can be reproduced anywhere.
- The .dockerignore file prevents the COPY instruction from copying specified files into the image.
Comments and Parser Directives
BuildKit treats all lines that begin with # as comments, except for valid parser directives. Comments don't support line continuation characters. There are only two types of parser directives, syntax and escape, and they are optional. Parser directives are written as a special type of comment in the form # directive=value and must come before the first FROM instruction in the Dockerfile. A single directive may only be used once. Once a parser directive has been processed, the parser stops looking for them and instead treats anything formatted as a parser directive as a comment.
Instructions
The Dockerfile script uses a set of pre-defined BuildKit instructions to define each line. Instructions are formatted like:
INSTRUCTION arguments
The instruction is not case-sensitive. However, convention is for them to be UPPERCASE to distinguish them from arguments more easily.
These are some of the most common instructions used in the Dockerfile:
- FROM is required. It instantiates the anonymous (invisible) intermediate build container, or Build Stage, from the base image.
- COPY copies files into the build container.
- RUN executes instructions in the build container.
Syntax Parser Directive
# syntax=[remote-syntax-image-reference]
Most Dockerfiles begin with the line above (or similar). This line is called the Syntax Parser Directive and declares the Dockerfile syntax version to use for the build.
The Syntax Parser Directive affects the way subsequent lines in a Dockerfile are handled. Parser directives don't add layers to the build and don't show up as build steps.
If the Syntax Parser Directive is unspecified, BuildKit uses a bundled version of the Dockerfile frontend. Declaring a syntax version lets you automatically use the latest Dockerfile version without having to upgrade BuildKit or Docker Engine.
The BuildKit frontend is the syntax used in the Dockerfile to describe the build definitions. The BuildKit backend is the engine which translates the syntax of the build operations prepared by the frontend.
The syntax version points to the specific syntax image you want to use. The most common is the docker/dockerfile:1 image:
# syntax=docker/dockerfile:1
Remote Syntax Directive images can also be used:
# syntax=docker.io/docker/dockerfile:1
# syntax=example.com/user/repo:tag@sha256:abcdef...
You can also use the BUILDKIT_SYNTAX build argument to set the frontend image reference on the command line:
docker build --build-arg BUILDKIT_SYNTAX=docker/dockerfile:1 .
Escape Parser Directive
The escape directive sets the character used to escape characters in a Dockerfile. If not specified, the default escape character is \.
Example:
# escape=`
FROM
FROM [--platform=<platform>] <image>[:<tag>] [AS <name>]
The FROM instruction sets the base image to use for subsequent instructions (RUN, COPY, CMD, etc.) in the Dockerfile, forming a Build Stage that can optionally be named so it can be referenced later. The base image can be any valid image.
FROM can appear multiple times within a single Dockerfile to create multiple images or to use one build stage as a dependency for another. Each FROM instruction clears any state created by previous instructions, and thus each Build Stage begins with a FROM instruction.
The FROM instruction is required in the Dockerfile to start a build. Only the ARG instruction, comments, and parser directives can come before the FROM instruction.
In this example the base image is node:18. node is the name of the base image and 18 is the image version.
FROM node:18
In this example the base image is nginx:latest. nginx is the name of the base image and latest tells Docker to use the latest version of the image.
FROM nginx:latest
If the version tag is omitted, Docker uses the :latest tag by default. This example pulls the maven:latest image:
FROM maven AS build
While it's tempting to use the latest tag for Docker images, it's best to use a version tag. This makes your Docker setup more reproducible and helps avoid issues when the latest tag gets updated.
Optionally, a name can be given to a new build stage by adding AS <name> to the FROM instruction. The name can then be used in subsequent FROM <name>, COPY --from=<name>, and RUN --mount=type=bind,from=<name> instructions to refer to the image built in this stage.
In this example an image named base is built using the base image php:8.2-apache. A second image named final is then built using the base image as its base image.
FROM php:8.2-apache AS base
FROM base AS final
WORKDIR
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory inside the container. All RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile will use this as the base directory. The WORKDIR instruction can be used multiple times in a Dockerfile.
If the WORKDIR directory doesn't exist, it will be created even if it's never used. If WORKDIR is not specified, the default working directory is /. In practice, if you aren't building a Dockerfile from scratch (FROM scratch), the WORKDIR may already be set by the base image you're using. Therefore, to avoid unintended operations in unknown directories, it's best practice to set your WORKDIR explicitly.
It is common practice to define the working directory as /app (some official images use /usr/src/app).
WORKDIR /app
COPY
COPY [OPTIONS] ["<src>", "<dest>"]
The COPY instruction copies new files or directories from the <src> on the local filesystem and adds them to the filesystem of the container at the path <dest>.
The <src> file and directory paths are interpreted as relative to the build context (the directory of the local project).
The <dest> is an absolute path, or a path relative to WORKDIR, into which the source will be copied inside the destination container.
This example uses a relative path, and adds test.txt to <WORKDIR>/myRelativeDir/ in the container:
COPY test.txt myRelativeDir/
This example uses an absolute path, and adds test.txt to /my/absolute/dir/ in the container:
COPY test.txt /my/absolute/dir/
The <src> may contain wildcards; matching will be done using Go's filepath.Match rules. When copying files or directories that contain special characters (such as [ and ]), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern.
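A brief sketch of wildcard matching (the file names and destination are illustrative):
COPY *.txt /app/data/
This copies every .txt file at the root of the build context into /app/data/ in the container.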
RUN
RUN [OPTIONS] <command>
The RUN instruction allows you to install your application files and any packages required for them on top of the base image.
The RUN instruction is an image build step and only executes during a build. It executes commands on the current layer of the intermediate build container; the result becomes the next layer, meaning the state of the build container after a RUN instruction will be committed to the build image. A Dockerfile can have many RUN steps that layer on top of one another to build the image. Once a Docker image is built, RUN can no longer be used.
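A short sketch of a typical RUN step on a Debian-based base image (the package is illustrative). Chaining the commands with && keeps them in a single layer, and removing the package lists keeps that layer small:
RUN apt-get update && apt-get install -y --no-install-recommends curl \
&& rm -rf /var/lib/apt/lists/*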
CMD
CMD is the command the container executes by default when you launch the built image. A Dockerfile only uses the final CMD defined: because a Dockerfile executes in order from top to bottom, each successive CMD overwrites the previous one, and the last one wins. So "a Dockerfile can only have one CMD" is not technically true, but effectively all but the last are ignored. The CMD can be overridden when starting a container with docker run [the-image] [other-command].
In short: RUN commands trigger while the image is being built; CMD defines the startup command the container runtime executes after you have retrieved the image and launched it.
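A minimal sketch showing that only the last CMD takes effect:
FROM nginx:latest
CMD ["echo", "this is overridden and never runs"]
CMD ["nginx", "-g", "daemon off;"]
Running the built image starts nginx in the foreground; the first CMD is ignored.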
Docker Build
- https://docs.docker.com/reference/cli/docker/image/build/
- https://docs.docker.com/reference/cli/docker/buildx/build/
The docker build . command is used to build a Docker image from the Dockerfile.
docker build \
-t getting-started \
.
- build: Uses the Dockerfile to build a new image.
- -t getting-started: Tags your image. Think of this as a human-readable name for the final image. Since you named the image getting-started, you can refer to that image name when you run a container.
- .: Tells Docker that it should look for the Dockerfile in the current directory.
docker build \
-t getting-started \
--progress plain \
--no-cache \
--target test \
.
- --progress plain: Set the type of progress output (auto, plain, tty, rawjson). Use plain to show container output.
- --no-cache: Do not use the cache when building the image.
- --target: Set the name of the target build stage to use.
Docker Compose
Compose simplifies the control of your entire application stack, making it easy to manage services, networks, and volumes in a single, comprehensible YAML configuration file.
Docker Compose is a tool for defining and managing multi-container Docker applications. With Docker Compose, you use a YAML file to specify the services your application needs. You can define your entire multi-container application in a single file, and then start all the services with a single command. Docker Compose simplifies the management of multi-container applications and makes your setup more reproducible.
compose.yml File
- Docker Compose docs: https://docs.docker.com/compose/compose-file/
Create a compose.yml file in the app directory.
This example compose.yml file describes a simple web service using the nginx image, which will be accessible at http://localhost:3002.
version: '3'
services:
web:
image: nginx:latest
ports:
- "3002:80"
This Compose file is the translation of:
docker run -p 3002:80 nginx:latest
Compose File Attributes
version
version is deprecated. Many online examples use it, but it is no longer needed. The current Docker Compose specification supports omitting the version property, and doing so is the recommended way going forward. It supports both v2 and v3 properties.
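The earlier example, rewritten without the deprecated version attribute:
services:
  web:
    image: nginx:latest
    ports:
      - "3002:80"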
Run a Compose Project
Make sure any older instances of the app are removed.
You must be in the project directory on the host machine.
docker compose up
- compose: Run Compose.
- up: Start and run the services defined in the compose.yml file.
The docker compose up command is used to start and run the services defined in the compose.yml file. It automatically builds the necessary images (if they don't already exist) and then creates and starts the containers based on the specified configurations.
If you haven't created the containers yet, docker compose up will build the images (if needed) and create the containers. If the containers already exist (created by a previous docker compose up or docker compose run command), it will start those existing containers.
Common Options
docker compose up [options]
- -d or --detach: Run the container detached from the terminal, i.e. the container runs as a background process that doesn't occupy your terminal window. If you omit this flag, the container launches in the terminal window, and exiting the terminal (Ctrl+C or exit) stops the container.
- --build: Build images before starting the containers, even if they already exist.
- --force-recreate: Recreate containers even if they exist and are up-to-date.
Stop a Running Compose Project
You must be in the project directory on the host machine.
docker compose down
The docker compose down command stops and removes the containers created by docker compose up. It stops the running containers associated with the services defined in the compose.yml file and removes the containers, but not the images or volumes.
Common Options
docker compose down [options]
- --volumes: Remove named volumes declared in the volumes section of the compose.yml file.
- --rmi all: Remove all images used by any service in the compose.yml file.
Compose Build
Use the --build option with Docker Compose to force a new image build before launching the container.
docker compose up --build
- compose: Run Compose.
- up: Start and run the services defined in the compose.yml file.
- --build: Build images before starting the containers, even if they already exist.
To build with a Compose file, each service to be built needs at least a build section:
services:
server:
build:
context: .
target: development
- context: Defines either a path to a directory containing a Dockerfile, or a URL to a git repository.
- target: Selects a specific stage from a multi-stage Dockerfile.
Compose Watch
Compose Logs
If you start a project with a Docker Compose file, logs from each of the services are displayed interleaved into a single stream.
docker compose logs -f
- -f: Follow the log to get live output as it's generated.
Init
- Docker Init docs: https://docs.docker.com/reference/cli/docker/init/
Initialize a project with the files necessary to run the project in a container.
docker init
Volumes
- Docker Volumes docs: https://docs.docker.com/storage/volumes/
Docker volumes are used for persisting data generated by and used by Docker containers. When a container is deleted, any data written to the container that is not stored in a volume is lost. Volumes are completely managed by Docker.
Create a Volume
docker volume create [the-volume-name]
List all Volumes
docker volume ls
Inspect a Volume
docker volume inspect [the-volume-name-or-id]
Remove a Volume
docker volume rm [the-volume-name-or-id]
Example: Run an Nginx Container with a Volume
Make a my-vol volume.
docker volume create my-vol
Start the container with the volume.
docker run \
-d \
-p 3001:80 \
-v my-vol:/app \
nginx:latest
- -d or --detach: Run the container detached from the terminal, i.e. the container runs as a background process that doesn't occupy your terminal window. If you omit this flag, the container launches in the terminal window, and exiting the terminal (Ctrl+C or exit) stops the container.
- -p: Maps port 3001 on the host to port 80 in the container. You can now access the nginx server by navigating to http://127.0.0.1:3001 in your web browser.
- -v my-vol:/app: Mounts the my-vol volume at /app inside the container.
Bind Mounts
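Bind mounts map a file or directory on the host directly into the container, which is useful for live-editing files during development. A minimal sketch serving the current host directory with nginx (the destination is the nginx image's standard html directory):
docker run -d -p 3001:80 -v "$(pwd)":/usr/share/nginx/html nginx:latest
Unlike volumes, the data lives at a specific path on the host and is not managed by Docker.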
Networks
Docker networking allows containers to communicate with each other and with the outside world. Docker provides a variety of networking features, allowing you to closely control how your containers communicate.
Create a Network
docker network create [the-network-name]
List all Networks
docker network ls
Inspect a Network
docker network inspect [the-network-name-or-id]
Example: Create a Custom Network and Run Containers on It
In this example, a network named my-net is created. The docker run commands are then used to start two new containers that are connected to my-net.
Create a my-net network.
docker network create my-net
Run two containers on the my-net network.
docker run -d -p 8080:80 --network=my-net nginx:latest
docker run -d --network=my-net my-other-image
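Containers on the same user-defined network can reach each other by container name through Docker's built-in DNS. A quick sketch (the name web and the curl image are illustrative choices):
docker run -d --name web --network=my-net nginx:latest
docker run --rm --network=my-net curlimages/curl http://web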
Commands and Shells
- Docker exec docs: https://docs.docker.com/reference/cli/docker/container/exec/
Execute a Command in a Container
docker exec [options] [the-container-id] [the-command-to-run]
Start a Command Line Shell in a Container
docker exec -it [the-container-id] /bin/ash
- -i: Interactive mode (keep STDIN open even if not attached).
- -t: Allocate a pseudo-TTY.
Example: Open the Bash Shell in an Ubuntu Container
docker exec -it [the-container-id] bash
Caching
Large Docker images take longer to pull and consume more disk space. Try to use small base images (like Alpine Linux), remove unnecessary files, and leverage Docker's image layer caching to keep your images as small as possible.
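One common way to leverage the layer cache is to order Dockerfile instructions from least to most frequently changed, so early layers stay cached between builds. A sketch for a Node app (file names are illustrative):
FROM node:18-alpine
WORKDIR /app
# Dependency manifests change rarely, so this install layer stays
# cached until package.json changes
COPY package*.json ./
RUN npm ci
# Source code changes only invalidate the layers from here down
COPY . .
CMD ["node", "server.js"]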
Security
Take steps to secure your Docker setup. This could include running containers as non-root users, using Docker's built-in security features (like seccomp profiles and AppArmor), and regularly scanning your images for vulnerabilities.
# Example Dockerfile running as a non-root user
FROM nginx:latest
RUN useradd -m myuser
USER myuser
Health Checks
Health checks help ensure that your application is running correctly. They can be implemented in the Dockerfile with the HEALTHCHECK instruction or in the compose.yml file.
# Example Dockerfile with a healthcheck
FROM nginx:latest
HEALTHCHECK CMD curl --fail http://localhost:80 || exit 1
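The equivalent check in a compose.yml file might look like this (a sketch; the timing values are illustrative, and curl must be present in the image):
services:
  web:
    image: nginx:latest
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:80"]
      interval: 30s
      timeout: 5s
      retries: 3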
Docker Swarm
Docker Swarm is a Docker-native clustering and orchestration tool. With Docker Swarm, you can manage a cluster of Docker nodes as a single virtual system. Docker Swarm also provides advanced features like service discovery and load balancing. Here's a simple example of deploying a service to a Docker Swarm:
Initialize a Docker Swarm
docker swarm init
Deploy a service to the swarm
docker service create --replicas 3 -p 8080:80 nginx:latest
In this example, a Docker Swarm is initialized, and then a service running the nginx:latest image is deployed to the swarm. The --replicas 3 option tells Docker to maintain three instances of this service at all times.
Please note that Docker Swarm requires multiple Docker nodes to be fully utilized, and setting up such a cluster is beyond the scope of this guide. For more information on Docker Swarm, check out the official Docker documentation.
These advanced features can significantly enhance your ability to host web applications with Docker. By mastering these features, you can create robust, scalable, and resilient web hosting setups. In the next section, we'll discuss some best practices for using Docker in a web hosting context.