docker-compose
docker-compose is a tool that lets us write up a config file where we can automate the containers' behaviour instead of using the docker cli for everything.
Make sure you've read the docker and yaml pages before jumping into this section.
Anatomy of a compose file
Let's break down the following (rather simplified) compose file:
version: "3"

services:
  client:
    build:
      context: ./client
      dockerfile: ./Dockerfile-client-dev
    command: ["webpack-dev-server"]
    ports:
      - 9000:4000
    environment:
      - NODE_ENV=development
    volumes:
      - ./app:/home/node/app/app

  server:
    build:
      context: ./server
      dockerfile: ./Dockerfile-server-dev
    depends_on:
      - db
    entrypoint: ["sh", "scripts/docker/setupDevServer.sh"]
    command: ["node", "startServer.js"]
    ports:
      - ${SERVER_PORT:-3000}:${SERVER_PORT:-3000}
    environment:
      - NODE_ENV=development
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
      - POSTGRES_DB=${POSTGRES_DB:-pg_db}
      - POSTGRES_USER=${POSTGRES_USER:-pg_user}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-pg_pass}
    volumes:
      - ./server:/home/node/app/server

  db:
    image: postgres:10-alpine
    ports:
      - 5490:5432
    environment:
      - POSTGRES_DB=${POSTGRES_DB:-pg_db}
      - POSTGRES_USER=${POSTGRES_USER:-pg_user}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-pg_pass}
docker-compose will look for environment variables in a file named .env by default. If you want your environment file to be somewhere else, you will need to source it beforehand.
The environment variable syntax ${MY_VAR:-foo} provides a default value of "foo" if MY_VAR is not defined.
docker-compose loads environment variables only to interpolate their values inside the compose file itself. If you need an environment variable to be defined inside the container, you need to say so explicitly (eg. through the environment key in your compose file).
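For example, a .env file sitting next to the compose file might look like this (a sketch using the variables referenced above; adjust the values to your setup):
# .env — picked up automatically by docker-compose from the working directory
SERVER_PORT=3000
POSTGRES_DB=pg_db
POSTGRES_USER=pg_user
POSTGRES_PASSWORD=pg_pass
These values are interpolated into the compose file, and only the ones listed under the environment key end up inside the containers.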
Set the version
version: "3"
Defines the version of the compose file syntax that should be processed. Use version 3 by default. Version 2 is also good for development, but not for production.
Services
services:
  client:
    ...
  server:
    ...
    depends_on:
      - db
    ...
  db:
    ...
Define a number of services that you want to run. In this case: a client, a server and a db. These services can be brought up and down all together or independently and will generate separate log outputs.
Notice that the "server" service depends_on the "db" service. This means that every time you start the server, the db will also be started in the background beforehand, even if you don't explicitly tell it to.
depends_on will start the required service(s), but will not necessarily wait for them to be in a particular working state. If you need to wait for something specific, you will probably need to use a wait-for-it script.
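As a sketch (assuming the image has bash available and you have copied the widely used wait-for-it.sh script into scripts/docker/), the server could wait for postgres to accept connections before handing over to its command:
entrypoint: ["bash", "scripts/docker/wait-for-it.sh", "db:5432", "--"]
command: ["node", "startServer.js"]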
Build or use an image
The build key will provide instructions on how to build a local image.
build:
  # Set the "client" folder as the context of the image (ie. which files should be copied in)
  context: ./client
  # Use the `Dockerfile-client-dev` file for build instructions
  # File path is relative to the value of `context`
  dockerfile: ./Dockerfile-client-dev
If we don't need to build an image for a service, we can use the image key to grab an existing image from Docker Hub:
image: postgres:10-alpine
Be careful with the value of context. It should be the root of the relevant code, not of all your code. For example, if you are in a monorepo and set context to ., it will copy in the whole monorepo, even though you only want to make a container out of a part of it. If your builds take a long time to start up, revisit the value of context and make sure it is correct.
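As an illustration (hypothetical monorepo layout), compare a repo-wide context with a properly scoped one:
# Anti-pattern: the whole monorepo is sent to the docker daemon as build context
# build:
#   context: .
#   dockerfile: ./client/Dockerfile-client-dev

# Better: only the client folder is sent
build:
  context: ./client
  dockerfile: ./Dockerfile-client-dev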
Optionally set an entrypoint and command
These commands will override whatever has been set as ENTRYPOINT and CMD in your Dockerfile.
entrypoint: ["sh", "scripts/docker/setupDevServer.sh"]
command: ["node", "startServer.js"]
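Docker appends the command to the entrypoint, so the container above effectively runs sh scripts/docker/setupDevServer.sh node startServer.js. A common pattern for such a setup script (a hypothetical sketch, not the actual script from this example) is to do its one-off work and then exec whatever command was passed in:
#!/bin/sh
# scripts/docker/setupDevServer.sh (hypothetical contents)
# One-off setup work, eg. installing dependencies or running migrations
yarn install
# Hand over to the command that was passed in ("node startServer.js" here)
exec "$@"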
Set the service ports
This can be confusing because people tend to read it backwards. The syntax here is <host port> -> <container port>. The line below means that whatever is running at port 4000 inside the container should be accessible at port 9000 on the host.
ports:
  - 9000:4000
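For example, with the mapping above you would reach the dev server from the host like this (assuming the service is up and you have curl installed):
# On the host: port 9000 is forwarded to port 4000 inside the container
curl http://localhost:9000
# Other services on the same compose network would use the service name and the container port instead, eg. http://client:4000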
Define the environment
These are the environment variables that will be set inside the container. You can use hardcoded values (eg. "development") or variables loaded from your .env file (eg. POSTGRES_USER).
environment:
  - NODE_ENV=development
  - POSTGRES_HOST=db
  - POSTGRES_PORT=5432
  - POSTGRES_DB=${POSTGRES_DB:-pg_db}
  - POSTGRES_USER=${POSTGRES_USER:-pg_user}
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-pg_pass}
docker-compose will create a network and bring all the services up inside it. This means that each service can reach the others on the network by name. That is why we can just set POSTGRES_HOST to the value of "db" in the example above. Note that, given the compose file we've set up, the server will find postgres running at port 5432 inside the db service, and not at port 5490, which is the port at which you can find it on the host.
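To make this concrete, here is a sketch of connecting to postgres from both sides, assuming the default credentials above and that psql is installed on your host:
# From the host, go through the published port
psql -h localhost -p 5490 -U pg_user pg_db

# From inside the compose network, use the service name and the internal port
docker-compose exec db psql -h db -p 5432 -U pg_user pg_db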
When creating a development compose file, provide defaults for all your variables. This makes it really easy for someone to jump in, run docker-compose up and have a working app without worrying about variables unless they need to change a value. The opposite is true for production: do not use any defaults, so that values are not implicitly applied without the sysadmin knowing about it.
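If you want compose to fail loudly when a value is missing in production, the variable substitution syntax also supports ${VAR:?error message}, which aborts with the given message instead of falling back to a default. A sketch:
environment:
  # No fallback here: docker-compose exits with this message if POSTGRES_PASSWORD is unset
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set}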
Mount a volume
You can define a list of folders to mount as a volume inside the container. As with ports, the syntax is <host path> -> <container path>. In the example below, we are mounting the contents of the app folder (path relative to the compose file) to the /home/node/app/app folder inside the container.
volumes:
  - ./app:/home/node/app/app
Keep in mind that the contents of that folder within the container will be overridden by whatever you mounted in it. This can be a very useful feature in development, as you can let changes in your local folder be reflected in the container automatically (thus triggering hot reload etc).
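A related sketch: as noted further down, docker-compose down deletes the containers along with any data inside them. If you want something like the db data to survive that, you could mount a named volume instead of a host folder, for example:
services:
  db:
    image: postgres:10-alpine
    volumes:
      # Named volume: data survives docker-compose down (unless you pass -v)
      - db_data:/var/lib/postgresql/data

volumes:
  db_data: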
Working with the docker-compose cli
docker-compose will look for the docker-compose.yml file in your current working directory and run that.
All these commands assume that you are in the root of your repo.
Run all services
docker-compose up
Run a specific service
It is worth noting that if the service has dependencies (via the depends_on key in the compose file), those services will be started as well.
docker-compose up <service-name>
# eg. docker-compose up client
Run one or more services in the background
Useful if you don't want to have multiple terminals open. You can always find the generated output with logs (see further down).
# start all services in the background
docker-compose up -d
# start the client only in the background
docker-compose up -d client
Stop and delete all containers
Emphasis on the fact that the container(s) will be deleted. For example, if your container is a database, this command will delete the database (along with any data).
docker-compose down
Stop containers without deleting them
# stop them all
docker-compose stop
# stop only server & client
docker-compose stop server client
Start containers
The main difference between start and up is that up will also build and create the containers if they do not exist yet, while start only starts existing containers.
# start all
docker-compose start
# start client
docker-compose start client
Build containers
Builds, but does not run containers.
# build all
docker-compose build
# force docker to not use any layers from cache (re-build everything from scratch)
docker-compose build --no-cache
# build client
docker-compose build client
View generated logs
# see generated logs of the "server" service
docker-compose logs server
# see and follow logs of the "server" service
docker-compose logs -f server
Enter a running container
This will open a shell inside the Linux container of your service. We usually make sure development containers have vim and ranger installed so that you can look around and edit things as needed.
# Run bash inside the "server" service
docker-compose exec server bash
# Same as above, but enter the container as the root user
docker-compose exec -u root server bash
Enter a container that is not running
Same as the exec scenario, but for a container that is not running; run will spin up a new container for the service.
# Run bash inside the "server" service
docker-compose run server bash
Execute commands inside a container
bash in the examples above is just one example of a command. You can run any command you need.
# Run the tests inside the "server" service
docker-compose run server yarn jest --watch
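Note that run (unlike exec) spins up a new, one-off container each time. Adding --rm deletes that container again once the command exits, so they don't pile up:
# Run the tests in a throwaway container that is removed afterwards
docker-compose run --rm server yarn jest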
Run docker-compose with a different compose file
docker-compose -f docker-compose.production.yml up
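You can also pass -f more than once; docker-compose merges the files, with later files overriding values from earlier ones. This is handy for keeping a shared base file plus environment-specific overrides (the file names here are just an example):
docker-compose -f docker-compose.yml -f docker-compose.production.yml up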
docker-compose in production
It is generally not recommended to run production deployments with docker-compose.
There are other tools (eg. Kubernetes, Docker Swarm and more) that are better suited for this job and provide features such as orchestrating and scaling containers.
It can be quite useful however to have production-ready compose files for a few reasons:
- As a developer, you want to easily check that a production build is functional
- They can be useful in basic deployments where scaling and performance are typically not an issue (eg. setting up a demo site)
- There are orchestration tools that can read docker-compose files
- At the very least, they can be seen as a manual / documentation of the steps that need to be done to get up and running in production
Resources
Compose file docs
https://docs.docker.com/compose/compose-file/compose-file-v3/
Compose file variable substitution
https://docs.docker.com/compose/compose-file/compose-file-v3/#variable-substitution
Compose cli docs
https://docs.docker.com/compose/reference/