Build Docker Containers Inside Docker: A Quick Guide

Why run Docker in Docker?

The first question running through your head is probably “Why would anyone want to do this?” However, running Docker within a container is actually quite a common use case.

Many applications have a fairly complicated build environment. They could require very specific versions of libraries, or a binary blob from a driver vendor to be included. In this case, it makes a lot of sense to containerise your build environment. You can then push the container out to a build server with lots of CPU to churn out nightlies, or simply make it easy for someone else to build your application. For example, to compile a Go application without having any of its dependencies installed, you could use the official golang image.
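
As a quick illustration, the invocation below is roughly how the golang image is used to build the project in your current directory (the directory layout and Go version here are placeholders; adjust them for your project):

docker run --rm \
  -v "$PWD":/usr/src/myapp \
  -w /usr/src/myapp \
  golang:1.4 go build -v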

In a continuous delivery environment, your build process itself might generate a container as an artifact. This means you will need to have some way of running Docker from within Docker!

The Simple Demo

The easiest way to give a container access to Docker is to mount the docker binary and the Docker socket from the host inside the container, like so:

docker run --rm -it --name dockerception \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  debian:7

# from inside the container's shell
docker ps -q

# example output: 809f283ee12a

Note: This will only work if your host’s docker binary is statically linked. Otherwise, you’ll have to mount every library listed in the output of ldd /usr/bin/docker inside the container as well.
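
A quick way to check which case applies is to run ldd against the binary (the exact wording varies by system):

ldd /usr/bin/docker
# static binary:  "not a dynamic executable"
# dynamic binary: a list of shared libraries, each of which would
#                 need to be mounted into the container too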

Now you can run your build process and create a container image with artifacts from the build, without needing to ship the whole build environment! However, this method has some drawbacks. Mounting the Docker socket inside your container is equivalent to giving the container root access to your host; depending on how much you trust the code you’re building, you may not want to do that. The other disadvantage is the static binary issue: relying on a certain host configuration means your container isn’t truly portable.
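
For instance, a build step run from inside the container might look something like this (the context path and image tag are hypothetical). Because the socket belongs to the host, the resulting image lands on the host’s daemon:

# from inside the container: build an image via the host's daemon
docker build -t myapp:nightly /src/myapp

# the image is now visible on the host
docker images myapp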

The Complicated Demo

Since Docker version 0.6, you have been able to create “privileged” containers, which have access to the host’s devices and the ability to manage cgroups. This means you can run a fully functional Docker daemon within a container. Let’s try it out using jpetazzo’s docker-in-docker image:

docker run --rm --privileged -it --name dockerception jpetazzo/dind

Example output:

INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] +job init_networkdriver()
INFO[0000] -job init_networkdriver() = OK (0)
INFO[0000] WARNING: Your kernel does not support cgroup swap limit.
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.5.0 a8a31ef; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0001] GET /v1.17/info
INFO[0001] +job info()
INFO[0001] +job subscribers_count()
INFO[0001] -job subscribers_count() = OK (0)
INFO[0001] +job registry_config()
INFO[0001] -job registry_config() = OK (0)
INFO[0001] -job info() = OK (0)
root@76293d728294:/# docker ps -q
INFO[0065] GET /v1.17/containers/json
INFO[0065] +job containers()
INFO[0065] -job containers() = OK (0)
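
This inner daemon is completely separate from the one on the host: it has its own image cache and its own containers. From the shell inside the dind container you could, for instance, run something like the following (busybox chosen purely as a small test image):

# from inside the dind container: pull and run an image on the inner daemon
docker run --rm busybox echo "hello from the inner daemon"

# lists images known to the inner daemon only; the host is unaffected
docker images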

Note: this does not solve the security issue, as a privileged container should be considered to have full access to the host. For more information, see the Docker security documentation.

Final Thoughts

If you’re setting up continuous delivery and require containers, I encourage you to try this pattern to create fully disposable and portable build environments!