Docker in Docker cannot mount volume

A Docker container inside a Docker container uses the parent HOST's Docker daemon, and hence any volumes mounted in the "docker-in-docker" case are still resolved relative to the HOST, not the container.

Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of that, Docker creates a new, empty directory at that path on the HOST, and that empty directory is what the "docker-in-docker" container sees. The same thing applies when a directory is mounted into a new Docker container from inside a container.

A very basic and obvious thing which I missed, but realized as soon as I typed the question.
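A quick way to confirm this behavior (hypothetical paths; assumes the official docker:cli image):

# On the HOST: create a marker file, then start an outer container that
# launches an inner container with the same -v path.
mkdir -p /tmp/dind-test && touch /tmp/dind-test/from-host

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
    docker run --rm -v /tmp/dind-test:/data alpine ls /data
# Prints "from-host": /tmp/dind-test was resolved on the HOST, because the
# inner "docker run" talks to the HOST's daemon through the shared socket.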


Another way to go about this is to use either named volumes or data volume containers. This way, the inner container doesn't have to know anything about the host, and both the Jenkins container and the build container reference the data volume the same way.
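For example, a minimal sketch with a named volume (image names are placeholders):

# The volume lives on the host but is referenced by name, so the same -v
# flag works whether "docker run" is issued from H or from inside D.
docker volume create jenkins-workspace

docker run -v jenkins-workspace:/workspace \
    -v /var/run/docker.sock:/var/run/docker.sock jenkins-image

docker run -v jenkins-workspace:/workspace build-image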

I have tried doing something similar to what you are doing, except with an agent rather than using the Jenkins master. The problem was the same in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was the data volume container approach, and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is that both containers reference the data volume in the same way. Mounting directories into an inner container would be tricky, as the inner container would now need to know something about the host that its parent container is running on.
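A rough sketch of the data volume container approach (names are made up):

# A container whose only job is to own the volume; it never needs to run.
docker create -v /workspace --name workspace-data alpine /bin/true

# Both the agent and the inner build container attach the same volume,
# with no host paths involved.
docker run --volumes-from workspace-data \
    -v /var/run/docker.sock:/var/run/docker.sock jenkins-agent-image

docker run --volumes-from workspace-data build-image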

I have a detailed blog post about my approach here:

http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/

As well as code here:

https://github.com/damnhandy/jenkins-pipeline-docker

In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.


Lots of good info in these posts, but I find none of them are very clear about which container they are referring to. So let's label the 3 environments:

  • host: H
  • docker container running on H: D
  • docker container running in D: D2

We all know how to mount a folder from H into D: start D with

docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...

The challenge is: you want path-on-H to be available in D2 as path-on-D2.

But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with

docker run ... -v <path-on-D>:<path-on-D2> ...

When you share the docker socket on H with D, running docker commands in D is essentially running them on H. Indeed, if you start D2 like this, it all works (quite unexpectedly at first, but it makes sense when you think about it):

docker run ... -v <path-on-H>:<path-on-D2> ...
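To make that concrete, a hypothetical example (all paths and image names are placeholders):

# On H: mount the project and the docker socket into D
docker run -it -v /home/me/project:/workspace \
    -v /var/run/docker.sock:/var/run/docker.sock my-ci-image bash

# Inside D: to see the same files in D2, mount the HOST path, not /workspace
docker run --rm -v /home/me/project:/src alpine ls /src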

The next tricky bit is that, for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so that it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such a variable clearer, I start its name with DIND_. Then from H, start D like this:

docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
    --env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...

and from D start D2 like this:

docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
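An alternative sketch, if you would rather not pass path-on-H in explicitly: since D can talk to the daemon on H, it can inspect its own bind mounts and recover the HOST paths (this assumes D's hostname is its container ID, which is Docker's default):

docker inspect -f \
    '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' \
    "$(hostname)"
# Prints each mount as <path-on-H> -> <path-on-D>, which D can use to
# translate its own paths back to HOST paths before starting D2.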