How to cache the local Maven repository using Docker with Pipelines?

I found a workaround; see Local settings.xml not picked up by Jenkins agent:

The issue is related to the -u uid:gid that Jenkins uses to run the container. As you may know, the image you are running only has the root user created, so when Jenkins passes its own uid and gid, there is no entry for that user and consequently no $HOME defined for it.

If you only want to run the build independently of the user, you can use the following as the agent:

agent {
    docker {
        image 'maven:3-alpine'
        args '-v $HOME/.m2:/root/.m2:z -u root'
        reuseNode true
    }
}

A few notes:

  1. Notice the z flag on the volume: since I am going to build as root, I need to tell Docker that this volume will be shared among other containers, which prevents "access denied" errors from my Jenkins container (running as the jenkins user, not root).
  2. I tell Jenkins to reuseNode, so any other stage using the same image will execute in the same container (this just speeds up provisioning time).

Log

[DEBUG] Reading global settings from /usr/share/maven/conf/settings.xml
[DEBUG] Reading user settings from /root/.m2/settings.xml
[DEBUG] Reading global toolchains from /usr/share/maven/conf/toolchains.xml
[DEBUG] Reading user toolchains from /root/.m2/toolchains.xml
[DEBUG] Using local repository at /root/.m2/repository
[DEBUG] Using manager EnhancedLocalRepositoryManager with priority 10.0 for /root/.m2/repository

Unfortunately, the files in the local repository /home/jenkins/.m2 are now owned by the root user instead of the jenkins user. That could cause other problems.
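
One way to mitigate that (my own sketch, not part of the answer above) is to hand ownership of the cache back to whoever owns the workspace before the container exits, for example in a post block:

post {
    always {
        // Assumption: the workspace is owned by the host's jenkins user, so copying its
        // uid:gid onto the mounted repository restores access for later non-root builds.
        sh 'chown -R "$(stat -c %u:%g .)" /root/.m2/repository || true'
    }
}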


You can see my related answer, which uses a Gradle configuration instead.

As you said, in my base image Jenkins runs the Docker container as user 1002, and that user is not defined in the image. You have to configure the Maven property user.home so that the dependencies are put there. You can do that by passing user.home through JAVA_TOOL_OPTIONS as an environment variable in your pipeline. MAVEN_CONFIG should also be set (see the sketch after the volume configuration below):

environment {
  JAVA_TOOL_OPTIONS = '-Duser.home=/var/maven'
  SETTINGS = credentials('your-secret-file')
}

and create a volume to cache the dependencies:

docker {
    image 'maven:3.3.9-jdk-8-alpine'
    args '-v $HOME:/var/maven'
    reuseNode true
}
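
MAVEN_CONFIG is mentioned above but not shown; a minimal sketch of the environment block with it added, assuming the /var/maven mount from the docker block (the /var/maven/.m2 value follows the convention documented for the official maven image):

environment {
  JAVA_TOOL_OPTIONS = '-Duser.home=/var/maven'
  // Assumption: the host home is mounted at /var/maven, so its .m2 ends up here
  MAVEN_CONFIG = '/var/maven/.m2'
  SETTINGS = credentials('your-secret-file')
}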

UPDATE: I forgot to mention that you can put your settings.xml in a secret file, following a 'least exposure' principle to limit credential exposure in the Jenkins pipeline. We also configure personal credentials this way, for example per-user Nexus credentials. Check the Jenkins documentation on how to upload your secret file to your credentials:

sh 'mvn -s $SETTINGS -B clean verify'

UPDATE 2: I'm not using a declarative pipeline, so my pipeline looks like this:

withCredentials([file(credentialsId: 'settings-xml', variable: 'SETTINGS')]) {
    stage('Deploy') {
        gitlabCommitStatus(name: 'Deploy') {
            // Upload the Snapshot artefact
            sh "mvn -s $SETTINGS clean verify"
        }
    }
}

It seems this can also be used in declarative pipelines, but I did not test it myself.
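
For reference, a declarative sketch of the same idea, reusing the settings-xml secret-file credential from the scripted example (untested here, as noted above):

pipeline {
    agent any
    environment {
        // Assumption: the same secret-file credential as in the scripted pipeline
        SETTINGS = credentials('settings-xml')
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'mvn -B -s "$SETTINGS" clean verify'
            }
        }
    }
}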


Getting a Jenkins pipeline to use Docker containers for the Jenkins Agents, and for the builds to share a Maven local repository, is tricky because there are two problems to solve: sharing the local repository files, and ensuring the files have usable permissions.

I created a Docker Volume to hold the shared files:

docker volume create maven-cache

Then I told Jenkins to mount that Docker Volume in a suitable location for each Agent, by having it pass a --mount option to its docker run command. That makes the Docker Volume available... but owned by root, rather than by the jenkins user running the Agent.
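
You can confirm the ownership problem from the Docker host before involving Jenkins at all (the _data path below is where Docker keeps named volumes on a default Linux install):

# Ask Docker where the volume lives, then check who owns it
docker volume inspect --format '{{ .Mountpoint }}' maven-cache
ls -ld /var/lib/docker/volumes/maven-cache/_data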

A complication in fixing that permissions problem is that Jenkins will docker run your image using the Jenkins UID, and you cannot know in advance what that UID will be. As I've noted elsewhere, you can work around that with some shell-script magic and RUN commands that set up the jenkins user name (and docker group name, if necessary) in your Agent image.

You can fix the permissions problem by adding sudo to your Docker image, and configuring the image to allow the jenkins user to run sudo commands without a password. Then an early Jenkins pipeline step can use sudo to create a suitable directory to hold the local repository, within the shared mount, and change the owner of that directory to be jenkins.

Finally, you can set up a Maven settings file for use by the Jenkins Agent, which tells Maven to use the shared local repository.

My Jenkinsfile is like this:

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkinsAgent'
            additionalBuildArgs  '--build-arg JENKINSUID=`id -u jenkins` --build-arg JENKINSGID=`id -g jenkins` --build-arg DOCKERGID=`stat -c %g /var/run/docker.sock`'
            args '-v /var/run/docker.sock:/var/run/docker.sock --mount type=volume,source=maven-cache,destination=/var/cache/maven -u jenkins:docker'
        }
    }
    stages {
...
        stage('Prepare') {
            steps {
                sh '[ -d /var/cache/maven/jenkins ] || sudo -n mkdir /var/cache/maven/jenkins'
                sh 'sudo -n chown jenkins /var/cache/maven/jenkins'
...
                sh 'mvn -B -s maven-jenkins-settings.xml clean'
            }
        }

And later steps using Maven also say mvn -B -s maven-jenkins-settings.xml ....
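
For example, a later build stage in the same Jenkinsfile might look like this (the stage name and goal are illustrative, not taken from my actual pipeline):

        stage('Build') {
            steps {
                // The shared local repository is picked up via the settings file
                sh 'mvn -B -s maven-jenkins-settings.xml verify'
            }
        }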

My Dockerfile.jenkinsAgent is like this:

FROM debian:stretch-backports
ARG JENKINSUID
ARG JENKINSGID
ARG DOCKERGID

# Add Docker CE
RUN apt-get -y update && \
 apt-get -y install \
   apt-transport-https \
   ca-certificates \
   curl \
   gnupg \
   lsb-release \
   software-properties-common

RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

RUN apt-get -y update && \
 apt-get -y install \
   docker-ce \
   docker-ce-cli \
   containerd.io

# Add the build and test tools and libraries
RUN apt-get -y install \
   ... \
   maven \
   sudo \
   ...

# Set up the named users and groups
# Installing docker-ce will already have added a "docker" group,
# but perhaps with the wrong ID.
RUN groupadd -g ${JENKINSGID} jenkins
RUN groupmod -g ${DOCKERGID} docker
RUN useradd -c "Jenkins user" -g ${JENKINSGID} -G ${DOCKERGID} -M -N -u ${JENKINSUID} jenkins
# Allow the build agent to run root commands if it *really* wants to:
RUN echo "jenkins ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers

(If your Jenkins pipeline does not itself run Docker commands, you could remove the RUN commands that install Docker, but you would then have to groupadd the docker group rather than groupmod it.)
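
In that case the group setup would reduce to a single line, reusing the same DOCKERGID build argument:

RUN groupadd -g ${DOCKERGID} docker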

And the Maven settings file for the Jenkins Agent (maven-jenkins-settings.xml) is like this:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                          https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <localRepository>/var/cache/maven/jenkins</localRepository>
    <interactiveMode>false</interactiveMode>
</settings>