Adding large files to Docker during build

These files might change once in a while, and I don't mind rebuilding my container and re-deploying it when that happens.

That makes source control a poor fit for such artifacts.

A binary artifact repository like Nexus or Artifactory (both of which have free editions and their own Docker images if you need one) is better suited to this task.

From there, your Dockerfile can fetch your large file(s) from Nexus/Artifactory.
The sketch below shows one way to handle caching and cache invalidation.
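A minimal Dockerfile sketch, assuming a Nexus raw repository; the base URL, repository path, and artifact name are hypothetical placeholders, and the ARG trick is one common way to control cache invalidation:

    FROM ubuntu:22.04

    # Hypothetical Nexus raw-repository URL and artifact version; adjust to your setup.
    ARG NEXUS_BASE=https://nexus.example.com/repository/raw-files
    ARG ARTIFACT_VERSION=1.0.0

    # Because ARTIFACT_VERSION is part of this instruction, changing it at build
    # time invalidates this layer's cache and forces a fresh download; leaving it
    # unchanged lets the build reuse the cached layer (subject to the builder's
    # own URL-freshness checks).
    ADD ${NEXUS_BASE}/big-file-${ARTIFACT_VERSION}.tar.xz /opt/data/

Rebuilding with docker build --build-arg ARTIFACT_VERSION=1.0.1 . then pulls the new version while all earlier layers stay cached.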


I feel that I must be misreading your question, because the answer seems blindingly obvious to me, yet none of the other respondents mention it. So please forgive me if I am vastly misinterpreting your problem.

If your service needs large files when running and they change from time to time, then

  • do not include them in the image; instead,
  • mount them as volumes, as sketched below.
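A minimal sketch, assuming the large files live in a hypothetical host directory /srv/bigfiles and the service image is called myservice:

    # Mount the host directory read-only into the container. When the files
    # change, update them on the host and restart the container; no image
    # rebuild is needed.
    docker run -d -v /srv/bigfiles:/opt/data:ro myservice:latest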

It actually comes down to how you build your container. For example, we build our containers using Jenkins and the fabric8.io plugin as part of the Maven build, and we use ADD with a remote source URL (Nexus).

In general, you can use a URL as the source, so it depends on which storage you have access to:

  1. You can create an S3 bucket and grant read access to your Docker builder node, then put ADD http://example.com/big.tar.xz /usr/src/things/ in your Dockerfile, as in the sketch after this list.

  2. You can upload the large files to an artifact repository (such as Nexus or Artifactory) and reference them in ADD.

  3. If you are building with Jenkins, create a folder on the same host, configure the web server to serve that content with a virtualhost config, and then use that URL.
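A sketch of option 1, reusing the example URL from above (with S3, this would be the object's HTTPS URL, and the builder node needs read access):

    FROM ubuntu:22.04

    # ADD can fetch from any URL the build node can reach. Note that, unlike a
    # local tar source, a remote archive is NOT auto-extracted; unpack it in a
    # RUN step if needed.
    ADD http://example.com/big.tar.xz /usr/src/things/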

The optimal solution is whichever is cheapest in terms of effort and cost without compromising security.