How to handle new version deployment for a Docker webpack application that uses code splitting?

A simple solution is to DISABLE caching of index.html:

Cache-Control: no-store
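For example, if the build is served by nginx inside the container (a common setup, used here purely as an illustration; the paths are assumptions), you can mark index.html as non-cacheable while keeping the content-hashed chunks cacheable for a long time:

location = /index.html {
    # always refetch the entry point so clients learn about new chunk names
    add_header Cache-Control "no-store";
}

location /assets/ {
    # content-hashed chunks never change under the same name, so cache them aggressively
    add_header Cache-Control "public, max-age=31536000, immutable";
}

This way a new deployment only needs index.html to be refetched; every chunk it references is requested under its new hashed name.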

If I understood the problem correctly, there are several approaches, which I will list from the simplest to the more complicated:

Use previous version to build new version from

This is by far the simplest approach; it only requires changing the base image for your new version.

Consider the following Dockerfile to build version2 of the application:

FROM version1

RUN ...

Then build it with:

docker build -t version2 .
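The elided steps would typically just add the new build output on top of the old one. A minimal sketch, assuming the webpack output lands in dist/ and the image serves static files from /app (both hypothetical paths):

FROM version1

# add the newly built chunks next to the old ones; nothing from version1 is removed
COPY dist/ /app/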

This approach, however, has a problem: all old chunks keep accumulating in newer images. That may or may not be desirable, but it is something to take into consideration.

Another problem is that you can't update your base image easily.

Use multistage builds

Multi-stage builds allow you to run multiple build stages and include the results from each stage in your final image. Each stage may use a different Docker image with different tools, e.g. GCC to compile some native library, even though you don't really need GCC in your final image.

To make this work with multi-stage builds, you first need to be able to create the very first image. Consider the following Dockerfile, which does exactly that:

FROM alpine

RUN mkdir -p /app/latest && touch /app/latest/$(cat /proc/sys/kernel/random/uuid).chunk.js

It creates a new Docker image with a new chunk with a random name and puts it into a directory named latest - this is important for the proposed approach!

In order to create subsequent versions, we would need a Dockerfile.next that takes the previous version's tag as a build argument and looks like this:

ARG PREVIOUS_VERSION
FROM image:${PREVIOUS_VERSION} AS previous
RUN rm -rf /app/previous && mv /app/latest/ /app/previous

FROM alpine

COPY --from=previous /app /app
RUN mkdir -p /app/latest && touch /app/latest/$(cat /proc/sys/kernel/random/uuid).chunk.js

In the first stage, it rotates versions by removing the previous version and moving latest into previous.

In the second stage, it copies all the versions left over from the first stage, creates a new version, and puts it into latest.

Here's how to use it:

docker build -t image:1 -f Dockerfile .

>> /app/latest/99cfc0e6-3773-40a0-82d4-8c8643cc243b.chunk.js

docker build -t image:2 --build-arg PREVIOUS_VERSION=1 -f Dockerfile.next .

>> /app/previous/99cfc0e6-3773-40a0-82d4-8c8643cc243b.chunk.js
>> /app/latest/2adf34c3-c50c-446b-9e85-29fb32011463.chunk.js

docker build -t image:3 --build-arg PREVIOUS_VERSION=2 -f Dockerfile.next .

>> /app/previous/2adf34c3-c50c-446b-9e85-29fb32011463.chunk.js
>> /app/latest/2e1f8aea-36bb-4b9a-ba48-db88c175cd6b.chunk.js

docker build -t image:4 --build-arg PREVIOUS_VERSION=3 -f Dockerfile.next .

>> /app/previous/2e1f8aea-36bb-4b9a-ba48-db88c175cd6b.chunk.js
>> /app/latest/851dbbf2-1126-4a44-a734-d5e20ce05d86.chunk.js

Note how chunks are moved from latest to previous.

This solution requires your server to be able to discover static files in different directories, which might complicate local development, though this logic can be made conditional on the environment.

Alternatively, you could copy all files into a single directory when the container starts. This can be done in an ENTRYPOINT script in Docker itself or in your server code - it's completely up to you and depends on what is more convenient.
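A minimal sketch of such an ENTRYPOINT script, assuming the image keeps /app/previous and /app/latest as in the example above and the server reads from a single /app/serve directory (a hypothetical path):

#!/bin/sh
# docker-entrypoint.sh (hypothetical name): flatten all kept versions into one directory
set -e
mkdir -p /app/serve
# copy oldest first so files from the latest version win on name collisions
for dir in /app/previous /app/latest; do
    if [ -d "$dir" ]; then
        cp -R "$dir"/. /app/serve/
    fi
done
# hand over to the server process passed as CMD
exec "$@"

In the Dockerfile you would then point ENTRYPOINT at this script and keep your usual server command as CMD.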

Also, this example only keeps one version back, but it can be scaled to multiple versions with a more complicated rotation script. For example, to keep the last 3 versions you could do something like this:

RUN rm -rf /app/version-0; \
    [ -d /app/version-1 ] && mv /app/version-1 /app/version-0; \
    [ -d /app/version-2 ] && mv /app/version-2 /app/version-1; \
    mv /app/latest /app/version-2; 

Or it can be parameterized using a Docker ARG with the number of versions to keep.
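A sketch of what that could look like, replacing the rotation line in the first stage (VERSIONS_TO_KEEP is a made-up ARG name; the layout matches the version-0..version-2 plus latest scheme above):

# rotate: drop the oldest slot, shift the rest down, then move latest into the newest slot
ARG VERSIONS_TO_KEEP=3
RUN rm -rf /app/version-0; \
    i=1; \
    while [ "$i" -lt "$VERSIONS_TO_KEEP" ]; do \
        [ -d "/app/version-$i" ] && mv "/app/version-$i" "/app/version-$((i - 1))"; \
        i=$((i + 1)); \
    done; \
    mv /app/latest "/app/version-$((VERSIONS_TO_KEEP - 1))"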

You can read more about multi-stage builds in the official documentation.


An approach we've used in production is to have two different environments serving your .js assets. First, we have the bleeding-edge one: it only knows about the most recently built version. All requests are directed towards this environment.

When a request hits e.g. the assets folder and no .js file can be found, we issue a redirect to the "rescue" environment. This is a simple AWS CloudFront distribution backed by an AWS S3 bucket. On every build of the bleeding-edge environment, we push all new assets to that S3 bucket.
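The push itself can be as simple as syncing the build output from CI (the bucket name and paths here are made up):

aws s3 sync ./build/assets s3://example-assets-bucket/assets \
    --cache-control "public, max-age=31536000, immutable"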

If a user is on the most recent version of the application, they'll comfortably be using the bleeding edge only. Once the application updates server-side, or if the user was not on the latest version, the assets are served through the "backup domain". As the bleeding edge issues a redirect instead of serving a 404, the user does not experience a problem here (apart from having to redo the request to a different location).

This setup ensures that even very old clients can continue to function. We have seen cases where Googlebot still requested assets from over 1000 deployments ago!

Big, big downside: pruning the S3 bucket is much more work. As storage is relatively cheap, for now we just keep the assets there. Since we add a chunk identifier to the file name, unchanged chunks keep the same name across deploys, so storage usage does not increase that much.

Something to consider is the implementation of the redirect. You'll want your application to be agnostic about how the redirect URL is constructed. We have done it in the following way (a configuration sketch follows the list):

  1. Request comes in to https://example.com/assets/asset-that-is-no-longer-available.js.
  2. The server detects the request was aimed at the assets directory, but the file is not there.
  3. The server replaces the hostname in the request URL with assets.example.com and redirects to that location.
  4. The browser's asset request is redirected to https://assets.example.com/assets/asset-that-is-no-longer-available.js, which is available there.
  5. The application continues normally.
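A sketch of steps 2 and 3 as an nginx fallback, assuming nginx serves the container's static files and assets.example.com is the CloudFront distribution described above:

location /assets/ {
    # serve the chunk if this deployment still ships it, otherwise fall through
    try_files $uri @stale_assets;
}

location @stale_assets {
    # same path, different host: the CloudFront + S3 "rescue" environment
    return 302 https://assets.example.com$request_uri;
}

The same logic can live in application code instead; the important part is that only the hostname changes, so the redirect needs no knowledge of previous deployments.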

This keeps your main Docker image free of very infrequently accessed files and keeps your internal deploys fast. It also removes the requirement that your CI be able to access the code of every single previously completed deployment.

We have been using this approach in a setup that also uses Docker for deployments, and have not seen issues with it for any client.