Should I split up an application into multiple, linked Docker containers or combine them into one?

Docker themselves make this clear: You're expected to run a single process per container.

But their tools for dealing with linked containers leave much to be desired. They do offer docker-compose (formerly known as fig), but my developers report that it is finicky and occasionally loses track of linked containers. It also doesn't scale beyond a single host, so it is really only suitable for small projects and local development.
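For reference, a linked-container setup in docker-compose looks roughly like this (the service names, images, and ports are illustrative, not from the question):

    version: "2"
    services:
      web:
        image: example/web:latest    # hypothetical application image
        ports:
          - "8000:8000"
        links:
          - db                       # compose starts db first and links it to web
      db:
        image: postgres:9.5
        environment:
          POSTGRES_PASSWORD: example

One file, one "docker-compose up", and both containers come up together on a single host.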

Right now I think the best available solution is Kubernetes, a project that originated at Google. Kubernetes is also the basis of the latest version of OpenShift Origin (a PaaS platform) and of Google Container Engine, and probably other things by now. If you're building against Kubernetes, you'll be able to deploy to such platforms easily.


Docker does make it clear that they believe one process per container is the "right" way, but they never really give a justification. My answer is: it depends. In this particular case, I would break the application up and manage the pieces with Kubernetes or OpenShift, because doing so is trivial and it lets you scale each piece of your application independently.
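As a rough sketch of what that buys you, assuming the application splits into a hypothetical web tier and a background worker, each piece becomes its own Kubernetes Deployment with its own replica count (names, images, and counts are all illustrative):

    apiVersion: extensions/v1beta1   # the Deployment API group as of Kubernetes 1.2
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 5                    # scale the web tier on its own...
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:latest
            ports:
            - containerPort: 8000
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: worker
    spec:
      replicas: 2                    # ...and the worker at its own rate
      template:
        metadata:
          labels:
            app: worker
        spec:
          containers:
          - name: worker
            image: example/worker:latest

If the two were baked into a single container, you could only scale them in lockstep.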

I wouldn't say it's a rule that you HAVE to split your application up, though. A running container is essentially the clone() system call plus cgroups and SELinux, which means you can absolutely run more than one process per container. Docker, LXC, homegrown, it really doesn't matter once they are running. LXC even encourages multiple processes per container, so I would argue "one process per container" is philosophy, not engineering.
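To make that concrete, here is a minimal sketch of one container deliberately running two processes, a background heartbeat loop alongside nginx (the image and command are purely illustrative):

    version: "2"
    services:
      allinone:
        image: nginx:1.9
        # the shell backgrounds the heartbeat loop, then execs nginx in the foreground
        command: ["/bin/sh", "-c",
                  "while true; do date >> /tmp/heartbeat; sleep 60; done & exec nginx -g 'daemon off;'"]

The kernel doesn't care; whether this is a good idea for your application is a design question, not a technical limit.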

http://rhelblog.redhat.com/2016/03/16/container-tidbits-when-should-i-break-my-application-into-multiple-containers/