Docker: Run a Cron Job for a Different Container

I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. This comes closest to your solution 1). Example:

version: '2'

services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily

'Classic', cron-like configuration is also possible.
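For illustration, such a definition might look like the following. This is only a sketch: deck-chores defines its own field format for cron rules, so the label value below is an assumption and the exact syntax should be checked against the docs.

labels:
  # assumed value format; consult the deck-chores docs for the exact cron field order
  deck-chores.dump.cron: "0 3 * * *"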

Here are the docs, and here's the image repository.


Cron itself can be installed and run in the foreground (cron -f), which makes it easy to run in a container. To access other containers, you'd typically install the Docker CLI in the same container (as a client only, not to run the daemon). Then, to access the host's Docker environment, the most common solution is to bind mount the Docker socket (-v /var/run/docker.sock:/var/run/docker.sock). The only gotcha is that you need to set up the docker gid inside your container to match the host's gid, and then add the users inside the container to the docker group.
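A minimal sketch of that approach, based on the Alpine docker:cli image (which already contains the Docker CLI and BusyBox's crond); the container and job names are illustrative:

# Dockerfile
FROM docker:cli
# BusyBox crond reads root's jobs from /etc/crontabs/root
COPY crontab /etc/crontabs/root
CMD ["crond", "-f"]

# crontab: nightly dump inside the mysql container at 03:00
0 3 * * * docker exec mysql sh -c 'mysqldump --all-databases > /dumps/dump-$(date -Idate)'

# build, then run with the host's docker socket bind mounted
docker build -t cron-runner .
docker run -d --name cron-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    cron-runner

Running crond as root here sidesteps the gid matching; for a non-root cron user you'd create a docker group with the host's gid as described above.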

Either way, anyone with access to the socket has the same access as any docker user on the host, i.e. root-level access, so you need to either fully trust the users submitting jobs or limit the commands they can run with some kind of sudo equivalent. The other downside is that this is less portable, and security-aware admins will be unlikely to approve running your containers on their systems.

The fallback to option B is very easy with a tool like supervisord. While less than the ideal "one process per container", it's not quite an anti-pattern either, since it keeps your entire container and its dependencies together and removes any security risks to the host.
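A minimal supervisord.conf sketch for that layout (the program commands are illustrative and depend on your application):

[supervisord]
nodaemon=true

[program:app]
; hypothetical main application process
command=php-fpm -F
autorestart=true

[program:cron]
command=cron -f
autorestart=true

The container's CMD then becomes supervisord itself, e.g. supervisord -c /etc/supervisor/supervisord.conf.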

Whether you go with the first or second option comes down to your environment: who's submitting the jobs, how many containers need jobs run against them, etc. If an admin is submitting jobs against lots of containers, then a cron container makes sense. But if you're an application developer who needs to ship a scheduled job with your app as a package, go for the second option.


Run cron in another container, or even on the host, but run the script via php-fpm (e.g. the cron job would "curl" the PHP script).
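The cron side can then be a plain crontab entry; the URL, schedule, and token header below are illustrative:

# shared secret for the PHP script to validate
CRON_TOKEN=change-me
# trigger the script every 15 minutes
*/15 * * * * curl -fsS -H "X-Cron-Token: $CRON_TOKEN" http://app/cron.php >/dev/null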

Make sure you secure such a setup with a security token, network limitations, etc. An enhancement could be a separate php-fpm pool with dynamic processes that can spawn at most one worker. This pool would only be accessible to the cron job. It could also benefit from its own individual settings, such as a much longer execution time, more or less memory, etc.
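A hedged sketch of such a pool (the pool name, socket path, and limits are assumptions; pm = ondemand keeps a worker alive only while a job runs, capped at one):

; e.g. /etc/php/8.2/fpm/pool.d/cron.conf (path varies by distro/version)
[cron]
user = www-data
group = www-data
; a dedicated socket that only the cron side can reach
listen = /run/php/cron.sock
pm = ondemand
pm.max_children = 1
; more generous limits than the web-facing pool
php_admin_value[max_execution_time] = 3600
php_admin_value[memory_limit] = 512M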

P.S.: You can use a FastCGI command-line client to call the script directly in the FPM container rather than going through nginx.
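One widely available option is the cgi-fcgi binary from the FastCGI developer tools; a sketch, assuming the dedicated pool socket from above:

# speak FastCGI to the pool directly, bypassing the web server entirely
SCRIPT_FILENAME=/var/www/html/cron.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /run/php/cron.sock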

Reasoning: you probably want access to the same libraries, the same configuration, etc. And running a randomly spawned process that isn't controlled by a signal/process manager inside Docker is a really bad idea.