Docker container: save logs to a host directory

  • Yes, you can mount a volume from the host into the container using a bind mount, as outlined in the answer above.

  • In production, I would highly recommend sending the logs of all containers to a central location, such as an ELK stack, so that even if the whole Docker host goes down you still have access to the logs, and you can easily analyse and filter them, set watchers on log errors, and build dashboards.

https://docs.docker.com/config/containers/logging/configure/

For this to work, you need to configure the app to send its logs to stdout instead, and then configure the Docker daemon to ship logs to one of your endpoints, such as Logstash. You can then configure Logstash to do some pre-processing (if needed) and stream the logs to your Elasticsearch instance.
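As a rough sketch of the daemon side (one option among several), assuming Logstash has a GELF input listening at udp://logstash.example.com:12201 (a placeholder address), you could switch the default logging driver in /etc/docker/daemon.json:

{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logstash.example.com:12201"
  }
}

After editing daemon.json you need to restart the Docker daemon, and the new default only applies to containers created afterwards. If you don't want to change the global default, the same driver can be set per container with --log-driver and --log-opt on docker run.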

Going one step further, you may want to consider a container orchestration system such as Kubernetes, with centralised logging to ELK and metrics going to Prometheus.


All you need is a Docker volume (in this case a bind mount) in order to persist the log files. So, in the same directory as your docker-compose.yml, create a logs directory, then define a volume mount. When defining a mount, remember the syntax is <host_machine_directory>:<container_directory>.

Give the following volume a try and let me know what you get back.

version: '3'
services:
  myapp:
    build: .
    image: myapp
    ports:
      - "9001:9001"
    volumes:
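      # bind mount: ./logs on the host maps to /home/logs inside the container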
      - ./logs:/home/logs

Also worth noting that persistence goes both ways with this approach: any changes made to the files from within the container are reflected back onto the host, and any changes made on the host are reflected inside the container.
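For example, a quick way to see the two-way behaviour (a sketch, assuming the myapp service above is running and its image ships with a shell):

# start the service in the background, using the compose file above
docker-compose up -d
# create a file from inside the container...
docker-compose exec myapp sh -c 'echo hello > /home/logs/test.log'
# ...and it shows up on the host
cat ./logs/test.log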