Check mem_limit within a docker container

Previously /sys/fs/cgroup/memory/memory.limit_in_bytes worked for me, but on my Ubuntu system with kernel 5.8.0-53-generic (which uses cgroup v2), the correct file to read the memory limit from inside the container is now /sys/fs/cgroup/memory.max.
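A small sketch that handles both layouts (the paths are the standard kernel interfaces; under cgroup v2 an unlimited container reports the literal string "max"; the optional argument is only there so the function can be exercised outside a container):

```shell
# Read the container's memory limit, preferring the cgroup v2 unified
# hierarchy and falling back to v1. The optional argument overrides the
# cgroup mount point (defaults to /sys/fs/cgroup).
memory_limit() {
  root="${1:-/sys/fs/cgroup}"
  if [ -f "$root/memory.max" ]; then                       # cgroup v2
    cat "$root/memory.max"                                 # "max" = no limit
  elif [ -f "$root/memory/memory.limit_in_bytes" ]; then   # cgroup v1
    cat "$root/memory/memory.limit_in_bytes"
  else
    echo "no cgroup memory limit file found" >&2
    return 1
  fi
}
```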


On the host you can run docker stats to get a top-like monitor of your running containers. The output looks like:

$ docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
729e4e0db0a9        dev                 0.30%               2.876GiB / 3.855GiB   74.63%              25.3MB / 4.23MB     287kB / 16.4kB      77

This is how I discovered that docker run --memory 4096m richardbronosky/node_build_box npm run install was not getting 4G of memory, because Docker itself was configured with a 2G limit. (In the example above this has been corrected.) Without that insight I was totally lost as to why my process was dying with nothing but "Killed".
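For scripting, docker stats also accepts --no-stream (one snapshot instead of a live view) and --format. A sketch that pulls the MEM USAGE / LIMIT column for one container; the helper just filters stdin, and the container name "dev" comes from the example output above:

```shell
# Filter the MEM USAGE / LIMIT column for a named container out of
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
# Each input line looks like: dev 2.876GiB / 3.855GiB
mem_usage_of() {
  awk -v name="$1" '$1 == name { print $2, $3, $4 }'
}

# On the host:
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' | mem_usage_of dev
```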


This worked for me inside the container; thanks for the ideas, Sebastian:

#!/bin/bash
# "function" and the test below are bash-isms, so the shebang must be bash,
# not sh.
memory_limit()
{
  awk -F: '/^[0-9]+:memory:/ {
    filepath="/sys/fs/cgroup/memory"$3"/memory.limit_in_bytes";
    getline line < filepath;
    print line
  }' /proc/self/cgroup
}

# Note: use the numeric -lt, not "<" — inside [[ ]] "<" compares strings
# lexicographically, so e.g. 9000000000 would wrongly count as less than
# 419430400.
if [ "$(memory_limit)" -lt 419430400 ]; then
  echo "Memory limit was set too small. Minimum 400m."
  exit 1
fi

The memory limit is enforced via cgroups, so you can use cgget to read the memory limit of the given cgroup.

To test this you can run a container with a memory limit:

docker run --memory 512m --rm -it ubuntu bash

Run this within your container:

apt-get update
apt-get install cgroup-bin   # "cgroup-tools" on newer Ubuntu releases
cgget -n --values-only --variable memory.limit_in_bytes /
# will report 536870912
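The reported number is just 512 * 1024 * 1024 bytes. A throwaway helper to make the raw byte counts readable (the function name is mine, not part of cgroup-bin):

```shell
# Convert a raw byte limit, as reported by the cgroup files or cgget,
# to whole MiB using integer arithmetic.
bytes_to_mib() {
  echo "$(( $1 / 1024 / 1024 ))"
}

# bytes_to_mib 536870912   -> 512, matching the --memory 512m flag
```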

Docker 1.13 mounts the container's cgroup to /sys/fs/cgroup (this could change in future versions). You can check the limit using:

cat /sys/fs/cgroup/memory/memory.limit_in_bytes