Make -j RAM limits

It is possible to limit per-process RAM usage with ulimit, but that makes the process fail outright once the limit is exceeded, and gcc loves to exceed any limit when linking in a single thread. So the ulimit solution is not popular.
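For reference, this is roughly what the ulimit approach looks like (a sketch; the 4 GiB figure is an arbitrary assumption, and any compiler or linker process that exceeds it gets killed, failing the build):

ulimit -v $((4 * 1024 * 1024))  # cap per-process virtual memory; value is in KiB
make -j8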

Another solution is to estimate gcc's RAM usage per thread and keep enough swap that it is only rarely touched. You can also add a load-average limit to stop make from spawning new threads/jobs when the load is too high.

I am using the following script as /etc/profile.d/makeopts.sh (Gentoo):

#!/bin/bash

# Assume each compiler job needs up to 1000 MB (just under 1 GB) of RAM,
# so total RAM in GB is the maximum job count that memory can sustain.
MAX_THREADS=$(($(getconf _PHYS_PAGES) * $(getconf PAGE_SIZE) / (1000 ** 3)))
EFFECTIVE_THREADS=$(getconf _NPROCESSORS_ONLN)
# make gets the smaller of the RAM-based and CPU-based limits.
THREADS=$((MAX_THREADS < EFFECTIVE_THREADS ? MAX_THREADS : EFFECTIVE_THREADS))

# How many parallel emerge jobs fit on top of that, again capped by the CPU count.
MAX_JOBS=$((MAX_THREADS / THREADS))
JOBS=$((MAX_JOBS < EFFECTIVE_THREADS ? MAX_JOBS : EFFECTIVE_THREADS))

# Stop spawning new jobs once the load reaches ~90% of the thread count.
MAX_LOAD=$((EFFECTIVE_THREADS * 9 / 10))

export MAKEOPTS="--jobs=$THREADS --load-average=$MAX_LOAD"
export EMERGE_DEFAULT_OPTS="--jobs=$JOBS --load-average=$MAX_LOAD"

Machine with 4 threads and 16 GB RAM:

MAKEOPTS="--jobs=4 --load-average=3"
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=3"

Machine with 16 threads and 32 GB RAM:

MAKEOPTS="--jobs=16 --load-average=14"
EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=14"

Please be aware that this config was created for CFLAGS="-O2 -pipe -march=native". Re-estimate the per-job RAM figure if you add or remove light/heavy options.
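One way to re-estimate it (a sketch; big_file.c stands in for whatever your heaviest translation unit is) is GNU time's verbose mode, which reports the peak resident set size of a compile:

/usr/bin/time -v gcc -O2 -pipe -march=native -c big_file.c
# look for the "Maximum resident set size (kbytes)" line in the output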


A somewhat simple solution would be for each workstation to set an environment variable suited to what that hardware can handle, and to have the makefile read this environment variable and pass it to the -j option (see "How to get gnu make to read env variables").
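A minimal sketch of that idea, assuming each workstation exports a BUILD_JOBS variable (the name is made up here). Environment variables are automatically visible as make variables, so ?= only supplies a fallback, and a wrapper target forwards the value to a parallel sub-make:

# Makefile: use the workstation's BUILD_JOBS, defaulting to 4 if unset
BUILD_JOBS ?= 4

.PHONY: build
build:
	$(MAKE) -j$(BUILD_JOBS) all

Invoked as "make build", every workstation then builds with its own job count without anyone touching the makefile.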

Also, if the build process has many steps and takes a long time, have make re-read the value (in practice, from a file, since a running make will not see changes to its environment) so that you can reduce/increase resource usage mid-build.
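A sketch of that, using a recursively expanded make variable so the job count is re-read from a hypothetical /etc/build_jobs file each time a new phase starts:

# '=' (not ':=') so the shell command runs again at every expansion
jobs = $(shell cat /etc/build_jobs 2>/dev/null || echo 4)

all: stage1 stage2 stage3
.PHONY: stage1 stage2 stage3
stage1 stage2 stage3:
	$(MAKE) -j$(jobs) -C $@

Editing /etc/build_jobs between stages then changes the parallelism of the remaining ones.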

Also, maybe have a service/application running on the workstation that monitors resource usage and adjusts that value, instead of trying to have make do the monitoring itself... (a separate process cannot modify a running make's environment, which is another reason to communicate through a file).
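Such a monitor could be as small as this (a sketch with made-up thresholds: throttle to 2 jobs when available RAM drops below roughly 2 GB, otherwise allow one job per CPU):

while sleep 30; do
    free_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    if [ "$free_kb" -lt 2000000 ]; then echo 2; else nproc; fi > /etc/build_jobs
done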


Is it possible that there is some confusion about what make -j does? (At least I had it wrong for a long time...) I assumed that -j without an argument would adapt to the number of CPUs, but it doesn't: it simply doesn't apply any limit. This means that for big projects it will create a huge number of processes (just run "top" to see...), and possibly use up all the RAM. I stumbled on this when the "make -j" of my project used all of my 16 GB of RAM and failed, while "make -j 8" topped out at 2.5 GB of RAM usage (on 8 cores; load is close to 100% in both cases).

In terms of efficiency, I think using a limit equal to or bigger than the maximum number of CPUs you expect is more efficient than no limit, as scheduling thousands of processes has some overhead. The total number of processes created should be constant either way, but since "-j" creates a lot of them at once, memory allocation might become a problem. Even setting the limit to twice the number of CPUs should still be more conservative than not setting a limit at all.
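For instance, a bounded over-subscription like this keeps jobs queued behind I/O waits while avoiding the unbounded blow-up (twice the CPU count is an arbitrary choice here):

make -j$(($(nproc) * 2))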

PS: After some more digging I came up with a neat solution - just read out the number of processors and use that as the -j option:

make -j `nproc`
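If you also want to fold in the ~1 GB-per-job RAM estimate from earlier in this thread, a combined one-liner could look like this (a sketch; the per-job figure is the same assumption as above):

ram_jobs=$(($(getconf _PHYS_PAGES) * $(getconf PAGE_SIZE) / (1 << 30)))
make -j$(( ram_jobs < $(nproc) ? ram_jobs : $(nproc) ))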

Tags:

Makefile