Why can't I crash my system with a fork bomb?

You probably have a Linux distro that uses systemd.

Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup.

Cgroups are a Linux kernel mechanism for setting limits on system resources such as the maximum number of processes, CPU cycles, RAM usage, and so on. This is a different, more modern layer of resource limiting than ulimit (which uses the getrlimit()/setrlimit() syscalls).

If you run systemctl status user-<uid>.slice (the unit that represents the user's cgroup), you can see the current and maximum number of tasks (processes and threads) that are allowed within that cgroup.

$ systemctl status user-$UID.slice
● user-22001.slice - User Slice of UID 22001
   Loaded: loaded
  Drop-In: /usr/lib/systemd/system/user-.slice.d
           └─10-defaults.conf
   Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago
    Tasks: 17 (limit: 10267)
   Memory: 616.7M

By default, the maximum number of tasks that systemd will allow for each user is 33% of the "system-wide maximum" (sysctl kernel.threads-max); this usually amounts to ~10,000 tasks. If you want to change this limit:

  • In systemd v239 and later, the user default is set via TasksMax= in:

    /usr/lib/systemd/system/user-.slice.d/10-defaults.conf

    To adjust the limit for a specific user (which will be applied immediately as well as stored in /etc/systemd/system.control), run:

    systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value>

    The usual mechanisms of overriding a unit's settings (such as systemctl edit) can be used here as well, but they will require a reboot. For example, if you want to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf.

  • In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf. Changing the value generally requires a reboot.

More info about this:

  • man 5 systemd.resource-control
  • man 5 systemd.slice
  • man 5 logind.conf
  • http://0pointer.de/blog/projects/systemd.html (search this page for cgroups)
  • man 7 cgroups and https://www.kernel.org/doc/Documentation/cgroup-v1/pids.txt
  • https://en.wikipedia.org/wiki/Cgroups

This won't crash modern Linux systems anymore anyway.

It creates hordes of processes, but it doesn't really burn much CPU, because the processes go idle. These days you run out of slots in the process table before you run out of RAM.

If you're not cgroup-limited, as Hkoof points out, the following alteration still brings systems down:

:(){ : | :& : | :& }; :

Back in the '90s I accidentally unleashed one of these on myself. I had inadvertently set the execute bit on a C source file that contained a fork() call. When I double-clicked it, csh tried to run it rather than opening it in an editor like I wanted.

Even then, it didn't crash the system. Unix is robust enough that your account and/or the OS will have a process limit. What happens instead is that the system gets super sluggish, and anything that needs to start a process is likely to fail.

What's happening behind the scenes is that the process table fills up with processes that are trying to create new processes. If one of them terminates (either due to getting an error on the fork because the process table is full, or due to a desperate operator trying to restore sanity to their system), one of the other processes will merrily fork a new one to fill the void.

The "fork bomb" is basically an unintentionally self-repairing system of processes on a mission to keep your process table full. The only way to stop it is to somehow kill them all at once.