Why is number of open files limited in Linux?

The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems.

As root you can change the maximum number of open files per process (via ulimit -n) and system-wide (e.g. echo 800000 > /proc/sys/fs/file-max).
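If you want to check or raise the per-process limit from inside a program rather than the shell, the same limit is exposed through getrlimit/setrlimit as RLIMIT_NOFILE. A minimal sketch (raising the hard limit itself still requires root or CAP_SYS_RESOURCE; error handling kept short):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Query the per-process open-file limit (what `ulimit -n` reports). */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit up to the hard limit; only a privileged
       process may raise the hard limit itself. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    return 0;
}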


Please note that lsof | wc -l counts a lot of duplicate entries (forked processes can share file handles, etc.). That number can be much higher than the limit set in /proc/sys/fs/file-max.

To get the current number of open files from the Linux kernel's point of view, do this:

cat /proc/sys/fs/file-nr

Example: this server has 40096 out of a maximum of 65536 open files, although lsof reports a much larger number:

# cat /proc/sys/fs/file-max
65536
# cat /proc/sys/fs/file-nr 
40096   0       65536
# lsof | wc -l
521504
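The three numbers in file-nr are the count of allocated file handles, the count of allocated-but-currently-unused handles, and the system-wide maximum (the same value as file-max). If you want to read them from a program rather than with cat, here is a small sketch in C (assumes a Linux /proc; minimal error handling):

#include <stdio.h>

int main(void)
{
    unsigned long allocated, unused, max;

    /* /proc/sys/fs/file-nr holds three numbers: allocated handles,
       allocated-but-unused handles, and the system-wide maximum. */
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3) {
        fprintf(stderr, "unexpected format in /proc/sys/fs/file-nr\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("%lu of %lu file handles in use\n", allocated - unused, max);
    return 0;
}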

I think it's largely for historical reasons.

A Unix file descriptor is a small int value, returned by functions like open and creat, and passed to read, write, close, and so forth.

At least in early versions of Unix, a file descriptor was simply an index into a fixed-size, per-process array of structures, where each structure contained information about an open file. If I recall correctly, some early systems limited the size of this table to 20 or so.
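You can see the "small int" nature of descriptors directly: in a fresh process 0, 1 and 2 are already taken by stdin, stdout and stderr, so successive open calls typically hand back 3, 4, and so on. A tiny sketch (the path /etc/hostname is just an arbitrary file assumed to exist):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* 0, 1 and 2 are stdin, stdout and stderr, so the first open()
       in a fresh process normally returns 3, the next 4, and so on. */
    int fd1 = open("/etc/hostname", O_RDONLY);
    int fd2 = open("/etc/hostname", O_RDONLY);

    printf("fd1 = %d, fd2 = %d\n", fd1, fd2);   /* typically: fd1 = 3, fd2 = 4 */

    if (fd1 >= 0) close(fd1);
    if (fd2 >= 0) close(fd2);
    return 0;
}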

More modern systems have higher limits, but have kept the same general scheme, largely out of inertia.