What's the most resource efficient way to count how many files are in a directory?

Short answer:

\ls -afq | wc -l

(This includes . and .., so subtract 2.)
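If you want the subtraction done for you, a small arithmetic wrapper works in any POSIX shell:

echo $(( $(\ls -afq | wc -l) - 2 ))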


When you list the files in a directory, up to three things happen:

  1. Enumerating the file names in the directory. This is inescapable: there is no way to count the files in a directory without enumerating them.
  2. Sorting the file names. Shell wildcards and the ls command do that.
  3. Calling stat to retrieve metadata about each directory entry, such as whether it is a directory.

#3 is the most expensive by far, because it requires loading an inode for each file. In comparison, all the file names needed for #1 are stored compactly in a few blocks. #2 wastes some CPU time, but it is often not a deal breaker.
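On Linux you can watch the difference with strace, if it's installed (the stat family may show up as lstat, newfstatat, or statx depending on the ls version):

strace -c ls -f  > /dev/null    # enumeration only: almost no stat-family calls
strace -c ls -lF > /dev/null    # needs metadata: one stat-family call per entry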

If there are no newlines in file names, a simple ls -A | wc -l tells you how many files there are in the directory. Beware that if you have an alias for ls, this may trigger a call to stat (e.g. ls --color or ls -F needs to know the file type, which requires a stat call), so from the command line, use command ls -A | wc -l or \ls -A | wc -l to bypass the alias.
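To check whether your ls is an alias before trusting the count:

type ls                 # in bash, prints e.g. "ls is aliased to `ls --color=auto'"
command ls -A | wc -l   # runs the real ls regardless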

If there are newlines in file names, whether a newline is printed literally or replaced depends on the Unix variant. GNU coreutils and BusyBox default to displaying ? for a newline, so they're safe.

Call ls -f to list the entries without sorting them (#2). This automatically turns on -a (at least on modern systems). The -f option is in POSIX, but with optional status; most implementations support it, but not BusyBox. The option -q replaces non-printable characters, including newlines, with ?; it's POSIX but isn't supported by BusyBox, so omit it if you need BusyBox support, at the expense of overcounting files whose names contain a newline.
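Here's a quick way to see what -q buys you (a throwaway test directory, removed at the end):

dir=$(mktemp -d)
touch "$dir/a
b"                      # one file whose name contains a newline
\ls "$dir" | wc -l      # may print 2: the newline splits the name across lines
\ls -q "$dir" | wc -l   # prints 1: the newline is displayed as ?
rm -r "$dir"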

If the directory has no subdirectories, then most versions of find will not call stat on its entries (leaf directory optimization: a directory that has a link count of 2 cannot have subdirectories, so find doesn't need to look up the metadata of the entries unless a condition such as -type requires it). So find . | wc -l is a portable, fast way to count files in a directory, provided that the directory has no subdirectories and that no file name contains a newline; subtract 1, since find also prints the starting directory . itself.

If the directory has no subdirectories but file names may contain newlines, try one of these; both also count the starting directory itself, so subtract 1 (the second one should be faster where it's supported, but may not be noticeably so).

find . -print0 | tr -dc '\0' | wc -c    # count the NUL terminators
find . -printf a | wc -c                # GNU find: print one byte per file

On the other hand, don't use find if the directory has subdirectories: even find . -maxdepth 1 calls stat on every entry (at least with GNU find and BusyBox find). You avoid sorting (#2), but you pay the price of an inode lookup (#3), which kills performance.

In the shell, without external tools, you can count the files in the current directory with set -- *; echo $#. This misses dot files (files whose name begins with .) and reports 1 instead of 0 in an empty directory. It's the fastest way to count files in small directories because it doesn't require starting an external program, but (except in zsh) it wastes time for larger directories due to the sorting step (#2).

  • In bash, this is a reliable way to count the files in the current directory:

    shopt -s dotglob nullglob   # dotglob: match dot files too; nullglob: no match yields an empty array
    a=(*)                       # gather every name in the current directory
    echo ${#a[@]}               # the array length is the file count
    
  • In ksh93, this is a reliable way to count the files in the current directory:

    FIGNORE='@(.|..)'   # ignore only . and .., so other dot files are matched
    a=(~(N)*)           # ~(N) is the null-glob modifier: no match yields an empty array
    echo ${#a[@]}
    
  • In zsh, this is a reliable way to count the files in the current directory:

    a=(*(DNoN))   # glob qualifiers: D = include dot files, N = null glob, oN = don't sort
    echo $#a
    

    If you have the mark_dirs option set, make sure to turn it off: a=(*(DNoN^M)).

  • In any POSIX shell, this is a reliable way to count the files in the current directory (a reusable function form appears after this list):

    total=0
    # Each glob below either expands to the matching names or, when nothing
    # matches, remains as the literal pattern; the test detects that case.
    set -- *                # names not starting with .
    if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
    set -- .[!.]*           # names starting with a single .
    if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
    set -- ..?*             # names starting with .. (other than .. itself)
    if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
    echo "$total"
    

All of these methods sort the file names, except for the zsh one.
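For repeated use, the POSIX method can be wrapped in a function. This is a minimal sketch, and the name count_files is made up for illustration; the subshell body keeps the cd and the set -- from leaking into the caller:

count_files() (
    # Count all entries (including dot files) in the given directory,
    # defaulting to the current one, without starting any external program.
    cd -- "${1:-.}" || return 1
    total=0
    for pattern in '*' '.[!.]*' '..?*'; do
        set -- $pattern     # unquoted on purpose: let the glob expand
        if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then
            total=$((total + $#))
        fi
    done
    echo "$total"
)

count_files /tmp    # example usage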


find /foo/foo2/ -maxdepth 1 | wc -l

This is considerably faster on my machine, but the starting directory itself is added to the count.
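If your find supports -mindepth (GNU find does), you can exclude the starting directory from the output instead of subtracting 1:

find /foo/foo2/ -mindepth 1 -maxdepth 1 | wc -l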


ls -1U before the pipe should use slightly fewer resources, since it makes no attempt to sort the file entries: it just reads them in the order they are stored in the directory on disk. It also produces less output, meaning slightly less work for wc.

You could also use ls -f, which is more or less a shortcut for ls -1aU.

I don't know if there is a resource-efficient way to do it via a command without piping though.