How is a directory a "special type of file"?

Many entities in *nix-style (and other) operating systems are considered files, or have a defining file-like aspect, even though they are not necessarily a sequence of bytes stored in a filesystem. Exactly how directories are implemented depends on the kind of filesystem, but in general what a directory contains, considered as a list of entries, is a sequence of stored bytes, so in that sense directories are not that special.

One way of defining what a "file" is in a *nix context is that it is something which can have a file descriptor associated with it. As per the Wikipedia article, a file descriptor

is an abstract indicator used to access a file or other input/output resource, such as a pipe or network connection...

In other words, descriptors refer to various kinds of resources from or to which a sequence of bytes may be read or written, although the source or destination of that sequence is unspecified. Put another way, the "where" of the resource could be anything; what defines it is that it is a conduit of information. This is part of why it is sometimes said that in Unix "everything is a file". You should not take that completely literally, but it is worth serious consideration. In the case of a directory, this information pertains to what is in the directory and, at a lower, implementation level, how to find it within the filesystem.
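
To make that concrete, here is a minimal sketch (not from the original answer) showing that the very same write() and read() calls work on descriptors that refer to quite different resources, a regular file and a pipe; the file name demo.txt is just a placeholder:

    /* Sketch: the same write()/read() calls work on descriptors that
     * refer to very different resources: a regular file on disk and a
     * pipe with no filesystem storage at all. Error handling omitted. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello\n";
        char buf[sizeof msg];

        /* Descriptor referring to a regular file in the filesystem. */
        int file_fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        write(file_fd, msg, sizeof msg - 1);
        close(file_fd);

        /* Descriptors referring to the two ends of a pipe. */
        int pipe_fds[2];
        pipe(pipe_fds);
        write(pipe_fds[1], msg, sizeof msg - 1);
        ssize_t n = read(pipe_fds[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("read back from the pipe: %s", buf);

        close(pipe_fds[0]);
        close(pipe_fds[1]);
        return 0;
    }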

Directories are somewhat special in this sense because in native C code they are not ostensibly associated with a file descriptor; the POSIX API uses a special type of stream handle, DIR*. However, this handle does in fact have an underlying descriptor, which can be retrieved with dirfd(). Descriptors are managed by the kernel, and accessing them always involves system calls; hence, another aspect of a descriptor is that it is a conduit controlled by the OS kernel. Descriptors have unique (per-process) numbers starting at 0, which is usually the descriptor for the standard input stream.
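
As a rough sketch of that (assuming a POSIX system; the directory "." is arbitrary), dirfd() exposes the descriptor hiding behind the DIR* stream, and readdir() yields the entries:

    /* Sketch: a DIR* stream handle has an underlying file descriptor,
     * retrievable with dirfd(), and readdir() yields one entry per call. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *dir = opendir(".");            /* special stream handle, not an fd */
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        printf("underlying descriptor: %d\n", dirfd(dir));

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);  /* one name per directory entry */

        closedir(dir);
        return 0;
    }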


In the Unix Way of Doing Things: everything is a file.

A directory is one of many types of special file. It doesn't contain ordinary file data. Instead, it contains entries pointing to all of the files that are contained within the directory (typically names mapped to inode numbers).

Other types of special files:

  • links
  • sockets
  • devices

But because they are considered "files", you can ls them and rename them and move them and, depending on the type of special file, send data to/from them.
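
As a small, hedged illustration of how a program tells these apart (the paths on the command line are whatever you care to pass), lstat() reports each file's type in st_mode, and the S_IS* macros decode it:

    /* Sketch: lstat() reports a file's type in st_mode; the S_IS* macros
     * distinguish directories, symbolic links, sockets, and devices. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char *argv[])
    {
        for (int i = 1; i < argc; i++) {
            struct stat st;
            if (lstat(argv[i], &st) != 0) {
                perror(argv[i]);
                continue;
            }
            if (S_ISDIR(st.st_mode))
                printf("%s: directory\n", argv[i]);
            else if (S_ISLNK(st.st_mode))
                printf("%s: symbolic link\n", argv[i]);
            else if (S_ISSOCK(st.st_mode))
                printf("%s: socket\n", argv[i]);
            else if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode))
                printf("%s: device\n", argv[i]);
            else
                printf("%s: regular file or other\n", argv[i]);
        }
        return 0;
    }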


My answer is mere reminiscence, but in 199x-vintage Unixes, of which there were many, directories were files, just marked "directory" somewhere in the on-disk inode.

You could open a directory with something like open(".", O_RDONLY) and get back a usable file descriptor. You could parse the contents if you scrounged through /usr/include and found the correct C struct definition. I know that I did this for SunOS 4.1.x systems, SGI's EFS filesystem, and whatever DEC's MIPS-CPU workstations had for a filesystem, probably BSD 4.2 FFS.

That was a bad experience. Standardizing on a virtual filesystem layer is a good thing for portability, even if directories are no longer strictly files. VFS layers let us experiment with filesystems where directories aren't files, like ReiserFS or NFS.
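
For contrast, here is a sketch of the portable modern equivalent of that old trick, assuming a POSIX.1-2008 system: you can still open() a directory and get a real descriptor, but you hand it to fdopendir() instead of read()ing raw bytes and guessing at the on-disk layout:

    /* Sketch: open() still gives a file descriptor for a directory, but
     * read()ing it typically fails with EISDIR on modern systems; the
     * portable route is to hand the descriptor to fdopendir(). */
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open(".", O_RDONLY | O_DIRECTORY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        DIR *dir = fdopendir(fd);           /* takes ownership of fd */
        if (dir == NULL) {
            perror("fdopendir");
            close(fd);
            return 1;
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);

        closedir(dir);                      /* also closes fd */
        return 0;
    }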