Java IOException "Too many open files"

On Linux and other UNIX / UNIX-like platforms, the OS places a limit on the number of open file descriptors that a process may have at any given time. In the old days, this limit used to be hardwired [1], and relatively small. These days it is much larger (hundreds or thousands), and subject to a "soft" per-process configurable resource limit. (Look up the ulimit shell builtin.)

Your Java application must be exceeding the per-process file descriptor limit.

You say that you have 19 files open, and that after a few hundred iterations you get an IOException saying "too many files open". Now this particular exception can ONLY happen when a new file descriptor is requested; i.e. when you are opening a file (or a pipe or a socket). You can verify this by printing the stack trace for the IOException.
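
For example (a minimal sketch; the path is hypothetical and the java.io imports are assumed):

try {
    Writer w = new FileWriter("/tmp/example.txt");  // the descriptor is allocated here
    w.close();
} catch (IOException ex) {
    ex.printStackTrace();  // if the limit was hit, the top frames point at the open
}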

Unless your application is being run with a small resource limit (which seems unlikely), it follows that it must be repeatedly opening files / sockets / pipes, and failing to close them. Find out why that is happening and you should be able to figure out what to do about it.
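
One way to watch for a leak from inside the application is to count the process's own descriptors as it runs. A minimal sketch, assuming Linux (it relies on the /proc filesystem; the class name is mine):

import java.io.File;

public class FdCount {
    public static void main(String[] args) {
        // Each entry in /proc/self/fd is one descriptor currently open in this process.
        String[] fds = new File("/proc/self/fd").list();
        System.out.println("Open file descriptors: "
                + (fds == null ? "unknown" : Integer.toString(fds.length)));
    }
}

If the count climbs steadily while your 19 files stay the same, something else is being opened and never closed.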

FYI, the following pattern is a safe way to write to files that is guaranteed not to leak file descriptors.

Writer w = new FileWriter(...);
try {
    // write stuff to the file
} finally {
    try {
        w.close();
    } catch (IOException ex) {
        // Log error writing file and bail out.
    }
}
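
The inner try / catch matters because close() can itself throw an IOException (for example, when buffered data cannot be flushed to disk); handling it inside the finally block keeps it from replacing an exception thrown by the writing code.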

[1] Hardwired, as in compiled into the kernel. Changing the number of available fd slots required a recompilation ... and could result in less memory being available for other things. In the days when Unix commonly ran on 16-bit machines, these things really mattered.

UPDATE

The Java 7 way is more concise:

try (Writer w = new FileWriter(...)) {
    // write stuff to the file
} // the `w` resource is automatically closed 
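
Try-with-resources also works with several resources at once; they are closed in the reverse of the order in which they were opened. A small sketch (the file names are hypothetical, and the java.io imports are assumed):

try (Reader in = new FileReader("in.txt");
     Writer out = new FileWriter("out.txt")) {
    int c;
    while ((c = in.read()) != -1) {
        out.write(c);
    }
} // `out` is closed first, then `in`, even if the copy fails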

UPDATE 2

Apparently you can also encounter a "too many open files" error while attempting to run an external program. The basic cause is as described above. However, the reason that you encounter this in exec(...) is that the JVM is attempting to create the "pipe" file descriptors that will be connected to the external application's standard input, output and error.
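
Here is a minimal sketch of running an external command without leaking those pipe descriptors ("ls" is just an example command): merging stderr into stdout leaves a single pipe to drain, the try-with-resources closes our end of it, and waitFor() reaps the child.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExecDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("ls", "-l");  // example command
        pb.redirectErrorStream(true);  // merge stderr into stdout: one pipe fewer to manage
        Process p = pb.start();
        // Drain and close the child's output; this releases our end of the pipe.
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        p.waitFor();  // wait for the child, so its remaining descriptors are released
    }
}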


You're obviously not closing your file descriptors before opening new ones. Are you on Windows or Linux?


For UNIX:

As Stephen C has suggested, raising the maximum file descriptor limit avoids this problem.

Try looking at your present file descriptor capacity:

   $ ulimit -n

Then change the limit according to your requirements.

   $ ulimit -n <value>

Note that this just changes the limit in the current shell and any child / descendant processes. To make the change "stick", you need to put it into the relevant shell script or initialization file.
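
You can also confirm the limit that the JVM itself ended up with: on Linux, read /proc/self/limits from inside the process (a Linux-specific sketch; the class name is mine, and the relevant line is labelled "Max open files"):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ShowFdLimit {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(
                Paths.get("/proc/self/limits"), StandardCharsets.UTF_8)) {
            if (line.startsWith("Max open files")) {
                System.out.println(line);  // soft and hard fd limits for this JVM
            }
        }
    }
}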

Tags: java, file-io