Why does glibc's fclose(NULL) cause a segmentation fault instead of returning an error?

fclose requires as its argument a FILE pointer obtained from fopen, one of the standard streams stdin, stdout, or stderr, or some other implementation-defined source. A null pointer is none of these, so the behavior is undefined, just as it would be for fclose((FILE *)0xdeadbeef). NULL is not special in C; aside from being guaranteed to compare unequal to any valid pointer, it's just like any other invalid pointer, and using it invokes undefined behavior unless the interface you're passing it to documents, as part of its contract, that NULL has a special meaning.
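For concreteness, a small sketch of what falls inside and outside that contract (the file name is purely illustrative):

```c
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("example.txt", "r");  /* illustrative file name */
    if (fp)
        fclose(fp);       /* valid: the pointer came from fopen */

    fclose(stdout);       /* valid: a standard stream */

    /* Both of these invoke undefined behavior; a crash is one
     * possible (and arguably the most useful) outcome:
     *     fclose(NULL);
     *     fclose((FILE *)0xdeadbeef);
     */
    return 0;
}
```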

Further, returning an error would be valid (since the behavior is undefined anyway) but harmful for an implementation, because it hides the undefined behavior. The preferable result of invoking undefined behavior is always a crash, because it highlights the error and lets you fix it. Most users of fclose do not check its return value, and I'd wager that most people foolish enough to pass NULL to fclose are not going to be smart enough to check the return value either. An argument could be made that people should check the return value of fclose in general, since the final flush could fail, but this is not necessary for files opened only for reading, or if fflush was called manually before fclose (which is the smarter idiom anyway, because it's easier to handle the error while the file is still open).
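A minimal sketch of that idiom (the function and file names are just for illustration): flush explicitly, handle any write error while the stream is still open, and only then close.

```c
#include <stdio.h>

/* Illustrative helper: write data, flush explicitly, then close. */
int save_text(const char *path, const char *data)
{
    FILE *fp = fopen(path, "w");
    if (!fp)
        return -1;

    if (fputs(data, fp) == EOF || fflush(fp) == EOF) {
        /* The stream is still open here, so the error can be
         * reported or retried with full context. */
        fclose(fp);
        return -1;
    }

    /* After a successful fflush there is little left for fclose
     * to fail on, so ignoring its return value is less risky. */
    fclose(fp);
    return 0;
}
```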


fclose(NULL) should succeed, just as free(NULL) succeeds, because that makes it easier to write cleanup code.
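A common cleanup pattern shows why: because free(NULL) is defined as a no-op, a single cleanup label works no matter how far initialization got, whereas the FILE pointer needs an explicit guard. A sketch (the names are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

int process(const char *path)
{
    int ret = -1;
    char *buf = NULL;
    FILE *fp = NULL;

    buf = malloc(4096);
    if (!buf)
        goto cleanup;

    fp = fopen(path, "r");
    if (!fp)
        goto cleanup;

    /* ... use buf and fp ... */
    ret = 0;

cleanup:
    free(buf);      /* fine even if buf is still NULL */
    if (fp)         /* fclose(NULL) is undefined, so guard it */
        fclose(fp);
    return ret;
}
```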

Regrettably, that's not how it was defined, so you can't use fclose(NULL) in portable programs. (See, e.g., http://pubs.opengroup.org/onlinepubs/9699919799/.)

As others have mentioned, you don't generally want an error return when you pass NULL somewhere it doesn't belong. You want a warning message, at least on debug/test builds. Dereferencing NULL gives you an immediate, visible failure and the opportunity to collect a backtrace that identifies the programming error :). While you're developing, a segfault is about the best error you can get. C has many subtler errors, which take much longer to debug...
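If you want that loud, early diagnostic in your own code rather than relying on the segfault, one sketch is a hypothetical wrapper that asserts on the stream in debug builds:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical wrapper: with assertions enabled (NDEBUG not defined),
 * a NULL stream aborts immediately with file and line information,
 * which is even easier to trace than a segfault inside the C library. */
static int checked_fclose(FILE *fp)
{
    assert(fp != NULL && "fclose called with NULL stream");
    return fclose(fp);
}
```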

It is possible to abuse error returns to increase robustness against programming errors. However, if you're worried a software crash would lose data, note that exactly the same can happen if, say, your hardware loses power. That's why we have autosave (dating back to Unix text editors with two-letter names like ex and vi). It'd still be preferable for your software to crash visibly, rather than continue with an inconsistent state.


The errors the man page is talking about are runtime errors, not programming errors. You can't just pass NULL into any API expecting a pointer and expect that API to do something reasonable. Passing a NULL pointer to a function documented to require a pointer to data is a bug.
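A short sketch of the distinction (the helper name is made up): the environment failing to provide the file is handled gracefully, while a caller passing NULL is treated as a bug.

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative helper: runtime errors get an error return,
 * programming errors get an assertion. */
long count_bytes(const char *path)
{
    assert(path != NULL);        /* programming error: a bug in the caller */

    FILE *fp = fopen(path, "rb");
    if (!fp)
        return -1;               /* runtime error: missing/unreadable file */

    long n = 0;
    while (fgetc(fp) != EOF)
        n++;
    fclose(fp);
    return n;
}
```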

Related question: In either C or C++, should I check pointer parameters against NULL/nullptr?

To quote R.'s comment on one of the answers to that question:

... you seem to be confusing errors arising from exceptional conditions in the operating environment (fs full, out of memory, network down, etc.) with programming errors. In the former case, of course a robust program needs to be able to handle them gracefully. In the latter, a robust program cannot experience them in the first place.