What's the philosophy behind delaying writing data to disk?

It simply gives an illusion of speed to programs that don't actually have to wait until a write is complete. Mount your filesystems in sync mode (so that every write really does reach the disk before the call returns) and see how slow everything is.
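If you want to see the same effect per file rather than per mount, here is a minimal sketch (assuming a POSIX system; the path testfile is just a placeholder) that opens a file with O_SYNC, so every write blocks until the data is on the device, roughly what mounting the whole filesystem with the sync option does:

```c
/* sync_write.c - minimal sketch: O_SYNC makes each write wait for the disk,
 * roughly the per-file equivalent of mounting the filesystem with -o sync. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* "testfile" is just an example path */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    char buf[4096] = {0};
    for (int i = 0; i < 1000; i++) {
        /* each 4 KiB write blocks until it is physically on the device,
         * so this loop is far slower than it would be without O_SYNC */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write");
            close(fd);
            return EXIT_FAILURE;
        }
    }
    close(fd);
    return EXIT_SUCCESS;
}
```

Timing the loop with and without O_SYNC usually makes the difference hard to miss.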

Sometimes files exist only temporarily... a program does some bit of work and deletes the file right after the work is done. If you delayed those writes, you might get away with never having written them in the first place.
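As a small sketch of that case (assuming POSIX mkstemp and unlink; none of this is from the original answer), a scratch file that is created, used, and deleted while its data is still sitting in the cache may never cause a single disk write:

```c
/* scratch.c - sketch of a short-lived temporary file: if it is deleted
 * before the kernel flushes its dirty pages, the data may never hit the disk. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/scratchXXXXXX";   /* template for mkstemp */
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return EXIT_FAILURE; }

    unlink(path);                 /* remove the name right away; the file
                                     lives on only while fd is open */

    const char data[] = "intermediate results\n";
    write(fd, data, sizeof data - 1);     /* lands in the page cache only */

    /* ... read it back, do some work with it ... */

    close(fd);                    /* last reference gone: the kernel can
                                     simply drop the dirty pages instead
                                     of ever writing them out */
    return EXIT_SUCCESS;
}
```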

Is there no danger that the write will fail due to an IO error?

Oh, absolutely. In such a case, usually the entire filesystem goes into read-only mode, and everything is horrible. But that rarely happens, so there's no point in losing out on the performance advantages in general.


What's the philosophy behind such an approach?

Efficiency (better usage of the disk characteristics) and performance (allows the application to continue immediately after a write).

Why isn't the data written at once?

The main advantage is that the OS is free to reorder and merge contiguous write operations to improve their bandwidth usage (fewer operations and fewer seeks). Hard disks perform better when a small number of large operations is requested, while applications tend to issue a large number of small ones. Another clear optimization is that the OS can drop all but the last write when the same block is written multiple times in a short period, or even drop some writes altogether if the affected file has been removed in the meantime.
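Purely as a toy illustration (not how any real kernel implements its page cache), coalescing can be pictured as a table of dirty blocks: rewriting the same block just replaces the pending data, and the flush walks the blocks in order so contiguous ones go out as one large sequential operation:

```c
/* writeback_toy.c - toy model of write-back coalescing: repeated writes to the
 * same block keep only the last data, and the flush is issued in block order
 * so adjacent dirty blocks can be merged into one large I/O. */
#include <stdio.h>
#include <stdlib.h>

#define BLOCKS 16            /* toy device: 16 blocks */

static int dirty[BLOCKS];    /* dirty flag per block */
static int data[BLOCKS];     /* last value "written" to each block */

static void cached_write(int block, int value)
{
    data[block] = value;     /* overwrite: earlier pending writes vanish */
    dirty[block] = 1;
}

static void flush(void)
{
    /* walk in block order: contiguous dirty blocks become one "I/O" */
    for (int b = 0; b < BLOCKS; ) {
        if (!dirty[b]) { b++; continue; }
        int start = b;
        while (b < BLOCKS && dirty[b]) { dirty[b] = 0; b++; }
        printf("one sequential I/O: blocks %d-%d\n", start, b - 1);
    }
}

int main(void)
{
    cached_write(5, 1);
    cached_write(5, 2);      /* only this version of block 5 gets flushed */
    cached_write(7, 3);
    cached_write(6, 4);      /* out of order, but flushed in order with 5-7 */
    flush();                 /* prints: one sequential I/O: blocks 5-7 */
    return EXIT_SUCCESS;
}
```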

These asynchronous writes are done after the write system call has returned. This is the second and most user-visible advantage. Asynchronous writes speed up applications, as they are free to continue their work without waiting for the data to actually be on disk. The same kind of buffering/caching is also implemented for read operations, where recently or frequently read blocks are kept in memory instead of being read again from the disk.
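A rough way to observe this (a sketch assuming POSIX write/fsync and clock_gettime; the file name and sizes are arbitrary) is to time how quickly write() returns when the data only has to reach the page cache, and how much longer fsync() then takes to push it to the device:

```c
/* async_vs_sync.c - rough sketch: time how quickly write() returns when the
 * data only has to reach the page cache, versus how long fsync() then takes. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int fd = open("bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    char buf[4096] = {0};

    double t0 = now();
    write(fd, buf, sizeof buf);   /* returns as soon as the kernel has
                                     copied the data into its cache */
    double t1 = now();

    fsync(fd);                    /* now actually wait for the disk */
    double t2 = now();

    printf("write() returned after %.6f s, fsync() took %.6f s more\n",
           t1 - t0, t2 - t1);
    close(fd);
    return EXIT_SUCCESS;
}
```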

Is there no danger that the write will fail due to an IO error?

Not necessarily. That depends on the file system used and the redundancy in place. An I/O error might be harmless if the data can be saved elsewhere. Modern file systems like ZFS do self-heal bad disk blocks. Note also that I/O errors do not crash modern OSes. If they happen during data access, they are simply reported to the affected application. If they happen during structural metadata access and put the file system at risk, the file system might be remounted read-only or made inaccessible.
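For example (a sketch, not prescribed by any particular file system), a careful application sees such errors as ordinary return codes, typically with errno set to EIO, rather than as a crash:

```c
/* io_errors.c - sketch: I/O errors come back to the application as return
 * codes, typically with errno set to EIO, instead of bringing the OS down. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    const char buf[] = "important bytes";

    if (write(fd, buf, sizeof buf - 1) < 0) {
        /* the write into the cache itself can already fail (e.g. ENOSPC) */
        fprintf(stderr, "write: %s\n", strerror(errno));
    }

    if (fsync(fd) < 0) {
        /* a deferred write that hit a bad block is reported here, not as a crash */
        fprintf(stderr, "fsync: %s\n", strerror(errno));   /* often EIO */
    }

    if (close(fd) < 0)            /* some errors only surface at close */
        fprintf(stderr, "close: %s\n", strerror(errno));

    return EXIT_SUCCESS;
}
```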

There is also a slight data loss risk in case of an OS crash, a power outage, or a hardware failure. This is the reason why applications that must be 100% sure the data is on disk (e.g. databases/financial apps) use less efficient but more secure synchronous writes. To mitigate the performance impact, many applications still use asynchronous writes but sync them when the user explicitly saves a file (e.g. vim, word processors).
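One common shape for that "sync on explicit save" is to write the new contents to a temporary file, fsync it, and only then rename it over the original. The sketch below shows the idea (save_file is a hypothetical helper, not code taken from any of those programs):

```c
/* save_file.c - sketch of the "write temp, fsync, rename" pattern many editors
 * use so an explicit save is durable without making every write synchronous. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* hypothetical helper: atomically replace 'path' with 'contents' */
static int save_file(const char *path, const char *contents)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    size_t len = strlen(contents);
    if (write(fd, contents, len) != (ssize_t)len) { close(fd); return -1; }

    if (fsync(fd) < 0) { close(fd); return -1; }   /* force the data to disk */
    if (close(fd) < 0) return -1;

    return rename(tmp, path);     /* atomically swap in the new version */
}

int main(void)
{
    if (save_file("document.txt", "the user's document\n") < 0) {
        perror("save_file");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```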

On the other hand, the vast majority of users and applications neither need nor care about the safety that synchronous writes provide. If there is a crash or power outage, the risk is often losing at worst the last 30 seconds of data. Unless a financial transaction or something similar is involved, implying a cost much larger than 30 seconds of their time, the huge performance gain that asynchronous writes allow (which is not an illusion but very real) largely outweighs the risk.

Finally, synchronous writes are not enough to protect the data written anyway. Should your application really need to be sure its data cannot be lost whatever happens, data must be replicated across multiple disks and multiple geographical locations to survive disasters like fire, flooding, etc.


Asynchronous, buffered I/O was in use before Linux and even before Unix. Unix had it, and so have all its offshoots.

Here is what Ritchie and Thompson wrote in their CACM paper The UNIX Time-Sharing System:

To the user, both reading and writing of files appear to be synchronous and unbuffered. That is, immediately after return from a read call the data are available, and conversely after a write the user’s workspace may be reused. In fact the system maintains a rather complicated buffering mechanism which reduces greatly the number of I/O operations required to access a file.


In your question, you also wrote:

Is there no danger that the write will fail due to an IO error?

Yes, the write can fail and the program might never know about it. That is never a good thing, but its effects can be minimized in cases where an I/O error generates a system panic (on some OSes this is configurable - instead of panicking, the system can continue to run, but the affected filesystem is unmounted or mounted read-only). Users can then be notified that the data on that filesystem is suspect. And a disk drive can be proactively monitored to see whether its grown defect list is rapidly increasing, which is an indication that the drive is failing.

BSD added the fsync system call so a program could be certain that its file data had been completely written to disk before proceeding, and subsequent Unix systems have provided options to do synchronous writes. GNU dd has an option conv=fsync to make sure that all the data has been written out before the command exits. It comes in handy when writing to slow removable flash drives, where buffered data can take several minutes to write out.
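A rough C analogue of the conv=fsync idea (a sketch assuming plain POSIX I/O, ignoring partial writes for brevity) is to copy the input and call fsync before exiting, so "done" really means "on the device":

```c
/* fsync_copy.c - sketch of the dd conv=fsync idea: copy stdin to a file and
 * fsync it before exiting, so the command does not return while dirty data
 * is still queued for a slow device. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s output-file < input\n", argv[0]);
        return EXIT_FAILURE;
    }

    int out = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open"); return EXIT_FAILURE; }

    char buf[64 * 1024];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        /* ignoring short writes here to keep the sketch small */
        if (write(out, buf, (size_t)n) != n) { perror("write"); return EXIT_FAILURE; }
    }
    if (n < 0) { perror("read"); return EXIT_FAILURE; }

    if (fsync(out) < 0) { perror("fsync"); return EXIT_FAILURE; }  /* wait for the device */
    if (close(out) < 0) { perror("close"); return EXIT_FAILURE; }
    return EXIT_SUCCESS;
}
```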

Another source of file corruption is a sudden system shutdown, for example from loss of power. Virtually all current systems support a clean/dirty flag in their filesystems. The flag is set to clean when there is no more data to be written out and the filesystem is about to be unmounted, typically during system shutdown or by manually calling umount. Systems will usually run fsck upon reboot if they detect that filesystems were not shut down cleanly.