Rock-stable filesystem for large files (backups) on Linux

You can use ext4, but I would recommend mounting with data=journal mode, which turns off delalloc (delayed allocation), a feature that caused some problems in earlier kernels. Disabling delalloc makes new data writes slower, but makes data loss in the event of a power failure less likely. I should also mention that you can disable delalloc without using data=journal; data journaling itself has some other benefits (or at least it did in ext3), such as slightly improved reads and, I believe, better recovery.
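As a rough sketch (the device and mount point below are made up, adjust for your setup), an /etc/fstab entry for a backup volume could look like this:

    # /etc/fstab - hypothetical backup volume (device and mount point are examples)
    # data=journal journals file data as well as metadata; delayed allocation is
    # disabled in this mode
    /dev/sdb1   /backup   ext4   defaults,data=journal   0  2

    # alternative: keep the default ordered journaling and only turn off delalloc
    #/dev/sdb1  /backup   ext4   defaults,nodelalloc     0  2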

Extents will still help with fragmentation. Extents also make deletes of large files much faster than on ext3: deleting a single file of any size should be near-instantaneous on ext4, but can take a long time on ext3. (Any extent-based filesystem has this advantage.)
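If you want to see the difference yourself, a quick test is easy to run on an ext4 and an ext3 mount (paths here are hypothetical):

    # Create a large file, then time the delete. On ext4 (extent-based) the rm
    # should return almost immediately; on ext3 the same delete has to walk
    # indirect block maps and takes much longer.
    fallocate -l 20G /mnt/test/bigfile     # on ext3 use dd instead, it has no fallocate support
    sync
    time rm /mnt/test/bigfile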

ext4 also fscks faster than ext3.

One last note: there were bug fixes in ext4 up to around 2.6.31, so I would make sure you aren't running anything older than 2.6.32, which is an LTS kernel.
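Checking is a one-liner:

    uname -r    # should print 2.6.32 or newer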


XFS is rock solid and has been in the kernel for ages. Examine tools like xfs_freeze and see whether XFS is what you are looking for. I know this is highly subjective, but I have used XFS for data storage for years without incident.
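For example, xfs_freeze lets you quiesce the filesystem while you take a snapshot at a lower layer; the mount point and LVM names below are hypothetical:

    # Suspend writes and flush XFS so the on-disk image is consistent
    xfs_freeze -f /backup

    # ...take a snapshot of the underlying volume here, e.g. with LVM:
    # lvcreate --snapshot --size 5G --name backup-snap /dev/vg0/backup

    # Thaw the filesystem and resume normal writes
    xfs_freeze -u /backup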


Just use a backup tool that supports checksums. Dar, for example, does, and it also supports incremental backups. Then you can back up to a rock-solid filesystem like ext3.
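As a minimal sketch (archive names and paths are made up), a full backup plus a later incremental run might look like:

    # Full backup of /data, slices named full.*.dar, plus a SHA-512 hash file
    # per slice (hash algorithm support depends on your dar version)
    dar -c /backups/full -R /data --hash sha512

    # Incremental backup containing only what changed since the full one
    dar -c /backups/incr1 -R /data -A /backups/full --hash sha512

    # Test an archive against the per-file CRCs stored inside it
    dar -t /backups/full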

For backups you want something rock solid and very stable, and btrfs and ZFS are simply not ready today.