Use F2FS on RAID0 with HDDs?

Solution 1:

Do not use RAID0: a failure of any one drive will kill the whole array. RAID6, RAID10, or even a single drive with no array at all would be better for availability.
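
For instance, a minimal mdadm sketch of a four-disk RAID10 array instead of RAID0 (device names are placeholders for your actual drives):

    # RAID10 stripes for speed like RAID0, but also mirrors, so the
    # loss of any single drive does not kill the array.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd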


f2fs is designed to be friendly to modern solid-state devices, and Linux md can be very fast.

However, it is impossible to make a general statement like "f2fs on an array is better" without data. You need to take into account what your workload is, whether the I/O pattern has been benchmarked on a system similar to yours, and what the limiting factors are.

Do a capacity analysis. Estimate things like database queries per second, or how many files are read and written. Measure IOPS with tools like iostat -xz 1. If the r/s and w/s numbers approach the rated capacity of the device, you may need faster disks. Expect roughly 100 IOPS per spinning magnetic disk, and at least a couple of thousand IOPS out of most SSDs. It also makes a difference whether disks are connected via SATA or NVMe.
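
A quick sketch of that measurement (iostat comes from the sysstat package):

    # Extended per-device statistics every second; -z hides idle
    # devices. r/s + w/s is your current IOPS -- compare it against
    # ~100 for a spinning disk; high await values mean the device
    # is already saturated.
    iostat -xz 1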

Evaluate the performance of every resource on the system. Fast storage is of limited help if you are CPU- or memory-bound. Memory is especially useful as cache. Excessive paging out is bad: the swap file steals storage system performance, yet isn't anywhere near as fast as DRAM.
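
To see whether CPU, memory, or swap is the actual bottleneck, the standard tools are enough:

    # si/so columns: pages swapped in/out per second. Sustained
    # non-zero values mean paging is stealing your disk IOPS.
    vmstat 1
    # "buff/cache" is RAM currently working as filesystem cache.
    free -h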

Once you understand how the system performs now, you can start evaluating changes to the storage system.

Solution 2:

Using F2FS on a classical HDD is not a good idea: while its random write performance will probably be higher than that of EXT4 or XFS, the sequential read speed on an aged filesystem will be very disappointing.
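
If you want numbers for your own drives rather than generalities, fio can compare the two access patterns; a sketch, with the target file path and size as placeholders:

    # Random 4k writes, where a log-structured filesystem looks good:
    fio --name=randwrite --filename=/mnt/test/fio.dat --size=4G \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1
    # Sequential reads over the same (now fragmented) file, where an
    # aged F2FS on an HDD tends to suffer:
    fio --name=seqread --filename=/mnt/test/fio.dat --size=4G \
        --rw=read --bs=1M --ioengine=libaio --iodepth=8 --direct=1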

To increase random write performance without a powerloss-protected write-back cache (read: a true RAID controller), you have to configure your applications not to issue fsync(), but this will significantly increase the odds of losing data on an unplanned shutdown. Do not disable barriers at the system level (i.e. by telling the kernel you have write-through caches), as that can trash the entire filesystem in case of power loss.
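
What "not issuing fsync()" looks like depends on the application; as one concrete example (assuming PostgreSQL here, other databases have similar knobs), relaxing synchronous commits trades the durability of the last few transactions for random write throughput without risking on-disk corruption:

    # Commits return before the WAL is flushed to disk: a power loss
    # may drop the most recent transactions, but cannot corrupt the
    # database itself.
    psql -c "ALTER SYSTEM SET synchronous_commit = off;"
    psql -c "SELECT pg_reload_conf();"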

You can also consider using ZFS (preferably leaving the striping to ZFS itself, rather than to the MDRAID layer): due to its CoW nature, random writes are significantly faster than on other filesystems, while its advanced caching avoids the issues with sequential reads. It even supports sync=disabled: if you can tolerate a ~5s data loss window in case of unexpected shutdown, it will provide a ton of random write IOPS without impacting application or filesystem consistency.
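
A minimal sketch of that setup (pool and device names are placeholders):

    # ZFS stripes across top-level vdevs by itself -- two single-disk
    # vdevs give you RAID0-like striping with no MDRAID layer.
    zpool create tank /dev/sda /dev/sdb
    # Trade a ~5s data loss window on power failure for far more
    # random write IOPS; CoW keeps the filesystem itself consistent.
    zfs set sync=disabled tank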

Finally, if you are using EXT4, you can do a quick test with data=journal: while this will lower sequential write performance, random writes should be somewhat faster than with the default journal mode.
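
A quick way to run that test (device and mount point are placeholders); note that the kernel refuses to switch the data= mode on a live remount, so unmount first:

    umount /srv/data
    # Journal data as well as metadata: sequential writes pay a
    # double-write penalty, but small random writes get absorbed by
    # the sequential journal first.
    mount -o data=journal /dev/md0 /srv/data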