Is there a way to protect an SSD from corruption due to power loss?

Solution 1:

When power is suddenly lost, MLC/TLC/QLC SSDs have two failure modes:

  • they lose the in-flight and in-DRAM-only writes;
  • they can corrupt any data-at-rest stored in the lower page of the NAND cell being programmed.

The first failure condition is obvious: without power protection, any data which are not yet on stable storage (i.e., the NAND itself) but only in the volatile cache (DRAM) will be lost. The same happens with classical mechanical disks (and that alone can wreak havoc on any filesystem or application which does not properly issue fsyncs).
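For illustration only, a minimal shell sketch of forcing a write down to stable storage instead of leaving it in volatile caches (paths and sizes are placeholders):

    # Write a test file and have dd call fsync() on it before exiting:
    dd if=/dev/zero of=/data/testfile bs=1M count=16 conv=fsync
    # Flush the whole filesystem containing /data (GNU coreutils >= 8.24):
    sync -f /data

Anything written without such an explicit flush may still be sitting only in DRAM when the power fails.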

The second failure condition is specific to MLC+ SSDs: when the upper page bit of a cell is reprogrammed to store new data, an unexpected power loss can also destroy/alter the lower page bit (i.e., previously committed data).

The only true, and most obvious, solution is to integrate a power-loss-protected DRAM cache (generally using a battery or supercaps), as has been done since forever by high-end RAID controllers; this, however, increases drive cost/price. Consumer drives typically have no power-loss-protected caches; rather, they use an array of more economical solutions such as:

  • partially protected write caches (e.g., Crucial M500/M550/M600+);
  • a NAND changes journal (e.g., Samsung drives; see the SMART PoR attribute and the smartctl example after this list);
  • special SLC/pseudo-SLC NAND regions to absorb new writes without putting previously committed data at risk (e.g., SanDisk, Samsung, etc.).
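If you want to see what your particular drive reports, here is a hedged sketch using smartctl (the device name is an example, and attribute names/IDs vary by vendor and smartctl version; on many Samsung drives the PoR-related counter tracks how often the drive had to recover after an unclean power-off):

    # Dump SMART attributes and filter for power-loss-related counters:
    smartctl -A /dev/sda | grep -i -E 'power|por'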

Back to your question: your Kingston drives are ultra-cheap ones, using an unspecified controller and with basically no public specs. It does not surprise me that a sudden power loss corrupted previous data. Unfortunately, even disabling the disk's DRAM cache (with the massive performance loss that entails) will not solve your problem, as previous data (i.e., data-at-rest) can, and will, be corrupted by unexpected power losses. If they are based on the old SandForce controller, even a total drive brick can be expected under the "right" circumstances.

I strongly suggest reviewing your UPS setup and, in the mid-term, replacing these aging drives.

A last note about PostgreSQL and other Linux databases: they will not disable the disk's cache, and should not be expected to do that. Rather, they issue periodic/required fsyncs/FUAs to commit key data to stable storage. This is the way things should be done unless a very compelling reason exists (e.g., a drive which lies about ATA FLUSH/FUA).
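If you want to sanity-check the flush path under PostgreSQL, its bundled pg_test_fsync utility is a reasonable starting point; a sketch, assuming the test file sits on the same filesystem as the WAL (the path below is just an example):

    # Compare fsync methods; implausibly high sync rates on a consumer SATA SSD
    # usually mean the flush never actually reaches stable storage.
    pg_test_fsync -f /var/lib/postgresql/pg_test_fsync.out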

EDIT: if possible, consider migrating to a checksumming filesystem such as ZFS or BTRFS. At the very least consider XFS, which has journal checksumming and, lately, even metadata checksumming. If you are forced to use EXT4, consider enabling auto-fsck at startup (fsck.ext4 is very good at repairing corruption).
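A hedged sketch of the relevant commands (device names are placeholders; run them only on unmounted/empty devices and check your distribution's tool versions first):

    # XFS with metadata checksums (the default on recent xfsprogs):
    mkfs.xfs -m crc=1 /dev/sdb1
    # Enable metadata checksums on an existing, unmounted ext4 filesystem (e2fsprogs >= 1.43):
    tune2fs -O metadata_csum /dev/sdb1
    # Force a filesystem check on every mount, so corruption is caught at boot:
    tune2fs -c 1 /dev/sdb1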

Solution 2:

Yeah. Don't get super-cheap SSDs - anything outside the low-end consumer market has capacitors and full protection against power loss. And it really does not cost that much more.


Solution 3:

The first thing to do is to define recovery time and recovery point objectives. How long do you have to recover one of these terminals, and what point in time is acceptable for the data? Perhaps recovering to last week's backup within a couple of hours is good enough.

All sorts of strange things can happen to files if in-flight writes are lost. A file system's priority is maintaining its own metadata consistency; it may not provide the same guarantees for your data. In other words, fsck isn't guaranteed to recover your data. Its job is to get you a file system that will mount.

So, power. Install and configure a UPS, and test that it will shut the system down gracefully. This gives file system caches and the drives themselves a chance to write out pending data.
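As a sketch of what "configure and test" can look like with NUT (Network UPS Tools); the UPS name, driver, and password below are placeholders, and apcupsd is an equally valid choice:

    # /etc/nut/ups.conf -- one USB-attached UPS (driver depends on the model)
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/nut/upsmon.conf -- shut down cleanly when the UPS reports low battery
    MONITOR myups@localhost 1 upsmon mypassword master
    SHUTDOWNCMD "/sbin/shutdown -h now"

You can then dry-run the shutdown path with upsmon -c fsd instead of waiting for a real outage.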

And, durability of the writes to the disks. Read PostgreSQL's reliability chapter. Use the diskchecker.pl script linked there to do a crash test and determine whether the SSDs are lying about writes having reached non-volatile storage. If there is loss, consider replacing them with SSDs known to have power-loss protection.
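A sketch of how that crash test typically runs; the exact arguments here are from memory, so check the script's own usage output, and the hostnames/paths are placeholders:

    # On a second machine that stays powered (the listener):
    diskchecker.pl -l

    # On the machine under test, writing to the SSD being checked; cut power mid-run:
    diskchecker.pl -s listener-host create /mnt/ssd/test_file 500

    # After rebooting the test machine, verify which acknowledged writes actually survived:
    diskchecker.pl -s listener-host verify /mnt/ssd/test_file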

Edit: you added details that the write cache was enabled. You can attempt to disable it: hdparm -W0 /dev/sda, or the appropriate command for a hardware array. Reference: RHEL storage administration guide.
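A short sketch of checking and then disabling the cache with hdparm (the device name is an example):

    # Show whether the drive's volatile write cache is currently enabled:
    hdparm -W /dev/sda

    # Disable it; expect a large drop in write performance:
    hdparm -W0 /dev/sda

Note that the setting generally does not survive a power cycle, so it has to be reapplied at boot (for example via /etc/hdparm.conf on Debian-family systems, or a udev rule).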

File system write barriers enforce an ordering of journal commits. It's not a guarantee the data will be intact, but it's safer for a file system sitting on a volatile cache. Although it is the default, adding the "barrier" mount option clearly documents that you value consistency over performance.
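For example, an /etc/fstab entry that spells that out for ext4 (device and mount point are placeholders):

    /dev/sda1  /var/lib/pgsql  ext4  defaults,barrier=1  0  2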

Finally, the last line of defense: backups. Do a restore test to ensure you can get your application and database back to the desired point in time. This is useful for all kinds of data loss, not just power failure.
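With PostgreSQL in the picture, a minimal restore-test sketch using its standard tools (database name and paths are examples; adapt this to whatever backup method you actually use):

    # Take a logical backup in custom format:
    pg_dump -Fc mydb > /backups/mydb.dump

    # Restore it into a scratch database and run your sanity checks against that:
    createdb mydb_restore_test
    pg_restore -d mydb_restore_test /backups/mydb.dump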