Is it safe to use consumer MLC SSDs in a server?

Solution 1:

A few thoughts;

  • SSDs have 'overcommit' (overprovisioned) flash - spare capacity used in place of cells worn out by writing. Low-end SSDs may only have around 7% of overcommit space, mid-range around 28%, and enterprise disks as much as 400%. Consider this factor.
  • How much will you be writing to them per day? Even middle-of-the-range SSDs such as those based on SandForce's 1200 chips rarely tolerate more than around 35GB of writes per day before seriously cutting into the overcommitted memory (see the endurance sketch after this list).
  • Usually, day 1 of a new SSD is full of writing, whether that's OS or data. If you have significantly more than 35GB of writes on day one, consider copying the data across in batches to give the SSD some 'tidy-up time' between batches.
  • Without TRIM support, random write performance can drop by up to 75% within weeks if there's a lot of writing during that period - if you can, use an OS that supports TRIM.
  • The internal garbage collection that modern SSDs perform runs specifically during quiet periods and stops on activity. This isn't a problem for a desktop PC, where the disk could be quiet for 60% of its usual 8-hour duty cycle, but you run a 24-hour service... when will this process get a chance to run?
  • It's usually buried deep in the specs, but like cheap 'regular' disks, inexpensive SSDs are also only expected to have a duty cycle of around 30%. You'll be using them for almost 100% of the time - this will affect your MTBF.
  • While SSDs don't suffer the same mechanical problems regular disks do, they do have single- and multiple-bit errors - so strongly consider RAIDing them even though the instinct is not to. Obviously it'll impact all that lovely random write speed you just bought, but consider it anyway.
  • It's still SATA, not SAS, so your queue management won't be as good in a server environment, but then again the extra performance boost will be quite dramatic.
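
To put the write-budget point into rough numbers, here is a minimal back-of-the-envelope sketch. All figures (capacity, rated P/E cycles, daily writes, write amplification) are illustrative assumptions; substitute the values from your drive's datasheet and your own workload measurements:

    # Rough SSD wear estimate - all figures are illustrative assumptions.

    def days_until_worn_out(capacity_gb: float,
                            pe_cycles: float,
                            host_writes_gb_per_day: float,
                            write_amplification: float = 3.0) -> float:
        """Estimate the days until the drive exhausts its rated P/E cycles."""
        total_nand_writes_gb = capacity_gb * pe_cycles              # total data the NAND can absorb
        nand_writes_per_day = host_writes_gb_per_day * write_amplification
        return total_nand_writes_gb / nand_writes_per_day

    # Example: 120GB consumer MLC drive, ~3000 P/E cycles, 35GB/day of host writes
    days = days_until_worn_out(120, 3000, 35, write_amplification=3.0)
    print(f"~{days:,.0f} days (~{days / 365:.1f} years) before the rated wear limit")

Note that the write amplification factor grows without TRIM and as the drive fills, which is exactly why heavy-write days eat into the overcommitted space faster than the raw GB figure suggests.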

Good luck - just don't 'fry' them with writes :)

Solution 2:

I did find this link, which has an interesting and thorough analysis of MLC vs SLC SSDs in servers:

In my view using an MLC flash SSD array for an enterprise application without at least using the (claimed) wear-out mitigating effects of a technology like Easyco's MFT is like jumping out of a plane without a parachute.

Note that some MLC SSD vendors claim that their drives are "enterprisey" enough to survive the writes:

SandForce aims to be the first company with a controller supporting multi-level cell flash chips for solid-state drives used in servers. By using MLC chips, the SF-1500 paves the way to the lower-cost and higher-density drives server makers want. To date flash drives for servers have used single-level cell flash chips. That's because the endurance and reliability of MLC chips have generally not been up to the requirements of servers.

There is further analysis of these claims at AnandTech.

Additionally, Intel has now gone on the record saying that SLC might be overkill in servers 90% of the time:

"We believed SLC [single-level cell] was required, but what we found through studies with Microsoft and even Seagate is these high-compute-intensive applications really don't write as much as they thought," Winslow said. "Ninety percent of data center applications can utilize this MLC [multilevel cell] drive."

... over the past year or so, vendors have come to recognize that by using special software in the drive controllers, they're able to boost the reliability and resiliency of their consumer-class MLC SSDs to the point where enterprises have embraced them for high-performance data center servers and storage arrays. SSD vendors have begun using the term eMLC (enterprise MLC) NAND flash to describe those SSDs.

"From a volume perspective, we do see there are really high-write-intensive, high-performance computing environments that may still need SLC, but that's in the top 10% of even the enterprise data center requirements," Winslow said.

Intel is feeding that upper 10% of the enterprise data center market through its joint venture with Hitachi Global Storage Technologies. Hitachi is producing the SSD400S line of Serial Attached SCSI SSDs, which has 6Gbit/sec. throughput -- twice that of its MLC-based SATA SSDs.

Intel, even for its server-oriented SSDs, has migrated away from SLC to MLC with a very large "overprovisioning" space in the new Intel SSD 710 series. These drives allocate up to 20% of the overall storage internally for redundancy:

Performance is not top priority for the SSD 710. Instead, Intel is aiming to provide SLC-level endurance at a reasonable price by using cheaper eMLC HET NAND. The SSD 710 also supports user-configurable overprovisioning (20%), which increases drive endurance significantly. The SSD 710's warranty is 3 years or until a wear indicator reaches a certain level, whichever comes first. This is the first time we've seen SSD warranty limited in this manner.


Solution 3:

Always base these sorts of things on facts rather than supposition. In this case, collecting facts is easy: record longish-term read/write IOPS profiles of your production systems, and then figure out what you can live with in a disaster recovery scenario. Use something like the 99th percentile as your measurement. Do not use averages when measuring IOPS capacity - the peaks are all that matter! Then buy the required capacity and IOPS for your DR site. SSDs may be the best way to do that, or maybe not.

So, for example, if your production applications require 7500 IOPS at the 99th percentile, you might decide you can live with 5000 IOPS in a disaster. But that's at least 25 15K disks required right there at your DR site, so SSD might be a better choice if your capacity needs are small (sounds like they are). But if you only measure that you do 400 IOPS in production, just buy 6 SATA drives, save yourself some coin, and use the extra space for storing more backup snapshots at the DR site. You can also separate reads and writes in your data collection to figure out just how long non-enterprise SSDs will last for your workload based on their specifications.
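
As a minimal sketch of that sizing arithmetic (the IOPS samples, the two-thirds DR target and the ~200 random IOPS per 15K spindle figure are all illustrative assumptions, not measurements from any real system):

    # Size a DR tier from recorded IOPS samples - a sketch, all inputs are assumptions.
    import math

    def percentile(samples, pct):
        """Nearest-rank percentile, e.g. pct=99 for the 99th percentile."""
        ordered = sorted(samples)
        return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

    # Pretend these came from iostat/sar data collected over weeks on production.
    iops_samples = [400, 900, 1200, 5200, 7500, 650, 7100, 300, 4800, 7400]

    p99 = percentile(iops_samples, 99)
    dr_target = 0.66 * p99                    # accept ~2/3 of production IOPS in a disaster
    spindles = math.ceil(dr_target / 200)     # ~200 random IOPS per 15K disk, rule of thumb

    print(f"p99: {p99} IOPS, DR target: {dr_target:.0f} IOPS, ~{spindles} x 15K disks")

Splitting the same samples into separate read and write series gives you the write rate to plug into a consumer SSD's endurance specification.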

Also remember that DR systems might have smaller memory than production, which means more IOPS are needed (more swapping and less filesystem cache).


Solution 4:

Even if the MLC SSDs only lasted for one year, in a year's time the replacements will be a lot cheaper. So can you cope with having to replace the MLC SSDs when they wear out?


Solution 5:

As the original question is really interesting but all the answers are quite old, I would like to give an updated answer.

As of 2020, current consumer SSDs (or at least the ones from top-tier brands) are very reliable. Controller failure is quite rare and they correctly honor write barriers / syncs / flushes / FUAs, which means good things for data durability. Albeit using TLC flash, they sport quite good endurance ratings.

However, by using TLC chips, their flash page size and program time are much higher than on old SLC or MLC drives. This means that their private DRAM cache is critical to achieving good write performance. Disabling that cache will wreak havoc on TLC (or even MLC, albeit with lower impact) write IOPS. Moreover, any write pattern which effectively bypasses the write-combining function of the DRAM cache (i.e. small synchronous writes done by fsync-rich workloads) is bound to see very low performance. At the same time write amplification will skyrocket, wearing out the SSD much faster than expected.

A practical example: my laptop has the OEM variant of a Samsung 960 EVO - a fast M.2 SSD. When hammered with random writes it provides excellent IOPS, unless using fsync writes: in that case it is only good for ~300 IOPS (measured with fio), which is a far cry from the 100K+ IOPS delivered without forcing syncs.
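
If you want to reproduce this kind of measurement without fio, here is a rough sketch of the same idea. The file path, block size and write count are arbitrary, and it issues sequential rather than random writes, so treat it as an illustration of the buffered-vs-fsync gap rather than a calibrated benchmark:

    # Crude buffered-vs-fsync 4K write comparison - a sketch, not a calibrated benchmark.
    import os, time

    PATH = "testfile.bin"           # hypothetical scratch file on the SSD under test
    BLOCK = os.urandom(4096)
    COUNT = 1000

    def writes_per_second(sync_each_write: bool) -> float:
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        start = time.perf_counter()
        for _ in range(COUNT):
            os.write(fd, BLOCK)
            if sync_each_write:
                os.fsync(fd)        # force the drive to persist this single write
        os.fsync(fd)
        os.close(fd)
        return COUNT / (time.perf_counter() - start)

    print(f"buffered:   {writes_per_second(False):,.0f} writes/s")
    print(f"fsync each: {writes_per_second(True):,.0f} writes/s")
    os.unlink(PATH)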

The point is that many enterprise workloads (i.e. databases, virtual machines, etc.) are fsync-heavy, which is unfavorable for consumer SSDs. Of course if your workload is read-centric this does not apply; however, if running something like PostgreSQL on a consumer SSD you may be disappointed by the results.

Another thing to consider is the possible use of a RAID controller with a BBU-backed (or powerloss-protected) writeback cache. Most such controllers disable the SSD's private DRAM cache, leading to much lower performance than expected. Some controllers support re-enabling it, but not all of them pass down the required syncs/barriers/FUAs needed to get reliable data storage on consumer SSDs.

For example, older PERC controllers (e.g. the 6/i) announced themselves as write-through devices, effectively telling the OS not to issue cache flushes at all. A consumer SSD connected to such a controller can be unreliable unless its cache is disabled (or the controller takes extra, undocumented care), which means low performance.
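
As a quick sanity check from the OS side, recent Linux kernels expose how they see a block device's volatile cache under sysfs. The sketch below assumes a device named sda; note that what a HW RAID controller's virtual disk reports here may not reflect what the physical drives behind it actually do:

    # Check whether Linux treats a block device's cache as write-back or write-through.
    from pathlib import Path

    def cache_mode(device: str = "sda") -> str:
        """Return the kernel's view of the device cache: 'write back' or 'write through'."""
        return Path(f"/sys/block/{device}/queue/write_cache").read_text().strip()

    # A 'write through' device receives no cache flushes from the kernel - the
    # situation described above for the older PERC controllers.
    print(f"sda cache mode as seen by the kernel: {cache_mode('sda')}")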

Not all controllers behave in this manner - for example, newer PERC H710+ controllers announce themselves as write-back devices, enabling the OS to issue cache flushes as required. The controller can ignore these flushes unless the attached disks have their caches enabled: in that case, it should pass the required syncs/flushes down.

However, this is all controller (and firmware) dependent; since HW RAID controllers are black boxes, one cannot be sure about their specific behavior and can only hope for the best. It is worth noting that open-source RAID implementations (i.e. Linux MD RAID and ZFS mirroring/RAID-Z) are much more controllable beasts, and generally much better at extracting performance from consumer SSDs. For this reason I use open-source software RAID whenever possible, especially when using consumer SSDs.

Enterprise-grade SSDs with a powerloss-protected writeback cache are immune to all these problems: having an effectively non-volatile cache, they can safely ignore sync/flush requests, providing very high performance and low write amplification irrespective of HW RAID controllers. Considering how low the prices for enterprise-grade SATA SSDs are nowadays, I often see no value in using consumer SSDs in busy servers (unless the intended workload is read-centric or otherwise fsync-poor).

Tags:

Storage