SAS disk vs entry level SSD

Solution 1:

QLC SSDs are absolutely inadequate for write-heavy workloads such as databases and SAP. I strongly suggest buying enterprise-grade TLC disks, such as the Samsung PM/SM863 or Intel S4510/S4610.

I would not go the SAS 10k route unless the SSD system costs too much for your budget.

Finally, I would keep all disks in a single RAID 10 array so that production workloads can benefit from the IOPS of all 16 disks.
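To see why a single large RAID 10 array is attractive, here is a rough back-of-the-envelope IOPS estimate. The per-disk figure below is an illustrative ballpark for 10k SAS drives, not a vendor spec:

```python
# Rough RAID10 IOPS estimate (illustrative numbers, not vendor specs).
# A RAID10 read can be served by either mirror copy, so reads scale with
# all disks; each logical write costs two physical writes (one per mirror).

def raid10_iops(disks: int, iops_per_disk: int) -> tuple[int, int]:
    """Return (read_iops, write_iops) for a RAID10 array."""
    reads = disks * iops_per_disk
    writes = disks * iops_per_disk // 2  # write penalty of 2 for mirroring
    return reads, writes

# 16 x 10k SAS drives at ~140 random IOPS each (common ballpark figure):
reads, writes = raid10_iops(16, 140)
print(reads, writes)  # 2240 1120
```

Splitting the disks into several smaller arrays would cap each workload at a fraction of these totals.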

Solution 2:

Always go flash, if you can of course. QLC has crazy low endurance, so watch spare-cell usage and be prepared to swap drives as they wear out quickly: keep some in stock and consider replacing them proactively. You'll be fine :)


Solution 3:

In terms of raw speed, the SSD options in the question will vastly outperform the SAS drives. It's embarrassing, really. Nevertheless, don't use the QLC disks! You can use consumer SSDs, but look for disks using TLC or better.*

Additionally, you need to be careful using consumer SSDs to build RAID volumes. Modern consumer SSDs have internal controllers that lie to the OS and RAID controller, claiming to have fully committed data when it is actually still sitting in the drive's volatile cache. There are good reasons for this in the desktop systems these drives are designed for, but in a server RAID/SAN volume a power failure can cause significant data loss: data the OS believed was committed vanishes, and the parity for the whole stripe no longer matches.
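To make the failure mode concrete: applications and filesystems request durability with fsync(), which flushes the kernel's page cache and issues a cache-flush command to the drive. A drive that acknowledges the flush while data still sits in its volatile cache silently voids that guarantee. A minimal sketch of the contract an application relies on:

```python
import os
import tempfile

# fsync() is the point at which software considers a write durable.
# If the drive acknowledges the flush without actually committing the
# data to non-volatile media, a power cut loses the "durable" write.

path = os.path.join(tempfile.mkdtemp(), "journal.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    os.write(fd, b"commit record\n")
    os.fsync(fd)  # only after this may the app treat the write as durable
finally:
    os.close(fd)

with open(path, "rb") as f:
    data = f.read()
```

Databases build their crash recovery on exactly this sequence, which is why a lying drive is so dangerous under them.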

Enterprise SSDs avoid this issue with a small internal capacitor able to provide enough power to finish committing anything still in a volatile buffer if the power drops. It's a $2 manufacturing addition, but it can triple (or more) the price of the drive :(

You may also be able to mitigate this issue with a RAID controller that has its own battery backup unit, or if you otherwise have very high confidence in your data center's power situation and your backups.

With that in mind, I see this:

We have redundant UPS, along with dedicated online generator for Data Center.

That's a start. What I'd like to see on top of this is a documented history for this data center proving that UPS batteries are replaced on schedule, the generator is actually maintained and exercised once a quarter, and the data center has survived previous power events without unexpected server drops. If you have or can get this documentation, you should feel comfortable using (non-QLC) consumer SSDs in your servers.


* Note: QLC has the eventual potential to exceed TLC endurance, but that's not what's on the market today. As such, this post may not age very well, and future readers should do additional research.


Solution 4:

We currently run a very similar setup for our SAP systems, with an additional QAS server.

As primary storage we use a Dell Compellent all-SSD solution, with LUNs built from 1.92 TB SSDs. We also have an HDD bay used for database backups. The array is RAID 6 across 8 drives plus 1 dedicated hot spare.

The advantage is that the system is very fast and we have a reliable backup in case of emergency.

The servers run as Hyper-V VMs clustered across 2 physical hosts, so the servers have redundancy and the storage is backed up to HDDs.

The system has been running for 3 years now without problems; the SSDs still report healthy, with endurance at 95%.
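Those figures imply plenty of headroom. A naive linear extrapolation from the numbers in this answer (5% of rated endurance consumed in 3 years, and assuming the workload stays constant, which it may not) looks like this:

```python
# Simple linear wear projection from the figures in this answer:
# 5% of rated endurance consumed over 3 years of service.
# Assumes a constant write workload, which is an optimistic simplification.

years_in_service = 3
endurance_used = 0.05  # fraction of rated endurance consumed so far

wear_per_year = endurance_used / years_in_service
projected_lifetime_years = 1.0 / wear_per_year
print(round(projected_lifetime_years))  # 60
```

Even if the real workload were several times heavier, the drives would comfortably outlast the servers.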

As for the array, there is no strict need to split it. You can make one big array and assign space to each VM, or make 2 arrays and dedicate each one to a specific server.

Tags:

Storage