Third-party SSD solutions in ProLiant Gen8 servers

Solution 1:

I've covered SSD interoperability and compatibility issues with HP servers several times here.

Check these posts:

HP D2700 enclosure and SSDs. Will any SSD work?

Are there any SAN vendors that allow third party drives?

So, the move from G6 and G7 HP ProLiants to the Gen8 variants forced a disk carrier form-factor change. HP went to the SmartDrive carrier with the Gen8 product, and that's created a whole set of issues that impact SSD compatibility.

I like the idea of choosing the most appropriate options for my environments and applications, within reason. With G7s, I could use HP's SanDisk/Pliant SAS enterprise SSDs when needed, but also Intel or other low-cost SandForce-based SSDs where it made sense. If using an external enclosure like a D2700 or D2600, I could also use sTec SSDs (another quality SAS SSD option). Drive carriers for the old form factor were easily obtained.

With Gen8 servers, much of this isn't possible. Between the hard-to-obtain SmartDrive carriers, the restrictive firmware and disk-validation techniques, and the obscenely high price of the HP-branded SSDs ($2,500+ per drive), I think HP has priced itself out of the market.

Their rebranded drives aren't stellar performers, but they have tremendous endurance, and that isn't needed in every environment. Getting the best performance out of HP SSDs on the current HP Smart Array controllers also requires tuning, or even additional HP SmartPath licensing. Previous controllers like the Smart Array P410 were limited by IOPS ceilings and other constraints.
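If you do run HP SSDs behind a Smart Array, most of that tuning is exposed through the Smart Storage Administrator CLI. A minimal sketch, assuming hpssacli (or the older hpacucli) is installed, the controller sits in slot 0, and the SSDs live in array A; all of those identifiers are placeholders, and the ssdsmartpath switch name is from memory, so verify it against your CLI version:

    # Review the controller layout first
    hpssacli ctrl slot=0 show config

    # Enable HP SSD Smart Path on an SSD-backed array (array letter is a placeholder)
    hpssacli ctrl slot=0 array A modify ssdsmartpath=enable

    # Confirm the setting took effect
    hpssacli ctrl slot=0 show config detail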

A good development that may benefit your application on Gen8 servers is HP SmartCache SSD tiering. Much like LSI's CacheCade, this lets you add SSD read caching and benefit from lower latencies where it matters. Also see: How effective is LSI CacheCade SSD storage tiering?
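If you license SmartCache, the cache is built as a separate "cache" logical drive carved from SSDs and attached to an existing data logical drive. A rough sketch, again assuming hpssacli, a controller in slot 0, SSDs at bays 1I:1:9 and 1I:1:10, and data logical drive 1 (all placeholders); the ldcache syntax is my recollection of the SmartCache CLI, so check the documentation for your firmware level:

    # Create an SSD cache volume and attach it to logical drive 1
    hpssacli ctrl slot=0 create type=ldcache drives=1I:1:9,1I:1:10 datald=1

    # Verify the cache logical drive is present
    hpssacli ctrl slot=0 show config detail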

In general, I'm not concerned about SSD reliability in RAID setups when using disk-form-factor drives. PCIe-based SSDs introduce other concerns. I haven't had any endurance problems, but check: Are SSD drives as reliable as mechanical drives (2013)?


So what can you do?

  • The D2700 external enclosure may be key here. It uses the older G7 disk carriers. It's also a very solid unit and is compatible with both old- and new-generation controllers. You can stuff Intel/sTec/cheapo disks in it all day and be fine. Connect it to a suitable external SAS controller in your hosts, and that will give you the flexibility you need. Use a DL360p instead of a DL380p to save a rack unit.

  • Intel disks inside the Gen8 server... I wouldn't do it, if for no other reason than to avoid the POST 1709 errors. Plus you'll be self-supporting in a way that affects the main server unit. I just had a customer try to fill a 25-bay DL380p Gen8 with Intel SSDs and eBay drive carriers. He had to return the Intel drives and use low-end HP SATA disks just to get the system to work.
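If you go down the third-party road anyway, at least check how the Smart Array identifies the drives before committing data to them; model, firmware revision and status are all visible from the CLI. A sketch assuming hpssacli and a controller in slot 0:

    # List every physical drive the controller sees, with model, firmware revision and status
    hpssacli ctrl slot=0 physicaldrive all show detail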

The HP ProLiant DL380p Gen8 is offered in 8-bay, 12-bay, 16-bay and 25-bay units.

  • The 8-bay has been fine. It's a good platform, especially if you add external storage.

  • The 16-bay Gen8 has no SAS expander card (and is incompatible with the excellent HP SAS Expander), so you need two internal RAID controllers to use it. As a result, your logical drives cannot span the two 8-bay drive cages. This is a departure from the G7s, where putting all 16 disks in one array was no problem.

  • The 25-bay unit has a concerning design flaw. The SAS expander is embedded on the 25-drive backplane, and that backplane requires a P420i controller with FBWC to function. Fine. I had three RAID controller cache DIMMs die in a 60-day period, though. On the 8-bay units, a cache failure just disables write caching. On the 25-bay server, a cache failure turns the Smart Array into a "zero-memory" controller and disables all access to the disks! Avoid this model unless you can accept that risk. My failure rate on 2GB cache modules is far higher than on the 1GB modules, so I downgrade to the 1GB modules for this specific platform.

The resulting boot message on the 25-bay unit:

    1746-Slot z Drive Array - Unsupported Storage Connection Detected - SAS connection via expander is not supported on this controller model. Access to all storage has been disabled.
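At minimum, keep an eye on the cache module so a failure doesn't take you by surprise. A sketch assuming hpssacli is installed:

    # Controller, cache and battery/capacitor status at a glance
    hpssacli ctrl all show status

    # Full detail, including cache module size and any failure conditions
    hpssacli ctrl all show config detail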


Solution 2:

Here's an update to summarize my takeaways from this question. Thanks for the contributions!

It's fair to say that the original question presumes an OEM storage solution (HP SSDs in this case) provides a supported or "guaranteed" working solution in terms of component compatibility and system performance. This obviously comes at a premium price, and the perceived value informs how reasonable the premium is.

While I had all but discarded the notion of using SSDs in this hardware refresh, the press on the Intel S3700, specifically, made an SSD solution attractive enough to reconsider. Looking at the equivalent HP products, I found (1) they aren't currently available, and (2) the expected price premium is 2.4x the Intel product. So the question becomes: how much effort would it take to integrate and validate the Intel solution? Answering that leads to a very product-specific solution that runs counter to the aim of serverfault, so I'll generalize my thinking process using the answers provided:

  1. Whether vendor-integrated or DIY, there are still a lot of variables in hanging SSDs behind RAID controllers optimized for spinning disks. HP recommends assorted tweaks for SSD use, and the HP SmartPath software that ewwhite mentioned (Gen8 RAID + Windows only) basically short-circuits much of the RAID firmware when using SSDs. HP's additional "protectionism" with the Gen8 carriers, plus managing firmware updates for 3rd-party SSDs (which I would expect to be more critical than for HDDs), makes this all look a little too immature (or too management-intensive) for prime time in a complex setup.

  2. Before I ran back to spinning disks, though, I took another look at the FusionIO product, as Tom O'Connor suggested. Since performance isn't really an issue for us, the biggest benefit is that it is an integrated storage module. That makes compatibility and configuration much more straightforward. Another important point is that HP OEMs these, so you can get "genuine" HP product in this line, and integration becomes even less an issue. Furthermore, and in stark contrast to the SATA/SAS SSDs I was considering, HP's advertised (online) prices are actually better than FusionIO's. Go figure.

Re-thinking the deployment with this post in mind, I considered building availability nodes with single FusionIO cards. This took the solution cost from "can't consider" down to "let's investigate further." Finally, when the actual quote came in at a better-than-expected level, I was sold.

So the bottom line is that we have two Gen8 servers sporting HP-branded FusionIO cards running in the sandbox. Endurance will be far beyond our expected use, the cost was lower than for a 15K SAS disk solution, and we'll substantially reduce power consumption and rack space. The redundancy model is different, sure, but the only thing I expect people will miss are all the blinking LEDs.
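For monitoring those cards, the Fusion-io utilities that ship alongside the driver include a status tool that reports card health, firmware and remaining reserve space. A sketch, assuming the vendor's fio-util package (the name I've seen on the Linux builds) is installed:

    # Detailed card status: health, firmware, temperature and reserve (endurance) information
    fio-status -a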

My original thinking regarding SSDs for a mission-critical database system was to wait a few years, as there will be many more mature and proven solutions at better price points. No doubt that will still be the case, but I was surprised to find something today that looks like it will do the job well.