To use cable management arms or not

Solution 1:

Coming from a web hosting environment, we dealt with hundreds of servers, some of which were always moving as contracts changed.

I don't care for them and prefer velcro instead.

IMO, if you're going to pull a server from a rack to do something inside the case, it should be powered off. Hot-swappable drives are all accessible from the front.

It was one more thing I didn't need stuffed into the back of the rack.

It added to install time and removal time.

It made it harder to replace a bad cable in a hurry.

It blocked access to the label on the cables near the jack.

It made it hard to move a server and its cables if, say, I wanted to move it higher in the rack and shorten them.

It added to any heat problems we might have had.

Solution 2:

The problem is one extra word in this sentence:

It seems like a nice idea to ensure that you have enough cable slack to be able to pull a running server out of a rack without worrying about accidentally unplugging a cable, but how many times is this really done?

Take the word "running" out of the sentence, and you'll see the light. Cable management arms make it easier to do ANY maintenance on a server, not just when it's running. Need to pop it open to add more memory, HBAs or network cards? Done. Less time during an outage.

If you're going after five nines, every second you can save during an outage is crucial. Unplugging three or four network cables doesn't seem time-intensive, but watch what happens when you accidentally plug the wrong network cable back into a port. Maintenance time skyrockets.
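As a back-of-the-envelope illustration (my numbers, not part of the original answer), here's a small Python sketch of how little downtime each availability tier actually budgets per year:

```python
# Annual downtime budget for a given availability target.
# "Five nines" (99.999%) leaves only about 5.3 minutes per year.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

def downtime_budget_seconds(availability: float) -> float:
    """Return the allowed downtime per year, in seconds."""
    return SECONDS_PER_YEAR * (1.0 - availability)

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    budget = downtime_budget_seconds(availability)
    print(f"{nines} nines ({availability:.5%}): "
          f"{budget / 60:.1f} minutes/year")
```

Five nines works out to roughly five minutes of downtime for the whole year, so a few extra minutes spent unplugging and re-cabling during a single outage can consume most of the budget.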


Solution 3:

I do not. My argument is that they impede airflow, and that there are better third-party cable management solutions that accomplish the same thing.

I can count on zero fingers the times I've wanted to leave a server powered on while adding or removing hardware, and that's their only^H^H^H^Hmain purpose.

Edit

I admit, they make it faster to pull hardware out of the rack, but in my opinion, it's not worth the hassle and heat.


Solution 4:

I prefer to use the cable management arms, but I can see the other side of the argument. I have found that a neat and tidy rack is easier to deal with in a crisis situation, because it is easier to know what is where.

Reasons PRO

  • GREATLY reduced likelihood of accidental disconnection when working on other things in the rack. This is big for me: it just sucks when you are debugging a problem with server A and accidentally knock out the power cord for server B.

  • Tidiness / Cleanliness - the rack just looks better with arms, and it is harder to keep the rack tidy without arms.

  • No need to disconnect (and reconnect) when doing maintenance. Even if the maintenance is done offline, not having to touch the connections makes it easier.

Reasons CON

  • Airflow / cooling - the arms can reduce airflow, particularly in dense racks.

  • Difficulty changing cables - I think this is overblown. When swapping a cable in a crisis, you skip the cable management, get both ends plugged in, and make it pretty when things settle down.

  • More moving parts - can pinch fingers, catch on things, etc.


Solution 5:

I inherited an environment that had no cable management arms, and we've slowly been managing to get them installed.

The reasons the previous admin gave for not purchasing or using them were the same ones cited above: you won't be unracking a live server, they interfere with airflow, and you should be reducing the amount of cable in the rack, not increasing it to cover the full extension span.

The problem shows up when you're maintaining a heterogeneous server room over a number of years instead of installing entire racks of servers at once. We have three server manufacturers and usually 2-3 generations of each in production, and we add or remove machines every three months.

  • We have equipment arriving and leaving constantly.
  • We don't have the opportunity to zip-tie things to lacing bars -- we don't have enough space to give up 1-2U to them, and we don't want to "layer" things because we'll always end up digging the oldest cables out of the bottom layer.
  • We don't get to pick or focus on one vendor because I work for a university that receives grants (sometimes from hardware manufacturers) and relies on a public bidding process for large purchases.
  • We have three to four Cat5 spans to each server -- typically one for the internal network, one for the public network, one for KVM, and one for the iLO management port.
  • Some servers are also attached to fiber (we run the fiber inside a small conduit to keep it from pinching), while others have an additional 2-4 Cat5 cables running to teamed network interfaces.
  • Then there are two power supplies for each server.

Counting the Cat5 spans, any fiber or teamed interfaces, and the dual power supplies, that can easily be ten or more cables per server. I'd like to see anyone keep a server cabinet clean with that many cables running to each machine unless they use cable management arms of some sort.

In a "rush" environment, we're able to pull a server out without walking around to the back first. We know what cables are being plugged or unplugged because of their colors.

There are many reasons not to use cable management arms, but when you're working in a typical business environment rather than a purpose-engineered one, they're really worth it.