ECC registered vs ECC unbuffered

ECC seems to correct only single-bit errors.

Correct. Correcting more errors would require more check bits. As it is, a typical ECC DIMM already stores 72 bits for every 64 bits of data, 'spending' 12.5% extra memory to allow single-bit correction and detection of up to two flipped bits.

It works as follows. Imagine a 0 or a 1. If I read either, I just have to hope I read the right value. If a 0 got flipped to a 1 by cosmic radiation or by a bad chip, I will never know.

In the past we tried to solve that with parity. Parity added a ninth bit per 8 bits stored. We counted how many 0s and how many 1s were in the byte, and set the ninth bit so that the total number of 1s was even (for even parity). If you ever read a byte and the count was wrong, you knew something was wrong. You did not know which bit was wrong, though.
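The even-parity scheme above can be sketched in a few lines of Python (the function names are my own, not from any library):

```python
def even_parity_bit(byte):
    """Return the parity bit that makes the total count of 1s even."""
    return bin(byte).count("1") % 2

def parity_ok(byte, parity):
    """True if the stored 9 bits (byte + parity) still have an even number of 1s."""
    return (bin(byte).count("1") + parity) % 2 == 0

stored = 0b10110010            # four 1s -> parity bit is 0
p = even_parity_bit(stored)
assert parity_ok(stored, p)    # reads back clean

flipped = stored ^ 0b00000100  # one bit flips in storage
assert not parity_ok(flipped, p)  # error detected -- but we cannot tell which bit
```

Note that a double flip would cancel out and pass the check, which is exactly the weakness ECC improves on.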

ECC expands on that. It stores extra check bits and uses a Hamming-style code to detect that a single bit has flipped, and it also knows what the original value was. A very simple way to explain how it does that would be this:

Replace all 0s with 000. Replace all 1s with 111.

Now you can read eight combinations:
000
001
010
011
100
101
110
111

We are never 100% sure what was originally stored. If we read 000, that might be the 000 we were expecting, or all three bits might have flipped. The latter is very unlikely: bits rarely flip at random, though it does happen. Let's say a bit flips one time in ten, for some easy calculations (the real rate is vastly lower). That works out to the following chances of reading the correct value:

000 -> Either 000 (99.9% sure), or a triple flip (1/1000 chance)

001 -> We know something has gone wrong. But it either was 000 and one bit flipped (1:10 chance), or it was 111 and two bits have flipped (a 1:100 chance). So let's treat it as if we read 000 but log the error.

010 -> Same as above.

100 -> Same as above.

011 -> Same as above, but assuming it was a 111

101 -> Same as above, but assuming it was a 111

110 -> Same as above, but assuming it was a 111

111 -> Either 111 (99.9% sure), or a triple flip (1/1000 chance)
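The case table above is exactly a majority vote over the three copies. A minimal sketch in Python (names are my own):

```python
def encode(bit):
    # Triple each bit: 0 -> 000, 1 -> 111
    return (bit, bit, bit)

def decode(triple):
    """Majority vote: correct any single flipped bit; flag disagreement for logging."""
    ones = sum(triple)
    bit = 1 if ones >= 2 else 0
    error = ones not in (0, 3)  # any disagreement means at least one flip happened
    return bit, error

decode((0, 0, 0))  # -> (0, False): clean read
decode((0, 1, 0))  # -> (0, True): assume one flip, corrected, error logged
decode((1, 0, 1))  # -> (1, True): same, assuming it was 111
```

A double flip (e.g. 000 read as 011) is silently mis-corrected to 1, which is why the 1:100 case in the table is the residual risk of this toy scheme.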

ECC does similar tricks, but far more efficiently: instead of tripling every bit, a standard ECC DIMM uses a Hamming-style SECDED code that protects 64 data bits with just 8 check bits (72 bits total) and can still detect and correct errors.
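To illustrate the mechanism (not the exact code DIMMs use), here is the classic Hamming(7,4) code, which protects 4 data bits with 3 check bits and can locate any single flipped bit; real DIMM ECC applies the same idea at 72/64 width:

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword, positions 1..7.
    Parity bits sit at positions 1, 2 and 4; data at 3, 5, 6, 7."""
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7  # covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7  # covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7  # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_decode(c):
    """Return (4 data bits, corrected position or 0 if clean)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # re-check positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]  # re-check positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s4      # syndrome = position of the flipped bit
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]], pos

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[4] ^= 1                   # flip bit at position 5
data, pos = hamming74_decode(corrupted)
# data == [1, 0, 1, 1] again, pos == 5: the error was found and fixed
```

The trick is that each check bit covers an overlapping subset of positions, so the pattern of failed checks (the syndrome) spells out the exact position of the flipped bit in binary.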


ECC registered RAM is only usable with workstation / server boards. ECC unbuffered is usable on Intel Xeon LGA1155 or AMD AM3+ on Asus boards.

I already explained the ECC part; now for the registered vs. unbuffered part.

In modern CPUs the memory controller is on the CPU die; this started long ago with AMD's Opteron chips and with the Core i series for Intel. Most desktop CPUs therefore talk directly to the DIMM sockets holding the RAM. That works with no extra logic, which makes it cheap to build, and speed is high because there is no delay between the memory controller and the RAM.

But a memory controller can only drive a limited electrical load at high speeds. This means there is a limit to how many memory sockets can be hung off it. (And, to make it more complex, to how many ranks each DIMM can present, which leads to the topic of memory ranks. I will skip that since this is already long.)

On server boards you often want more memory than in a desktop system. Therefore a "register" is added to the DIMM: it buffers the address and command signals between the memory controller and the memory chips, re-driving them one clock cycle later. This lightens the electrical load each DIMM places on the controller, so more DIMMs fit on the bus.

This buffer/register delays things by a clock cycle, making memory slightly slower. That is undesirable, so it is only used on boards that have a lot of memory banks. Most consumer boards do not need it, and most consumer CPUs do not support it.

Directly connected, unbuffered RAM vs. buffered/registered RAM isn't a case where one is better or worse than the other. They just have different trade-offs in terms of how many memory slots you can have. Registered RAM allows more RAM at the cost of some speed (and possibly expense). In most cases where you need as much memory as possible, that extra memory more than compensates for the RAM running at a slightly slower speed.

The doubt I'm having (mainly concerning an Asus AM3+ board) is: is ECC unbuffered RAM as good as ECC registered RAM from the point of view of safety and reliability? Or is it a worse choice? I don't care much about speed.

From the standpoint of safety and stability, ECC-unbuffered and ECC-registered are the same.


More details: the server will use a server case with up to 24 x 3 ½'' drives and should consume as little power as possible.

24 drives are going to consume a lot of power. How much depends on the drives. My 140 GB 15K RPM SAS drive draws a mere 10 watts at idle, the same as my 1 TB 7,200 RPM SATA disk. Both draw more under load.

Multiply that by 24: 24 x 10 watts at idle means 240 watts just keeping the disk platters spinning against air resistance. Roughly double that under load.


LGA1155 seems to be in that sense a better bet (TDP ~ 20-95W) versus the others (>80W) for twice the price.

Intel is better at low-power CPUs, at the time of writing and for the CPUs you mentioned.

Any suggestion is welcome. Let's say less than 120W at idle (~ with 10 hard disks out of 24).

If you go for FreeBSD, look hard at ZFS; it can be great. Many of its more advanced features (e.g. deduplication and compression) use serious CPU power and want plenty of memory. ZFS for basic use with RAID-Z will do fine on both CPU families you mentioned and with 16 GB of RAM, but if you turn on features like deduplication you should look carefully into the recommended memory for your disk capacity; some guides recommend up to 5 GB of RAM per TB of storage.
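To see why deduplication changes the memory picture, here is a back-of-the-envelope helper using the 5 GB-per-TB rule of thumb quoted above (the function is my own and the rule is only a rough guide; real requirements depend on record size and dedup ratio):

```python
def dedup_ram_estimate_gb(pool_tb, gb_per_tb=5):
    """Rough RAM estimate for ZFS deduplication on a pool of pool_tb terabytes.
    gb_per_tb=5 follows the rule of thumb some guides recommend."""
    return pool_tb * gb_per_tb

# A full 24-bay chassis with 1 TB drives:
dedup_ram_estimate_gb(24)  # -> 120 (GB), far beyond the 16 GB mentioned above
```

That gap between 16 GB and an estimate like 120 GB is exactly why dedup should be planned, not just switched on.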

Two more things:

  1. I did not see anything about connecting the drives. Some boards offer up to 10 SATA ports, but for anything beyond that you will need add-in cards. If you are considering hardware RAID, it is best to plan for it from the beginning.
  2. Drive failure: should you use SATA port multipliers, look carefully at how they behave when a drive fails. It often is not pretty; not a big problem for a home setup, but very much not enterprise grade. You may also need to consider how individual drives handle errors. The reason some drives are labeled for "NAS" or "RAID" use is that they handle read errors differently from regular drives: without RAID, you want the drive to retry as long as possible; with RAID, you want the drive to fail quickly so you can read from another copy.

Two separate issues.

ECC Vs non-ECC

  • use ECC wherever uptime is important
  • costs more -- need (multiples of) 9 chips instead of 8
  • motherboard must support it to use it

Registered Vs Unbuffered:

  • Can have (much) more total RAM installed with Registered DIMMs
    • Less electrical strain on the memory controller interface
  • But all DIMMs installed must be registered, or none
    • you must remove unbuffered DIMMs when upgrading to registered
  • Also is more expensive, and a cycle slower to access
    • Unbuffered is slightly lower latency, if that matters
    • all random accesses take many cycles anyway
    • Note absolute access latency (time in nanoseconds) hasn't improved much over the history of DRAM use in PCs
      • cost, capacity and bandwidth vastly improved instead
      • memory caches hide the latency for most memory accesses anyway
    • Longer latency hurts single-thread 'real-time' performance most
      • usually doesn't affect 'server' use cases much
    • No/minimal difference in bandwidth and overall performance
      • sequential access bandwidth unaffected
      • L2/L3 caches mean actual access patterns mostly replace rows at a time in the cache, so are usually 'burst' accesses anyway