RAID 5/6 rebuild time calculation

Solution 1:

You can calculate the best-case rebuild time fairly simply: as the rebuild is sequential, the time needed is capacity / transfer rate. For example, rebuilding a 10 TB disk at a 200 MB/s transfer rate needs at least 10,000,000 MB / 200 MB/s = 50,000 s ≈ 14 h.
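
As a quick sanity check, here is a minimal sketch of that calculation in Python; the disk size and transfer rate are just the example figures above, not measured values:

```python
# Best-case (purely sequential) rebuild time: capacity / transfer rate.
# Example figures from above; substitute your own disk specs.
capacity_tb = 10       # disk capacity in TB
transfer_mb_s = 200    # sustained sequential transfer rate in MB/s

capacity_mb = capacity_tb * 1_000_000
rebuild_seconds = capacity_mb / transfer_mb_s
print(f"Best case: {rebuild_seconds:.0f} s = {rebuild_seconds / 3600:.1f} h")
# -> Best case: 50000 s = 13.9 h
```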

Now take this result and throw it away, as it is an overly optimistic scenario: it assumes 100% disk availability for the rebuild operation and totally sequential reads/writes. Toss some non-rebuild (i.e. application) load into the mix, cap the rebuild itself at 30% (so it does not grind other applications to a halt), and you are suddenly looking at a 10x longer rebuild time (e.g. a week).
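
To see how that derating works out, here is the same calculation with a rebuild-rate cap applied; the 30% figure is just the illustrative cap mentioned above:

```python
# Same calculation with the rebuild capped at a fraction of disk bandwidth,
# leaving the rest for application I/O. 30% is the illustrative cap above.
capacity_mb = 10 * 1_000_000   # 10 TB disk
transfer_mb_s = 200            # nominal sequential rate
rebuild_cap = 0.30             # fraction of bandwidth granted to the rebuild

effective_rate = transfer_mb_s * rebuild_cap
rebuild_hours = capacity_mb / effective_rate / 3600
print(f"Capped rebuild: {rebuild_hours:.0f} h = {rebuild_hours / 24:.1f} days")
# -> Capped rebuild: 46 h = 1.9 days; random (non-sequential) I/O patterns
#    push it further, toward the week-long figure above.
```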

These long rebuild times are the reason why I avoid RAID 5/6 in many systems, favoring mirroring instead. Anyway, with such big drives, absolutely avoid RAID 5, which is far too exposed to double-failure and/or URE issues.
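
To put a rough number on the URE risk, here is a hedged back-of-the-envelope sketch; the 1-per-10^14-bits URE rate is a typical consumer-drive spec-sheet figure (enterprise drives are often rated 10x better), and the array geometry is made up for illustration:

```python
import math

# Probability of hitting at least one unrecoverable read error (URE) while
# reading every surviving disk during a RAID 5 rebuild.
# Assumptions (not from the answer above): a 1-per-1e14-bits URE spec and
# a 4-disk RAID 5 of 10 TB drives (3 surviving disks to read in full).
ure_per_bit = 1e-14
surviving_disks = 3
disk_bits = 10e12 * 8          # 10 TB in bits

bits_read = surviving_disks * disk_bits
p_ure = 1 - math.exp(-ure_per_bit * bits_read)   # Poisson approximation
print(f"P(at least one URE during rebuild) ≈ {p_ure:.0%}")
# -> roughly 91% with these assumptions
```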

If you want to play with the numbers, have a look here

Solution 2:

The theoretical absolute minimum rebuild time is the time needed to write a complete disk's worth of data: the capacity of a disk divided by the average sustained write speed the disk can maintain without cache.
(Note: that average sustained write speed will probably be nowhere near the performance numbers quoted in the specs.)

Larger disks take longer.
Slower disks take longer.
Parity calculations take extra time.
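
To illustrate the parity point: in RAID 5 a missing block is rebuilt by XOR-ing the corresponding blocks from every surviving disk, so the rebuild must read all remaining drives rather than simply copy one. A minimal sketch, with made-up block contents:

```python
# RAID 5 reconstruction: the missing block equals the XOR of the matching
# blocks on all surviving disks (data blocks and the parity block alike).
# Block contents here are arbitrary, purely for illustration.
surviving_blocks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x55\xaa\x55\xaa"]

def xor_reconstruct(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

missing_block = xor_reconstruct(surviving_blocks)
# Every byte of every surviving disk passes through this XOR during a full
# rebuild, which is where the extra read traffic and CPU time come from.
```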

Real-world numbers will vary but will certainly be (much) larger, and depend on your RAID level, the number of remaining disks, the load on the system while the array rebuild takes place, the controller, etc.

Also see What are the different widely used RAID levels and when should I consider them?


Solution 3:

It depends upon your RAID controller (or software RAID stack). As others mentioned, first of all don't use RAID 5 with large hard drives (it's OK for SSDs of up to 1 TB and not much else).

In my experience, rebuild times vary greatly with storage load. On idle systems, most controllers will need 36 to 72 hours to rebuild arrays of 8 to 12 TB drives (depending on your controller type and disk size).
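
Those figures imply effective rebuild rates well below the drives' sequential specs; a quick sanity check, using only the numbers quoted above:

```python
# Effective rebuild rate implied by the idle-system figures above.
# 8 TB in 36 h and 12 TB in 72 h bracket the quoted range.
for capacity_tb, hours in [(8, 36), (12, 72)]:
    rate_mb_s = capacity_tb * 1_000_000 / (hours * 3600)
    print(f"{capacity_tb} TB in {hours} h -> ~{rate_mb_s:.0f} MB/s effective")
# -> ~62 MB/s and ~46 MB/s: a fraction of a modern drive's sequential
#    rating, even with no competing I/O.
```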

When the system is under I/O load during the rebuild, however, it's not uncommon to see this duration grow to a week.

Notice that helium drives have a much better reliability record than standard drives; in my experience, the failure rate of UltraStar He drives is low enough to still make RAID 6 relevant (a typical 100 TB to 1 PB system won't see more than one rebuild in a five-year time span).
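
As a rough way to check such a claim against a drive's published annualized failure rate (AFR), here is a hedged sketch; the 0.5% AFR is an assumed figure for illustration, not one from the answer above:

```python
import math

# Expected number of drive failures (hence rebuilds) over the system's life:
# expected_failures ≈ drive_count * AFR * years.
# The 0.5% AFR is an assumption; check your drive's spec sheet or field
# data for a real figure.
system_tb = 100
drive_tb = 12
afr = 0.005                # assumed 0.5% annualized failure rate
years = 5

drive_count = math.ceil(system_tb / drive_tb)
expected_failures = drive_count * afr * years
print(f"{drive_count} drives -> ~{expected_failures:.1f} expected rebuilds in {years} years")
# -> 9 drives -> ~0.2 expected rebuilds in 5 years, consistent with the
#    "no more than one rebuild" observation above.
```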