NAS / RAID / Backup Scheme

Normally, the way this is done is something like this:

1) RAID array with one or more redundant drives (so RAID 5 or 6), allowing one or two drives to fail at once without data loss. Sometimes this is done with RAID 10 instead, which is effectively a stripe across mirrored pairs: you can lose more drives, but only if they come from different pairs. Given the rest of the scheme, 5 or 6 should be fine. It depends on the amount of data, costs, performance requirements, etc.
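
As a rough illustration of the capacity/redundancy trade-off between those levels, here's a small Python sketch (the drive count and size below are made-up example values, not a recommendation):

```python
def raid_summary(level: str, drives: int, size_tb: float) -> dict:
    """Rough usable capacity and guaranteed fault tolerance for common
    RAID levels. Ignores filesystem overhead; assumes identical drives."""
    if level == "5":
        usable, tolerance = (drives - 1) * size_tb, 1
    elif level == "6":
        usable, tolerance = (drives - 2) * size_tb, 2
    elif level == "10":
        # Striped mirrors: half the raw capacity; guaranteed to survive
        # only one failure (more only if failures hit different pairs).
        usable, tolerance = (drives // 2) * size_tb, 1
    else:
        raise ValueError(f"unhandled RAID level: {level}")
    return {"usable_tb": usable, "guaranteed_drive_failures": tolerance}

# Example: four 4 TB drives.
for level in ("5", "6", "10"):
    print(level, raid_summary(level, drives=4, size_tb=4.0))
```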

2) Offsite backup: Basically, take a full copy of the data and store it elsewhere.
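
In its simplest form that full copy can just be a compressed archive of the data directory. A minimal Python sketch (the paths are placeholders for your own NAS share and destination drive):

```python
import tarfile
from datetime import date

def make_full_copy(source_dir: str, dest_path: str) -> None:
    """Pack the entire data directory into one compressed archive."""
    with tarfile.open(dest_path, "w:gz") as tar:
        tar.add(source_dir, arcname="data")

# Example paths only; substitute your own locations.
make_full_copy("/mnt/nas/data", f"/mnt/offsite/full-{date.today()}.tar.gz")
```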

Regarding theft, you need to consider the data's confidentiality, so at the very least the offsite backup should use full-disk encryption (if applicable).
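
Full-disk encryption is normally handled at the OS level (LUKS, VeraCrypt, BitLocker, etc. on the backup drive itself). If instead you only want the archive protected before it leaves the building, here is a file-level sketch using the third-party `cryptography` package; the key must obviously be stored somewhere other than with the backup:

```python
from cryptography.fernet import Fernet

def encrypt_file(key: bytes, src: str, dst: str) -> None:
    """Encrypt a backup archive in one pass. Fine for modest sizes;
    note that Fernet reads the whole file into memory."""
    with open(src, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(ciphertext)

key = Fernet.generate_key()   # keep this key separate from the backup drive
encrypt_file(key, "full-backup.tar.gz", "full-backup.tar.gz.enc")
```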

Regarding your current setup (and the proposed one), do you need to allow for accidental deletes? You need to make sure removing a file won't automatically remove it from all your other copies. Same goes for file corruption.
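
One way to guard against that is to copy into a new dated snapshot directory on every run and never delete anything on the backup side. Real tools (rsync with --backup-dir, filesystem snapshots) do this far more efficiently, but as a minimal sketch with placeholder paths:

```python
import os
import shutil
from datetime import date

def snapshot(source_dir: str, backup_root: str) -> str:
    """Copy the source into a fresh dated directory under backup_root.

    Earlier snapshots are never touched, so a deleted or corrupted file
    on the source does not overwrite the older copies."""
    dest = os.path.join(backup_root, date.today().isoformat())
    shutil.copytree(source_dir, dest)
    return dest

# Example: each run leaves a separate, untouched copy.
snapshot("/mnt/nas/data", "/mnt/backup/snapshots")
```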

If you use RAID 1 (i.e. mirroring), it should be possible to swap drives out and have the data sync automatically, but personally I wouldn't do this, for the reasons above. What I'd do (and in fact do) is use RAID 5 to cover hardware failures, take a manual copy once a month which stays on site, and an encrypted copy once every three months which goes off site. If my data were super important, I'd likely go with RAID 10 rather than 5, but restore times aren't an issue for me.
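
If it helps, the rotation I described boils down to something this trivial (the "1st of the month, every third month" rule is just one way to pick the quarterly run):

```python
from datetime import date

def backups_due(today: date) -> list[str]:
    """On the 1st of each month do the on-site copy; every third month
    also produce the encrypted off-site copy."""
    due = []
    if today.day == 1:
        due.append("monthly on-site copy")
        if today.month % 3 == 0:
            due.append("quarterly encrypted off-site copy")
    return due

print(backups_due(date.today()))
```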

Re: restore times. Having the entire array offsite on an encrypted drive is ok, but can you afford the downtime to restore it?
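
That's easy to sanity-check with back-of-the-envelope arithmetic (the figures below are just examples):

```python
def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Rough restore time: data size divided by sustained transfer rate."""
    data_mb = data_tb * 1_000_000   # decimal TB for simplicity
    return data_mb / throughput_mb_s / 3600

# Example: 8 TB restored over a link sustaining ~110 MB/s (gigabit-ish).
print(f"{restore_hours(8, 110):.1f} hours")   # roughly 20 hours
```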

As for swapping drives, I use an enclosure that holds the drives and has a slot that takes a bare SATA drive. Pop it in, do the backup, and hit the eject button. Done! SATA drives are handy like that, as you can hot swap them.

Overall, I'd say your incremental backup approach, combined with RAID 5 and an offsite copy (ideally encrypted), would be good enough. But practise RAID recovery on a virtual machine or similar beforehand: if you ever need those skills, you may really need them.


Some potential failure modes:

  • Swapping disks causes a disk failure.
  • Swapping an old mirror disk back in causes the array to resync from the older member (losing all current data).
  • Backups succeed without apparent error for weeks, after which time you find that you accidentally replaced a critical file with a cute picture of a squirrel (the checksum sketch after this list helps catch that).
  • One of the drives dies due to travel.
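
The last two items are why it pays to verify backups rather than trust a clean exit code. A minimal checksum-manifest sketch (compare manifests from different runs, or source against backup, to spot files that silently changed or vanished; paths are placeholders):

```python
import hashlib
import os

def manifest(root: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

# Example: flag anything that differs between source and backup.
src, dst = manifest("/mnt/nas/data"), manifest("/mnt/backup/snapshots/latest")
for path in sorted(set(src) | set(dst)):
    if src.get(path) != dst.get(path):
        print("MISMATCH:", path)
```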

Tags: backup, raid, nas