Adding drives to a RAID 10 Array

To grow a RAID 10 array you need at least mdadm 3.3 and at least kernel 3.5. You also need an even number of disks; unpaired disks can only serve as spares or, possibly, grow the array into a degraded state (not tested).
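You can confirm both requirements before starting with a quick check (the output will of course vary per system):

~$ mdadm --version
~$ uname -r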

Here is an example of growing a RAID 10 array from 4 drives to 6, using mdadm 3.3-2ubuntu2 on Linux 4.2.0-10-generic. It was tested with ext4 data on the array: the filesystem was unmounted, and ext4 was extended after the RAID grow without any issues.

~$ cat /proc/mdstat
md126 : active raid10 sdd1[1] sdc1[0] sdf1[3] sde1[2]
976428032 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 0/8 pages [0KB], 65536KB chunk

~$ sudo mdadm /dev/md126 --add /dev/sdi1 /dev/sdj1
mdadm: added /dev/sdi1
mdadm: added /dev/sdj1
~$ sudo mdadm --grow /dev/md126 --raid-devices=6

~$ cat /proc/mdstat
md126 : active raid10 sdj1[5] sdi1[4] sdd1[1] sdc1[0] sdf1[3] sde1[2]
1464642048 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
bitmap: 0/6 pages [0KB], 131072KB chunk
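To then extend the ext4 filesystem to fill the grown array, something like the following should work; this is a sketch that assumes the filesystem sits directly on /dev/md126 and is unmounted, as in the test above:

~$ sudo e2fsck -f /dev/md126
~$ sudo resize2fs /dev/md126

resize2fs without a size argument grows the filesystem to fill the whole device.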

I realize this is over a year old, but someone might find this helpful...

You can expand a RAID 10 array, but not the way you are hoping. You would have to nest multiple levels of RAID. This can be done with mdadm on 2 drives in RAID 10, which gives quite nice performance depending on the layout, but you would have to make multiple 2-disk RAID 10 arrays and then attach them to a logical node. Then, to expand, add a few more and stripe across that. If that is your use case (needing to expand a lot), you would be wise to use a parity array instead, which can be grown.
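As a rough sketch of the nesting idea (all device names here, /dev/sd[b-e]1, /dev/md1, /dev/md2 and /dev/md10, are placeholders, not from the original setup): create two 2-disk RAID 10 arrays, then stripe across them with RAID 0 on top.

~$ sudo mdadm --create /dev/md1 --level=10 --raid-devices=2 /dev/sdb1 /dev/sdc1
~$ sudo mdadm --create /dev/md2 --level=10 --raid-devices=2 /dev/sdd1 /dev/sde1
~$ sudo mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2

To expand later, you would build another 2-disk RAID 10 pair and add it to the top-level stripe.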

These are the limitations you get with RAID 10 in exchange for better overall read/write performance. And a clarification: RAID 5/6 absolutely does not "in general, provide better write performance...". RAID 5/6 have their own respective pros and cons, just as RAID 10 does, but write performance is not one of them for RAID 5/6.

Also, you didn't specify the size of your drives, but beware of RAID 5 on new, large drives. Although you can recover from an unrecoverable read error if you are careful, you risk downtime and the possibility of not being able to recover at all.

--edit to add info-- Use tools like hdparm (hdparm -i) and lshw to get the serial numbers along with the device names (/dev/sda) when you have a failure. This will ensure you remove the correct device when replacing it. Upvote Travis' comment, as it is very correct and nicely laid out, but as usual, weigh the pros and cons of every solution.
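For example (the device name is a placeholder):

~$ sudo hdparm -i /dev/sda | grep -i serial
~$ sudo lshw -class disk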


Some great news from the release announcement for mdadm 3.3:

This is a major new release so don't be too surprised if there are a few issues...

Some highlights are:

...

  • RAID10 arrays can be reshaped to change the number of devices, change the chunk size, or change the layout between 'near' and 'offset'. This will always change data_offset, and will fail if there is no room for data_offset to be moved.

...

According to this answer on U&L, you will need at least Linux 3.5 as well.
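As a sketch of the other reshape options the announcement mentions, changing the chunk size or switching the layout from 'near' to 'offset' looks something like this (the array name and values are assumptions, and the reshape will fail if there is no room to move data_offset):

~$ sudo mdadm --grow /dev/md126 --chunk=256
~$ sudo mdadm --grow /dev/md126 --layout=o2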