mdadm --zero-superblock on disks with other partitions on them

https://raid.wiki.kernel.org/index.php/RAID_superblock_formats

The superblock is 4K long and is written into a 64K aligned block that starts at least 64K and less than 128K from the end of the device (i.e. to get the address of the superblock, round the size of the device down to a multiple of 64K and then subtract 64K). The available size of each device is the amount of space before the superblock, so between 64K and 128K is lost when a device is incorporated into an MD array.
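
Concretely, that arithmetic is easy to check from a shell. A minimal sketch, assuming /dev/sdX stands in for one of your ex-RAID devices:

DEV=/dev/sdX
SIZE=$(blockdev --getsize64 "$DEV")      # device size in bytes
SB=$(( SIZE / 65536 * 65536 - 65536 ))   # round down to a 64K multiple, then subtract 64K
echo "0.90 superblock expected at byte offset $SB"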

So it may already be too late, and using --zero-superblock might be unsafe, because we don't know whether that region now holds data or not. To be safe, shrink the partition that now occupies the end of the ex-RAID device by 128K, wipe that freed region, and then grow the partition back.

Other option 1: write large files until the disk is full; that overwrites the RAID superblock so mdadm will no longer recognize it. (Note this only works if the new filesystem actually extends over the spot where the old superblock lives.)

Other option 2: similar to 1: https://unix.stackexchange.com/questions/44234/clear-unused-space-with-zeros-ext3-ext4
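
Both options boil down to filling the free space with zeros. A minimal sketch, assuming the new filesystem is mounted at /mnt/data (a placeholder path):

dd if=/dev/zero of=/mnt/data/zerofill bs=1M   # ends with "No space left on device", which is expected here
sync
rm /mnt/data/zerofill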


wipefs --all /dev/sd[4ppropr14t3][123] (of course set up the glob for your drives/partitions!)
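
If you go the wipefs route, be aware that --all erases every signature it finds, including the partition table if you point it at a whole-disk node. It may be worth listing first and keeping backups; a sketch with placeholder device names:

wipefs /dev/sdX1                  # no options: only lists the detected signatures
wipefs --all --backup /dev/sdX1   # wipes all signatures, saving backup files under $HOME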


This is how I figured this out (it might be quite specific to my case but I'll try to keep it general where I can).

(When I talk about devices, I mean the component devices the RAID array is composed of, not the array itself.)

I used mdadm -E $DEVICE to figure out which metadata format the array was using. I then went to [0] to find some information about the superblock format. In my case this was version 0.90.
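
For reference, a quick way to pull just the metadata revision out of the examine output (device name is a placeholder; on the systems I've used, mdadm prints it on a "Version" line):

mdadm -E /dev/sdX | grep -i version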

This format has the superblock stored towards the end of the device. This is where my situation comes in. My old array was made directly on the drives, no partitioning. Because of this, I knew the superblock should be at the very end of the device. My new partitioning included a swap partition at the end. Therefore, there was not much data to lose where the superblock was located.
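
If you want to confirm a layout like this, you can compare where the last partition ends against the superblock offset computed earlier, e.g. (placeholder device name):

parted /dev/sdX unit B print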

I did some reading around; the conclusion I reached was that mdadm --zero-superblock only zeroes out the superblock itself, and thus it should be safe in my case. I went ahead and removed the superblocks on all three devices:

First stop the array (note that mdadm --stop takes the assembled md device, not one of its member devices):

mdadm --stop $THE_ARRAY

Then, once per member device:

mdadm --zero-superblock $DEVICE
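
Put together, the whole sequence might look like this (a sketch, assuming the old array auto-assembled as /dev/md127 and its members were the whole disks sda, sdb and sdc):

mdadm --stop /dev/md127
for dev in /dev/sda /dev/sdb /dev/sdc; do
    mdadm --examine "$dev"          # sanity check: confirm the 0.90 superblock is really there
    mdadm --zero-superblock "$dev"
done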

Some additional comments/speculation:

Generally, if the space is needed by the new partitioning/filesystems, it should have been overwritten already. Thus, if the superblock is still there, zeroing it shouldn't hurt the partitioning/filesystems. I am, however, not sure how MD handles the case where the superblock has already been overwritten on some of the devices but not all. The man page says that --force is needed to zero the block out if the superblock there no longer looks valid, so keep that in mind.
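
In that situation the forced variant would be the following (placeholder device name; double-check the offset arithmetic above before resorting to it):

mdadm --zero-superblock --force /dev/sdX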

[0]: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats