Fast Way to Randomize HD?

dd if=/dev/urandom of=/dev/sda, or simply cat /dev/urandom >/dev/sda, isn't the fastest way to fill a disk with random data. Linux's /dev/urandom isn't the fastest cryptographic RNG around. Is there an alternative to /dev/urandom? has some suggestions. In particular, OpenSSL contains a faster cryptographic PRNG:

openssl rand $(</proc/partitions awk '$4=="sda" {print $3*1024}') >/dev/sda

Note that in the end, whether there is an improvement or not depends on which part is the bottleneck: the CPU or the disk.
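Before pointing that pipeline at a real device, you can sanity-check that `openssl rand` emits exactly the byte count you computed. This is a small illustration against a scratch file (the temp file is just a stand-in for the disk):

```shell
# openssl rand N writes exactly N raw bytes to stdout; verify the
# count against a scratch file before targeting a real device.
out=$(mktemp)
openssl rand 1048576 > "$out"
wc -c < "$out"       # 1048576
rm -f "$out"
```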

The good news is that filling the disk with random data is mostly useless. First, to dispel a common myth, wiping with zeroes is just as good on today's hardware. With 1980s hard disk technology, overwriting a hard disk with zeroes left a small residual charge which could be recovered with somewhat expensive hardware; multiple passes of overwrite with random data (the “Gutmann wipe”) were necessary. Today even a single pass of overwriting with zeroes leaves data that cannot realistically be recovered even in laboratory conditions.
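Given that, a single zero pass with plain dd is all the wipe you need. A minimal sketch, with a scratch file standing in for the target device so it is safe to run as-is (block size and count are illustrative; in real use the `of=` target would be something like `/dev/sda`):

```shell
# Single-pass zero wipe. TARGET would be the block device in real use;
# a scratch file stands in here. status=none is a GNU dd option.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=4M count=4 status=none
wc -c < "$TARGET"    # 16777216 (4 blocks of 4 MiB)
```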

When you're encrypting a partition, filling the disk with random data is not necessary for the confidentiality of the encrypted data. It is only useful if you need to make space used by encrypted data indistinguishable from unused space. Building an encrypted volume on top of a non-randomized container reveals which disk blocks have ever been used by the encrypted volume. This gives a good hint as to the maximum size of the filesystem (though as time goes by it will become a worse and worse approximation), and little more.
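If you do want used space to be indistinguishable from free space, one common shortcut is to write zeros *through* a throwaway dm-crypt mapping keyed from /dev/urandom, so the disk receives ciphertext at near-disk speed. This is a sketch, not the method described above: it assumes root and cryptsetup, the mapping name `wipe_me` is arbitrary, and it is guarded by a `WIPE_DEV` variable (unset by default) so it does nothing unless you explicitly name a device:

```shell
# Sketch: zeros written through a plain dm-crypt mapping with a random
# key land on disk as data indistinguishable from random. Deliberately
# a no-op unless WIPE_DEV is set to a real block device and we are root.
DEV=${WIPE_DEV:-}    # e.g. WIPE_DEV=/dev/sdX to run for real
if [ -n "$DEV" ] && [ -b "$DEV" ] && [ "$(id -u)" -eq 0 ]; then
  cryptsetup open --type plain -d /dev/urandom "$DEV" wipe_me
  dd if=/dev/zero of=/dev/mapper/wipe_me bs=2M status=progress || true
  cryptsetup close wipe_me
fi
```

The mapping is discarded afterwards, so the key is gone and the written ciphertext is unrecoverable noise.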


You can get OpenSSL to encrypt /dev/zero with a randomized password, producing decent pseudorandom data very fast (if your CPU has hardware AES acceleration such as AES-NI).

openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd of=/dev/sda

You could pipe this through pv to get progress/ETA. The commands I'm running right now (in a root shell) are:

DISK="sda"
DISKSIZE=$(</proc/partitions awk '$4=="'"$DISK"'" {print sprintf("%.0f",$3*1024)}')
apt-get install pv
openssl enc -aes-256-ctr -nosalt \
  -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" \
  < /dev/zero |
  pv --progress --eta --rate --bytes --size "$DISKSIZE" |
  dd of=/dev/"$DISK" bs=2M

I got this idea from this answer, after running into the same problem as irrational John, who commented on Gilles's answer above. This increased the wipe speed of my new RAID array from 11 MB/s to around 300 MB/s, taking what was going to be a week-long job down to 10 hours.

I'll add that you should be able to use openssl rand #of_bytes rather than the more complicated openssl enc ... statement above, but there is a bug that limits openssl to producing only 16 MB of output per invocation. (This bug was filed in January 2016.)
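Until that bug is fixed, a simple workaround is to call openssl rand repeatedly in chunks of at most 16 MB and concatenate the output. A sketch, with illustrative sizes and a scratch file standing in for the device:

```shell
# Work around the 16 MB per-invocation cap by generating the data in
# 16 MiB chunks. Here: 64 MiB total into a scratch file.
total=$((64 * 1024 * 1024))
chunk=$((16 * 1024 * 1024))
out=$(mktemp)
written=0
while [ "$written" -lt "$total" ]; do
  openssl rand "$chunk" >> "$out"
  written=$((written + chunk))
done
wc -c < "$out"       # 67108864
```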

And, as per the answer to this question, and continuing to assume that the CPU is the bottleneck, it may be possible to increase speed further by running multiple parallel openssl processes on separate cores, combining them using a FIFO.
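A minimal sketch of that idea, with my own file names and chunk sizes: two AES-CTR keystreams run in parallel, each feeding a FIFO, and a reader interleaves fixed-size chunks from each so both cores stay busy. A scratch file stands in for `dd of=/dev/sda`, and the loop runs only a few iterations here; in real use it would run until the disk is full:

```shell
# Two parallel AES-CTR keystreams feeding FIFOs; the reader alternates
# fixed-size chunks from each stream.
tmp=$(mktemp -d)
mkfifo "$tmp/r1" "$tmp/r2"
openssl enc -aes-256-ctr -nosalt \
  -pass pass:"$(head -c 32 /dev/urandom | base64)" </dev/zero >"$tmp/r1" & p1=$!
openssl enc -aes-256-ctr -nosalt \
  -pass pass:"$(head -c 32 /dev/urandom | base64)" </dev/zero >"$tmp/r2" & p2=$!
exec 3<"$tmp/r1" 4<"$tmp/r2"       # hold both read ends open across reads
for i in 1 2; do                   # real use: loop until the disk is full
  dd bs=1M count=8 iflag=fullblock <&3 2>/dev/null
  dd bs=1M count=8 iflag=fullblock <&4 2>/dev/null
done > "$tmp/out"                  # stand-in for dd of=/dev/sda
exec 3<&- 4<&-
kill "$p1" "$p2" 2>/dev/null
wc -c < "$tmp/out"                 # 33554432 (4 chunks of 8 MiB)
```

`iflag=fullblock` matters here: pipe reads can return short, and without it each dd could copy fewer bytes than requested.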


The openssl commands did not work for me; I got "unknown option" and other errors with the solutions above. So I ended up using the program fio instead.

fio -name="fill" -ioengine=libaio -direct=1 -bs=512m -rw=write -iodepth=4 -size=100% -filename=/dev/md0

This takes about 3 hours to write 19 TB across 24 HDDs, which works out to roughly 1,800 MB/s:

smp-016:~ # fdisk -l /dev/md0
Disk /dev/md0: 18890.1 GB, 18890060464128 bytes

smp-016:~ # fio -name="fill" -ioengine=libaio -direct=1 -bs=512m -rw=write -iodepth=4 -size=100% -filename=/dev/md0
fill: (g=0): rw=write, bs=512M-512M/512M-512M/512M-512M, ioengine=libaio, iodepth=4
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [2.7% done] [0KB/1536MB/0KB /s] [0/3/0 iops] [eta 03h:01m:11s]

I hope this is actually random data; the fio man page (http://linux.die.net/man/1/fio) says the default is to "fill buffers with random data."

I'm not doing this for security/encryption purposes, just making sure my later read tests read actual data and not just 0's. The same fio command could be used for SSD/NVMe preconditioning, since writing from /dev/zero can let disk-level compression "cheat" on how much is actually written. If it is a fresh SSD being prepared for benchmarking, I would add a -loops=2 flag.
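A quick standalone illustration of why zeros are a poor fill for this purpose: random data barely compresses, while zeros collapse to almost nothing, which is exactly what a compressing drive would exploit. (This is just a demonstration on scratch files, not part of the fio run.)

```shell
# Compare how well 1 MiB of random bytes vs. 1 MiB of zeros compresses.
rand=$(mktemp); zero=$(mktemp)
head -c 1048576 /dev/urandom > "$rand"
head -c 1048576 /dev/zero   > "$zero"
gzip -c "$rand" | wc -c     # stays near 1 MiB: effectively incompressible
gzip -c "$zero" | wc -c     # collapses to about 1 KiB
rm -f "$rand" "$zero"
```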

If you did want it to be secure, you may be able to use the -randrepeat=bool option, which controls whether fio seeds its random number generator predictably ("Seed the random number generator in a predictable way so results are repeatable across runs. Default: true."), but I'm still not certain how secure that would be.

Additionally, some enterprise-class HDDs out there are SEDs (Self-Encrypting Drives) and let you rotate the encryption key, instantly and securely erasing all the data written.

Lastly, I have in the past used DBAN (aka Darik's Boot and Nuke), which has CD- and USB-bootable options and "is an open source project hosted on SourceForge. The program is designed to securely erase a hard disk until its data is permanently removed and no longer recoverable".