Why does writing to /dev/random not make parallel reading from /dev/random faster?

You can write to /dev/random because writing is part of the mechanism for feeding extra random bytes into the pool, but it is not sufficient on its own: you also have to tell the kernel that additional entropy is available, via an ioctl() call.
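
To see the difference, here is a minimal sketch (not part of the original answer, and assuming an older kernel that still exposes the classic entropy accounting in /proc): it writes plain bytes to /dev/random and shows that entropy_avail does not increase, which is why the ioctl() used in the program further down is needed.

#!/usr/bin/env python
# Minimal sketch: a plain write mixes data into the pool but credits no entropy.
# Assumes an older kernel where /proc/sys/kernel/random/entropy_avail still
# reflects the classic entropy accounting.

def entropy_avail():
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

before = entropy_avail()
with open("/dev/random", "wb") as f:
    f.write(b"not actually random data" * 64)   # mixed in, but not credited
after = entropy_avail()
print(before, "->", after)   # the count does not go up because of the write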

I needed the same functionality for testing my smartcard setup program, as I did not want to wait for my mouse/keyboard to generate enough entropy for the several gpg calls made during each test run. What I did was run the Python program below in parallel with my tests. It should of course never be used for real gpg key generation, since the injected string is not random at all (system-generated random data will still be interleaved with it). Only if you have an external source that supplies a genuinely random string should you expect high-quality entropy. You can check the available entropy with:

cat /proc/sys/kernel/random/entropy_avail

The program:

#!/usr/bin/env python
# For testing purposes only
# DO NOT USE THIS, THIS DOES NOT PROVIDE ENTROPY TO /dev/random, JUST BYTES

import fcntl
import struct
import time

RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) from <linux/random.h>

while True:
    random = b"3420348024823049823-984230942049832423l4j2l42j"
    # struct rand_pool_info: entropy_count (in bits), buf_size (in bytes), buf;
    # "32s" truncates the bytes above to the 32 bytes announced in buf_size
    t = struct.pack("ii32s", 8, 32, random)
    with open("/dev/random", mode='wb') as fp:
        # fp has a fileno() method, so it can be passed to ioctl directly
        fcntl.ioctl(fp, RNDADDENTROPY, t)
    time.sleep(0.001)

(Don't forget to kill the program after you are done.)
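Note that the RNDADDENTROPY ioctl requires the CAP_SYS_ADMIN capability, so the script has to run as root. If you wonder where the magic number 0x40085203 comes from, the following sketch (again only an illustration, not part of the original answer) recomputes it from the kernel's ioctl number encoding on x86-64 and most other architectures, since <linux/random.h> defines RNDADDENTROPY as _IOW('R', 0x03, int[2]):

#!/usr/bin/env python
# Sketch: recompute RNDADDENTROPY from the Linux ioctl number encoding,
# direction (2 bits) | argument size (14 bits) | type (8 bits) | number (8 bits)
import struct

def _IOW(ioc_type, nr, size):
    _IOC_WRITE = 1
    return (_IOC_WRITE << 30) | (size << 16) | (ord(ioc_type) << 8) | nr

print(hex(_IOW('R', 0x03, struct.calcsize("ii"))))  # prints 0x40085203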


This is by design on the part of the kernel developers, and it is documented in man 4 random:

Writing to /dev/random or /dev/urandom will update the entropy pool
with the data written, but this will not result in a higher entropy
count.  This means that it will impact the contents read from both
files, but it will not make reads from /dev/random faster.

Tags:

Linux

Random