What is the optimal algorithm for generating an unbiased random integer within a range?

The problem is that you're doing a modulo operation. This would be no problem if RAND_MAX + 1 were evenly divisible by your modulus, but usually that is not the case. As a very contrived example, assume RAND_MAX to be 10 (so rand() yields the 11 values 0 through 10) and your modulus to be 3. You'll get the following possible random numbers and the resulting remainders:

rand():     0 1 2 3 4 5 6 7 8 9 10
rand() % 3: 0 1 2 0 1 2 0 1 2 0 1

As you can see, 0 and 1 are slightly more probable than 2.

One option to solve this is rejection sampling: by disallowing the numbers 9 and 10 above, you make the resulting distribution uniform again. The tricky part is figuring out how to do so efficiently. A very nice example (one that took me two days to understand) can be found in Java's java.util.Random.nextInt(int) method.

The reason Java's algorithm is a little tricky is that it avoids slow operations like multiplication and division in the check. If you don't care too much about speed, you can also do it the naïve way:

int n = max - min + 1;               // size of the target range
int remainder = RAND_MAX % n;        // size of the incomplete top "bucket"
int x, output;
do {
  x = rand();
  output = x % n;
} while (x >= RAND_MAX - remainder); // reject values from the top bucket
return min + output;

EDIT: Corrected a fencepost error in the above code; now it works as it should. I also created a little sample program (C#; it takes a uniform PRNG for numbers between 0 and 15 and constructs a PRNG for numbers between 0 and 6 from it in various ways):

using System;

class Rand {
    static Random r = new Random();

    static int Rand16() {
        return r.Next(16);
    }

    static int Rand7Naive() {
        return Rand16() % 7;
    }

    static int Rand7Float() {
        return (int)(Rand16() / 16.0 * 7);
    }

    // corrected
    static int Rand7RejectionNaive() {
        int n = 7, remainder = 16 % n, x, output;
        do {
            x = Rand16();
            output = x % n;
        } while (x >= 16 - remainder);
        return output;
    }

    // adapted to fit the constraints of this example
    static int Rand7RejectionJava() {
        int n = 7, x, output;
        do {
            x = Rand16();
            output = x % n;
        // reject x if its bucket [x - output, x - output + n - 1]
        // extends past the largest raw value, 15
        } while (x - output + (n - 1) > 15);
        return output;
    }

    static void Test(Func<int> rand, string name) {
        var buckets = new int[7];
        for (int i = 0; i < 10000000; i++) buckets[rand()]++;
        Console.WriteLine(name);
        for (int i = 0; i < 7; i++) Console.WriteLine("{0}\t{1}", i, buckets[i]);
    }

    static void Main() {
        Test(Rand7Naive, "Rand7Naive");
        Test(Rand7Float, "Rand7Float");
        Test(Rand7RejectionNaive, "Rand7RejectionNaive");
        Test(Rand7RejectionJava, "Rand7RejectionJava");
    }
}

The result is as follows (pasted into Excel and added conditional coloring of cells so that differences are more apparent):

[image: table of bucket counts for each method]

Now that I've fixed my mistake in the rejection sampling above, it works as it should (before, it biased 0). As you can see, the float method isn't perfect either; it just distributes the bias differently.


The problem occurs when the number of outputs from the random number generator (RAND_MAX + 1) is not evenly divisible by the desired range (max - min + 1). Since there is a consistent mapping from random numbers to outputs, some outputs will be mapped to more random numbers than others. This is true regardless of how the mapping is done: you can use modulo, division, conversion to floating point, or whatever voodoo you can come up with; the basic problem remains.

The magnitude of the problem is very small, and undemanding applications can generally get away with ignoring it. The smaller the range and the larger RAND_MAX is, the less pronounced the effect will be.

I took your example program and tweaked it a bit. First I created a special version of rand that only has a range of 0-255, to better demonstrate the effect. I made a few tweaks to rangeRandomAlg2. Finally I changed the number of "balls" to 1000000 to improve the consistency. You can see the results here: http://ideone.com/4P4HY

Notice that the floating-point version produces two tightly grouped probabilities, near either 0.101 or 0.097, nothing in between. This is the bias in action.

I think calling this "Java's algorithm" is a bit misleading - I'm sure it's much older than Java.

int rangeRandomAlg2 (int min, int max)
{
    int n = max - min + 1;        // size of the target range
    int remainder = RAND_MAX % n; // size of the incomplete top bucket
    int x;
    do
    {
        x = rand();
    } while (x >= RAND_MAX - remainder); // discard and try again
    return min + x % n;
}

It's easy to see why the naive modulo approach produces a biased sample. Suppose your rand() function returns uniform integers from the set {0, 1, 2, 3, 4}. If I want to use this to generate a random bit 0 or 1, I would say rand() % 2. The set {0, 2, 4} gives me 0, and the set {1, 3} gives me 1 -- so clearly I sample 0 with 60% and 1 with 40% likelihood, not uniform at all!

To fix this you have to either make sure that your desired range divides the number of outputs of the random number generator, or discard the result whenever the generator returns a number greater than or equal to the largest multiple of the target range that fits within the generator's range.

In the above example, the target range is 2, the largest multiple that fits into the random generation range is 4, so we discard any sample that is not in the set {0, 1, 2, 3} and roll again.