How to calculate 32-bit floating-point epsilon?

#include <iostream>
#include <iomanip>
#include <cstdint>  // int32_t
#include <cstdlib>  // EXIT_SUCCESS

/*
https://en.wikipedia.org/wiki/Machine_epsilon#How_to_determine_machine_epsilon
*/

typedef union
{
    int32_t i32;
    float   f32;
} fi32_t;

// Distance from nbr to the next representable float above it
// (valid for positive, finite nbr).
float float_epsilon(float nbr)
{
    fi32_t flt;
    flt.f32 = nbr;  // reinterpret the float's bit pattern as an int32_t
    flt.i32++;      // step to the next bit pattern, i.e. the next float up
    return (flt.f32 - nbr);
}
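
#include <cmath>   // std::nextafter
#include <limits>  // std::numeric_limits

// Aside: reading a union member other than the one last written is
// technically undefined behavior in C++ (it is well-defined in C, and
// most compilers tolerate it). A minimal standard-conforming sketch of
// the same idea, assuming a positive, finite input:
float float_epsilon_std(float nbr)
{
    return std::nextafter(nbr, std::numeric_limits<float>::infinity()) - nbr;
}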

int main()
{
    // How to calculate 32-bit floating-point epsilon?

    const float one {1.}, ten_mills {10e6};
    std::cout << "epsilon for number " << one << " is:\n"
        << std::fixed << std::setprecision(25)
        << float_epsilon(one)
        << std::defaultfloat << "\n\n";

    std::cout << "epsilon for number " << ten_mills << " is:\n"
        << std::fixed << std::setprecision(25)
        << float_epsilon(ten_mills)
        << std::defaultfloat << "\n\n";


    // In book Game Engine Architecture : "..., let’s say we use a
    // floating-point variable to track absolute game time in seconds.
    // How long can we run our game before the magnitude of our clock
    // variable gets so large that adding 1/30th of a second to it no
    // longer changes its value? The answer is roughly 12.9 days."
    // Why 12.9 days, and how can we calculate it?

    const float one_30th {1.f/30}, day_sec {60*60*24};
    float time_sec {}, time_sec_old {};

    while ((time_sec += one_30th) > time_sec_old)
    {
        time_sec_old = time_sec;
    }

    std::cout << "We can run our game for "
        << std::fixed << std::setprecision(5)
        << (time_sec / day_sec)
        << std::defaultfloat << " days.\n";


    return EXIT_SUCCESS;
}

This outputs

epsilon for number 1 is:
0.0000001192092895507812500

epsilon for number 10000000 is:
1.0000000000000000000000000

We can run our game for 12.13630 days.

When the result of a floating point computation can't be represented exactly, it is rounded to the nearest value. So you want to find the smallest value x such that the increment f = 1/30 is less than half the width h between x and the next largest float, which means that x+f will round back to x.

Since the gap is the same for all elements in the same binade, we know that x must be the smallest element in its binade, which is a power of 2.

So if x = 2^k, then h = 2^(k-23), since a float has a 24-bit significand. So we need to find the smallest integer k such that

2^(k-23)/2 > 1/30

which implies k > 19.09, hence k = 20, and x = 2^20 = 1048576 (seconds).
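
We can check this numerically in the question's C++ (a minimal sketch using std::nextafter, assuming strict IEEE single-precision arithmetic; x and f are the values derived above):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    const float f   = 1.0f / 30.0f;
    const float x   = 1048576.0f;  // 2^20
    const float inf = std::numeric_limits<float>::infinity();

    const float h = std::nextafter(x, inf) - x;   // gap above x: 2^-3 = 0.125
    std::printf("h/2 > f    : %d\n", h / 2 > f);  // 1: f is below half the gap
    std::printf("x + f == x : %d\n", x + f == x); // 1: x + f rounds back to x

    const float p = std::nextafter(x, 0.0f);      // largest float below 2^20
    std::printf("p + f == p : %d\n", p + f == p); // 0: p + f still advances
}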

Note that x / (60 × 60 × 24) = 12.14 (days), which is a little bit less than what your answer proposes, but checks out empirically: in Julia

julia> x = 2f0^20
1.048576f6

julia> f = 1f0/30f0
0.033333335f0

julia> x+f == x
true

julia> p = prevfloat(x)
1.04857594f6

julia> p+f == p
false

UPDATE: Okay, so where did the 12.9 come from? The 12.14 is in game time, not actual time: these will have diverged due to the rounding error involved in floating point (especially near the end, when the rounding error is actually quite large relative to f). As far as I know, there's no way to calculate this directly, but it's actually fairly quick to iterate through 32-bit floats.

Again, in Julia:

julia> function timestuff(f)
           t = 0
           x = 0f0
           while true
               t += 1
               xp = x
               x += f
               if x == xp
                   return (t,x)
               end
           end
       end
timestuff (generic function with 1 method)

julia> t,x = timestuff(1f0/30f0)
(24986956,1.048576f6)

x matches the result we calculated earlier, and t is the clock time in 30ths of a second. Converting to days:

julia> t/(30*60*60*24)
9.640029320987654

which is even further away. So I don't know where the 12.9 came from...
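
For what it's worth, the same count can be reproduced in the question's C++ (a minimal sketch, assuming strict IEEE single-precision arithmetic with no excess precision; it grinds through about 25 million iterations):

#include <cstdint>
#include <cstdio>

int main()
{
    const float f = 1.0f / 30.0f;
    float x = 0.0f;
    std::int64_t t = 0;  // ticks of 1/30 s attempted, like t in timestuff

    for (;;)
    {
        ++t;
        const float xp = x;
        x += f;
        if (x == xp) break;  // adding f no longer changes x
    }

    std::printf("t = %lld, x = %g\n", (long long)t, x); // 24986956, 1.048576e6
    std::printf("real time: %g days\n", t / (30.0 * 60 * 60 * 24)); // ~9.64
    std::printf("game time: %g days\n", x / (60.0 * 60 * 24));      // ~12.14
}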

UPDATE 2: My guess is that the 12.9 comes from the calculation

y = 4 × f / ε = 1118481.125 (seconds)

where ε is the standard machine epsilon (the gap between 1 and the next largest floating point number). Scaling this to days gives 12.945. This provides an upper bound on x, but it is not the correct answer as explained above.
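
That guess is easy to reproduce (a minimal sketch; FLT_EPSILON from <cfloat> is the standard 2^-23 machine epsilon for float):

#include <cfloat>
#include <cstdio>

int main()
{
    const float  f = 1.0f / 30.0f;
    const double y = 4.0 * f / FLT_EPSILON;  // 4*f/eps, evaluated in double

    std::printf("y = %.3f seconds\n", y);               // 1118481.125
    std::printf("y = %.3f days\n", y / (60 * 60 * 24)); // ~12.945
}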


This is due to zones of expressibility in floating point representation. Check out this lecture from my uni.

As the exponent gets larger, the jump on the real number line between the values actually represented increases; when the exponent is low, the density of representation is high. To give an example, imagine decimal numbers with a finite number of significand digits: 1.0001e1 and 1.0002e1 differ by 0.001, but 1.0001e10 and 1.0002e10 differ by 1,000,000. The absolute gap between adjacent representable values grows with the exponent. In the case you describe, the clock value eventually becomes so large that the gap to the next representable float exceeds the increment, so adding 1/30 of a second no longer changes even the least significant bit of the significand.
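
The effect is easy to see in code (a minimal sketch using std::nextafter to print the gap to the next representable float at a few powers of two):

#include <cmath>
#include <cstdio>

int main()
{
    // The gap doubles with every binade: 2^-23 at 1, 2^-13 at 2^10, 2^-3 at 2^20.
    const float xs[] = {1.0f, 1024.0f, 1048576.0f};
    for (float x : xs)
    {
        const float gap = std::nextafter(x, 2 * x) - x;
        std::printf("gap at %9.0f is %.10g\n", x, gap);
    }
}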

Interestingly, towards the limits of the representation, the absolute accuracy of a larger float type is worse! With more exponent bits the largest representable values sit much further out on the number line, so a one-bit step in the significand corresponds to a much bigger jump there: the gap between adjacent doubles near DBL_MAX (2^971, about 2e292) dwarfs the gap between adjacent floats near FLT_MAX (2^104, about 2e31).
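
A short sketch makes that concrete, printing the gap just below each type's largest finite value:

#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    // Gap between the largest finite value and its next-lower neighbour.
    const float  gf = FLT_MAX - std::nextafter(FLT_MAX, 0.0f); // 2^104 ~ 2e31
    const double gd = DBL_MAX - std::nextafter(DBL_MAX, 0.0);  // 2^971 ~ 2e292

    std::printf("gap below FLT_MAX: %g\n", gf);
    std::printf("gap below DBL_MAX: %g\n", gd);
}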