How deterministic is floating point inaccuracy?

The short answer is that FP calculations are entirely deterministic, as per the IEEE Floating Point Standard, but that doesn't mean they're entirely reproducible across machines, compilers, OS's, etc.

The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. Skip to the section on the IEEE standard for the key details.

To answer your bullet points briefly:

  • Time between calculations and state of the CPU have little to do with this.

  • Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).

  • Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's rant on Java's floating point inadequacies.

  • Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about EMP.

Finally, if you perform the same sequence of floating point calculations on the same initial inputs, the results should be exactly replayable. However, the exact sequence of operations can change depending on your compiler/OS/standard library, and that is where small differences can creep in.
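Just to make that last caveat concrete, here is a small C sketch (the constants are made up purely to make the rounding visible) showing that merely re-associating a sum, which a different compiler or library version might do, changes the result:

    #include <stdio.h>

    int main(void) {
        /* Contrived values: 1.0 is smaller than the spacing between
           representable doubles near 1e16, so it can get rounded away. */
        double a = 1e16, b = -1e16, c = 1.0;

        double left  = (a + b) + c;   /* 0.0 + 1.0 -> 1.0 */
        double right = a + (b + c);   /* the 1.0 is absorbed, result is 0.0 */

        printf("(a + b) + c = %g\n", left);    /* prints 1 */
        printf("a + (b + c) = %g\n", right);   /* prints 0 */
        return 0;
    }

Each individual addition is still deterministic; only the order changed.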

Where you usually run into problems in floating point is if you have a numerically unstable method and you start with FP inputs that are approximately the same but not quite. If your method's stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail than this, then take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.
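If you want to see what "numerically unstable" looks like in practice, the textbook example is the quadratic formula; the coefficients below are made up, and the only point is that one arrangement subtracts two nearly equal numbers while the other avoids it:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* x^2 + 1e8*x + 1 = 0; the small root is approximately -1e-8. */
        double a = 1.0, b = 1e8, c = 1.0;
        double d = sqrt(b * b - 4.0 * a * c);

        /* Unstable: -b + d cancels almost all significant digits. */
        double bad  = (-b + d) / (2.0 * a);

        /* Stable rearrangement of the same root. */
        double good = (2.0 * c) / (-b - d);

        printf("unstable: %.17g\n", bad);    /* off by tens of percent */
        printf("stable:   %.17g\n", good);   /* about -1e-8, as expected */
        return 0;
    }

Both formulas reproduce their own answer exactly from run to run; the unstable one is simply reproducing a bad answer.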


Also, while Goldberg is a great reference, the original text is wrong on one point: IEEE 754 is not guaranteed to be portable. I can't emphasize this enough, given how often that claim is made by people who have only skimmed the text. Later versions of the document include a section that discusses this specifically:

Many programmers may not realize that even a program that uses only the numeric formats and operations prescribed by the IEEE standard can compute different results on different systems. In fact, the authors of the standard intended to allow different implementations to obtain different results.


I think your confusion lies in the type of inaccuracy around floating point. Most languages implement the IEEE floating point standard. This standard lays out how the individual bits within a float/double are used to produce a number. Typically a float consists of four bytes, and a double eight bytes.

A mathematical operation between two floating point numbers will produce the same value every single time (as specified within the standard).
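A minimal C sketch of both points, if it helps: it pulls a float apart into the sign/exponent/fraction fields the standard defines, and shows that an operation such as 1.0f / 3.0f always lands on exactly one representable bit pattern:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* An IEEE 754 single precision float: 1 sign bit, 8 exponent bits,
       23 fraction bits, packed into 4 bytes. */
    static void dump(const char *label, float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret the 4 bytes */
        printf("%-10s = %.9g  [sign=%u exponent=%u fraction=0x%06X]\n",
               label, f, bits >> 31, (bits >> 23) & 0xFFu, bits & 0x7FFFFFu);
    }

    int main(void) {
        printf("sizeof(float)=%zu  sizeof(double)=%zu\n",
               sizeof(float), sizeof(double));

        /* The quotient is rounded to a single representable value, and a
           compliant divide produces that same bit pattern every time. */
        dump("1.0f/3.0f", 1.0f / 3.0f);
        return 0;
    }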

The inaccuracy comes in the precision. Consider an int vs a float. Both typically take up the same number of bytes (4). Yet the maximum value each number can store is wildly different.

  • int: roughly 2 billion
  • float: 3.40282347E38 (quite a bit larger)

The difference is in the middle. An int can represent every whole number between 0 and roughly 2 billion. A float cannot: it can represent roughly 2 billion values spread between 0 and 3.40282347E38, which leaves a whole range of values in between that cannot be represented. If a calculation lands on one of those values, the result has to be rounded to a representable value, and that rounding is the "inaccuracy". Your definition of inaccurate may vary :).
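A quick C sketch of one of those gaps: 2^24 is the last point at which a float can still represent every integer, so just above it adding 1 gets rounded away entirely.

    #include <stdio.h>

    int main(void) {
        float big    = 16777216.0f;   /* 2^24, exactly representable */
        float bumped = big + 1.0f;    /* 16777217 is not representable... */

        printf("big    = %.1f\n", big);
        printf("bumped = %.1f\n", bumped);              /* ...so it rounds back */
        printf("bumped == big ? %d\n", bumped == big);  /* prints 1 */
        return 0;
    }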


From what I understand you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.

Specific gotchas that I'm aware of:

  1. Some operating systems allow you to set the mode of the floating point processor in ways that break compatibility (see the rounding-mode sketch after this list).

  2. Floating point intermediate results often use 80-bit precision in registers, but only 64-bit in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in-memory precision (e.g. gcc's -ffloat-store; see the FLT_EVAL_METHOD sketch after this list).

  3. Standard library functions may change between versions. I gather that there are some fairly common examples of this between gcc 3 and gcc 4.

  4. The IEEE standard itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
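Rounding-mode sketch for gotcha 1, using C99's <fenv.h> (compile with optimizations off or with gcc/clang's -frounding-math, and link with -lm on some systems, since compilers are otherwise free to constant-fold the division):

    #include <fenv.h>
    #include <stdio.h>

    int main(void) {
        /* volatile keeps the compiler from folding x / y at compile time */
        volatile double x = 1.0, y = 3.0;

        fesetround(FE_TONEAREST);
        double nearest = x / y;

        fesetround(FE_UPWARD);        /* someone else flipped the FPU mode... */
        double upward = x / y;

        fesetround(FE_TONEAREST);     /* restore the default */

        printf("to nearest: %.17g\n", nearest);
        printf("upward:     %.17g\n", upward);
        printf("equal? %d\n", nearest == upward);   /* typically prints 0 */
        return 0;
    }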
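FLT_EVAL_METHOD sketch for gotcha 2: C99's <float.h> exposes a macro telling you how the compiler evaluates intermediate results on the current platform/flag combination, which is a cheap way to spot the 80-bit behaviour before it bites:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /*  0: evaluate in each type's own precision
            1: evaluate float and double intermediates as double
            2: evaluate everything as long double (classic 80-bit x87)
           -1: indeterminable */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        return 0;
    }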