# Why can the compiler not optimize floating point addition with 0?

IEEE 754 floating-point numbers have two zero values, one negative, one positive. When the two are added together (in the default round-to-nearest mode), the result is the positive one.

So `id1(-0.f)` is `0.f`, not `-0.f`.
Note that `id1(-0.f) == -0.f` still holds, because `0.f == -0.f`.

Demo

Also note that compiling with `-ffast-math` in GCC does apply the optimization, which changes the result.

> I have four identity functions which do essentially nothing.

That's not true.

For floating-point numbers, `x + 1 - 1` is not equal to `x + 0`; it is `(x + 1) - 1`. So if `x` is very small, for example, that small value is lost in the `x + 1` step, and the compiler can't know whether that was your intent.

And in the case of `x * 2 / 2`, the `x * 2` might not be exact either, due to floating-point precision, so you have a similar case here: the compiler does not know whether you want to change the value of `x` in that manner.

So these would be equal:

```c
float id0(float x) {
    return x + (1. - 1.);
}

float id1(float x) {
    return x + 0;
}
```


And these would be equal:

```c
float id2(float x) {
    return x * (2. / 2.);
}

float id3(float x) {
    return x * 1;
}
```


The desired behavior could certainly be defined in another way. But, as already mentioned by Nelfeal, this optimization has to be explicitly activated using `-ffast-math`:

> Enable fast-math mode. This option lets the compiler make aggressive, potentially-lossy assumptions about floating-point math. These include:
>
> - Floating-point math obeys regular algebraic rules for real numbers (e.g. `+` and `*` are associative, `x/y == x * (1/y)`, and `(a + b) * c == a * c + b * c`),
> - Operands to floating-point operations are not equal to `NaN` and `Inf`, and
> - `+0` and `-0` are interchangeable.

For clang and GCC, `-ffast-math` is a collection of flags (here are the ones listed by clang):

- `-fno-honor-infinities`
- `-fno-honor-nans`
- `-fno-math-errno`
- `-ffinite-math-only`
- `-fassociative-math`
- `-freciprocal-math`
- `-fno-signed-zeros`
- `-fno-trapping-math`
- `-ffp-contract=fast`