Why does printf("%f",0); give undefined behavior?

The "%f" format requires an argument of type double. You're giving it an argument of type int. That's why the behavior is undefined.

The standard does not guarantee that all-bits-zero is a valid representation of 0.0 (though it often is), or of any double value, or that int and double are the same size (remember it's double, not float), or, even if they are the same size, that they're passed as arguments to a variadic function in the same way.

It might happen to "work" on your system. That's the worst possible symptom of undefined behavior, because it makes it difficult to diagnose the error.

N1570 7.21.6.1 paragraph 9:

... If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.

Arguments of type float are promoted to double, which is why printf("%f\n",0.0f) works. Arguments of integer types narrower than int are promoted to int or to unsigned int. These promotion rules (specified by N1570 6.5.2.2 paragraph 6) do not help in the case of printf("%f\n", 0).
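
To make the promotion rules concrete, here is a minimal sketch (the variables and values are purely illustrative):

    #include <stdio.h>

    int main(void) {
        float f = 0.0f;
        short s = 7;

        printf("%f\n", f);    /* OK: f is promoted from float to double         */
        printf("%d\n", s);    /* OK: s is promoted from short to int            */
        printf("%f\n", 0.0);  /* OK: the argument is already a double           */
        /* printf("%f\n", 0);    UB: 0 has type int, and the default argument
                                 promotions never turn an int into a double     */
        return 0;
    }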

Note that if you pass a constant 0 to a non-variadic function that expects a double argument, the behavior is well defined, assuming the function's prototype is visible. For example, sqrt(0) (after #include <math.h>) implicitly converts the argument 0 from int to double -- because the compiler can see from the declaration of sqrt that it expects a double argument. It has no such information for printf. Variadic functions like printf are special, and require more care in writing calls to them.
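
Here is a small sketch contrasting the two situations: the visible prototype of sqrt lets the compiler convert for you, while in a variadic call you have to do the conversion yourself:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double r = sqrt(0);        /* OK: the prototype converts the int 0 to 0.0      */
        printf("%f\n", r);         /* OK: r already has type double                    */
        printf("%f\n", (double)0); /* OK: explicit conversion, since printf cannot ask */
        return 0;
    }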


First off, as touched on in several other answers but not, to my mind, spelled out clearly enough: It does work to provide an integer in most contexts where a library function takes a double or float argument. The compiler will automatically insert a conversion. For instance, sqrt(0) is well-defined and will behave exactly as sqrt((double)0), and the same is true for any other integer-type expression used there.

printf is different. It's different because it takes a variable number of arguments. Its function prototype is

extern int printf(const char *fmt, ...);

Therefore, when you write

printf(message, 0);

the compiler does not have any information about what type printf expects that second argument to be. It has only the type of the argument expression, which is int, to go by. Therefore, unlike most library functions, it is on you, the programmer, to make sure the argument list matches the expectations of the format string.
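
You can see the same thing from the callee's side with a toy variadic function (print_doubles is just a made-up name for illustration): like printf, it has no way to check the caller's promise and simply reads arguments with the type it was told to expect.

    #include <stdarg.h>
    #include <stdio.h>

    /* The first argument promises how many doubles follow; nothing checks it. */
    static void print_doubles(int count, ...)
    {
        va_list ap;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            printf("%f\n", va_arg(ap, double)); /* trusts the caller completely */
        va_end(ap);
    }

    int main(void) {
        print_doubles(2, 1.5, 2.5);  /* OK: two doubles, as promised           */
        /* print_doubles(1, 0);         UB: an int where a double was promised */
        return 0;
    }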

(Modern compilers can look into a format string and tell you that you've got a type mismatch, but they're not going to start inserting conversions to accomplish what you meant, because better your code should break now, when you'll notice, than years later when rebuilt with a less helpful compiler.)

Now, the other half of the question was: Given that (int)0 and (float)0.0 are, on most modern systems, both represented as 32 bits all of which are zero, why doesn't it work anyway, by accident? The C standard just says "this isn't required to work, you're on your own", but let me spell out the two most common reasons why it wouldn't work; that will probably help you understand why it's not required.

First, for historical reasons, when you pass a float through a variable argument list it gets promoted to double, which, on most modern systems, is 64 bits wide. So printf("%f", 0) passes only 32 zero bits to a callee expecting 64 of them.
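
A quick way to see the size mismatch on your own machine (on many current platforms this prints 4 and 8):

    #include <stdio.h>

    int main(void) {
        printf("sizeof(int)    = %zu\n", sizeof(int));    /* commonly 4 */
        printf("sizeof(double) = %zu\n", sizeof(double)); /* commonly 8 */
        return 0;
    }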

The second, equally significant reason is that floating-point function arguments may be passed in a different place than integer arguments. For instance, most CPUs have separate register files for integers and floating-point values, so it might be a rule that arguments 0 through 4 go in registers r0 through r4 if they are integers, but f0 through f4 if they are floating-point. So printf("%f", 0) looks in register f1 for that zero, but it's not there at all.


Why does using an integer literal instead of a float literal cause this behavior?

Because printf() has no typed parameters other than the const char* format string as the first one. It uses a C-style ellipsis (...) for all the rest.

It simply decides how to interpret the values passed there according to the conversion specifiers given in the format string.
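
A minimal sketch of that: the same double is fine with any floating-point specifier, but a specifier that promises a different type makes the call undefined.

    #include <stdio.h>

    int main(void) {
        double x = 0.0;
        printf("%f\n", x);   /* OK: the specifier matches the argument type    */
        printf("%e\n", x);   /* OK: same double, different textual rendering   */
        /* printf("%d\n", x);   UB: the format string promises an int, so printf
                                 tries to read an int that was never passed    */
        return 0;
    }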

You would get the same kind of undefined behavior as with

    int i = 0;
    const double *pf = (const double *)&i;  // reinterpret i's storage as a double
    printf("%f\n", *pf); // dereferencing pf is UB: wrong type, and a double is
                         // usually wider than an int, so it also reads past i