The real difference between "int" and "unsigned int"

There is no difference between the two in how they are stored in memory or in registers: there are no signed and unsigned versions of registers, and no signedness information is stored alongside the value. The difference only becomes relevant when you perform arithmetic. The CPU provides signed and unsigned versions of several arithmetic operations, and the signedness of the type tells the compiler which version to emit.
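
To make that concrete, here's a minimal sketch, assuming a 32-bit int and two's complement representation (as on mainstream platforms): the same bit pattern gives different results depending on whether the compiler picks the signed or the unsigned version of the operation.

#include <stdio.h>

int main(void)
{
    int          s = -1;           /* bit pattern 0xFFFFFFFF */
    unsigned int u = 0xFFFFFFFFu;  /* the same bit pattern   */

    /* Division: the compiler emits a signed divide for s and an
       unsigned divide for u, so the same bits give different answers. */
    printf("%d\n", s / 2);   /* 0          (signed divide)   */
    printf("%u\n", u / 2);   /* 2147483647 (unsigned divide) */

    /* Right shift: typically an arithmetic shift for signed values
       and a logical shift for unsigned ones. */
    printf("%d\n", s >> 1);  /* implementation-defined for negative s;
                                typically -1 (sign bit copied in)    */
    printf("%u\n", u >> 1);  /* 2147483647 (zero shifted in)         */

    return 0;
}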


Hehe. You have an implicit cast here, because you're telling printf what type to expect.

Try this on for size instead:

#include <stdio.h>

int main(void)
{
    unsigned int x = 0xFFFFFFFF;  /* 4294967295 */
    int y = 0xFFFFFFFF;           /* out of range for a 32-bit int: the
                                     conversion is implementation-defined,
                                     typically yielding -1 */

    if (x < 0)                    /* always false: x is unsigned */
        printf("one\n");
    else
        printf("two\n");

    if (y < 0)                    /* true on typical two's complement systems */
        printf("three\n");
    else
        printf("four\n");

    return 0;
}

Yes, because in your case they use the same representation.

The bit pattern 0xFFFFFFFF happens to look like -1 when interpreted as a 32-bit two's complement signed integer, and like 4294967295 when interpreted as a 32-bit unsigned integer.
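
You can watch both interpretations of the same bits directly. A small sketch; the signed result assumes a typical 32-bit two's complement platform, since converting an out-of-range value to int is implementation-defined:

#include <stdio.h>

int main(void)
{
    unsigned int u = 0xFFFFFFFFu;

    printf("%u\n", u);        /* 4294967295: the bits read as unsigned     */
    printf("%d\n", (int)u);   /* -1 on typical platforms: the same bits
                                 read as two's complement signed           */
    return 0;
}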

It's the same as char c = 65. If you interpret it as a number, it's 65. If you interpret it as a character, it's 'A' (in ASCII).
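
A quick way to see both interpretations of that same byte (assuming an ASCII character set):

#include <stdio.h>

int main(void)
{
    char c = 65;
    printf("%d\n", c);   /* 65:  the byte read as a number              */
    printf("%c\n", c);   /* 'A': the same byte read as a character      */
    return 0;
}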


As R and pmg point out, technically it's undefined behavior to pass arguments that don't match the format specifiers. So the program could do anything, from printing random values, to crashing, to printing the "right" thing.

The standard points this out in 7.19.6.1 paragraph 9:

If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
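
To stay within defined behavior, make the argument's type match the specifier with an explicit cast. A minimal sketch; signed-to-unsigned conversion is well defined in C (the result is reduced modulo 2^N):

#include <stdio.h>

int main(void)
{
    int y = -1;

    /* The cast makes the argument type match %u, so the behavior is
       defined: -1 converts to UINT_MAX (i.e., y modulo 2^N). */
    printf("%u\n", (unsigned int)y);   /* 4294967295 when int is 32 bits */
    return 0;
}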
