For embedded code, why should I use "uintN_t" types instead of "unsigned int"?

A standards-conforming compiler where int is anywhere from 17 to 32 bits may legitimately do anything it wants with the following code:

uint16_t x = 46341;
uint32_t y = x*x; // temp result is signed int, which can't hold 2147488281

An implementation that wanted to do so could legitimately generate a program that would do nothing except output the string "Fred" repeatedly on every port pin using every imaginable protocol. The probability of a program getting ported to an implementation which would do such a thing is exceptionally low, but it is theoretically possible. If one wanted to write the above code so that it would be guaranteed not to engage in Undefined Behavior, it would be necessary to write the latter expression as (uint32_t)x*x or 1u*x*x. On a compiler where int is between 17 and 31 bits, the latter expression would lop off the upper bits, but would not engage in Undefined Behavior.
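For concreteness, here is a minimal sketch (assuming <stdint.h> is available and a typical target where int is 32 bits) of the two fully defined spellings mentioned above:

#include <stdint.h>

int main(void) {
    uint16_t x = 46341u;

    /* x*x would promote both operands to signed int; on a 32-bit-int
       target the product 2147488281 exceeds INT_MAX, which is
       Undefined Behavior. */

    /* Casting one operand keeps the multiply unsigned and at least
       32 bits wide, so the full result 2147488281 is well defined. */
    uint32_t y1 = (uint32_t)x * x;

    /* 1u*x promotes to unsigned int; the multiply then merely wraps
       (losing high bits) if unsigned int is narrower than 32 bits,
       but there is no Undefined Behavior either way. */
    uint32_t y2 = 1u * x * x;

    (void)y1;
    (void)y2;
    return 0;
}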

I think the gcc warnings are probably trying to suggest that the code as written is not completely portable. There are times when code really should be written to avoid behaviors which would be Undefined on some implementations, but in many other cases one should simply figure that the code is unlikely to get used on implementations which would do overly annoying things.

Note that using types like int and short may eliminate some warnings and fix some problems, but would likely create others. The interaction between types like uint16_t and C's integer-promotion rules is icky, but such types are still probably better than any alternative.
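As an illustration of that ickiness, here is a small sketch (assuming a typical target where int is 32 bits) in which arithmetic on uint16_t values silently becomes signed:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t a = 1u, b = 2u;

    /* Both operands promote to signed int on a 32-bit-int target,
       so a - b evaluates to -1 rather than wrapping to 65535. */
    if (a - b < 0)
        puts("a - b was promoted to signed int and went negative");

    /* Converting the result back to uint16_t restores the 16-bit wrap. */
    uint16_t diff = (uint16_t)(a - b);   /* 65535 */
    printf("diff = %u\n", (unsigned)diff);
    return 0;
}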


1) If you just cast from an unsigned to a signed integer of the same width and back, without any operations in between, you will get the same value every time, so there is no problem there. But various logical and arithmetic operations act differently on signed and unsigned operands (see the first sketch after this list).
2) The main reason to use the stdint.h types is that their bit widths are defined and identical across all platforms, which is not true for int, long, etc.; in addition, plain char has no standard signedness and can be signed or unsigned by default. Knowing the exact size makes it easier to manipulate the data without extra checks and assumptions (see the second sketch below).
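A small sketch of point 1 (assuming a two's-complement target, where the out-of-range conversion to signed behaves as shown in the comment):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t u = 0xFFFFFFFFu;

    /* Converting an out-of-range value to a signed type is
       implementation-defined; on two's-complement targets it is -1. */
    int32_t s = (int32_t)u;

    /* Converting back to unsigned is fully defined and round-trips. */
    printf("round trip: %" PRIu32 "\n", (uint32_t)s);   /* 4294967295 */

    /* Same bit pattern, but operations behave differently: */
    printf("u / 2 = %" PRIu32 "\n", u / 2);   /* 2147483647 */
    printf("s / 2 = %" PRId32 "\n", s / 2);   /* 0 */
    printf("u > 0: %d\n", u > 0);             /* 1 (unsigned compare) */
    printf("s > 0: %d\n", s > 0);             /* 0 (signed compare) */
    return 0;
}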
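And a sketch of point 2, printing what the current platform actually provides (the output varies by target, which is exactly the point):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Whether plain char is signed is implementation-defined. */
    printf("char is %s here\n", (CHAR_MIN < 0) ? "signed" : "unsigned");

    /* The widths of the basic types differ between platforms... */
    printf("int : %u bits\n", (unsigned)(sizeof(int) * CHAR_BIT));
    printf("long: %u bits\n", (unsigned)(sizeof(long) * CHAR_BIT));

    /* ...whereas uint16_t and uint32_t, where provided, are exactly
       16 and 32 bits with no padding, on every platform. */
    printf("uint16_t: %u bits\n", (unsigned)(sizeof(uint16_t) * CHAR_BIT));
    printf("uint32_t: %u bits\n", (unsigned)(sizeof(uint32_t) * CHAR_BIT));
    return 0;
}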


Since Eugene's #2 is probably the most important point, I would just like to add that it is an advisory in

MISRA (directive 4.6): "typedefs that indicate size and signedness should be used in place of the basic types".

Also Jack Ganssle appears to be a supporter of that rule: http://www.ganssle.com/tem/tem265.html
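For illustration only (the variable names here are made up), this is the kind of before/after the advisory has in mind:

#include <stdint.h>

/* Basic types: width and signedness depend on the platform/compiler. */
unsigned int adc_raw_old;      /* 16 bits? 32 bits? */
char         rx_byte_old;      /* signed or unsigned? */

/* Fixed-width types: size and signedness are spelled out everywhere. */
uint16_t adc_raw;              /* exactly 16 bits, unsigned */
uint8_t  rx_byte;              /* exactly 8 bits, unsigned  */
int32_t  temperature_mdeg;     /* exactly 32 bits, signed   */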

Tags: c, gcc, embedded