Is there any performance difference in using int versus int8_t

int is generally the same size as a register on the CPU. The C standard says that any smaller types must be converted to int before operators are applied to them.

These conversions (sign extension) can be costly.

int8_t a=1, b=2, c=3;
 ...
a = b + c; // This will translate to: a = (int8_t)((int)b + (int)c);

If you need speed, int is a safe bet, or use int_fast8_t (even safer). If exact size is important, use int8_t (if available).
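
To make that concrete, here is a minimal compilable sketch (the function names and loops are mine, not part of the original answer) contrasting the three choices; on targets whose registers are wider than 8 bits, the int8_t version may need an extra truncation on every iteration:

#include <stdint.h>
#include <stddef.h>

/* 8-bit accumulator: each addition is done in int and the result may have
   to be truncated back to int8_t, which can cost extra instructions. */
int8_t sum_narrow(const int8_t *v, size_t n)
{
    int8_t s = 0;
    for (size_t i = 0; i < n; i++)
        s = (int8_t)(s + v[i]);
    return s;
}

/* Natural-width accumulator: no per-iteration truncation needed. */
int sum_wide(const int8_t *v, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* int_fast8_t: "at least 8 bits, whatever the implementation considers
   fastest" -- check your platform's <stdint.h> to see what it maps to. */
int_fast8_t sum_fast(const int8_t *v, size_t n)
{
    int_fast8_t s = 0;
    for (size_t i = 0; i < n; i++)
        s = (int_fast8_t)(s + v[i]);
    return s;
}

Compile it with optimizations and compare the generated loops (e.g. via the compiler's -S option) to see whether the truncation actually costs anything on your target.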


When you talk about code performance, there are several things you need to take into account, all of which affect it:

  • CPU architecture; more to the point, which data types the CPU supports natively (does it support 8-bit operations? 16-bit? 32-bit? etc.)

  • compiler; working with a well-known compiler is not enough, you need to be familiar with it: the way you write your code influences the code it generates

  • data types and compiler intrinsics: the compiler always takes these into account when generating code, and using the correct data type (even signed vs. unsigned matters; see the short example after this list) can have a dramatic performance impact.

    "Trying to be smarter than the compiler is always a bad idea" - that is not actually true; remember, the compiler is written to optimize the general case and you are interested in you particular case; it's always a good idea to try and be smarter than the compiler.

Your question is really too broad for me to give a to-the-point answer (i.e. which is better performance-wise). The only way to know for sure is to check the generated assembly code, or at least count the number of cycles the code would take to execute in both cases. But you need to understand the code to understand how to help the compiler.
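
If you would rather measure than read assembly, a crude self-contained harness along these lines (the buffer size and loop shapes are made up, and the usual micro-benchmark caveats apply) can at least hint at whether the accumulator width matters on your machine:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 1 << 26 };               /* arbitrary buffer size */
    int8_t *v = malloc(N);
    if (!v)
        return 1;
    for (long i = 0; i < N; i++)
        v[i] = (int8_t)((i & 0xff) - 128);

    clock_t t0 = clock();
    int8_t s8 = 0;                      /* 8-bit accumulator */
    for (long i = 0; i < N; i++)
        s8 = (int8_t)(s8 + v[i]);
    clock_t t1 = clock();

    int s32 = 0;                        /* natural-width accumulator */
    for (long i = 0; i < N; i++)
        s32 += v[i];
    clock_t t2 = clock();

    /* Print the sums so the compiler cannot discard the loops entirely. */
    printf("int8_t accumulator: %.3f s (sum %d)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, s8);
    printf("int accumulator:    %.3f s (sum %d)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, s32);

    free(v);
    return 0;
}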