Difference between char and signed char in C++?

It's by design: the C++ standard says char, signed char and unsigned char are three different types. You can use static_cast to convert between them.


There are three distinct basic character types: char, signed char and unsigned char. Although there are three character types, there are only two representations: signed and unsigned. The (plain) char type uses one of these representations; which of the other two it is equivalent to depends on the compiler.

In an unsigned type, all the bits represent the value. For example, an 8-bit unsigned char can hold the values from 0 through 255 inclusive.

The standard does not define how signed types are represented, but does specify that the range should be evenly divided between positive and negative values. Hence an 8-bit signed char is guaranteed to be able to hold values from -127 through 127.


So how to decide which Type to use?

Computations using char are usually problematic: char is signed by default on some machines and unsigned on others. So we should not use plain char in arithmetic expressions; use it only to hold characters. If you need a tiny integer, explicitly specify either signed char or unsigned char.

Excerpts taken from C++ Primer 5th edition, p. 66.


Adding more info about the range: since C++20, the value -128 is also guaranteed for signed char; see P1236R0: Alternative Wording for P0907R4 Signed Integers are Two's Complement:

For each value x of a signed integer type, there is a unique value y of the corresponding unsigned integer type such that x is congruent to y modulo 2ᴺ, and vice versa; each such x and y have the same representation.

[ Footnote: This is also known as two's complement representation. ].
[ Example: The value -1 of a signed type is congruent to the value 2ᴺ−1 of the corresponding unsigned type; the representations are the same for these values. ]

The minimum value required to be supported by the implementation for the range exponent of each signed integer type is specified in table X.

I kindly and painfully (since SO does not support markdown for tables) rewrote table X below:

╔═════════════╦════════════════════════════╗  
║ Type        ║ Minimum range exponent N   ║  
╠═════════════╬════════════════════════════╣  
║ signed char ║        8                   ║  
║ short       ║       16                   ║  
║ int         ║       16                   ║  
║ long        ║       32                   ║  
║ long long   ║       64                   ║  
╚═════════════╩════════════════════════════╝  

Hence, as a signed char has n = 8 bits, its range is -2ⁿ⁻¹ to 2ⁿ⁻¹-1, that is, -128 to 127.

So the guaranteed range is -128 to 127, and when it comes to range there is no longer any difference between a (signed) plain char and signed char.


About Cadoiz's comment: there is what the standard says, and there is reality.
A reality check with the program below:

#include <stdio.h>

int main(void) {
    char c = -128;            /* assumes plain char is signed and 8 bits wide */
    printf("%d\n", (int)c);   /* prints -128 */
    printf("%d\n", (int)--c); /* -129 wraps around to 127 on a two's complement machine */
    return 0;
}

Output:

-128
127

I would also say that writing signed char helps fellow programmers, and potentially the compiler, understand that you intend to use the char's value to perform arithmetic.


Indeed, the standard says precisely that char, signed char and unsigned char are 3 different types. A char is usually 8 bits, but this is not imposed by the standard. An 8-bit value can encode 256 unique values; the difference is only in how those 256 unique values are interpreted. If you consider an 8-bit value as a signed binary value, it can represent integer values from -128 (coded 80H) to +127. If you consider it unsigned, it can represent values 0 to 255. By the C++ standard, a signed char is guaranteed to be able to hold values -127 to 127 (not -128, before C++20!), whereas an unsigned char is able to hold values 0 to 255.

When converting a char to an int, the result is implementation defined! the result may e.g. be -55 or 201 according to the machine implementation of the single char 'É' (ISO 8859-1). Indeed, a CPU holding the char in a word (16bits) can either store FFC9 or 00C9 or C900, or even C9FF (in big and little endian representations). Explicit casts to signed or unsigned char do guarantee the char to int conversion outcome.