Why is there one more negative int than positive int?

The types you provided are signed integers. Let's look at a one-byte (8-bit) example. With 1 byte you have 2^8 combinations, which gives you 256 possible values to store.

Now suppose you want the same number of positive and negative numbers (128 in each group).

The point is that 0 doesn't come as +0 and -0; there is only one 0. It uses up one of the 256 values, leaving 255 to split between the two groups.

So you end up with the range -128..-1, 0, 1..127.

The same logic applies to 16-, 32-, and 64-bit types.
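
If you want to check the bookkeeping concretely, here is a minimal C sketch (assuming a standard toolchain; the limits come from `<stdint.h>`):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The limits of an 8-bit signed type, from <stdint.h>. */
    printf("int8_t range: %d .. %d\n", INT8_MIN, INT8_MAX);  /* -128 .. 127 */

    /* 128 negatives + 1 zero + 127 positives = 256 = 2^8 */
    int negatives = -INT8_MIN;   /* 128 (computed as int, so no overflow) */
    int positives = INT8_MAX;    /* 127 */
    printf("%d + 1 + %d = %d\n", negatives, positives, negatives + 1 + positives);
    return 0;
}
```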

EDIT:

Why is the range -128 to 127?

It depends on how you represent signed integers; the common schemes give different ranges (the sketch after this list decodes one bit pattern under each):

  • Signed magnitude representation: -127 to 127, with both a +0 and a -0
  • Ones' complement: -127 to 127, also with two zeros
  • Two's complement: -128 to 127, with a single zero (what virtually all modern hardware uses)
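
As a rough illustration, here is a C sketch that decodes the same 8-bit pattern under each of the three schemes (the decoder functions are hand-rolled purely for demonstration; real hardware does two's complement natively):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the same 8-bit pattern under each representation.
   These decoders are written by hand for illustration only. */
static int sign_magnitude(uint8_t bits) {
    int magnitude = bits & 0x7F;          /* low 7 bits = absolute value */
    return (bits & 0x80) ? -magnitude : magnitude;
}

static int ones_complement(uint8_t bits) {
    /* negative values are stored as the bit-wise complement */
    return (bits & 0x80) ? -(int)(uint8_t)~bits : (int)bits;
}

static int twos_complement(uint8_t bits) {
    return (bits & 0x80) ? (int)bits - 256 : (int)bits;
}

int main(void) {
    uint8_t pattern = 0x81;               /* 1000 0001 */
    printf("sign magnitude:   %d\n", sign_magnitude(pattern));   /* -1   */
    printf("ones' complement: %d\n", ones_complement(pattern));  /* -126 */
    printf("two's complement: %d\n", twos_complement(pattern));  /* -127 */
    return 0;
}
```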

This question isn't really related to databases.

As lad2025 points out, there is an even number of values. So, once 0 takes one of them, one side must end up with one more value than the other. The question you are asking is really: "Why is there one more negative value than positive value?"

Basically, the reason is the sign bit. One possible implementation of negative numbers is to use n - 1 bits for the absolute value and the remaining bit for the sign (0 for positive, 1 for negative). The problem with this approach is that it permits both +0 and -0, which is not desirable.
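
A minimal sketch of the two-zeros problem, assuming a hand-rolled 8-bit sign-magnitude decoder (illustrative only; this is not how mainstream CPUs store integers):

```c
#include <stdint.h>
#include <stdio.h>

/* Hand-rolled 8-bit sign-magnitude decoder:
   top bit = sign, low 7 bits = absolute value. */
static int sign_magnitude(uint8_t bits) {
    int magnitude = bits & 0x7F;
    return (bits & 0x80) ? -magnitude : magnitude;
}

int main(void) {
    /* Two distinct bit patterns both mean zero, wasting a value
       and forcing equality tests to special-case the sign bit. */
    printf("%d\n", sign_magnitude(0x00));  /* 0000 0000 -> +0 */
    printf("%d\n", sign_magnitude(0x80));  /* 1000 0000 -> -0, prints 0 too */
    return 0;
}
```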

To fix this, computer scientists devised the two's-complement representation for signed integers. (Wikipedia explains this in more detail.) Basically, this representation keeps the concept of a sign bit that can be tested, but it changes the encoding. If +1 is represented as 001, then -1 is represented as 111. That is, the negative value is the bit-wise complement of the positive value minus one: -x is ~(x - 1), which is the same as ~x + 1. The negative is always generated by subtracting 1 and then taking the bit-wise complement.
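
In code, that rule is just "subtract one, then flip all the bits". A small sketch using unsigned 8-bit arithmetic, where the wraparound is well defined in C:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two's-complement negation: -x == ~(x - 1) == ~x + 1.
       Done on uint8_t so the wraparound is well defined in C. */
    uint8_t plus_one  = 0x01;                               /* 0000 0001 = +1 */
    uint8_t minus_one = (uint8_t)~(uint8_t)(plus_one - 1);  /* subtract 1, complement */
    printf("0x%02X\n", minus_one);                          /* 0xFF = 1111 1111 = -1 */
    return 0;
}
```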

The issue is then the value 100 (followed by any number of zeros). Its sign bit is set, so it is negative. However, when you subtract 1 and invert, it becomes itself again (100 - 1 = 011, and inverting 011 gives back 100). There is an argument for calling this value "infinity" or "not a number", but instead it is assigned the smallest possible negative number.
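
You can watch that pattern map to itself with the same trick (again on uint8_t, since negating the most negative value of a signed type is undefined behavior in C):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Negate the pattern 1000 0000 by the same "subtract 1, complement" rule.
       uint8_t keeps this well defined; negating INT8_MIN as a signed value
       would overflow. */
    uint8_t x = 0x80;                             /* 1000 0000 = -128 */
    uint8_t negated = (uint8_t)~(uint8_t)(x - 1); /* 0x80 - 1 = 0x7F; ~0x7F = 0x80 */
    printf("0x%02X\n", negated);                  /* 0x80 again: -128 maps to itself */
    return 0;
}
```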