Negative numbers are stored as two's complement in memory, so how does the CPU know whether a value is negative or positive?

The CPU doesn't care whether a byte holds -1 or 255 when it moves it from one place to another; the bit pattern is the same either way. There's no such thing as a "signed move" to a destination of the same size (there are sign-extending moves, such as x86's MOVSX, for moving into a wider destination).

The CPU only cares about the representation when it does arithmetic on the byte, and it knows whether to do signed or unsigned arithmetic from the opcode that you (or the compiler on your behalf) chose.
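A minimal C sketch of that choice (the values and variable names are just for illustration): the same bit pattern is stored and moved identically, but the declared type determines whether the compiler emits signed or unsigned arithmetic for it.

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        uint32_t bits = 0xFFFFFFF0u;     /* one bit pattern in memory */

        int32_t  s = (int32_t)bits;      /* read back as signed:   -16        */
        uint32_t u = bits;               /* read back as unsigned: 4294967280 */

        /* Storing and moving the value is identical in both cases.  Only the
           arithmetic differs: the compiler has to use signed arithmetic for s
           and unsigned arithmetic for u (on x86 that shows up as, e.g., IDIV
           vs DIV for division, or JL vs JB for comparisons). */
        printf("signed:   %" PRId32 "\n", s / 2);   /* -8         */
        printf("unsigned: %" PRIu32 "\n", u / 2);   /* 2147483640 */
        return 0;
    }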


Most of the previous answers mentioned separate opcodes. That might be true for more complicated operations like multiplication and division, but for simple addition and subtraction that is not how the CPU works.

The CPU keeps data about the result of an instruction in its flags register. On x86 (the architecture I am most familiar with) the two most important flags here are the "overflow" and "carry" flags.

Basically the CPU doesn't care whether the number is signed or unsigned; it treats both the same way. The carry flag is set when the result goes above the highest unsigned value the register can hold. The overflow flag is set when the result goes above or below the range of a signed number. If you are working with unsigned numbers, you check the carry flag and ignore the overflow flag. If you are working with signed numbers, you check the overflow flag and ignore the carry flag.

Here are some examples:

Unsigned:

1111 (15) + 1111 (15) = 1110 (14)

What you do now is check the carry flag, which in this case contains 1, giving the full result

1 1110 (30)

Signed:

1111 (-1) + 1111 (-1) = 1110 (-2)

In this case you ignore the carry flag; the overflow flag will be zero, so the signed result is correct.

Unsigned:

0111 (7) + 0111 (7) = 1110 (14)

When you check the carry flag, it will be zero, so the unsigned result is correct.

Signed:

0111 (7) + 0111 (7) = 1110 (-2)

In this case the overflow flag would be set, meaning the signed addition overflowed: 7 + 7 = 14 doesn't fit in a 4-bit signed value, whose range is -8 to 7.

So in summary, the number is only signed or unsigned based on your interpretation of it; the CPU gives you the tools necessary to distinguish between them, but doesn't distinguish on its own.
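To make the examples concrete, here is a small C model (not how a real CPU is built, just a simulation of the 4-bit additions above) that computes the carry flag as "the unsigned result needed a fifth bit" and the overflow flag as "both operands had the same sign bit, but the result's sign bit differs".

    #include <stdio.h>

    /* Add two 4-bit values and report the carry and overflow flags,
       mirroring the examples above.  Purely a teaching model.        */
    static void add4(unsigned a, unsigned b) {
        unsigned raw    = a + b;        /* 5-bit result                     */
        unsigned result = raw & 0xF;    /* what fits in the 4-bit register  */

        int carry = (raw >> 4) & 1;     /* unsigned result needed a 5th bit */
        /* Signed overflow: both operands share a sign bit, but the
           result's sign bit differs from theirs.                           */
        int overflow = (~(a ^ b) & (a ^ result) & 0x8) != 0;

        /* Interpret the same 4 bits both ways. */
        int signed_val = (result & 0x8) ? (int)result - 16 : (int)result;

        printf("%X + %X = %X  unsigned=%2u (carry=%d)  signed=%3d (overflow=%d)\n",
               a, b, result, result, carry, signed_val, overflow);
    }

    int main(void) {
        add4(0xF, 0xF);   /* 15+15 -> 14, carry=1;  -1 + -1 -> -2, overflow=0 */
        add4(0x7, 0x7);   /*  7+ 7 -> 14, carry=0;   7 +  7 -> -2, overflow=1 */
        return 0;
    }

Running it prints both additions from the examples, with the flags matching the values given above.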


The CPU doesn't know whether a number is signed or unsigned. When the compiler generates the machine language file, it chooses the correct operation to execute for the arithmetic on that number. If you declared your variable as a signed type, for instance, the operation emitted in machine language will be one that treats that memory location as a signed value.

In software of any kind, it is the way you interpret the data that gives it meaning. A byte in memory can be a signed or unsigned number, a character, part of a music file, a pixel in a picture, etc. What gives it meaning is how you use that byte.
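As a toy illustration (the byte values here are arbitrary), the very same byte can be printed as an unsigned number, a signed number, or a character; nothing about the stored bits changes, only the interpretation:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t byte = 0x41;                            /* one byte in memory     */
        printf("as unsigned: %u\n", (unsigned)byte);    /* 65  */
        printf("as signed:   %d\n", (int)(int8_t)byte); /* 65  */
        printf("as char:     %c\n", byte);              /* A   */

        byte = 0xF0;                                    /* same storage, new bits */
        printf("as unsigned: %u\n", (unsigned)byte);    /* 240 */
        printf("as signed:   %d\n", (int)(int8_t)byte); /* -16 */
        return 0;
    }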