If 32-bit machines can only handle numbers up to 2^32, why can I write 1000000000000 (trillion) without my machine crashing?

I'll answer your question by asking you a different one:

How do you count to 6 on your fingers?

You likely count up to the largest possible number with one hand, and then you move on to your second hand when you run out of fingers. Computers do the same thing: if they need to represent a value larger than a single register can hold, they will use multiple 32-bit blocks to work with the data.
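To make that concrete, here is a minimal C sketch (the variable names are just for illustration): a 64-bit value split into two 32-bit halves and put back together, which is exactly the two-hands trick.

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t big = 1000000000000ULL;            /* one trillion */

    /* Split the value into two 32-bit "hands": a low and a high half. */
    uint32_t low  = (uint32_t)(big & 0xFFFFFFFFu);
    uint32_t high = (uint32_t)(big >> 32);

    /* Put the halves back together to show nothing was lost. */
    uint64_t again = ((uint64_t)high << 32) | low;

    printf("high = %" PRIu32 ", low = %" PRIu32 ", together = %" PRIu64 "\n",
           high, low, again);
    return 0;
}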


You are correct that a 32-bit integer cannot hold a value greater than 2^32-1. However, the value of this 32-bit integer and how it appears on your screen are two completely different things. The printed string "1000000000000" is not represented by a 32-bit integer in memory.

Literally displaying the number "1000000000000" as text requires 13 bytes of memory. Each individual byte can hold a value of up to 255. None of them can hold the entire numerical value, but interpreted individually as ASCII characters (for example, the character '0' is represented by decimal value 48, binary value 00110000), they can be strung together into a format that makes sense for you, a human.
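If you're curious, here is a small C sketch that prints each of those bytes along with the ASCII code it holds (nothing platform-specific, just an illustration):

#include <stdio.h>

int main(void) {
    const char *text = "1000000000000";   /* 13 characters of text, not one number */

    /* Print each byte and the ASCII code stored in it. */
    for (const char *p = text; *p != '\0'; ++p)
        printf("'%c' = %d\n", *p, *p);    /* e.g. '1' = 49, '0' = 48 */

    return 0;
}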


A related concept in programming is typecasting, which is how a computer will interpret a particular stream of 0s and 1s. As in the above example, it can be interpreted as a numerical value, a character, or even something else entirely. While a 32-bit integer cannot hold a value of 1000000000000, a 32-bit floating-point number can hold an approximation of it, using an entirely different interpretation.
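As a rough C sketch of that idea (the bits are copied with memcpy, since that is the portable way to reinterpret them, and a float is assumed to be 32 bits wide as usual): the same 32 bits read back as an integer mean something completely different, and the float really does hold 1000000000000, though only approximately.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* A 32-bit float can hold (an approximation of) 1000000000000... */
    float f = 1000000000000.0f;

    /* ...and the very same 32 bits mean something entirely different
       when interpreted as an unsigned integer. */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    printf("as float : %f\n", f);
    printf("as uint32: %u\n", (unsigned)bits);
    return 0;
}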

As for how computers can work with and process large numbers internally, there exist 64-bit integers (which can accommodate values of up to roughly 18 billion billion), floating-point values, as well as specialized libraries that can work with arbitrarily large numbers.
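For example, a short C sketch (assuming a compiler with the standard 64-bit integer types; libraries such as GMP take the same idea further and handle arbitrarily large numbers):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* A 64-bit integer holds 1000000000000 exactly, even on a 32-bit CPU;
       the compiler emits multi-word arithmetic behind the scenes. */
    uint64_t trillion = 1000000000000ULL;
    printf("%" PRIu64 "\n", trillion);
    return 0;
}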


First and foremost, 32-bit computers can store numbers up to 2³²-1 in a single machine word. A machine word is the amount of data the CPU can process in a natural way (i.e. operations on data of that size are implemented in hardware and are generally the fastest to perform). 32-bit CPUs use words consisting of 32 bits, thus they can store numbers from 0 to 2³²-1 in one word.
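In C you can see that limit directly (a trivial sketch):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* The largest value a single 32-bit machine word can hold: 2^32 - 1. */
    uint32_t max_word = UINT32_MAX;
    printf("%" PRIu32 "\n", max_word);   /* prints 4294967295 */
    return 0;
}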

Second, 1 trillion and 1000000000000 are two different things.

  • 1 trillion is an abstract concept of a number
  • 1000000000000 is text

By pressing 1 once and then 0 twelve times you're typing text. 1 inputs 1, 0 inputs 0. See? You're typing characters, and characters aren't numbers. Typewriters had no CPU or memory at all, yet they handled such "numbers" pretty well, because it's just text.

Proof that 1000000000000 isn't a number but text: it can mean 1 trillion (in decimal), 4096 (in binary) or 281474976710656 (in hexadecimal). It has even more meanings in different systems. The meaning of 1000000000000 is a number, and storing it is a different story (we'll get back to it in a moment).
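You can watch that happen in C (a sketch; strtoll simply parses the same text under whichever base you ask for):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *text = "1000000000000";

    /* The same text yields different numbers depending on how it's read. */
    printf("as decimal    : %lld\n", strtoll(text, NULL, 10));  /* 1000000000000   */
    printf("as binary     : %lld\n", strtoll(text, NULL, 2));   /* 4096            */
    printf("as hexadecimal: %lld\n", strtoll(text, NULL, 16));  /* 281474976710656 */
    return 0;
}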

To store the text (in programming it's called a string) 1000000000000 you need 14 bytes: one for each character plus a terminating NULL byte that basically means "the string ends here". That's 4 machine words. Three and a half would be enough, but as I said, operations on machine words are fastest. Let's assume ASCII is used for text storage, so in memory it will look like this (the ASCII codes for the characters 0 and 1 written out in binary, each word on a separate line):

00110001 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00000000 00000000 00000000

Four characters fit in one word; the rest is moved to the next word, and so on until everything (including the terminating NULL byte) fits.
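If you want to check the 14-byte figure, a one-liner in C (sizeof of a string literal counts the terminating NULL byte as well):

#include <stdio.h>

int main(void) {
    /* 13 digit characters plus the terminating NULL byte = 14 bytes. */
    printf("%zu\n", sizeof("1000000000000"));   /* prints 14 */
    return 0;
}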

Now, back to storing numbers. It works just like with overflowing text, but numbers are filled in from right to left. It may sound complicated, so here's an example. For the sake of simplicity, let's assume that:

  • our imaginary computer uses decimal instead of binary
  • one byte can hold numbers 0..9
  • one word consists of two bytes

Here's an empty 2-word memory:

0 0
0 0

Let's store the number 4:

0 4
0 0

Now let's add 9:

1 3
0 0

Notice that each operand fits in one byte, but the result doesn't. We have another byte ready to use, though. Now let's store 99:

9 9
0 0

Again, we have used the second byte to store the number. Let's add 1:

0 0
0 0

Whoops... That's called integer overflow and is a cause of many serious problems, sometimes very expensive ones.
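The same "whoops" with a real 32-bit word, as a minimal C sketch (unsigned arithmetic in C wraps around by definition, so this is safe to run):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t x = UINT32_MAX;     /* 4294967295, all 32 bits set */
    x = x + 1;                   /* the maximum value plus one wraps around */
    printf("%" PRIu32 "\n", x);  /* prints 0 */
    return 0;
}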

But if we expect that overflow will happen, we can do this:

0 0
9 9

And now add 1:

0 1
0 0

It becomes clearer if you remove byte-separating spaces and newlines:

0099    | +1
0100

We have predicted that an overflow may happen, so we may need additional memory. Handling numbers this way isn't as fast as with numbers that fit in single words, and it has to be implemented in software. Adding hardware support for two-32-bit-word numbers to a 32-bit CPU effectively makes it a 64-bit CPU (it can then operate on 64-bit numbers natively).
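Here is a minimal C sketch of that software approach (the struct and function names are made up for illustration): a number kept in two 32-bit words and added with an explicit carry, just like the decimal carry above.

#include <inttypes.h>
#include <stdio.h>

/* A 64-bit quantity stored as two 32-bit machine words. */
struct big { uint32_t low, high; };

static struct big big_add(struct big a, struct big b) {
    struct big r;
    r.low  = a.low + b.low;             /* may wrap around... */
    uint32_t carry = (r.low < a.low);   /* ...and if it did, carry 1 */
    r.high = a.high + b.high + carry;
    return r;
}

int main(void) {
    struct big a = { UINT32_MAX, 0 };   /* 00000000 FFFFFFFF */
    struct big b = { 1, 0 };            /* + 1 */
    struct big r = big_add(a, b);
    printf("high = %" PRIu32 ", low = %" PRIu32 "\n", r.high, r.low);  /* high = 1, low = 0 */
    return 0;
}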

Everything I have described above applies to binary memory with 8-bit bytes and 4-byte words too; it works pretty much the same way:

00000000 00000000 00000000 00000000 11111111 11111111 11111111 11111111    | +1
00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000000

Converting such numbers to the decimal system is tricky, though (it works pretty well with hexadecimal, however).
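A small C sketch of why hexadecimal is the easy case (the values are the ones from the example above): each 32-bit word maps to exactly 8 hex digits, so the two words can simply be printed side by side, whereas decimal output needs repeated division across both words.

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t high = 0x00000001u;   /* upper word of the result above */
    uint32_t low  = 0x00000000u;   /* lower word */

    /* Each word becomes exactly 8 hex digits, printed side by side. */
    printf("0x%08" PRIX32 "%08" PRIX32 "\n", high, low);   /* 0x0000000100000000 */
    return 0;
}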