Is it possible to represent every huge number in abbreviated form?

No. The problem is very simple: there are more huge numbers than abbreviated forms. Suppose I allow you to use the numerals 0-9, the English alphabet, and spaces to describe whatever numbers you want; that's still only $37$ characters. There are at most $37^{140}$ expressions you can write down with these characters using $140$ characters or fewer (the length of a tweet!), since any shorter expression can be padded with trailing spaces to exactly $140$ characters. So there are at most $37^{140}$ numbers that you could conceivably tweet to someone else. In particular, since $\log_{10} 37^{140} \approx 219.5$, there is at least one number with $220$ digits which is impossible to tweet to someone else without using something other than numerals and English.
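To make the counting concrete, here is a minimal Python sketch (the function name `min_untweetable_digits` is my own) that finds the smallest digit count $d$ for which the $d$-digit numbers outnumber all possible messages:

```python
def min_untweetable_digits(alphabet_size: int, max_length: int) -> int:
    """Smallest d such that the numbers with exactly d digits outnumber
    every possible message of at most max_length characters, so by the
    pigeonhole principle some d-digit number has no message naming it."""
    messages = alphabet_size ** max_length  # upper bound on distinct messages
    d = 1
    while 9 * 10 ** (d - 1) <= messages:    # exactly 9 * 10^(d-1) numbers have d digits
        d += 1
    return d

print(min_untweetable_digits(37, 140))  # 220 -- matches the count above
```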

All right, I'll be even more generous: I'll let you use all $128$ ASCII characters. That means there are at most $128^{140}$ possible tweets, some of which mathematically describe some kind of number, and since $\log_{10} 128^{140} \approx 295.0$, there is at least one number with $296$ digits which is impossible to tweet to someone else (again, assuming that Twitter uses ASCII). I could give you the entire Unicode alphabet and you still won't be able to tweet, for example, sufficiently large primes of the kind that public-key cryptosystems use.
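The same sketch, reused for the larger alphabets (the Unicode figure of $1{,}114{,}112$ is the full code-point space, which generously overcounts what a tweet can actually contain):

```python
print(min_untweetable_digits(128, 140))        # 296 -- full ASCII
print(min_untweetable_digits(1_114_112, 140))  # 847 -- entire Unicode code space
```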

These kinds of questions are studied in computer science and information theory; you will find the term "Kolmogorov complexity" tossed around, as well as the term "entropy".


It follows from the pigeonhole principle that any lossless compression algorithm that decreases the size of at least one number must increase the size of another; otherwise the compression wouldn't be invertible. For deeper results, consult any textbook on algorithmic information theory (Kolmogorov complexity), e.g. Li and Vitanyi, *An Introduction to Kolmogorov Complexity and Its Applications*.
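A toy illustration of the counting behind this, in Python (a sketch of the argument, not a proof):

```python
def strings_shorter_than(n: int, alphabet: int = 2) -> int:
    """Count the strings of length 0 through n-1 over the given alphabet."""
    return sum(alphabet ** k for k in range(n))

n = 20
assert strings_shorter_than(n) == 2 ** n - 1  # geometric series
# There are 2^n inputs of length n but only 2^n - 1 shorter outputs, so any
# scheme that shortens every n-bit string must map two inputs to one output,
# and such a collision cannot be decompressed unambiguously.
```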


As in other answers, at the very least a pigeon-hole argument shows that "most" numbers cannot be "described" in fewer characters than their decimal (or whatever base you pick) expansion...

To my mind, the relevant developed body of ideas is "Solomonoff-Chaitin-Kolmogorov" complexity, which is about descriptional (or program-length) complexity, rather than run-time "complexity".
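True Kolmogorov complexity is uncomputable, but one can play with a toy version by fixing a tiny description language and brute-forcing the shortest description; the language below (Python arithmetic expressions over twelve characters) is an arbitrary choice for illustration:

```python
from itertools import product

ALPHABET = "0123456789+*"  # toy description language: Python arithmetic

def shortest_description(target, max_len=6):
    """Brute-force the shortest ALPHABET-expression evaluating to target,
    a toy stand-in for descriptional (program-length) complexity."""
    for length in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=length):
            expr = "".join(chars)
            try:
                if eval(expr) == target:  # Python plays the toy language's interpreter
                    return expr
            except SyntaxError:
                continue  # most character strings are not valid descriptions
    return None  # target needs more than max_len characters in this language

print(shortest_description(2 ** 32))  # '2**32': 5 characters describe a 10-digit number
```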

This does remind one of the "smallest boring number" pseudo-paradox, which argues that the first non-interesting number is thereby interesting, by virtue of being the first...

The bottom line is that "size" of numbers is not reliably comparable to "descriptional complexity", in general, although most large-ish numbers are also descriptionally complex, by pigeon-hole.
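That pigeon-hole count can be made quantitative. Over the binary alphabet there are

$$\sum_{j=0}^{n-k-1} 2^j \;=\; 2^{n-k}-1 \;<\; 2^{-k}\cdot 2^n$$

descriptions shorter than $n-k$ bits, so at most a $2^{-k}$ fraction of the $2^n$ strings of $n$ bits can be compressed by $k$ or more bits; for instance, fewer than one string in a thousand can be shortened by even $10$ bits.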

There is a book by Li and Vitanyi which is not only authoritative, but fascinatingly readable...