True random number set

By definition, to have randomness you need a set of possible outcomes and a probability measure that describes how likely each element of the set is. In this way, all random variables have some structure. The outcome of each individual experiment will be random, but if you repeat the experiment many times, the relative frequencies of the different outcomes will converge to that structure.

You can try to get rid of such structure, but you'll quickly run into problems. For example, even asking for each positive number to occur with equal probability, with no upper limit, is problematic: for any big number $M$, the length of the segment $[0, M]$ is finite while that of $[M, \infty)$ is infinite, so you're guaranteed to get numbers bigger than $M$, no matter how big $M$ is.
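To spell that reasoning out: if probabilities were proportional to length, then for every $M$
$$P(X \le M) \;=\; \frac{\operatorname{length}([0,M])}{\operatorname{length}([0,\infty))} \;=\; \frac{M}{\infty} \;=\; 0,$$
so $P(X > M) = 1$ for every $M$, and no such "uniform" distribution can exist.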

That said, there are many, varied distributions. Consider the Cauchy distribution in its simplest (standard) form, with density function $f(x)=\frac{1}{\pi(1 + x^2)}$.

When drawing samples from this distribution, the mode is at $0$, and if you draw lots and lots of samples, you'll see roughly twice as many samples near $0$ as near $1$. The histogram will follow a bell-like shape centered at $0$ (though with tails that decay much more slowly than a normal's).

However, the Cauchy distribution doesn't have a mean: the defining integral diverges. And if you draw $n$ samples and average them, you'll get a significantly different number each time - in fact, the average of $n$ independent Cauchy samples has exactly the same Cauchy distribution as a single sample.
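To see the divergence explicitly, even the integral of $|x|$ against the density blows up:
$$\int_{-\infty}^{\infty} \frac{|x|}{\pi(1 + x^2)}\,dx \;=\; \frac{2}{\pi}\int_{0}^{\infty} \frac{x}{1 + x^2}\,dx \;=\; \frac{1}{\pi}\Big[\ln(1 + x^2)\Big]_{0}^{\infty} \;=\; \infty.$$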

This partially answers your question: the histogram will converge to the distribution curve, but the sample mean after 100 samples can be very different from the mean after 1,000, or after 10,000.
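Here is a minimal simulation sketch of both halves of that statement (assuming NumPy is available; exact numbers vary run to run): the fraction of samples near $0$ stabilizes, while the running mean keeps jumping.

```
import numpy as np

rng = np.random.default_rng(0)

for n in [100, 1_000, 10_000, 100_000]:
    samples = rng.standard_cauchy(n)
    # The histogram stabilizes: the fraction of samples in (-0.5, 0.5)
    # approaches the true probability 2*arctan(0.5)/pi ≈ 0.295 ...
    frac_near_zero = np.mean(np.abs(samples) < 0.5)
    # ... but the sample mean keeps jumping around instead of settling down.
    print(f"n={n:>7}  P(|X|<0.5) ≈ {frac_near_zero:.3f}   sample mean = {samples.mean():+.2f}")
```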


Your interpretation of "random" doesn't line up with the meaning of "random" in mathematics. Rather, it sounds like you're using it more along the lines of "arbitrary," and are looking for sequences which have no meaningful "overall behavior" along the lines of the law of large numbers. This is indeed captured by a mathematical notion, namely genericity.

To make sense of genericity, we should first rethink randomness: rather than considering a "dynamic" picture, we can think of a sequence of (say) $0$s and $1$s as (the binary expansion of) a point in $[0,1]$. Properties of random sequences correspond to sets of full measure - e.g. the set of sequences satisfying the law of large numbers has full measure, or to put it another way the set of sequences not satisfying that law is null.
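For instance, the law-of-large-numbers statement in this picture is Borel's normal number theorem: writing $x = 0.b_1 b_2 b_3\ldots$ in binary,
$$\frac{b_1 + b_2 + \cdots + b_n}{n} \;\xrightarrow[n\to\infty]{}\; \frac{1}{2} \qquad \text{for Lebesgue-almost every } x \in [0,1].$$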

Actually, rather than $[0,1]$ we should be living in Cantor space - think about the dyadic rationals, which have two binary expansions. This isn't a huge issue though, since those reals are highly non-random. Also, for simplicity I'm privileging one particular distribution here - the fair-coin (Bernoulli) measure, which corresponds to Lebesgue measure on $[0,1]$ - but the general picture is the same: other distributions correspond to measures other than Lebesgue measure.

For genericity, we shift attention from measure to category (note that this is unrelated to category theory). Specifically, we replace "full measure" with "comeager" - generic behaviors correspond to comeager sets. For example, the set of sequences not satisfying the law of large numbers is comeager.

Since category is generally more mysterious than measure, let's prove that last sentence. Any finite binary sequence $s$ has an extension $t$ such that (say) at least $99\%$ of the bits of $t$ are $0$: just take $s$ and "tack on" a string of $0$s which is $99$ times longer than $s$. Similarly, any finite binary sequence $s$ has an extension $t$ such that at least $99\%$ of the bits of $t$ are $1$. Now for $n\in\mathbb{N}$ let $A_n$ be the set of infinite binary sequences having no prefix of length $>n$ which consists of at least $99\%$ $0$s, and let $B_n$ be the set of infinite binary sequences having no prefix of length $>n$ which consists of at least $99\%$ $1$s. By the observation above, all the $A_n$s and $B_n$s are nowhere dense (each is closed, and every finite sequence extends to one of length $>n$ which is at least $99\%$ $0$s - respectively $1$s - so each has dense complement), and consequently the set $X$ of sequences not lying in any of the $A_n$s or $B_n$s is comeager. But any sequence in $X$ violates the law of large numbers terribly: infinitely many of its prefixes are "mostly $0$s" and infinitely many of its prefixes are "mostly $1$s."
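Just to visualize how badly such a sequence oscillates, here is a tiny bookkeeping sketch (not part of the proof): it tracks the density of $1$s along the construction above, where each step tacks on a run of a single bit $99$ times as long as everything built so far.

```
# Track the density of 1s without materializing the (exponentially long) sequence.
length, ones = 1, 0          # start from the one-bit sequence "0"
for step in range(8):
    bit = step % 2           # alternate: tack on 1s, then 0s, then 1s, ...
    run = 99 * length        # the new run is 99 times as long as what we have so far
    ones += run * bit
    length += run
    print(f"after step {step}: length = {length:>17,}, fraction of 1s = {ones/length:.4f}")
```

The printed fractions swing back and forth between at least $0.99$ and at most $0.01$, so the prefix densities have no limit.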

Unfortunately, category is much less well-behaved than measure. Measure is a gradation: we have full measure and measure zero, but also lots of intermediate sizes. By contrast, nothing like that really exists for category. The whole theory of integration is ultimately centered on measure, rather than category, and there's no real category-based analogue. So the fact that genericity is given less focus than randomness in general isn't arbitrary.

(On the other hand, genericity makes sense in arbitrary topological spaces, whereas measure doesn't. So the above isn't really fair. Still, in the context of $\mathbb{R}$ and its relatives, measure-based ideas are generally nicer than category-based ones.)

Ending on a humorous note, recall Russell's description of philosophers' hell:

There is a peculiarly painful chamber inhabited solely by philosophers who have refuted Hume. These philosophers, though in Hell, have not learned wisdom. They continue to be governed by their animal propensity towards induction. But every time that they have made an induction, the next instance falsifies it. This, however, happens only during the first hundred years of their damnation. After that, they learn to expect that an induction will be falsified, and therefore it is not falsified until another century of logical torment has altered their expectation. Throughout all eternity surprise continues, but each time at a higher logical level.

Genericity is a lot like that: "overall patterns" will appear and persist ... and then suddenly reverse. A generic sequence might start looking "mostly $0$s," then "mostly $1$s," then randomly distributed, and so forth. By contrast, random sequences are at least "predictably unpredictable."


Let me mention a final coda. Intuitively, a sequence is "really random" if it doesn't lie in any "simply describable" measure-zero set, and is "really generic" if it doesn't lie in any "simply describable" meager set. (It's impossible to avoid all measure-zero, or meager, sets - we always have $x\in\{x\}$, and every singleton set is both measure-zero and meager - so the idea is to avoid all the ones which aren't "silly" somehow.) This idea is made precise in mathematical logic, specifically in computability theory and set theory; the details are too technical to go into here, but it's worth noting that it can in fact be developed well.


Regarding the histogram thing, you seem to be asking "how can something random converge to something deterministic?"

I think this is a great question.

The first thing you can do to understand this is to study the proof of the weak law of large numbers based on Chebyshev's inequality. One moral is that averaging causes lots of small (and independent) random perturbations to cancel each other out; mathematically, this looks like the variance of the average going to zero.
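Concretely, for i.i.d. samples $X_1,\dots,X_n$ with mean $\mu$ and variance $\sigma^2$, the average $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$ satisfies
$$\operatorname{Var}(\bar X_n) = \frac{\sigma^2}{n}, \qquad P\big(|\bar X_n - \mu| \ge \varepsilon\big) \le \frac{\sigma^2}{n\varepsilon^2} \xrightarrow[n\to\infty]{} 0.$$
(Note this is exactly the step that fails for the Cauchy distribution above, which has no finite mean or variance.)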

There are many other examples of this phenomenon -- the one most relevant to your question is the central limit theorem.
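In the same notation, the central limit theorem says that the fluctuations of the average, rescaled by $\sqrt{n}$, settle into a fixed bell curve:
$$\sqrt{n}\,\frac{\bar X_n - \mu}{\sigma} \;\xrightarrow{\ d\ }\; \mathcal{N}(0,1).$$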


Regarding whether the numbers from the experiment are 'truly random':

I suggest you look into the theory of pseudorandomness; computer scientists have worked hard on understanding how to run randomized algorithms with imperfectly random bits, and this has led to many insights about 'the nature' of randomness.
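As a toy illustration of working with imperfect randomness (my example, not from the article below): von Neumann's classic trick turns independent flips of a biased coin into unbiased bits by looking at pairs, outputting the first bit when the pair is '01' or '10', and discarding '00' and '11'.

```
import random

def biased_flips(n, p=0.8):
    """n independent flips of a coin that lands 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:            # '10' and '01' are equally likely, so keep one bit
            out.append(a)     # '10' -> 1, '01' -> 0
        # '00' and '11' carry the bias, so they are discarded
    return out

raw = biased_flips(100_000)
fair = von_neumann_extract(raw)
print(f"raw bias:       {sum(raw) / len(raw):.3f}")    # about 0.8
print(f"extracted bias: {sum(fair) / len(fair):.3f}")  # about 0.5
print(f"bits kept:      {len(fair)} of {len(raw)}")
```

This only handles bias in independent flips, of course; the general theory of randomness extractors, covered in the references below, goes much further.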

This gives a good high level overview: https://www.ias.edu/ideas/2009/wigderson-randomness-pseudorandomness

Let me quote from the above article:

"A computational view of randomness: To answer the repeatedly asked question above, we have to carefully study our ubiquitous random object—the coin toss. Is it random? A key insight of theoretical computer science is that the answer depends on who (or which application) uses it! To demonstrate this we will conduct a few (mental) experiments. Imagine that I hold in my hand a (fair) coin, and a second after I toss it high in the air, you, as you are watching me, are supposed to guess the outcome when it lands on the floor. What is the probability that you will guess correctly? 50-50 you say? I agree! Now consider a variant of the same experiment, in which the only difference is that you can use a laptop to help you. What is the probability that you will guess correctly now? I am certain you will say 50-50 again, and I will agree again. How can the laptop help? But what if your laptop is connected to a super computer, which is in turn connected to a battery of video recorders and other sensors around the room? What are your chances of guessing correctly now? Indeed, 100 percent. It would be trivial for this machinery to calculate in one second all the required information: speed, direction, and angular momentum of the coin, the distance from my hand to the floor, air humidity, etc., and provide the outcome to you with certainty.

The coin toss remained the same in all three experiments, but the observer changed. The uncertainty about the outcome depended on the observer. Randomness is in the eye of the beholder, or more precisely, in its computational capabilities. The same holds if we toss many coins: how uncertain the outcome is to a given observer/application depends on how they process it. Thus a phenomenon (be it natural or artificial) is deemed “random enough,” or pseudorandom, if the class of observers/applications we care about cannot distinguish it from random! This viewpoint, developed by Manuel Blum, Shafi Goldwasser, Silvio Micali, and Andrew Yao in the early 1980s, marks a significant departure from older views and has led to major breakthroughs in computer science of which the field of cryptography is only one. Another is a very good understanding of the power of randomness in probabilistic algorithms, like the “Monte-Carlo method.” Is randomness actually needed by them, or are there equally efficient deterministic procedures for solving the monomer-dimer problem and its many siblings? Surprisingly, we now have strong evidence for the latter, indicating the weakness of randomness in such algorithmic settings. A theorem by Russell Impagliazzo and Wigderson shows that, assuming any natural computational problem to be intractable (something held in wide belief and related to the P≠NP conjecture), randomness has no power to enhance algorithmic efficiency! Every probabilistic algorithm can be replaced by a deterministic one with similar efficiency. Key to the proof is the construction of pseudorandom generators that produce sequences indistinguishable from random ones by these algorithms."

A dense monograph on this topic, if you are interested in learning more of the details: https://people.seas.harvard.edu/~salil/pseudorandomness/