Picking two random real numbers between 0 and 1, why isn't the probability that the first is greater than the second exactly 50%?

Yes, your answer is fundamentally wrong. Let me point out that it is not even right in the finite case. In particular, you are relying on the following false axiom:

If two sets of outcomes are equally large, they are equally probable.

However, this is wrong even if we have just two events. For a somewhat real-life example, consider a random variable $X$ which is $1$ if I get married exactly a year from today and which is $0$ otherwise. Now, clearly the sets $\{1\}$ and $\{0\}$ are equally large, each having one element. However, $0$ is far more likely than $1$, even though both are possible outcomes.

The point here is that probability is not defined in terms of cardinality. It is, in fact, a separate definition. The mathematical definition of probability goes something like this:

To discuss probability, we start with a set of possible outcomes. Then, we give a function $\mu$ which takes in a subset of the outcomes and tells us how likely they are.

One puts various conditions on $\mu$ to make sure it makes sense, but nowhere do we link it to cardinality. For instance, in the example above with outcomes $0$ and $1$ which are not equally likely, one might have $\mu$ defined something like: $$\mu(\{\})=0$$ $$\mu(\{0\})=\frac{9999}{10000}$$ $$\mu(\{1\})=\frac{1}{10000}$$ $$\mu(\{0,1\})=1$$ which has nothing to do with the proportion of the set of outcomes, which would be represented by the function $\mu'(S)=\frac{|S|}{2}$.
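To make this concrete, here is a minimal Python sketch of the example above; the names `weights`, `mu`, and `mu_prime` are illustrative, not standard. It shows that $\mu$ is an assignment in its own right, while the cardinality-based $\mu'$ is a different function entirely:

```python
# A probability measure on the outcome set {0, 1} is just an assignment
# of weights to subsets; it need not have anything to do with cardinality.
from itertools import chain, combinations

outcomes = [0, 1]
weights = {0: 9999 / 10000, 1: 1 / 10000}  # the measure mu from the text

def mu(subset):
    """Probability of a set of outcomes under the chosen measure."""
    return sum(weights[o] for o in subset)

def mu_prime(subset):
    """The cardinality-based function |S| / 2, which is NOT the measure."""
    return len(subset) / len(outcomes)

for s in chain.from_iterable(combinations(outcomes, r) for r in range(3)):
    print(set(s) or "{}", " mu =", mu(s), " mu' =", mu_prime(s))
```

Running this prints $\mu(\{0\})=0.9999$ against $\mu'(\{0\})=0.5$: two perfectly valid functions on the same subsets, only one of which is the probability.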

In general, your discussion of cardinality is correct, but it is irrelevant. Worse, the conclusions you draw from it are inconsistent. The sets $(0,1)$, $(0,\frac{1}{2}]$, and $(\frac{1}{2},1)$ are pairwise equally large, so your reasoning says they are equally probable. Since the number was defined to lie in $(0,1)$, that common probability must be $1$, giving $$\mu\left(\left(0,\tfrac{1}{2}\right]\right)=\mu\left(\left(\tfrac{1}{2},1\right)\right)=1.$$ But these two intervals are disjoint and their union is $(0,1)$, so any sensible $\mu$ must satisfy $$\mu\left(\left(0,\tfrac{1}{2}\right]\right)+\mu\left(\left(\tfrac{1}{2},1\right)\right)=\mu((0,1))=1,$$ which is impossible if each term equals $1$. We would be certain that the result lands in each of two disjoint intervals; this never happens, yet your method predicts that it always happens.

On a somewhat different note, but related in the big picture: you talk about "uncountably infinite sets" having the property that every non-trivial interval inside them is also uncountable. This is true of $\mathbb{R}$, but not of all uncountable sets. For instance, $S=(-\infty,-1]\cup \{0\} \cup [1,\infty)$ is uncountable, yet $S\cap(-1,1)=\{0\}$, which is certainly not uncountably infinite. Worse, not all uncountable sets come with an intrinsic notion of ordering: how, for instance, do you order the set of all subsets of the natural numbers? The problem is not that there is no answer, but that there are many conflicting answers. For example, comparing sets by size puts $\{1\}$ before $\{2,3\}$, while comparing their indicator sequences lexicographically puts $\{2,3\}$ before $\{1\}$.

I think, maybe, the big thing to think about here is that bare sets really don't have a lot of structure. Mathematicians add more structure to sets, such as probability measures $\mu$ or orders, and these fundamentally change their nature. Although bare infinite sets have counterintuitive properties, like containing proper subsets as large as themselves, those properties don't necessarily carry over once more structure is added.


The OP's answer is incorrect. The numbers are not chosen based on cardinality, but based on measure. It is not possible to define a probability distribution using cardinality on an infinite set; however, it is possible using measure.

Although the problem doesn't specify, if we assume the uniform distribution on $[0,1]$, then for $x=0.03$, $y$ will be greater than $x$ 97% of the time. Of course, if a different probability distribution is used to select $x$ and $y$, then a different answer will arise. It turns out that it is possible to win more than half the time even *without knowing* the distribution used, an amazing result; a strategy of this kind is sketched below, and the game in the next answer explores the same idea.
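As a sketch of both claims (the values $x=0.03$ and the pair $0.40$, $0.41$ are arbitrary illustrations, and the strategy shown is the random-threshold one usually associated with that result, often attributed to Thomas Cover), a quick Python simulation:

```python
import random

trials = 10**6

# Claim 1: under the uniform distribution on [0, 1], with x = 0.03,
# y exceeds x about 97% of the time.
x = 0.03
wins = sum(random.random() > x for _ in range(trials))
print(wins / trials)  # ~0.97

# Claim 2 (sketch): a distribution-free strategy. Draw your own random
# threshold t; declare the revealed number the larger one iff it exceeds t.
# For ANY pair of distinct numbers in [0, 1] this wins with probability
# strictly above 1/2, since t falls between the two with positive probability.
def play(a, b):
    shown = random.choice((a, b))  # a sheet is revealed at random
    t = random.random()            # private random threshold
    guessed_larger = shown > t
    return guessed_larger == (shown == max(a, b))

a, b = 0.40, 0.41                  # a hypothetical adversarial pair
wins = sum(play(a, b) for _ in range(trials))
print(wins / trials)  # ~0.505, slightly but reliably above 1/2
```

For numbers in $[0,1]$ the threshold strategy wins with probability $\frac{1}{2}+\frac{|a-b|}{2}$, which is why an adversary can make the edge small but can never remove it.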


Two distinct real numbers between 0 and 1 are written on two sheets of paper.

In the beginning, nothing is known about the choosing-and-writing process other than the range of possible values.

You have to select one of the sheets at random and declare whether the number you see is the bigger or the smaller of the two.

But you can see the number before making your statement.

How can you expect to be correct more than half the time you play this game?

You can observe the way the game is played, try to model the algorithm used to choose the numbers, and use that model when making your decision.

  1. If the algorithm seems to be an uncorrelated uniform random distribution and the number seen is less than 0.5, then declare it to be the smaller of the two. This is easy to beat (see the simulation sketch after this list).
  2. If the algorithm seems to be picking one number at random, writing it out to the Nth decimal place where N >> 1, and then writing a second number that differs by 1 in the last place, then it is largely irrelevant what you declare. This is practically impossible to beat.
  3. If the algorithm is to actively confuse you about what the algorithm is, things might get really interesting.
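A rough Python simulation of cases 1 and 2 above; the adversary models, the fixed 0.5 threshold, and the $10^{-9}$ gap are simplifying assumptions for illustration:

```python
import random

TRIALS = 10**6

def guess_correct(a, b, threshold=0.5):
    """Reveal one sheet at random; declare it the smaller number when it is
    below the threshold, the larger otherwise. True if the call was right."""
    shown = random.choice((a, b))
    declared_larger = shown >= threshold
    return declared_larger == (shown == max(a, b))

# Case 1: uncorrelated uniform pairs. The 0.5-threshold rule wins ~75%.
wins = sum(guess_correct(random.random(), random.random())
           for _ in range(TRIALS))
print("case 1:", wins / TRIALS)  # ~0.75

# Case 2: the adversary picks x and sets y = x + 1e-9. The pair almost
# never straddles the threshold, so the win rate collapses to ~50%.
def adjacent_pair():
    x = random.random() * (1 - 1e-9)
    return x, x + 1e-9

wins = sum(guess_correct(*adjacent_pair()) for _ in range(TRIALS))
print("case 2:", wins / TRIALS)  # ~0.5
```

Case 2 is "practically impossible to beat" in the sense that even the random-threshold strategy from the previous answer wins only about $\frac{1}{2}+\frac{10^{-9}}{2}$ of the time.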

The original question is not about math so much as it is about game theory. In this game, either side can choose to play to a tie and succeed at that. Otherwise, it's a contest of skill (though it seems there is no winning strategy for side #1 other than persuading side #2 to accept a payoff structure other than 2-to-1).