Two random variables from the same probability density function: how can they be different?

On the same $\Omega$, try $X$ uniform on $\{0,1\}$ and $Y=1-X$, then $\{X\ne Y\}=\Omega$.

Edit: Recall that in probabilistic jargon, a random variable is just a measurable function, here $X:\Omega\to\{0,1\}$ and $Y:\Omega\to\{0,1\}$, that is, for every $\omega$ in $\Omega$, $X(\omega)=0$ or $X(\omega)=1$, and $Y(\omega)=0$ or $Y(\omega)=1$. The notation $\{X\ne Y\}$ stands for $\{\omega\in\Omega\mid X(\omega)\ne Y(\omega)\}$. In the present case, $X(\omega)\ne Y(\omega)$ for every $\omega$ in $\Omega$, hence $\{X\ne Y\}=\Omega$.

Distributions, on the other hand, are probability measures on the target space $\{0,1\}$. Here the distribution $\mu$ of $X$ is uniform on $\{0,1\}$, that is, $\mu(\{0\})=\mu(\{1\})=\frac12$ since $P(X=0)=P(X=1)=\frac12$. Likewise, $P(Y=0)=P(Y=1)=\frac12$ hence $\mu$ is also the distribution of $Y$. Thus, $X$ and $Y$ can both be used to sample from $\mu$ although $X(\omega)=Y(\omega)$ happens for no $\omega$ in $\Omega$.
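If it helps to see this numerically, here is a minimal simulation sketch (not part of the original argument; the coin flip below plays the role of drawing $\omega$ from $\Omega$):

```python
import random

random.seed(0)

# Sample space: omega is the outcome of a fair coin, so X(omega) = omega is
# uniform on {0, 1}, and Y(omega) = 1 - omega has the same distribution.
def X(omega):
    return omega

def Y(omega):
    return 1 - omega

omegas = [random.randint(0, 1) for _ in range(100_000)]

# Same distribution: both relative frequencies of {. = 1} are close to 1/2 ...
print(sum(X(w) for w in omegas) / len(omegas))   # ~0.5
print(sum(Y(w) for w in omegas) / len(omegas))   # ~0.5

# ... yet X(omega) != Y(omega) for every single omega drawn.
print(all(X(w) != Y(w) for w in omegas))         # True
```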


If you are simply told that $X$ and $Y$ share a probability distribution $p(x)$, you don't know how $X$ and $Y$ are related. Both are measurable functions from the sample space to the observation space with the given distribution, but otherwise they could be equal, independent, or different but correlated. What is often the case is that $X$ and $Y$ are independent. In that case, you can construct new variables with the same distribution as old ones by enlarging the sample space. For instance, if $X:\Omega\rightarrow\mathbb{R}$ has a particular distribution, then $X_i:\Omega^N\rightarrow\mathbb{R}$, where $X_i(\omega_1,\omega_2,\ldots,\omega_N)\equiv X(\omega_i)$, are $N$ i.i.d. random variables with the same distribution as $X$ (provided the sigma-algebra and product measure on $\Omega^N$ are constructed appropriately).
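A rough sketch of that product-space construction (assumptions: the base space is $(0,1)$ with the uniform measure, and $X(\omega)=-\ln(1-\omega)$, an $Exp(1)$ variable, is just an illustrative choice):

```python
import math
import random

random.seed(1)

# Base random variable X : (0,1) -> R; here X(omega) = -ln(1 - omega) is Exp(1)
# when omega is uniform on (0,1).
def X(omega):
    return -math.log(1.0 - omega)

# X_i on the product space Omega^N: apply the same X to the i-th coordinate.
def X_i(omegas, i):
    return X(omegas[i])

N = 5
draws = []
for _ in range(100_000):
    point = tuple(random.random() for _ in range(N))   # one point of Omega^N
    draws.append([X_i(point, i) for i in range(N)])

# Each coordinate has roughly the same mean as X (1 for Exp(1)), and the
# coordinates are independent because the omega_i are drawn independently.
for i in range(N):
    print(sum(d[i] for d in draws) / len(draws))       # each ~1.0
```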


TL;DR An analogy is that $x^2+5$ and $x^2+4$ have the same derivative $2x$ but are not the same function.


Elementary probability:

They don't teach this in elementary probability, but random variables have an explicit representation known as the Skorokhod representation.
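(For reference, one common explicit form, assuming the sample space is $\Omega=(0,1)$ equipped with the uniform/Lebesgue measure, is the generalized inverse of the CDF:

$$X(\omega) := F_X^{-1}(\omega) := \inf\{x \in \mathbb{R} : F_X(x) \ge \omega\},$$

and one can check that this $X$ indeed has CDF $F_X$.)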

Basically, we never really know explicit formulas for a lot of the $X$'s; we know the $X$'s mainly through the $F_X(x)$'s. It's kinda like talking about the functions $f(x)=x^2+c$ through their common derivative $f'(x)=2x$: when is $f$ increasing? When $f' > 0$. But $f$ is not unique given $f'(x)=2x$: we can see that through integration, or just by constructing explicit examples such as $f(x)=x^2+5$ and $g(x)=x^2+4$.

How do we do something similar here in probability?

For example, consider $X \sim Be(p)$ where $P(X=0):=p$ and $P(X=1):=1-p$ (usually, textbooks use $p$ for $P(X=1)$).

If both of the following $X_i$'s satisfy $X_i \sim Be(p)$, then we've given explicit Bernoulli random variables that are not the same function, i.e. $X \sim Be(p)$ doesn't have a unique Skorokhod representation.

$$X_1(\omega) := 1_{A_1}(\omega), \qquad A_1 := (0,\,1-p)$$

$$X_2(\omega) := 1_{A_2}(\omega), \qquad A_2 := (p,\,1)$$

If $\omega=\tfrac{1}{2}\min(p,1-p)$ (with $0<p<1$), then $\omega\in A_1$ but $\omega\notin A_2$, so $X_1(\omega)=1$ while $X_2(\omega)=0$.
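(A two-line check of this, with the hypothetical value $p=0.2$; any $0<p<1$ works:)

```python
p = 0.2                          # any 0 < p < 1
omega = min(p, 1 - p) / 2        # the point used above

# X1(omega) = 1_{(0, 1-p)}(omega) and X2(omega) = 1_{(p, 1)}(omega)
print(int(0 < omega < 1 - p))    # 1
print(int(p < omega < 1))        # 0
```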

Let us try to compute the CDF of $X_i$:

$P(X_i(\omega) \le x)$ is 0 for $x<0$ and 1 for $x \ge 1$.

As for $0 \le x < 1$, we have

$$P(X_i(\omega) \le x) = P(X_i(\omega) = 0) = P(1_{A_i}(\omega) = 0) = P(\omega \notin A_i) = 1 - P(\omega \in A_i)$$

We have our result (namely $P(X_i \le x) = 1-(1-p) = p$ on $[0,1)$, matching the $Be(p)$ CDF) if $P(\omega \in A_1) = P(\omega \in A_2) = 1-p$. Is that the case?

Okay, so here we need to make some kind of assumption to say not only that the interval $(p,1)$ is as probable as $(0,1-p)$, but also that the probability of each interval is $1-p$. Clearly the intervals have the same length, but does that mean they have the same probability? And if they do, is it equal to $1-p$? That depends on how we define probabilities here. One such assumption is:

A uniformly distributed random variable $U$ on $(0,1)$ has Skorokhod representation $U(\omega) = \omega$, i.e. $\omega$ itself is distributed as $Unif(0,1)$.

Hopefully this isn't circular, otherwise this half of the answer is nonsense.

Then $$P(\omega \in A_1) = \frac{(1-p)-(0)}{1-0} \qquad\text{and}\qquad P(\omega \in A_2) = \frac{(1)-(p)}{1-0},$$ so in both cases

$$P(\omega \in A_i) = \frac{1-p}{1-0} = 1-p$$
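As a sanity check, here is a small Monte Carlo sketch of that conclusion (it assumes, as above, that $\omega$ itself is uniform on $(0,1)$, and redefines the two indicators so the snippet runs on its own; $p=0.2$ is an arbitrary choice):

```python
import random

random.seed(2)
p = 0.2   # arbitrary 0 < p < 1

def X1(omega):
    return 1 if 0 < omega < 1 - p else 0   # indicator of A1 = (0, 1-p)

def X2(omega):
    return 1 if p < omega < 1 else 0       # indicator of A2 = (p, 1)

samples = [random.random() for _ in range(200_000)]   # omega ~ Unif(0,1)

# Both empirical frequencies of {X_i = 1} should be close to 1 - p = 0.8,
# matching P(omega in A_i) = 1 - p computed above.
print(sum(X1(w) for w in samples) / len(samples))
print(sum(X2(w) for w in samples) / len(samples))
```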


Advanced probability:

It can be shown that the identity map $$Y(\omega) = \omega$$ is distributed as $Unif(0,1)$ when $\omega$ lives in the probability space $((0,1),\mathscr B(0,1),\mu)$, where $\mu$ is Lebesgue measure.

Hence,

$$P(\omega \in A_i) = \mu(A_i) = l(A_i) = 1-p$$

where $l$ is length.