Random variables defined on the same probability space with different distributions

Admittedly, a complete answer to your questions would require more measure-theoretic machinery than what follows. However, I will attempt to give you succinct responses that you might find helpful.

So, let the real-valued random variables $X, Y$ be defined on the same probability space $(\Omega, \Sigma, \mathbb P)$.

1) $X$ and $Y$ are measurable, so that, for instance, for any interval of real numbers $[a,b]$, we necessarily have $\left\{X\in[a,b]\right\}, \left\{Y\in[a,b]\right\} \in \Sigma$, while we need not have $$\{X\in[a,b]\} = \{Y\in[a,b]\}.$$

2) Because of 1) above, we need not have $$\mathbb P\left\{X\in[a,b]\right\} = \mathbb P\left\{Y\in[a,b]\right\}.$$

3) Note that, because we may define the probability measure $\mathbb P_X(B):=\mathbb P\{X \in B\}$ over Borel sets $B \in \mathcal B(\mathbb R)$, we can speak of $X$ being distributed according to $\mathbb P_X$. In so doing, we are thinking of $X$ in terms of the probability space $(\mathbb R, \mathcal B(\mathbb R), \mathbb P_X)$, not the probability space $(\Omega, \Sigma, \mathbb P)$. In your example, since $X\sim N(\mu, \sigma^2)$, we have an integral representation of $\mathbb P_X$ with respect to Lebesgue measure, so that $$ \mathbb P_X([a,b])=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}{\bf{1}}_{[a,b]}(x)e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\mathrm dx\,. $$

A similar development holds for the uniform random variable $Y$ (a numerical sketch illustrating points 1)–3) follows after point 4 below).

4) All of the foregoing is just one way of proceeding; there are alternatives. For instance, one may define $X, Y$ over the same measurable space $(\Omega, \Sigma)$, but different probability spaces, $(\Omega, \Sigma, \mathbb P_X)$ and $(\Omega, \Sigma, \mathbb P_Y)$, with different probability measures.
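To make points 1)–3) concrete, here is a minimal numerical sketch (the inverse-CDF construction is my own illustration, not part of your question), assuming NumPy and SciPy are available. Take $\Omega=[0,1]$ with $\mathbb P$ the uniform (Lebesgue) measure, and set $X(\omega)=\mu+\sigma\,\Phi^{-1}(\omega)$ and $Y(\omega)=\omega$, so that $X\sim N(\mu,\sigma^2)$ and $Y\sim\mathrm{Uniform}[0,1]$ live on the same probability space:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # parameters of X ~ N(mu, sigma^2)
a, b = -1.0, 0.5       # the interval [a, b]

# Draw sample points omega from Omega = [0, 1] under the uniform measure P.
omega = rng.uniform(0.0, 1.0, size=1_000_000)

# Two random variables defined on the SAME probability space:
X = mu + sigma * norm.ppf(omega)   # inverse-CDF construction: X ~ N(mu, sigma^2)
Y = omega                          # Y ~ Uniform[0, 1]

# The events {X in [a,b]} and {Y in [a,b]} are different subsets of Omega
# (point 1), so their probabilities differ (point 2).
in_X = (a <= X) & (X <= b)
in_Y = (a <= Y) & (Y <= b)

print("P{X in [a,b]} ~", in_X.mean(),
      " exact:", norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma))
print("P{Y in [a,b]} ~", in_Y.mean(),
      " exact:", max(0.0, min(b, 1.0) - max(a, 0.0)))
```

The empirical frequencies agree with the closed-form values of $\mathbb P_X([a,b])$ and $\mathbb P_Y([a,b])$ from point 3), even though both variables are functions of the very same $\omega$'s.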


First of all, we have two spaces: the probability space $(\Omega,\mathcal A,\mathbb P)$ and the measurable space $(\mathbb R,\mathcal B(\mathbb R))$, where $\mathcal B(\mathbb R)$ is the Borel $\sigma$-algebra of $\mathbb R$, i.e. the smallest $\sigma$-algebra that contains all open sets.

A random variable $X$ is a measurable function that maps $\Omega$ to $\mathbb R$. Measurable means that $X^{-1}(B)=\{\omega\in\Omega:X(\omega)\in B\}\in\mathcal A$ for each $B\in\mathcal B(\mathbb R)$. Roughly speaking, the randomness takes place in the probability space: we can compute the probability of an event $A\in\mathcal A$, and it is given by $\mathbb P(A)$.

However, we are interested in events $B\in\mathcal B(\mathbb R)$, and measurability is what enables us to evaluate their probabilities: since $X^{-1}(B)\in\mathcal A$, the quantity $\mathbb P_X(B):=\mathbb P\{\omega\in\Omega:X(\omega)\in B\}$ is well defined, and the random variable $X$ appears in the expression. (Writing $\mathbb P(B)$ here would be an abuse of notation, since $\mathbb P$ is defined on $\mathcal A$, not on $\mathcal B(\mathbb R)$.) If we take another random variable $Y$, the probability it assigns to the same set $B$ is $\mathbb P_Y(B)=\mathbb P\{\omega\in\Omega:Y(\omega)\in B\}$, and these two probabilities might be different. In general, they depend on two objects: the random variable and the probability measure $\mathbb P$.
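To see this machinery in a setting where everything can be enumerated, here is a small sketch on a toy space of my own choosing (two fair coin tosses; `preimage` and `prob` are just illustrative helper names):

```python
from fractions import Fraction

# A finite probability space: two fair coin tosses, uniform measure P.
Omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in Omega}

def X(w):
    return w.count("H")             # X = number of heads

def Y(w):
    return 1 if w[0] == "H" else 0  # Y = indicator that the first toss is heads

def preimage(Z, B):
    """The event {Z in B} = Z^{-1}(B), a subset of Omega."""
    return {w for w in Omega if Z(w) in B}

def prob(Z, B):
    """P{Z in B}: push P through the preimage of B under Z."""
    return sum(P[w] for w in preimage(Z, B))

print(preimage(X, {1}))            # {'HT', 'TH'}
print(preimage(Y, {1}))            # {'HH', 'HT'} -- a different event in Omega
print(prob(X, {2}), prob(Y, {2}))  # 1/4 vs 0: different probabilities
```

Both variables live on the same $(\Omega,\mathcal A,\mathbb P)$, yet the same set $B$ pulls back to different events in $\Omega$ and hence can receive different probabilities.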

The distribution of the random variable $X$ is the probability measure $\mathbb P_X$ defined on $\mathcal B(\mathbb R)$ by setting $\mathbb P_X(B)=\mathbb P\{\omega\in\Omega:X(\omega)\in B\}$, and this distribution might be the uniform distribution, the normal distribution, or any other probability measure on $\mathcal B(\mathbb R)$. So the fact that $X$ and $Y$ are defined on the same probability space but have different distributions is not a contradiction: the distribution depends on the random variable itself, so another random variable defined on the same probability space may well induce a different distribution.
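For a concrete instance (a standard textbook example, added here for illustration): take $\Omega=[0,1]$, $\mathcal A=\mathcal B([0,1])$ and $\mathbb P$ the Lebesgue measure. Then $X(\omega)=\omega$ has the uniform distribution on $[0,1]$, while $Y(\omega)=\mathbf 1_{[1/2,1]}(\omega)$ satisfies $$\mathbb P_Y(\{1\})=\mathbb P\{\omega:Y(\omega)=1\}=\mathbb P([1/2,1])=\tfrac12,$$ i.e. $Y$ is Bernoulli$(1/2)$: the same $(\Omega,\mathcal A,\mathbb P)$ carries two random variables with entirely different distributions.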

I hope this helps.