Why not define random variables as equivalence classes?

One of the first classes of examples that comes to mind where this matters concerns almost sure properties of the realizations of random processes indexed by uncountable sets, say the almost sure Hölder continuity of the paths of Brownian motion $(B_t)$. If one is allowed to modify each random variable $B_t$ on a null set, the resulting paths $t\mapsto B_t(\omega)$ may become ugly for every $\omega$ in an event of positive probability.
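
To make "ugly" concrete, here is one standard construction (the auxiliary random time $T$ is introduced only for this illustration): let $T$ be uniform on $[0,1]$ and independent of $(B_t)$, and modify each $B_t$ on the null set $\{T=t\}$ by setting $$ \tilde B_t = B_t + \mathbf 1_{\{T = t\}}, \qquad t\in[0,1]. $$ For every fixed $t$ one has $P(T=t)=0$, hence $\tilde B_t = B_t$ almost surely; yet on the almost sure event where $t\mapsto B_t(\omega)$ is continuous, the modified path $t\mapsto \tilde B_t(\omega)$ has a jump of size $1$ at $T(\omega)$, so it is not even continuous, let alone Hölder continuous.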

Edit: Regarding "ugly" above, user @tomasz mentioned a useful point in a comment below, which I now reproduce: if one is allowed to modify each random variable on a null set, the supremum of an arbitrary (uncountable) family of measurable functions need not be measurable, not even if the functions are almost everywhere zero (say, indicators of points).
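
A concrete instance of this point (the set $A$ and the functions $f_a$ below are introduced only for this illustration): on $([0,1],\mathcal{B}([0,1]),\lambda)$, pick a non-measurable set $A\subset[0,1]$ (say a Vitali set) and, for each $a\in A$, let $f_a=\mathbf 1_{\{a\}}$. Every $f_a$ is measurable and equals $0$ almost everywhere, yet $$ \sup_{a\in A} f_a = \mathbf 1_A, $$ which is not measurable, whereas the supremum of the unmodified representatives (all equal to $0$) is the measurable function $0$.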


Although every finite moment of a variate $X$ equals the corresponding moment of any "equivalent" variate $X^\star$, one might consider the range of a distribution to be an interesting property. If you do consider the range interesting, then compare the following two variates, both derived from an underlying uniform random variable $U$ on $[0,1]$:
$$ X = \left\{ \begin{array}{cl} U & \mbox{if } U \mbox{ is irrational,}\\ -U & \mbox{if } U \mbox{ is rational,} \end{array}\right. \qquad X^\star = U. $$
The variate $X$ is almost surely equal to $X^\star$, but the range of $X$ is $\{-q : q\in\mathbb{Q}\cap[0,1]\}\cup([0,1]\setminus\mathbb{Q})$, which contains negative values, whilst the range of $X^\star$ is $[0,1]$.
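
The almost sure equality holds because $\mathbb{Q}\cap[0,1]$ is countable and $U$ has a continuous distribution, so
$$ P(X \neq X^\star) \le P(U \in \mathbb{Q}) = \sum_{q\in\mathbb{Q}\cap[0,1]} P(U = q) = 0. $$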