Does every operator from a Hilbert space to $L^0$ factor through a canonical one?

Yes, it is true that every such operator factors through a canonical map.

Theorem: Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $A\colon H\to L^0(\mathbb{P})$ be a continuous linear operator from a Hilbert space $H$. Then, there exists a probability measure $\mathbb{Q}$ equivalent to $\mathbb{P}$ such that $A$ factors as $A=B\circ C$ for a continuous linear operator $C\colon H\to L^2(\mathbb{Q})$, where $B\colon L^2(\mathbb{Q})\to L^0(\mathbb{Q})=L^0(\mathbb{P})$ is the natural inclusion.
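
Concretely, the factorization just says that, after an equivalent change of measure, $A$ takes values in $L^2(\mathbb{Q})$ and is bounded into that space: with $Cv=Av$, $$ H\xrightarrow{\;C\;}L^2(\mathbb{Q})\xrightarrow{\;B\;}L^0(\mathbb{P}),\qquad\sup_{\Vert v\Vert\le1}\mathbb{E}\_{\mathbb{Q}}\left[(Av)^2\right] < \infty, $$ which is exactly what is established in the proof below.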

From the comments, it seems like Bill Johnson has a reference for this, or a reference to results from which it follows. However, I'll write out my own proof too, based on the following two facts.

Theorem A: There exist universal constants $K,\delta > 0$ such that, for any $a\in\ell^2$ and any IID sequence of Rademacher random variables $\epsilon_0,\epsilon_1,\ldots$, we have $$ \mathbb{P}\left(\left(\sum_n\epsilon_na_n\right)^2\ge K\sum_na_n^2\right)\ge\delta. $$
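
For example, one way to obtain such constants (not necessarily optimal ones) is via the Paley-Zygmund inequality: writing $S=\sum_n\epsilon_na_n$, independence of the $\epsilon_n$ gives $\mathbb{E}[S^2]=\sum_na_n^2$ and $\mathbb{E}[S^4]=\sum_na_n^4+3\sum_{m\not=n}a_m^2a_n^2\le3\left(\sum_na_n^2\right)^2$, so that, for any $0 < \theta < 1$, $$ \mathbb{P}\left(S^2\ge\theta\sum_na_n^2\right)\ge(1-\theta)^2\frac{\mathbb{E}[S^2]^2}{\mathbb{E}[S^4]}\ge\frac{(1-\theta)^2}{3}. $$ In particular, $K=1/2$ and $\delta=1/12$ work.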

Theorem B: Let $\mathcal{U}$ be a convex subset of $L^1(\mathbb{P})$ which is bounded in probability. Then, there exists an $X\in L^\infty(\mathbb{P})$ such that $X > 0$ almost surely, and $$ \left\lbrace \mathbb{E}[XU]\colon U\in\mathcal{U}\right\rbrace $$ is bounded.
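
Here, and in what follows, "bounded in probability" means bounded as a subset of the topological vector space $L^0(\mathbb{P})$, which, for a set $\mathcal{B}\subseteq L^0(\mathbb{P})$, amounts to $$ \sup_{Y\in\mathcal{B}}\mathbb{P}\left(|Y|\ge y\right)\to0\quad{\rm as}\ y\to\infty. $$ This is the form in which it will be verified in the proof of the lemma below.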

Theorem A is a kind of Khintchine inequality, which I previously discussed in my answer to this question and in this question.

Theorem B is a consequence of the Hahn-Banach theorem and the fact that the dual of $L^1(\mathbb{P})$ is $L^\infty(\mathbb{P})$. By setting $\mathbb{Q}=X\cdot\mathbb{P}$ (normalizing $X$ so that $\mathbb{E}[X]=1$), it can be seen that the conclusion of Theorem B is equivalent to the existence of a probability measure $\mathbb{Q}\sim\mathbb{P}$ for which $\lbrace\mathbb{E}\_{\mathbb{Q}}[U]\colon U\in\mathcal{U}\rbrace$ is bounded. This does not imply that $\mathcal{U}$ is a bounded subset of $L^1(\mathbb{Q})$, although that implication does hold when $\mathcal{U}$ is a collection of nonnegative random variables. I'm familiar with this result because it is used in a common proof of the Bichteler-Dellacherie theorem classifying semimartingales, and I'll refer to Protter, Stochastic Integration and Differential Equations, for the proof of Theorem B (Section III.9 Lemma 3 in the second edition, although the numbering seems to have changed in Version 2.1).
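
To spell out the direction of this equivalence that is used below: if $X\in L^\infty(\mathbb{P})$ is as in Theorem B with $\mathbb{E}[X]=1$, then $\mathbb{Q}=X\cdot\mathbb{P}$ is a probability measure, it is equivalent to $\mathbb{P}$ since $X > 0$ almost surely, and $$ \mathbb{E}\_{\mathbb{Q}}[U]=\mathbb{E}[XU]\quad{\rm for\ all\ }U\in L^1(\mathbb{P}), $$ so that $\lbrace\mathbb{E}\_{\mathbb{Q}}[U]\colon U\in\mathcal{U}\rbrace$ is bounded whenever $\lbrace\mathbb{E}[XU]\colon U\in\mathcal{U}\rbrace$ is.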

Theorem A can be used to prove the following.

Lemma: Let $A\colon H\to L^0(\mathbb{P})$ be a continuous linear operator from a Hilbert space $H$ and $B_1\subseteq H$ be the closed unit ball. Then, ${\rm conv}(A(B_1)^2)$ is bounded in probability, where $A(B_1)^2$ denotes $\lbrace(Av)^2\colon v\in B_1\rbrace$.

Here, $\rm conv$ refers to the convex hull, and it can be seen that $$ {\rm conv}(A(B_1)^2)=\left\lbrace\sum_n(Av_n)^2\colon\sum_n\Vert v_n\Vert^2\le1\right\rbrace, $$ where the $v_n$ range over the eventually zero sequences in $H$.
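
Indeed, a convex combination $\sum_n\lambda_n(Au_n)^2$ with $u_n\in B_1$, $\lambda_n\ge0$ and $\sum_n\lambda_n=1$ can be rewritten by setting $v_n=\sqrt{\lambda_n}u_n$, so that $$ \sum_n\lambda_n(Au_n)^2=\sum_n(Av_n)^2,\qquad\sum_n\Vert v_n\Vert^2=\sum_n\lambda_n\Vert u_n\Vert^2\le1. $$ Conversely, any sum $\sum_n(Av_n)^2$ with $\sum_n\Vert v_n\Vert^2\le1$ is a convex combination of elements of $A(B_1)^2$ together with $0=(A0)^2$, which also lies in $A(B_1)^2$.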

Proof: Note that $A(B_1)^2$ is automatically bounded in probability by continuity of $A$. This does not, in general, imply that the convex hull is bounded in probability, because $L^0(\mathbb{P})$ need not be locally convex. So, Theorem A will be needed.

Let $\epsilon_0,\epsilon_1,\ldots$ be an IID sequence of Rademacher random variables defined on some probability space $(S,\mathcal{S},\mu)$, and let $K,\delta$ be as in Theorem A. Then, for any $L > 0$ and eventually zero sequence $v_n\in H$ with $\sum_n\Vert v_n\Vert^2\le1$, $$ \begin{align} &\int\mathbb{P}\left(\left(\sum_n\epsilon_nAv_n\right)^2\ge L\right)\,d\mu\cr &\ge\int\mathbb{P}\left(\left(\sum_n\epsilon_nAv_n\right)^2\ge K\sum_n(Av_n)^2\ge L\right)\,d\mu\cr &=\mathbb{E}\left[\mu\left(\left(\sum_n\epsilon_nAv_n\right)^2\ge K\sum_n(Av_n)^2\right)1_{\lbrace K\sum_n (Av_n)^2\ge L\rbrace} \right]\cr &\ge\delta\mathbb{P}\left( K\sum_n (Av_n)^2\ge L\right), \end{align} $$ where the equality is Fubini's theorem and the final inequality is Theorem A applied with $a_n=Av_n(\omega)$ for each fixed $\omega\in\Omega$. We also have $$ \int\biggl\lVert\sum_n\epsilon_nv_n\biggr\rVert^2\,d\mu=\sum_n\lVert v_n\rVert^2\le1. $$

The first inequality above implies that $$ \begin{align} \mathbb{P}\left(\left(\sum_n\epsilon_nAv_n\right)^2\ge L\right)\ge\frac\delta2\mathbb{P}\left( K\sum_n (Av_n)^2\ge L\right)&&{\rm(1)} \end{align} $$ for $(\epsilon_n)$ in a subset of $S$ with $\mu$-probability at least $\frac\delta2\mathbb{P}\left( K\sum_n (Av_n)^2\ge L\right)$. For any $M > 0$, the second inequality together with Markov's inequality implies that $$ \begin{align} \biggl\lVert\sum_n\epsilon_nv_n\biggr\rVert^2\le M&&{\rm(2)} \end{align} $$ on a set of $\mu$-probability at least $1-1/M$. So, if $1/M$ is less than $\frac\delta2\mathbb{P}(K\sum_n (Av_n)^2\ge L)$, then there exists a sequence $\epsilon_n\in\lbrace\pm1\rbrace$ for which both (1) and (2) hold.

Assuming $\mathbb{P}(K\sum_n (Av_n)^2\ge L) > 0$ (otherwise there is nothing to prove), take $M^{-1}=\frac\delta3\mathbb{P}(K\sum_n (Av_n)^2\ge L)$. By (2), $M^{-1/2}\sum_n\epsilon_nv_n\in B_1$, so that $X_0\equiv M^{-1}\left(\sum_n\epsilon_nAv_n\right)^2$ lies in $A(B_1)^2$, and (1) then gives $$ \mathbb{P}\left( K\sum_n (Av_n)^2\ge L\right)\le\frac2\delta\mathbb{P}\left(MX_0\ge L\right)\le\frac2\delta\sup_{X\in A(B_1)^2}\mathbb{P}\left(MX\ge L\right). $$

Now let $f(y)$ be the supremum of $\mathbb{P}(Y\ge y)$ over $Y\in{\rm conv}(A(B_1)^2)$; we need to show that $f(y)\to0$ as $y\to\infty$. Taking $L=Ky$ in the inequality above shows that every $Y=\sum_n(Av_n)^2$ in the convex hull satisfies $$ \mathbb{P}\left(Y\ge y\right)\le\frac2\delta\sup_{X\in A(B_1)^2}\mathbb{P}\left(X\ge\frac\delta3Ky\,\mathbb{P}(Y\ge y)\right). $$ If $f(y)$ did not tend to zero, there would exist $\epsilon > 0$, a sequence $y_k\to\infty$ and $Y_k\in{\rm conv}(A(B_1)^2)$ with $\mathbb{P}(Y_k\ge y_k) > \epsilon$. Using $\mathbb{P}(Y_k\ge y_k) > \epsilon$ to decrease the threshold on the right hand side, the inequality above would then give $$ \epsilon < \frac2\delta\sup_{X\in A(B_1)^2}\mathbb{P}\left(X\ge\frac\delta3K\epsilon y_k\right), $$ and the right hand side tends to zero as $k\to\infty$ by boundedness in probability of $A(B_1)^2$, a contradiction. Hence $f(y)\to0$. QED

Proof of Theorem: Let $\mathcal{U}$ be the set of $U\in L^1(\mathbb{P})$ with $0\le U\le Y$ for some $Y\in{\rm conv}(A(B_1)^2)$. This is convex and, since $\mathbb{P}(U\ge y)\le\mathbb{P}(Y\ge y)$ for such $U$, the lemma above shows that it is bounded in probability. So, by Theorem B (in the equivalent form noted above), there exist $K > 0$ and a probability measure $\mathbb{Q}\sim\mathbb{P}$ such that $\mathbb{E}\_{\mathbb{Q}}[U]\le K^2$ for all $U\in\mathcal{U}$. Now, for any $v\in H$ with $\Vert v\Vert\le1$ and any $n\ge1$, the bounded random variable $\min(n,(Av)^2)$ lies in $\mathcal{U}$, since $0\le\min(n,(Av)^2)\le(Av)^2\in{\rm conv}(A(B_1)^2)$. So, by monotone convergence, $$ \mathbb{E}\_{\mathbb{Q}}[(Av)^2]=\lim_{n\to\infty}\mathbb{E}\_{\mathbb{Q}}[\min(n,(Av)^2)]\le K^2. $$ This shows that the map $C\colon H\to L^2(\mathbb{Q})$ given by $Cv=Av$ is well defined and bounded with $\Vert C\Vert\le K$, and $A=B\circ C$ factors as required. QED
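
For completeness, the remaining (easy) checks on the factorization are as follows. Since $\mathbb{Q}\sim\mathbb{P}$, the spaces $L^0(\mathbb{Q})$ and $L^0(\mathbb{P})$ coincide, with the same topology of convergence in probability. The natural inclusion $B$ is continuous because, for $Z\in L^2(\mathbb{Q})$ and $y > 0$, Chebyshev's inequality gives $$ \mathbb{Q}\left(|Z|\ge y\right)\le y^{-2}\mathbb{E}\_{\mathbb{Q}}[Z^2], $$ so convergence in $L^2(\mathbb{Q})$ implies convergence in probability. Finally, $B(Cv)=Cv=Av$ in $L^0(\mathbb{P})$ for every $v\in H$, which is the required factorization $A=B\circ C$.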