What is $\frac{\det(\hat{\Sigma}_0)}{\det(\hat{\Sigma})}$ in terms of $\hat{\mu}_1$, $\hat{\mu}_2$, and $\hat{\Sigma}$?

After fixing the missing $\hat{}$ over the $\Sigma$ on the right-hand side, the equation is straightforward to prove via the Matrix Determinant Lemma. The lemma states that $\det(A+uv^T) = (1+v^T A^{-1}u)\det(A)$, so we simply need to match the terms: with $A=\hat\Sigma$ and $u=v=\frac{\sqrt{n_1 n_2}}{n_1 + n_2}(\hat \mu_1 -\hat\mu_2)$ it follows that

$$\begin{aligned} \Big(1 + u^T\hat\Sigma^{-1}v\Big)\det(\hat\Sigma) &= \det(\hat\Sigma + uv^T) \end{aligned}$$
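As a quick numerical sanity check of the lemma itself (a sketch using NumPy with a random positive-definite $A$ as a stand-in, not tied to the estimators below):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p))
A = A @ A.T + p * np.eye(p)   # symmetric positive definite, hence invertible
u = rng.normal(size=(p, 1))
v = rng.normal(size=(p, 1))

# Matrix Determinant Lemma: det(A + u v^T) = (1 + v^T A^{-1} u) det(A)
lhs = np.linalg.det(A + u @ v.T)
rhs = (1 + v.T @ np.linalg.inv(A) @ u).item() * np.linalg.det(A)
assert np.isclose(lhs, rhs)
```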

Now, all we need to do is simplify until we arrive at $\hat \Sigma_0$. Using $\hat\mu_1 = \frac{1}{n_1} X^T 1$ and $\hat\mu_2=\frac{1}{n_2}Y^T1$ we have

$$\begin{aligned} \sum_{i=1}^{n_1} (x_i-\hat\mu_1)(x_i-\hat\mu_1)^T &= (X-1\hat\mu_1^T)^T(X-1\hat\mu_1^T) \\&= X^TX-\hat\mu_11^TX -X^T 1 \hat\mu_1^T + \hat\mu_1 1^T1 \hat\mu_1^T \\&=\boxed{X^TX-n_1\hat\mu_1\hat\mu_1^T} \end{aligned}$$

And similarly $Y^TY-n_2\hat\mu_2\hat\mu_2^T$ for the $y$-sum. On the other hand

$$\begin{aligned} \sum_{i=1}^{n_1} (x_i-\hat\mu_0)(x_i-\hat\mu_0)^T &= (X-1\hat\mu_0^T)^T(X-1\hat\mu_0^T) \\&= X^TX-\hat\mu_01^TX -X^T 1 \hat\mu_0^T + \hat\mu_0 1^T1 \hat\mu_0^T \\&=\boxed{X^TX - n_1\hat\mu_0\hat\mu_1^T - n_1\hat\mu_1\hat\mu_0^T + n_1\hat\mu_0\hat\mu_0^T} \end{aligned}$$

And similarly $Y^TY - n_2\hat\mu_0\hat\mu_2^T - n_2\hat\mu_2\hat\mu_0^T + n_2\hat\mu_0\hat\mu_0^T$ for the $y$-sum. Adding both together, and using $\hat \mu_0 = \frac{n_1 \hat\mu_1 + n_2\hat\mu_2}{n_1 + n_2}$, we find

$$\begin{aligned} &X^TX - n_1\hat\mu_0\hat\mu_1^T - n_1\hat\mu_1\hat\mu_0^T + n_1\hat\mu_0\hat\mu_0^T \\&+Y^TY - n_2\hat\mu_0\hat\mu_2^T - n_2\hat\mu_2\hat\mu_0^T + n_2\hat\mu_0\hat\mu_0^T \\&=X^TX+Y^TY-\hat\mu_0(n_1\hat\mu_1^T + n_2\hat\mu_2^T) - (n_1\hat\mu_1 + n_2\hat\mu_2)\hat\mu_0^T +(n_1+n_2)\hat\mu_0\hat\mu_0^T \\&=X^TX+Y^TY-(n_1+n_2)\hat\mu_0\hat\mu_0^T - (n_1+n_2)\hat\mu_0\hat\mu_0^T +(n_1+n_2)\hat\mu_0\hat\mu_0^T \\&=X^TX+Y^TY-(n_1+n_2)\hat\mu_0\hat\mu_0^T \\&=\boxed{X^TX+Y^TY-\frac{(n_1\hat\mu_1+n_2\hat\mu_2)(n_1\hat\mu_1+n_2\hat\mu_2)^T}{n_1+n_2}} \end{aligned}$$
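This simplification of the pooled scatter about the common mean can be checked numerically (a sketch using NumPy; the data matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, p = 7, 5, 3
X = rng.normal(size=(n1, p))
Y = rng.normal(size=(n2, p))

mu1 = X.mean(axis=0)
mu2 = Y.mean(axis=0)
mu0 = (n1 * mu1 + n2 * mu2) / (n1 + n2)

# left-hand side: pooled scatter about the common mean mu0
S0 = sum(np.outer(x - mu0, x - mu0) for x in X) \
   + sum(np.outer(y - mu0, y - mu0) for y in Y)

# right-hand side: the boxed closed form
m = n1 * mu1 + n2 * mu2
rhs = X.T @ X + Y.T @ Y - np.outer(m, m) / (n1 + n2)

assert np.allclose(S0, rhs)
```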

Thus, after dividing by the common normalization $n_1+n_2$, our goal of showing $\hat\Sigma_0 = \hat\Sigma +uv^T$ is equivalent to (after cancelling the $X^TX$ and $Y^TY$ terms)

$$ -\frac{(n_1\hat\mu_1+n_2\hat\mu_2)(n_1\hat\mu_1+n_2\hat\mu_2)^T}{(n_1+n_2)^2} =\frac{-n_1\hat\mu_1\hat\mu_1^T-n_2\hat\mu_2\hat\mu_2^T}{n_1+n_2} + \frac{n_1n_2(\hat \mu_1 -\hat\mu_2)(\hat \mu_1 -\hat\mu_2)^T}{(n_1+n_2)^2} $$

Which is equivalent to

$$ (n_1\hat\mu_1+n_2\hat\mu_2)(n_1\hat\mu_1+n_2\hat\mu_2)^T + n_1n_2(\hat \mu_1 -\hat\mu_2)(\hat \mu_1 -\hat\mu_2)^T =(n_1+n_2)(n_1\hat\mu_1\hat\mu_1^T+n_2\hat\mu_2\hat\mu_2^T) $$

which is a true statement: expanding the left-hand side, the cross terms $\pm n_1n_2(\hat\mu_1\hat\mu_2^T + \hat\mu_2\hat\mu_1^T)$ cancel, leaving $(n_1^2+n_1n_2)\hat\mu_1\hat\mu_1^T + (n_2^2+n_1n_2)\hat\mu_2\hat\mu_2^T = (n_1+n_2)(n_1\hat\mu_1\hat\mu_1^T+n_2\hat\mu_2\hat\mu_2^T)$. q.e.d.
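Putting everything together, the rank-one update and the resulting determinant ratio can be verified end to end (a NumPy sketch on random data, assuming both estimators are normalized by $n_1+n_2$ as in the derivation above):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, p = 8, 6, 3
X = rng.normal(size=(n1, p))
Y = rng.normal(size=(n2, p))
n = n1 + n2

mu1, mu2 = X.mean(axis=0), Y.mean(axis=0)
mu0 = (n1 * mu1 + n2 * mu2) / n

# pooled estimator with separate group means ...
Sigma = ((X - mu1).T @ (X - mu1) + (Y - mu2).T @ (Y - mu2)) / n
# ... and with the common mean
Sigma0 = ((X - mu0).T @ (X - mu0) + (Y - mu0).T @ (Y - mu0)) / n

u = np.sqrt(n1 * n2) / n * (mu1 - mu2)

# the rank-one update Sigma0 = Sigma + u u^T ...
assert np.allclose(Sigma0, Sigma + np.outer(u, u))
# ... and the determinant ratio it implies via the lemma
ratio = np.linalg.det(Sigma0) / np.linalg.det(Sigma)
assert np.isclose(ratio, 1 + u @ np.linalg.solve(Sigma, u))
```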


Final Remark: Of course, the Matrix Determinant Lemma also offers a strategy for discovering the equation in the first place: if we wanted to find the relationship between $\det(\hat\Sigma_0)$ and $\det(\hat\Sigma)$, it would suggest expressing $\hat\Sigma_0 = \hat\Sigma + UV^T$. Since the difference $\hat\Sigma_0 - \hat\Sigma$ is symmetric and positive semidefinite, we are in fact guaranteed $\hat\Sigma_0 = \hat\Sigma + UU^T$ for some $p\times k$ matrix $U$ with $k\le p$ (generally one would choose $U$ with $k$ minimal, i.e. $k = \operatorname{rank}(\hat\Sigma_0 - \hat\Sigma)$, which equals $1$ here).
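Such a minimal-width factor $U$ can be read off from the eigendecomposition of the difference, keeping only the (numerically) nonzero eigenvalues. A sketch with a random rank-2 positive semidefinite matrix standing in for $\hat\Sigma_0 - \hat\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
# a symmetric PSD difference D = Sigma0 - Sigma of rank k = 2 < p
B = rng.normal(size=(p, 2))
D = B @ B.T

# eigendecomposition D = V diag(w) V^T; keep eigenpairs with w > 0
w, V = np.linalg.eigh(D)
keep = w > 1e-10                      # numerically nonzero eigenvalues
U = V[:, keep] * np.sqrt(w[keep])     # p x k factor with D = U U^T

assert np.allclose(U @ U.T, D)
assert U.shape[1] == 2                # minimal width k = rank(D)
```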