This tells us that the right-hand side has to be an invariant tensor, and so must be constructed from Kronecker deltas.

The right-hand side must also be anti-symmetric in $\mu,\nu$ and anti-symmetric in $\rho,\sigma$, so on general grounds
$$
\varepsilon^{\alpha \beta \mu \nu} \varepsilon_{\alpha \beta \rho \sigma} = - A (\delta^{\mu}_{\rho} \delta^{\nu}_{\sigma} - \delta^{\mu}_{\sigma} \delta^{\nu}_{\rho})$$
must hold, where the minus sign is due to the Minkowski metric, and the factor $A$ is fixed by the requirement that $\varepsilon^{\mu \nu \rho \sigma} \varepsilon_{\mu \nu \rho \sigma} = - 4!$ holds: setting $ - A (\delta^{\mu}_{\mu} \delta^{\nu}_{\nu} - \delta^{\mu}_{\nu} \delta^{\nu}_{\mu}) = - 4!$ gives $A(16 - 4) = 24$, i.e. $A = 2$.

More generally, the same thinking tells us that the product $\varepsilon^{\mu \nu \rho \sigma} \varepsilon_{\alpha \beta \gamma \delta}$ must be an anti-symmetric combination of Kronecker deltas, up to an overall normalization fixed by the requirement that $\varepsilon^{\mu \nu \rho \sigma} \varepsilon_{\mu \nu \rho \sigma} = - 4!$ holds. The generalized Kronecker delta $\delta^{\mu \nu \rho \sigma}_{\alpha \beta \gamma \delta}$ is exactly such an anti-symmetric combination of Kronecker deltas, normalized so that $\delta^{\mu \nu \rho \sigma}_{\mu \nu \rho \sigma} = 4!$, and so
$$
\varepsilon^{\mu \nu \rho \sigma} \varepsilon_{\alpha \beta \gamma \delta} = - \delta^{\mu \nu \rho \sigma}_{\alpha \beta \gamma \delta}$$
must hold. Thus we can work out contractions like $\varepsilon^{\alpha \beta \mu \nu} \varepsilon_{\alpha \beta \rho \sigma} = - \delta^{\alpha \beta \mu \nu}_{\alpha \beta \rho \sigma}$ directly
\begin{align*}
\varepsilon^{\alpha \beta \mu \nu} \varepsilon_{\alpha \beta \rho \sigma} &= - \delta^{\alpha \beta \mu \nu}_{\alpha \beta \rho \sigma} \\
&= - (\delta^{\alpha}_{\alpha} \delta^{\beta \mu \nu}_{\beta \rho \sigma} - \delta^{\alpha}_{\beta} \delta^{\beta \mu \nu}_{\alpha \rho \sigma} + \delta^{\alpha}_{\rho} \delta^{\beta \mu \nu}_{\alpha \beta \sigma} - \delta^{\alpha}_{\sigma}\delta^{\beta \mu \nu}_{\alpha \beta \rho}) \\
&= - (4\delta^{\beta \mu \nu}_{\beta \rho \sigma} - \delta^{\alpha \mu \nu}_{\alpha \rho \sigma} + \delta^{\beta \mu \nu}_{\rho \beta \sigma} - \delta^{\beta \mu \nu}_{\sigma \beta \rho}) \\
&= - [4(\delta^{\beta}_{\beta} \delta^{\mu \nu}_{\rho \sigma} - \delta^{\beta}_{\rho} \delta^{\mu \nu}_{\beta \sigma} + \delta^{\beta}_{\sigma}\delta^{\mu \nu}_{\beta \rho}) - ( \delta^{\alpha }_{\alpha} \delta^{\mu \nu}_{\rho \sigma} - \delta^{\alpha}_{\rho} \delta^{\mu \nu}_{\alpha \sigma} + \delta^{\alpha}_{\sigma} \delta^{\mu \nu}_{\alpha \rho}) - \delta^{\beta \mu \nu}_{\beta \rho \sigma} + \delta^{\beta \mu \nu}_{\beta \sigma \rho}] \\
&= - [4(4\delta^{\mu \nu}_{\rho \sigma} - \delta^{\mu \nu}_{\rho \sigma} + \delta^{\mu \nu}_{\sigma \rho}) - ( 4 \delta^{\mu \nu}_{\rho \sigma} - \delta^{\mu \nu}_{\rho \sigma} + \delta^{\mu \nu}_{\sigma \rho}) - \delta^{\beta \mu \nu}_{\beta \rho \sigma} + \delta^{\beta \mu \nu}_{\beta \sigma \rho}] \\
&= - [4(2\delta^{\mu \nu}_{\rho \sigma} ) - 2 \delta^{\mu \nu}_{\rho \sigma} - 2 \delta^{\mu \nu}_{\rho \sigma} + 2 \delta^{\mu \nu}_{\sigma \rho}] \\
&= - [8 \delta^{\mu \nu}_{\rho \sigma} - 6 \delta^{\mu \nu}_{\rho \sigma} ] \\
&= - 2 \delta^{\mu \nu}_{\rho \sigma} \\
&= - 2 (\delta^{\mu}_{\rho} \delta^{\nu}_{\sigma} - \delta^{\mu}_{\sigma} \delta^{\nu}_{\rho}).
\end{align*}
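Since the contraction identity is a purely combinatorial statement about index sums, it can also be checked by brute force. Below is a short numerical sketch (my own illustration, not part of the derivation), using the convention $\varepsilon_{0123} = +1$, so that raising all four indices with the Minkowski metric ($\det\eta = -1$) gives $\varepsilon^{0123} = -1$:

```python
def parity(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def eps(*idx):
    """Levi-Civita symbol with lower indices, eps(0,1,2,3) = +1."""
    return parity(idx) if len(set(idx)) == len(idx) else 0

def delta(a, b):
    """Kronecker delta."""
    return 1 if a == b else 0

# In 4d Minkowski space det(eta) = -1, so the epsilon with all indices
# raised is minus the symbol: eps^{abcd} = -eps_{abcd} numerically.
for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            for sig in range(4):
                lhs = sum(-eps(a, b, mu, nu) * eps(a, b, rho, sig)
                          for a in range(4) for b in range(4))
                rhs = -2 * (delta(mu, rho) * delta(nu, sig)
                            - delta(mu, sig) * delta(nu, rho))
                assert lhs == rhs
print("4d contraction identity verified")
```

The loop runs over all $4^4$ choices of free indices, so passing the assertions confirms the identity component by component.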
Similarly, in $d$-dimensional Minkowski space we have
$$\varepsilon^{\mu_1 \cdots \mu_d} \varepsilon_{\mu_1 \cdots \mu_d} = - d!\,.$$
From this one immediately sees that
$$
\varepsilon^{\mu_1 \cdots \mu_d} \varepsilon_{\nu_1 \cdots \nu_d} = - \delta^{\mu_1 \cdots \mu_d}_{\nu_1 \cdots \nu_d}
$$
and so contractions obey identities of the form
$$\varepsilon^{\mu_1 \cdots \mu_r \mu_{r+1} \cdots \mu_d} \varepsilon_{\mu_1 \cdots \mu_r \nu_{r+1} \cdots \nu_d} = - A\, \delta^{\mu_{r+1} \cdots \mu_d}_{\nu_{r+1} \cdots \nu_d}$$
in $d$ dimensions, where $A$ can be fixed by expanding
$$\varepsilon^{\mu_1 \cdots \mu_r \mu_{r+1} \cdots \mu_d} \varepsilon_{\mu_1 \cdots \mu_r \nu_{r+1} \cdots \nu_d} = - \delta^{\mu_1 \cdots \mu_r \mu_{r+1} \cdots \mu_d}_{\mu_1 \cdots \mu_r \nu_{r+1} \cdots \nu_d}$$
one contraction at a time as in the example above. The result is $A = r!$, which reproduces $\varepsilon^{\mu_1 \cdots \mu_d} \varepsilon_{\mu_1 \cdots \mu_d} = - d!$ when $r = d$, and can be proven by induction.
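The claim $A = r!$ can likewise be checked numerically in $d = 4$. The sketch below (my own check, same $\varepsilon_{0123} = +1$ convention as the Minkowski discussion above) verifies the contraction identity for every $r$ from $0$ to $4$, computing the generalized Kronecker delta as a determinant of ordinary deltas:

```python
from itertools import permutations, product
from math import factorial

def parity(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def eps(idx):
    """Levi-Civita symbol, eps((0,1,2,3)) = +1; zero on repeated indices."""
    return parity(idx) if len(set(idx)) == len(idx) else 0

def gen_delta(up, lo):
    """Generalized Kronecker delta: det of the matrix delta^{up_i}_{lo_j}."""
    k = len(up)
    return sum(parity(p) * all(up[a] == lo[p[a]] for a in range(k))
               for p in permutations(range(k)))

d = 4
for r in range(d + 1):
    free = d - r
    for up in product(range(d), repeat=free):
        for lo in product(range(d), repeat=free):
            # eps with upper indices = -eps with lower ones (det eta = -1)
            lhs = sum(-eps(c + up) * eps(c + lo)
                      for c in product(range(d), repeat=r))
            rhs = -factorial(r) * gen_delta(up, lo)
            assert lhs == rhs
print("A = r! verified for d = 4")
```

The $r = d$ case falls out automatically: the sum over the empty tuple of free indices gives $-4! \cdot 1$, matching the full contraction.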

I'll now give an alternative derivation. For simplicity I will work entirely in Euclidean signature; generalizing to Lorentzian signature just requires an overall factor of $\det(\eta_{\mu\nu}) = (-1)^{n-1}$.

By definition of determinant, given an $n\times n$ matrix $A_{ij}$ one has
$$
\det A = \epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{n,i_n}\,.\label{1}\tag{1}
$$
We use the Einstein summation convention throughout. This is also equivalent to
$$
\det A = \frac{1}{n!}\epsilon_{i_1\cdots i_n}\epsilon_{j_1\cdots j_n} A_{j_1,i_1}\cdots A_{j_n,i_n}\,.\tag{2}\label{2}
$$
See the part at the bottom if you want a proof of it. At the same time we have
$$
\begin{aligned}
\epsilon_{k_1\cdots k_n}\det A &= \epsilon_{k_1\cdots k_n}\epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{n,i_n} \\&=
\epsilon_{i_1\cdots i_n} A_{k_1,i_1}\cdots A_{k_n,i_n}\,.
\end{aligned}\tag{3}\label{3}
$$
The way to show \eqref{3} is to sort the factors $A_{a,i_a}$ so that their *row* indices appear in the order $1,\ldots,n$, and then relabel the dummy indices $i_a$ by the inverse permutation (call it $\sigma$) to restore their original order. Relabelling the indices of $\epsilon_{i_1\cdots i_n}$ costs a factor $\mathrm{sgn}(\sigma)$, the parity of the permutation, which is precisely the value of $\epsilon_{k_1\cdots k_n}$, thus cancelling that factor (since either $1^2$ or $(-1)^2$ is $1$).
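Equation \eqref{3} is also easy to confirm numerically. Here is a minimal Euclidean sketch (my own check, with an arbitrarily chosen $3\times 3$ integer matrix) that compares $\epsilon_{k_1 k_2 k_3}\det A$ with $\epsilon_{i_1 i_2 i_3} A_{k_1,i_1}A_{k_2,i_2}A_{k_3,i_3}$ for every choice of the $k_a$:

```python
from itertools import permutations, product

def parity(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def eps(idx):
    """Levi-Civita symbol eps_{i1...in}; zero on repeated indices."""
    return parity(idx) if len(set(idx)) == len(idx) else 0

n = 3
A = [[2, -1, 0],
     [1, 3, 4],
     [0, 5, -2]]

# det A via eq. (1): eps_{i1 i2 i3} A_{1,i1} A_{2,i2} A_{3,i3}
detA = sum(parity(p) * A[0][p[0]] * A[1][p[1]] * A[2][p[2]]
           for p in permutations(range(n)))

# eq. (3): eps_{k1 k2 k3} det A = eps_{i1 i2 i3} A_{k1,i1} A_{k2,i2} A_{k3,i3}
for k in product(range(n), repeat=n):
    rhs = sum(eps(i) * A[k[0]][i[0]] * A[k[1]][i[1]] * A[k[2]][i[2]]
              for i in product(range(n), repeat=n))
    assert eps(k) * detA == rhs
```

Note that when two of the $k_a$ coincide, both sides vanish, which the loop checks as well.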

Just as a sanity check: contract \eqref{3} with $\epsilon_{k_1\cdots k_n}$ and compare with \eqref{2} to see that, indeed
$$
\epsilon_{k_1\cdots k_n}\epsilon_{k_1\cdots k_n} = n!\,.
$$
But back to business. I will now do something weird. Let me take the entries of $A$ to be *symbols*, namely
$$
A_{i,j} \equiv \delta_{i, K_j}\,.
$$
By that I do *not* mean the identity matrix but rather
$$
A = \left(\begin{matrix}
\delta_{1,K_1} &\delta_{1,K_2} &\ldots&\delta_{1,K_n}\\
\delta_{2,K_1} &\delta_{2,K_2} \\
\vdots && \ddots\\
\delta_{n,K_1}&&&\delta_{n,K_n}
\end{matrix}\right)\,.
$$
The symbols still commute, so everything goes through, but now \eqref{2} and \eqref{3} say
$$
\epsilon_{i_1\cdots i_n} \delta_{k_1,K_{i_1}}\cdots \delta_{k_n,K_{i_n}} = \frac1{n!} \epsilon_{k_1\cdots k_n}\epsilon_{i_1\cdots i_n}\epsilon_{j_1\cdots j_n}\delta_{j_1,K_{i_1}}\cdots \delta_{j_n,K_{i_n}}\,.
$$
That's *a lot* of indices, I apologize. But we can work our way to a more readable expression. First let's just contract the $\delta$'s on the right-hand side:
$$
\epsilon_{i_1\cdots i_n} \delta_{k_1,K_{i_1}}\cdots \delta_{k_n,K_{i_n}} = \frac1{n!} \epsilon_{k_1\cdots k_n}\epsilon_{i_1\cdots i_n}\epsilon_{K_{i_1}\cdots K_{i_n}}\,.
$$
Now notice that the sum over the $i_a$'s on the RHS produces $n!$ copies of the same term: permuting the $i_a$'s flips the sign of $\epsilon_{i_1\cdots i_n}$ and of $\epsilon_{K_{i_1}\cdots K_{i_n}}$ in the same way, so their product is unchanged. So
$$
\epsilon_{i_1\cdots i_n} \delta_{k_1,K_{i_1}}\cdots \delta_{k_n,K_{i_n}} = \epsilon_{k_1\cdots k_n}\epsilon_{K_1\cdots K_n}\,.
$$
Finally, the left-hand side is by definition the antisymmetrization of the $\delta$'s over the second index. By convention the antisymmetrization has weight $1$, meaning that it doesn't overcount. Since here we instead have $n!$ terms, we have to multiply by that:
$$
n!\,\delta_{k_1,[K_1}\cdots \delta_{k_n,K_n]} = \epsilon_{k_1\cdots k_n}\epsilon_{K_1\cdots K_n}\,.
$$
I also replaced $i_a$ by $a$: since those indices are antisymmetrized, the order they had initially doesn't matter and I can re-sort them as I please (picking up signs, of course).
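This final identity can again be checked by brute force. The sketch below (my own check, Euclidean, $n = 3$) compares $n!\,\delta_{k_1,[K_1}\cdots\delta_{k_n,K_n]}$, computed as the signed sum over permutations, with $\epsilon_{k_1\cdots k_n}\epsilon_{K_1\cdots K_n}$ for all index choices:

```python
from itertools import permutations, product

def parity(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def eps(idx):
    """Levi-Civita symbol; zero on repeated indices."""
    return parity(idx) if len(set(idx)) == len(idx) else 0

n = 3
for k in product(range(n), repeat=n):
    for K in product(range(n), repeat=n):
        # n! times the weight-1 antisymmetrization of delta_{k_a, K_a}
        # is the signed sum over all permutations sigma of the K's:
        antisym = sum(parity(s) * all(k[a] == K[s[a]] for a in range(n))
                      for s in permutations(range(n)))
        assert antisym == eps(k) * eps(K)
print("eps_k eps_K = n! delta[...] verified for n = 3")
```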

### Proof of the equivalence of \eqref{1} and \eqref{2}.

Take the expression \eqref{1} and exchange the position of $A_{p, i_p}$ and $A_{q, i_q}$. For the sake of concreteness let's say $p<q$. This does nothing because the entries of the matrix are just numbers!
$$
\begin{aligned}
\det A &= \epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{p, i_p}\cdots A_{q, i_q}\cdots A_{n,i_n}
\\& = \epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{q, i_q}\cdots A_{p, i_p}\cdots A_{n,i_n}\,.
\end{aligned}
$$
Ok, nothing happened, but let me now swap $i_p$ and $i_q$ in the $\epsilon$:
$$
\epsilon_{i_1\cdots i_p\cdots i_q\cdots i_n} = -\epsilon_{i_1\cdots i_q\cdots i_p\cdots i_n}\,.
$$
Obvious! We get a minus sign. Therefore I'll just rename $i_p$ to $i_q$ and $i_q$ to $i_p$ (I'm always free to do so since they are summed over):
$$
\begin{aligned}
\det A &= \epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{p, i_p}\cdots A_{q, i_q}\cdots A_{n,i_n}
\\& = -\epsilon_{i_1\cdots i_n} A_{1,i_1}\cdots A_{q, i_p}\cdots A_{p, i_q}\cdots A_{n,i_n}\,.
\end{aligned}
$$
We have just proven that in the product $A_{1,i_1}\cdots A_{n,i_n}$ we can antisymmetrize over the *row* indices as well. That's because, as we saw, swapping any two row indices gives the same contribution up to a sign. We can then say
$$
\det A = \frac{1}{n!}\sum_{\sigma \in S_n}\mathrm{sgn}(\sigma)\,\epsilon_{i_1\cdots i_n} A_{\sigma(1),i_1}\cdots A_{\sigma(n),i_n}\,,
$$
where $S_n$ is the permutation group of $n$ elements and $\mathrm{sgn}(\sigma)$ is the parity of the permutation. I divided by $n!$ because every term is now counted $|S_n|= n!$ times. You might know that, in general, for any tensor
$$
\sum_{\sigma \in S_n}\mathrm{sgn}(\sigma) \,T_{\sigma(1)\cdots \sigma(n)} = \epsilon_{i_1\ldots i_n} T_{i_1\ldots i_n}\,.
$$
This is basically by definition of $\epsilon_{i_1\ldots i_n}$. Looking back, we just proved \eqref{2}.
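As a closing numerical sanity check (my own, not part of the original argument), the two determinant formulas \eqref{1} and \eqref{2} indeed agree on a concrete matrix:

```python
from itertools import permutations
from math import factorial, prod

def parity(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

n = 3
A = [[1, 2, 0],
     [3, -1, 4],
     [2, 0, 5]]

# eq. (1): det A = eps_{i1...in} A_{1,i1} ... A_{n,in}
det1 = sum(parity(i) * prod(A[r][i[r]] for r in range(n))
           for i in permutations(range(n)))

# eq. (2): det A = (1/n!) eps_{i...} eps_{j...} A_{j1,i1} ... A_{jn,in}
# (the double sum is an exact integer multiple of n!, so // is exact)
det2 = sum(parity(i) * parity(j) * prod(A[j[r]][i[r]] for r in range(n))
           for i in permutations(range(n))
           for j in permutations(range(n))) // factorial(n)

assert det1 == det2
```

Summing only over permutations is enough, since any repeated index makes the corresponding $\epsilon$ vanish.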