Nonnegativity of an integral over the unitary group

This is not the requested proof, but it may give an indication of how to arrive at one. As discussed in the OP, the unknown part of the expression is the integral $$I_{\sigma_1,\sigma_2}=\int_{U(n)} w_{\sigma_1}w_{\sigma_2}\,dU,\;\;w_\sigma=(-1)^\sigma\det(U^*)\prod_{i=1}^n U_{i,\sigma(i)}.$$ I have evaluated this for $n$ up to 5, using the intU package, and based on that data I surmise $$I_{\sigma_1,\sigma_2}= \frac{2^n}{2^{C}n!(n+1)!},$$ with $C$ the Cayley distance between $\sigma_1$ and $\sigma_2$ (the minimal number of transpositions needed to convert $\sigma_1$ into $\sigma_2$).
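For readers who want to reproduce the pattern, here is a minimal Python sketch of the conjectured right-hand side (my own convenience code, not the intU computation; the helper names `cayley_distance` and `conjectured_I` are invented here):

```python
from math import factorial

def cayley_distance(sigma1, sigma2):
    """Cayley distance: n minus the number of cycles of sigma2^{-1} o sigma1.
    Permutations are given in one-line notation on {0, ..., n-1}."""
    n = len(sigma1)
    inv2 = [0] * n
    for i, j in enumerate(sigma2):
        inv2[j] = i
    rel = [inv2[sigma1[i]] for i in range(n)]   # sigma2^{-1} o sigma1
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = rel[j]
    return n - cycles

def conjectured_I(sigma1, sigma2):
    """The surmised value 2^n / (2^C n! (n+1)!)."""
    n = len(sigma1)
    C = cayley_distance(sigma1, sigma2)
    return 2**n / (2**C * factorial(n) * factorial(n + 1))
```

For instance, `conjectured_I((0, 1, 2, 3, 4), (1, 2, 3, 4, 0))` returns $1/43200\approx 2.31\times 10^{-5}$, matching the value quoted below.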

For example, when $n=5$ and $\sigma_1=\{1,2,3,4,5\}$, $\sigma_2=\{2, 3, 4, 5, 1\}$, one has $C=4$ and $I=\frac{1}{43200}$. For $\sigma_2=\{1, 2, 3, 5, 4\}$ one has $C=1$ and $I=\frac{1}{5400}$.
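These two values can also be reproduced, up to sampling noise, by crude Monte Carlo over Haar-random unitaries generated with the standard QR trick. This is only a sanity check I am adding, not the exact intU evaluation, and the sample size below is an arbitrary placeholder:

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-random element of U(n): QR of a complex Ginibre matrix, with the phases
    of the diagonal of R absorbed into Q."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def w_sigma(U, sigma, sign):
    """w_sigma = sign(sigma) * det(U^*) * prod_i U[i, sigma(i)]."""
    n = U.shape[0]
    return sign * np.conj(np.linalg.det(U)) * np.prod(U[np.arange(n), sigma])

rng = np.random.default_rng(0)
n = 5
sigma1, sign1 = np.arange(n), +1                # the identity
sigma2, sign2 = np.array([1, 2, 3, 4, 0]), +1   # the 5-cycle above (an even permutation)
est = np.mean([w_sigma(U, sigma1, sign1) * w_sigma(U, sigma2, sign2)
               for U in (haar_unitary(n, rng) for _ in range(200_000))])
print(est.real)   # noisy estimate of 1/43200 ~ 2.31e-5; the imaginary part is pure noise
```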

As a check, one should be able to verify that $$\sum_{\sigma_1,\sigma_2}I_{\sigma_1,\sigma_2}=\int_{U(n)}(\det U^\ast \det U)^2\,dU=1.$$
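In fact this check holds pointwise, before integration: by the Leibniz formula, $\sum_\sigma w_\sigma=\det(U^\ast)\det(U)=|\det U|^2=1$ for every unitary $U$, so the double sum of the integrands is identically $1$. A few lines of Python (again my own sketch; `perm_sign` is an invented helper) confirm this for a random unitary:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given in one-line notation on {0, ..., n-1}."""
    sign, seen = 1, set()
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            sign *= (-1) ** (length - 1)
    return sign

n = 4
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
total = sum(perm_sign(s) * np.conj(np.linalg.det(U)) * np.prod(U[np.arange(n), list(s)])
            for s in permutations(range(n)))
print(total)   # ~ 1 + 0j, i.e. det(U*) det(U) = |det U|^2 = 1
```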

The number of permutations at Cayley distance $C\in\{0,1,2,\ldots,n-1\}$ from $\{1,2,3,\ldots,n\}$ equals $|s_{n,n-C}|$, the absolute value of the Stirling number of the first kind. [Thank you, Martin Rubey.] So let us calculate $$\sum_{\sigma_1,\sigma_2}I_{\sigma_1,\sigma_2}=n!\sum_{C=0}^{n-1}\frac{2^n}{2^{C}n!(n+1)!}|s_{n,n-C}|,$$ which indeed equals unity, since $\sum_{C=0}^{n-1}2^{\,n-C}|s_{n,n-C}|=\sum_{k}|s_{n,k}|\,2^{k}=2\cdot3\cdots(n+1)=(n+1)!$. So it all works out.
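For what it's worth, the same sum can be checked in exact arithmetic; the sketch below (mine) generates the unsigned Stirling numbers from the standard recurrence $|s_{n,k}|=|s_{n-1,k-1}|+(n-1)|s_{n-1,k}|$:

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling1_unsigned(n, k):
    """Unsigned Stirling numbers of the first kind |s_{n,k}|."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

for n in range(1, 9):
    total = factorial(n) * sum(
        Fraction(2**n, 2**C * factorial(n) * factorial(n + 1)) * stirling1_unsigned(n, n - C)
        for C in range(n))
    print(n, total)   # prints 1 for every n
```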

I find it remarkable that the integral of $w_{\sigma_1}w_{\sigma_2}$ has such a simple explicit expression, while the integral of $w_{\sigma_1}\bar{w}_{\sigma_2}$ has only an implicit expression in terms of a Weingarten function.


Here is a proof of Carlo's formula for $I_{\sigma_1,\sigma_2}$.

Firstly, one can sample the Haar-distributed $U\in U(n)$ as $U=\lambda V$ where $\lambda$ and $V$ are independent random variables. Here $\lambda\in U(1)$ is a Haar-distributed complex number of modulus one, and $V$ is a Haar-distributed element of $SU(n)$. Now with these substitutions we have $$ w_\sigma=\epsilon(\sigma)\ {\rm det}(\bar{\lambda}V^{\ast})\ \lambda^n\ \prod_{i=1}^{n} V(i,\sigma(i))= \epsilon(\sigma)\ \prod_{i=1}^{n} V(i,\sigma(i)) $$ where I used $\epsilon(\sigma)$ instead of $(-1)^{\sigma}$ for the sign of a permutation and, for better readability, I write matrices and tensors as $V(i,j)$ instead of $V_{i,j}$, etc.
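A quick numerical sanity check of this reduction (my own sketch; any $n$-th root of $\det U$ may be taken as $\lambda$, and the variable names are invented here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))  # a unitary

lam = np.linalg.det(U) ** (1.0 / n)      # an n-th root of det(U), a modulus-one number
V = U / lam                              # then det(V) = 1, i.e. V lies in SU(n)

sigma, eps = np.array([1, 0, 3, 2]), +1  # an example permutation and its sign (even)
w_from_U = eps * np.conj(np.linalg.det(U)) * np.prod(U[np.arange(n), sigma])
w_from_V = eps * np.prod(V[np.arange(n), sigma])
print(np.allclose(np.linalg.det(V), 1.0), np.allclose(w_from_U, w_from_V))   # True True
```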

Thus, following the explanations for $SU(n)$ integration I gave here: How to constructively/combinatorially prove Schur-Weyl duality?

we have that $$ I_{\sigma_1,\sigma_2}=\epsilon(\sigma_1)\epsilon(\sigma_2) \int_{SU(n)}\prod_{i=1}^{n} V(i,\sigma_1(i))\ \prod_{i=1}^{n} V(i,\sigma_2(i))\ dV $$ $$ =\epsilon(\sigma_1)\epsilon(\sigma_2)\ c_2\ \mathscr{D}^2 \ \ \left. \prod_{i=1}^{n} V(i,\sigma_1(i))\ \prod_{i=1}^{n} V(i,\sigma_2(i))\right|_{V:=0} $$ where $$ c_2=\frac{0!1!}{n!(n+1)!} $$ and the "propagator" $\mathscr{D}$ is the differential operator $$ \mathscr{D}={\rm det}(\partial V)=\frac{1}{n!} \sum_{a,b}\epsilon(a_1,\ldots,a_n)\epsilon(b_1,\ldots,b_n) \frac{\partial}{\partial V(a_1,b_1)}\cdots \frac{\partial}{\partial V(a_n,b_n)}\ . $$ The sum is over all sequences of indices $a_1,\ldots,a_n,b_1,\cdots,b_n$ in $[n]:=\{1,\ldots,n\}$ and now $\epsilon$ is to be understood as the Levi-Civita tensor.
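To make the operator $\mathscr{D}$ concrete, here is a small sympy sketch of $\det(\partial V)$ (my own, independent of the linked answer); as a sanity check it reproduces $\mathscr{D}\,\det V=n!$, which follows directly from the two Leibniz expansions:

```python
import sympy as sp
from itertools import permutations

n = 3
V = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"V_{i}_{j}"))

def perm_sign(p):
    """Sign of a permutation in one-line notation."""
    sign, seen = 1, set()
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            sign *= (-1) ** (length - 1)
    return sign

def D(expr):
    """The propagator det(d/dV) = sum_pi sign(pi) d^n/(dV(1,pi(1)) ... dV(n,pi(n)))."""
    return sum(perm_sign(p) * sp.diff(expr, *[V[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

print(D(V.det()))   # prints 6 = n!, since each monomial of det V is hit by exactly one term of D
```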

Then one computes the action of the two propagators $\mathscr{D}\times\mathscr{D}$ on the $2n$ legs $$ \prod_{i=1}^{n} V(i,\sigma_1(i))\ \prod_{i=1}^{n} V(i,\sigma_2(i)) $$ as in basic applications of Wick's theorem in quantum field theory, namely by summing over Wick contractions. Here the sum is organized according to subsets $A\subset [n]$ corresponding to the legs in the first group $$ \prod_{i=1}^{n} V(i,\sigma_1(i)) $$ selected by the first propagator $\mathscr{D}$. Let us denote $A=\{i_1<\cdots<i_k\}$ and $A^{c}=\{j_1<\cdots< j_{n-k}\}$. Then $$ \mathscr{D}^2 \prod_{i=1}^{n} V(i,\sigma_1(i))\ \prod_{i=1}^{n} V(i,\sigma_2(i)) =\sum_{A} \left\{ \mathscr{D}\ V(i_1,\sigma_1(i_1))\cdots V(i_k,\sigma_1(i_k))\ V(j_1,\sigma_2(j_1))\cdots V(j_{n-k},\sigma_2(j_{n-k})) \right\} $$ $$ \times \left\{ \mathscr{D}\ V(j_1,\sigma_1(j_1))\cdots V(j_{n-k},\sigma_1(j_{n-k}))\ V(i_1,\sigma_2(i_1))\cdots V(i_{k},\sigma_2(i_{k})) \right\} $$ $$ =\sum_{A} \epsilon(i_1,\ldots,i_k,j_1,\ldots,j_{n-k})\ \epsilon(\sigma_1(i_1),\ldots,\sigma_1(i_k),\sigma_2(j_1),\ldots,\sigma_2(j_{n-k})) $$ $$ \times\ \epsilon(j_1,\ldots,j_{n-k},i_1,\ldots,i_k)\ \epsilon(\sigma_1(j_1),\ldots,\sigma_1(j_{n-k}),\sigma_2(i_1),\ldots,\sigma_2(i_k))\ . $$ The condition imposed on the set $A$ by these epsilon factors is that $\sigma_1(A)^{c}=\sigma_2(A^{c})$, which is equivalent to $\sigma_1(A)=\sigma_2(A)$, i.e., $\sigma_2^{-1}\circ\sigma_1(A)=A$. This means that $A$ must be a union of cycle supports for $\sigma_2^{-1}\circ\sigma_1$. Hence the number of admissible $A$'s is $2$ raised to the number of such cycles, i.e., $2^{n-C}$ where $C$ is the Cayley distance between $\sigma_1$ and $\sigma_2$.
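For small $n$ this count, together with the sign worked out below, can be brute-forced symbolically. The sketch below (mine; it repeats the toy implementation of $\mathscr{D}$ from the previous snippet so that it runs on its own) checks that $\mathscr{D}^2$ applied to the $2n$ legs equals $\epsilon(\sigma_1)\epsilon(\sigma_2)\,2^{\,n-C}$ for all pairs of permutations when $n=3$:

```python
import sympy as sp
from itertools import permutations

n = 3
V = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"V_{i}_{j}"))

def perm_sign(p):
    sign, seen = 1, set()
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            sign *= (-1) ** (length - 1)
    return sign

def D(expr):
    """det(d/dV), as defined in the text."""
    return sum(perm_sign(p) * sp.diff(expr, *[V[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def cayley_distance(s1, s2):
    rel = [s2.index(s1[i]) for i in range(n)]   # sigma2^{-1} o sigma1
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = rel[j]
    return n - cycles

for s1 in permutations(range(n)):
    for s2 in permutations(range(n)):
        legs = sp.Mul(*[V[i, s1[i]] for i in range(n)], *[V[i, s2[i]] for i in range(n)])
        lhs = D(D(legs))                        # a pure number: all 2n legs get differentiated
        rhs = perm_sign(s1) * perm_sign(s2) * 2 ** (n - cayley_distance(s1, s2))
        assert sp.simplify(lhs - rhs) == 0
print("D^2 on the 2n legs equals eps(s1) eps(s2) 2^(n-C) for all pairs, n =", n)
```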

Finally, all the terms add up with the same sign. Indeed $$ \epsilon(\sigma_1(i_1),\ldots,\sigma_1(i_k),\sigma_2(j_1),\ldots,\sigma_2(j_{n-k}))=\epsilon(\rho) $$ where (with hopefully understandable notation) $$ \rho=(\sigma_1\circ \sigma_2^{-1}|_{\sigma_2(A)}\oplus Id_{\sigma_2(A)^c})\circ\sigma_2\circ \tau $$ with $\tau$ denoting the shuffle permutation $$ \tau=\left( \begin{array}{cccccc} 1 & \cdots & k & k+1 & \cdots & n \\ i_1 & \cdots & i_k & j_1 & \cdots & j_{n-k} \end{array} \right)\ . $$ We then have $$ \epsilon(i_1,\ldots,i_k,j_1,\ldots,j_{n-k})\ \epsilon(\sigma_1(i_1),\ldots,\sigma_1(i_k),\sigma_2(j_1),\ldots,\sigma_2(j_{n-k})) =\epsilon(\sigma_1\circ \sigma_2^{-1}|_{\sigma_2(A)}\oplus Id_{\sigma_2(A)^c})\ \epsilon(\sigma_2)\ \epsilon(\tau)^2 $$ and likewise $$ \epsilon(j_1,\ldots,j_{n-k},i_1,\ldots,i_k)\ \epsilon(\sigma_1(j_1),\ldots,\sigma_1(j_{n-k}),\sigma_2(i_1),\ldots,\sigma_2(i_k))= \epsilon(\sigma_1\circ \sigma_2^{-1}|_{\sigma_2(A^c)}\oplus Id_{\sigma_2(A)})\ \epsilon(\sigma_2)\ . $$ So, since $\epsilon(\sigma_2)^2=\epsilon(\tau)^2=1$, the product of the four epsilons gives $$ \epsilon(\sigma_1\circ \sigma_2^{-1}|_{\sigma_2(A)}\oplus Id_{\sigma_2(A)^c})\ \epsilon(\sigma_1\circ \sigma_2^{-1}|_{\sigma_2(A^c)}\oplus Id_{\sigma_2(A)}) =\epsilon(\sigma_1\circ \sigma_2^{-1})=\epsilon(\sigma_1)\epsilon(\sigma_2)\ . $$
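The sign claim is also easy to confirm by brute force; the sketch below (my own; `levi_civita` is an invented helper name) evaluates the four epsilon factors directly for every admissible $A$ and every pair $(\sigma_1,\sigma_2)$ when $n=4$:

```python
from itertools import permutations, combinations

def levi_civita(seq):
    """epsilon(a_1, ..., a_m): 0 if there is a repeat, otherwise the sign of the sorting permutation."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

n = 4
for s1 in permutations(range(n)):
    for s2 in permutations(range(n)):
        eps12 = levi_civita(s1) * levi_civita(s2)
        for k in range(n + 1):
            for A in combinations(range(n), k):
                Ac = [j for j in range(n) if j not in A]
                f1 = levi_civita(list(A) + Ac) * \
                     levi_civita([s1[i] for i in A] + [s2[j] for j in Ac])
                f2 = levi_civita(Ac + list(A)) * \
                     levi_civita([s1[j] for j in Ac] + [s2[i] for i in A])
                if f1 * f2 != 0:            # A contributes exactly when sigma1(A) = sigma2(A)
                    assert f1 * f2 == eps12
print("product of the four epsilons equals eps(sigma1) eps(sigma2) in every contributing term, n =", n)
```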


I don't understand the matrix integrals, but the sum in the last paragraph is easy to deal with. Here is a story to visualize the permutation groups. Imagine $n$ couples lined up in two lines in a hall, perhaps to dance. A permutation in $C$ permutes each line within itself, breaking up partnerships. A permutation in $R$ makes the two partners in some of the couples switch places with each other. Note that every permutation in $R$ is an involution.

Let's first prove positivity. Let $\tau_s$ switch the two partners in $k_s$ of the couples. If we are to have $\tau_1 \sigma \tau_2 \in C$, then each person who switches under $\tau_1$ must switch again under $\tau_2$ to get back to their original line. So $k_1 = k_2$ and $(-1)^{\tau_1} = (-1)^{\tau_2}$. This implies that $(-1)^{\pi} = (-1)^{\tau_1 \sigma \tau_2} = (-1)^{\sigma}$, and thus $(-1)^{\sigma \pi}=1$.

Now, let's compute the sum. Let $T_s$ be the set of indices $k$ for which $\tau_s(k) = k+n$. In order to have $\tau_1 \sigma \tau_2 \in C$, each person must cross the hall under $\tau_1$ if and only if they do under $\tau_2$. Looking at the people who start on the $\{1,2,\ldots, n \}$ side of the hall, we must have $\sigma_2(T_1) = T_2$. Looking at the people who start on the other side, we must also have $\sigma_1(T_1) = T_2$. Thus $\sigma_1^{-1} \sigma_2 (T_1) = T_1$. So $T_1$ is a union of orbits of $\sigma_1^{-1} \sigma_2$. Each such union of orbits has a unique compatible $T_2$, by the equation $\sigma_1(T_1) = T_2$, and each such $(\tau_1, \tau_2)$ determines a unique $\pi$ by $\pi = \tau_1 \sigma \tau_2$. So the number of nonzero terms is the number of subsets of the set of orbits of $\sigma_1^{-1} \sigma_2$. The number of orbits is $n-\ell_T(\sigma_1^{-1} \sigma_2)$ where $\ell_T$ is the length with respect to the set of all reflections, so the sum is $2^{n-\ell_T(\sigma_1^{-1} \sigma_2)}$ as requested.
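The only fact used in the last step is that the number of orbits of a permutation of $[n]$ equals $n-\ell_T$, i.e. $n$ minus its reflection length. If one wants to double-check that by machine, here is a small brute-force sketch (mine; the helper names are invented) verifying it for all of $S_5$:

```python
from itertools import permutations, combinations

def compose(p, q):
    """(p o q)(i) = p[q[i]], permutations in one-line notation on {0, ..., n-1}."""
    return tuple(p[q[i]] for i in range(len(p)))

def cycle_count(p):
    """Number of orbits (cycles, including fixed points) of p."""
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def reflection_length(p):
    """Minimal number of transpositions multiplying to p, found by BFS from the identity."""
    n = len(p)
    identity = tuple(range(n))
    transpositions = []
    for a, b in combinations(range(n), 2):
        t = list(identity)
        t[a], t[b] = b, a
        transpositions.append(tuple(t))
    seen, frontier, dist = {identity}, {identity}, 0
    while p not in seen:
        dist += 1
        frontier = {compose(t, q) for t in transpositions for q in frontier} - seen
        seen |= frontier
    return dist

n = 5
assert all(reflection_length(p) == n - cycle_count(p) for p in permutations(range(n)))
print("reflection length = n - number of orbits, verified for all of S_5")
```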