How to prove this determinant is positive?

Here are some ideas on how to decide the conjecture. (EDIT: In fact, these ideas lead to a proof of the conjecture, as Terry Tao explained in two comments below.)

As Christian Remling and Will Sawin showed, the conjecture is equivalent to $\det(I+T)\geq 0$ for any $T\in\mathrm{SO}^0(n,n)$.

We can assume that $-1$ is not an eigenvalue of $T$. Up to conjugacy, $T$ is a direct sum of indecomposable blocks as in Theorem 1 of Nishikawa's 1983 paper, and then $\det(I+T)$ is the product of the determinants of the corresponding blocks of $I+T$. Hence, by jjcale's idea, we can forget about the blocks of exponential type. By page 83 of Djoković's 1980 paper, the remaining blocks are of type $\Gamma_m(\lambda,\lambda^{-1})$ with $\lambda<0$ and $\lambda\neq -1$, which in turn are described on page 77 of the same paper. Such a block contributes $(1+\lambda)^{2m+2}/\lambda^{m+1}$ to $\det(I+T)$, hence we can forget about the blocks where $m$ is odd.

To summarize, we can assume that $T$ is composed of $(2m+2)\times(2m+2)$ blocks of type $\Gamma_m(\lambda,\lambda^{-1})$ with $\lambda<0$ and $\lambda\neq -1$ and $m$ even. The conjecture is true if and only if the number of such blocks is always even. For this, the explicit description of $\mathrm{SO}^0(n,n)$ on page 64 of Nishikawa's 1983 paper might be useful (see also page 68 for how to use this criterion when $m=1$). Based on this, I verified by hand that one cannot have a single block for $m=2$, which also shows that the smallest possible counterexample to the conjecture is of size $10\times 10$ (i.e. $n\geq 5$).
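The stated block contribution can be sanity-checked numerically. The sketch below is a minimal model that assumes (following Djoković's description) a $\Gamma_m(\lambda,\lambda^{-1})$ block has eigenvalues $\lambda$ and $\lambda^{-1}$, each in a single Jordan block of size $m+1$; that Jordan structure is an assumption of the sketch, not a claim from the papers.

```python
import numpy as np

def jordan(lam, size):
    # single Jordan block with eigenvalue lam
    return lam * np.eye(size) + np.eye(size, k=1)

def block_diag(X, Y):
    n, k = X.shape[0], Y.shape[0]
    Z = np.zeros((n + k, n + k))
    Z[:n, :n], Z[n:, n:] = X, Y
    return Z

lam, m = -2.0, 2  # lambda < 0, lambda != -1, m even
# Hypothetical model of Gamma_m(lam, 1/lam): one Jordan block of size
# m+1 for lam and one for 1/lam (this Jordan structure is assumed).
G = block_diag(jordan(lam, m + 1), jordan(1 / lam, m + 1))

contribution = np.linalg.det(np.eye(2 * m + 2) + G)
formula = (1 + lam) ** (2 * m + 2) / lam ** (m + 1)
assert np.isclose(contribution, formula)
assert contribution < 0  # for m even and lam < 0 the contribution is negative
```

For $m$ odd the exponent $m+1$ is even, so the formula is positive and those blocks can indeed be discarded.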

Added 1. Terry Tao realized, and kindly added, that in the remaining case we are done. Read his comments below. To summarize and streamline his ideas, in this case we have \begin{align*}\det(I_{2n}+T) &=\det(I_n+A)\det(I_n+A^{*-1})\\ &=\det(A)\det(I_n+A^{-1})\det(I_n+A^{*-1})\\ &=\det(A+A^{*-1})\frac{\det(I_n+A^{-1})^2}{\det(I_n+A^{-1}A^{*-1})}, \end{align*} where $(A+A^{*-1})/2$ can be described as the restriction of $T$ to a totally positive subspace followed by the orthogonal projection to this subspace. Now $\det(A+A^{*-1})>0$ because $T\in\mathrm{SO}^0(n,n)$, while the fraction on the right is clearly positive: its denominator equals $\det(I_n+(A^*A)^{-1})>0$ since $(A^*A)^{-1}$ is positive definite. Hence we conclude $\det(I_{2n}+T)>0$.
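The chain of determinant identities can be checked numerically for a generic invertible real $A$ (a sketch; the shift by $3I_n$ just makes the random matrix comfortably invertible, and $A^{*-1}$ is the inverse transpose in the real case):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 3 * np.eye(n)  # generic invertible real matrix
Ainv = np.linalg.inv(A)
AstarInv = np.linalg.inv(A.T)                    # A^{*-1} = inverse transpose (real case)
Z = np.zeros((n, n))
T = np.block([[A, Z], [Z, AstarInv]])

lhs = np.linalg.det(np.eye(2 * n) + T)
rhs = (np.linalg.det(A + AstarInv)
       * np.linalg.det(np.eye(n) + Ainv) ** 2
       / np.linalg.det(np.eye(n) + Ainv @ AstarInv))
assert np.isclose(lhs, rhs)

# The denominator is det(I + (A^T A)^{-1}) > 0, since (A^T A)^{-1} is positive definite.
assert np.linalg.det(np.eye(n) + Ainv @ AstarInv) > 0
```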

Added 2. Terry Tao wrote a great blog entry on this topic.

Added 3. Let me add a variation on Terry's original argument. Djoković defines $\mathrm{SO}(n,n)$ via $J:=\begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}$, while Nishikawa defines it via $K:=\begin{pmatrix} I_n & 0 \\ 0 & -I_n\end{pmatrix}$. These two matrices are connected via $J=M^*KM$, where $M:=\frac{1}{\sqrt{2}}\begin{pmatrix} I_n & I_n\\ -I_n & I_n\end{pmatrix}$, hence any matrix $T$ in Djoković's $\mathrm{SO}(n,n)$ corresponds to $MTM^*$ in Nishikawa's $\mathrm{SO}(n,n)$. We need to examine the case of $T = \begin{pmatrix} A & 0 \\ 0 & A^{*-1} \end{pmatrix}$, which corresponds to $MTM^*=\frac{1}{2}\begin{pmatrix} A+A^{*-1} & -A+A^{*-1} \\ -A+A^{*-1} & A+A^{*-1} \end{pmatrix}$. This lies in Nishikawa's $\mathrm{SO}^0(n,n)$, whence $\det(A+A^{*-1})>0$.
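The change of basis between the two conventions is easy to verify numerically (a sketch with a random invertible $A$):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
I_n, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I_n], [I_n, Z]])                  # Djokovic's form
K = np.block([[I_n, Z], [Z, -I_n]])                 # Nishikawa's form
M = np.block([[I_n, I_n], [-I_n, I_n]]) / np.sqrt(2)

assert np.allclose(M.T @ K @ M, J)                  # J = M* K M

A = rng.standard_normal((n, n)) + 3 * np.eye(n)     # generic invertible A
AstarInv = np.linalg.inv(A.T)
T = np.block([[A, Z], [Z, AstarInv]])
MTMt = M @ T @ M.T
expected = 0.5 * np.block([[A + AstarInv, -A + AstarInv],
                           [-A + AstarInv, A + AstarInv]])
assert np.allclose(MTMt, expected)
```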


Update 1: Thanks to jjcale for pointing out a fatal flaw. Indeed, $SO(n,n)$ has two connected components, and it looks suspiciously like my $T$ below is in the wrong one. I don't really know what Wikipedia means by "preserving/reversing orientation," but certainly $T=\textrm{diag}(-1,1,-1,1)$ is in the wrong component, and my $T$ below is probably in that same component.

It's hard to be sure about anything after so many mistakes, but there seems to be some circumstantial evidence that indeed $\det (1+T)$ could be $\ge 0$ on $SO^+(n,n)$. For example, $T_0=-1$ looks like a reasonable starting point that, evolved a little along a suitable flow, should produce a counterexample if there is one, but this isn't working.

Update 2: Thanks also to GH from MO for moral support. So I'll leave it up for now, as a monument to my ignorance (plus who wouldn't like to keep the reputation). I really shouldn't dabble in areas I don't understand. (But amazing what one can learn from an innocuous looking question.)


I believe the OP's conjecture is false. This is going to be a bit light on details. Essentially, I'll elaborate on Terry's and the OP's comments above.

The Lie (matrix) algebra generated by the matrices $A=\left( \begin{smallmatrix} 0 & B\\ B^t & 0\end{smallmatrix}\right)$ is (certainly contained in, but I believe equal to) $$ g=\left\{ M=\begin{pmatrix} A & B \\ B^t & D \end{pmatrix}: A=-A^t, D=-D^t \right\} . $$ As observed by the OP, this can equivalently be described as all $M$ with $MI=-IM^t$, where $I=\textrm{diag}(I_n,-I_n)$. Since this is the Lie algebra of $O(n,n)$, defined as all matrices $T$ with $$ TIT^t=I , \quad\quad\quad\quad (1) $$ I believe that this means we can make products of matrix exponentials approach any matrix in $O(n,n)$ that is in the connected component of the identity (but I haven't thought through this step very carefully).
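As a sketch (using a simple truncated Taylor series for the matrix exponential, which is adequate for these small matrices), one can check that such generators satisfy $MI=-IM^t$ and that their exponentials satisfy $TIT^t=I$:

```python
import numpy as np

def expm(X, terms=40):
    # truncated Taylor series; fine for the small matrices used here
    E, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    return E

n = 2
rng = np.random.default_rng(2)
B = 0.5 * rng.standard_normal((n, n))
Z = np.zeros((n, n))
A = np.block([[Z, B], [B.T, Z]])
I = np.diag([1.0] * n + [-1.0] * n)   # I = diag(I_n, -I_n)

assert np.allclose(A @ I, -I @ A.T)   # Lie algebra condition M I = -I M^t
T = expm(A)
assert np.allclose(T @ I @ T.T, I)    # exponential satisfies T I T^t = I
```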

Now it's easy to find counterexamples $T\in O(n,n)$ to the claim that $\det (1+T)>0$. For example, for $n=1$, we could take $T=\left( \begin{smallmatrix} -2 & \sqrt{3} \\ \sqrt{3} & -2 \end{smallmatrix}\right)$. It is easily checked that $T\in O(1,1)$ and $\det (1+T)=-2$. Obviously, this is not a counterexample to the OP's conjecture, which is trivially true for $n=1$ (since any two $A_j$'s commute). We cannot reach this $T$: written out, (1) for $n=1$ demands in particular that $T_{11}^2=1+T_{12}^2\ge 1$, so $T_{11}$ cannot pass continuously from $1$ (its value at the identity) to negative values.
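A quick numerical check of this $T$ (a sketch):

```python
import numpy as np

T = np.array([[-2.0, np.sqrt(3)], [np.sqrt(3), -2.0]])
I = np.diag([1.0, -1.0])

assert np.allclose(T @ I @ T.T, I)                     # T is in O(1,1)
assert np.isclose(np.linalg.det(np.eye(2) + T), -2.0)  # det(1+T) = -2
# T_11 = -2 <= -1, so T lies in the component where T_11 <= -1,
# not in the identity component (where T_11 >= 1).
```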

However, as usual, these obstructions disappear in higher dimensions. Note that we have the full Lie algebra $so(n)$ available for the diagonal blocks. A counterexample for $n=2$ is given by the matrix $$ T = \begin{pmatrix} -\sqrt{2} & 0 & 1 & 0 \\ 0 & 1 & 0 & 0\\ 1 & 0 & -\sqrt{2} & 0\\ 0 & 0& 0& 1\end{pmatrix} . $$ For this $T$, we have $T\in O(2,2)$ and $\det (1+T)=8(1-\sqrt{2})<0$.
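And a check of the $n=2$ counterexample (a sketch):

```python
import numpy as np

s = np.sqrt(2)
T = np.array([[-s, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, -s, 0],
              [0, 0, 0, 1]])
I = np.diag([1.0, 1.0, -1.0, -1.0])

assert np.allclose(T @ I @ T.T, I)   # T is in O(2,2)
d = np.linalg.det(np.eye(4) + T)
assert np.isclose(d, 8 * (1 - s))    # det(1+T) = 8(1 - sqrt 2) < 0
assert d < 0
```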

Finally, let me point out that this only shows that the determinant cannot always be positive for arbitrarily large $N$; it is still conceivable that the claim is true for certain small values of $N$.


The set of matrices of the form $e^{A_1} \dots e^{A_n}$ with $A_1, \dots , A_n$ of this form is a group: it is clearly closed under multiplication, and if $A$ is of this form then so is $-A$, so it is closed under inverses. The closure of this set is still a group, and being closed it is a Lie subgroup of $GL_{2n}(\mathbb R)$.
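Consistent with this closure claim, a product of several such exponentials, and its inverse, still satisfies $TIT^t=I$ with $I=\mathrm{diag}(I_n,-I_n)$ (a sketch, using a simple truncated Taylor series for the matrix exponential):

```python
import numpy as np

def expm(X, terms=40):
    # truncated Taylor series; fine for the small matrices used here
    E, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    return E

n = 2
rng = np.random.default_rng(4)
Z = np.zeros((n, n))
I = np.diag([1.0] * n + [-1.0] * n)

def gen():
    B = 0.5 * rng.standard_normal((n, n))
    return np.block([[Z, B], [B.T, Z]])

T = expm(gen()) @ expm(gen()) @ expm(gen())
assert np.allclose(T @ I @ T.T, I)           # product stays in O(n,n)
Tinv = np.linalg.inv(T)
assert np.allclose(Tinv @ I @ Tinv.T, I)     # so does the inverse
```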

So if we compute its Lie algebra, and find the associated connected Lie subgroup, then all elements in the Lie group will be limits of these products.

The Lie algebra certainly contains all matrices of the form $A_i$, and is closed under the commutator bracket. Observe that the commutator is:

$$\left[ \biggl(\begin{matrix} 0 & B_1 \\ B_1^T & 0 \end{matrix} \biggr), \biggl(\begin{matrix} 0 & B_2 \\ B_2^T & 0 \end{matrix} \biggr) \right] = \biggl(\begin{matrix} 0 & B_1 \\ B_1^T & 0 \end{matrix} \biggr)\biggl(\begin{matrix} 0 & B_2 \\ B_2^T & 0 \end{matrix} \biggr) - \biggl(\begin{matrix} 0 & B_2 \\ B_2^T & 0 \end{matrix} \biggr) \biggl(\begin{matrix} 0 & B_1 \\ B_1^T & 0 \end{matrix} \biggr) $$

$$= \biggl(\begin{matrix} B_1 B_2^T & 0\\ 0 & B_1^T B_2 \end{matrix} \biggr) - \biggl(\begin{matrix} B_2 B_1^T & 0\\ 0 & B_2^T B_1 \end{matrix} \biggr) = \biggl(\begin{matrix} B_1 B_2^T - B_2 B_1^T & 0\\ 0 & B_1^T B_2 - B_2^T B_1 \end{matrix} \biggr) $$

Observe that $B_1 B_2^T - B_2 B_1^T = M - M^T$ where $M= B_1 B_2^T$ is arbitrary, so we can get any skew-symmetric matrix. Moreover, we can get any rank-$1$ trace-$0$ matrix as $B_1 B_2^T$ while keeping $B_2^T B_1 = 0$ (and then $B_1^T B_2 = (B_2^T B_1)^T = 0$ too), so that the bottom-right block vanishes. So by summing we get arbitrary skew-symmetric matrices in the top-left block, and independently in the bottom-right, and together with the $A_i$ themselves we get every element of $so(n,n)$. (There may be easier ways to show this.)
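The block form of the commutator can be confirmed numerically (a sketch):

```python
import numpy as np

n = 3
rng = np.random.default_rng(3)
Z = np.zeros((n, n))

def off(B):
    # generator of the form [[0, B], [B^T, 0]]
    return np.block([[Z, B], [B.T, Z]])

B1, B2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = off(B1) @ off(B2) - off(B2) @ off(B1)

expected = np.block([[B1 @ B2.T - B2 @ B1.T, Z],
                     [Z, B1.T @ B2 - B2.T @ B1]])
assert np.allclose(C, expected)
# both diagonal blocks are skew-symmetric
assert np.allclose(C[:n, :n], -C[:n, :n].T)
assert np.allclose(C[n:, n:], -C[n:, n:].T)
```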

This shows that every matrix in the identity component $SO^{+}(n,n)$ can be reached as a limit of such products.