An $n\times n$ matrix that has exactly one $1$ and one $-1$ in each row and each column, and $0$ elsewhere

Call your matrix $A$. Replace every $-1$ in $A^\top$ by $0$; the result is a permutation matrix $P$, and all diagonal entries of $B=PA$ are equal to $1$.

Define a directed graph $G$ with $n$ nodes $1,2,\ldots,n$, with a directed edge from node $i$ to node $j$ if and only if $b_{ij}=-1$. Since each row and each column of $B$ contains exactly one $-1$, every node of $G$ has in-degree and out-degree $1$, so $G$ can be partitioned into some $m$ disjoint cycles of lengths $l_1,l_2,\ldots,l_m$ respectively. That is, there exists a permutation $\sigma\in S_n$ such that $G$ consists of the cycles
$$\begin{aligned}
&\sigma(1)\to\sigma(2)\to\cdots\to\sigma(l_1)\to\sigma(1),\\
&\sigma(l_1+1)\to\sigma(l_1+2)\to\cdots\to\sigma(l_1+l_2)\to\sigma(l_1+1),\\
&\sigma\left(\sum_{k=1}^2l_k+1\right)\to\sigma\left(\sum_{k=1}^2l_k+2\right)\to\cdots\to\sigma\left(\sum_{k=1}^2l_k+l_3\right)\to\sigma\left(\sum_{k=1}^2l_k+1\right),\\
&\cdots\\
&\sigma\left(\sum_{k=1}^{m-1}l_k+1\right)\to\sigma\left(\sum_{k=1}^{m-1}l_k+2\right)\to\cdots\to\sigma\left(\sum_{k=1}^{m-1}l_k+l_m\right)\to\sigma\left(\sum_{k=1}^{m-1}l_k+1\right).
\end{aligned}$$
It follows that if we define a permutation matrix $Q$ such that $Q_{i,\sigma(i)}=1$ for each $i$, then $D=QBQ^\top=C_1\oplus C_2\oplus\cdots\oplus C_m$, where each $C_i$ is a circulant matrix of the following form:
$$
C_i=\pmatrix{1&-1\\ &1&-1\\ &&\ddots&\ddots\\ &&&1&-1\\ -1&&&&1}.
$$
If $C_i$ has $n_i$ rows (so $n_i=l_i$), flip $I_{n_i-1}$ from left to right to obtain an $(n_i-1)\times(n_i-1)$ matrix $S_i$. Then
$$
\pmatrix{1\\ &S_i}C_i\pmatrix{0&1\\ I_{n_i-1}&0}\pmatrix{1\\ &S_i}=-C_i.
$$
It follows that there exist two permutation matrices $R_1$ and $R_2$ such that $R_1DR_2=-D$. Thus
$$
R_1QPAQ^\top R_2
=R_1QBQ^\top R_2
=R_1DR_2
=-D
=-QBQ^\top
=-QPAQ^\top,
$$
i.e.
$$
(P^\top Q^\top R_1QP)A(Q^\top R_2Q)=-A.\tag{1}
$$


Illustrative example. Consider the example in Michael Hoppe's answer: $$ A=\begin{pmatrix} -1 & 0 & 1 & 0\\ 0 & -1 & 0 & 1\\ 0 & 1 & -1 & 0\\ 1 & 0 & 0 & -1 \end{pmatrix}. $$ Note that $$ P=\begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 1&0&0&0\\ 0&1&0&0\end{pmatrix} \Rightarrow B=PA=\pmatrix{1&0&0&-1\\ 0&1&-1&0\\ -1&0&1&0\\ 0&-1&0&1}. $$ The graph $G$ is a single cycle $1\to4\to2\to3\to1$. Let $\sigma(1)=1,\sigma(2)=4,\sigma(3)=2$ and $\sigma(4)=3$. Then $$ Q=\pmatrix{1&0&0&0\\ 0&0&0&1\\ 0&1&0&0\\ 0&0&1&0} \Rightarrow QBQ^\top=D=\pmatrix{1&-1\\ &1&-1\\ &&1&-1\\ -1&&&1}. $$ Finally, $$ \pmatrix{1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0} D \pmatrix{0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0} \pmatrix{1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0}=-D. $$ Thus $(1)$ gives $$ \pmatrix{0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1} A \pmatrix{0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0}=-A. $$
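For readers who want to double-check the arithmetic, every step of this example can be verified numerically. Here is a NumPy sketch (the variable names `M1`, `M2`, `left`, `right` are mine, not from the answer):

```python
import numpy as np

# The matrices from the worked example above.
A = np.array([[-1, 0, 1, 0],
              [ 0,-1, 0, 1],
              [ 0, 1,-1, 0],
              [ 1, 0, 0,-1]])

# P: transpose A and replace every -1 by 0.
P = np.where(A.T == 1, 1, 0)
B = P @ A
assert np.array_equal(np.diag(B), np.ones(4, dtype=int))  # diagonal of B is all 1s

# Q encodes sigma = (1, 4, 2, 3) (0-indexed below): Q[i, sigma(i)] = 1.
sigma = np.array([0, 3, 1, 2])
Q = np.zeros((4, 4), dtype=int)
Q[np.arange(4), sigma] = 1
D = Q @ B @ Q.T
assert np.array_equal(D, np.array([[ 1,-1, 0, 0],
                                   [ 0, 1,-1, 0],
                                   [ 0, 0, 1,-1],
                                   [-1, 0, 0, 1]]))

# The two permutations that negate D, as in the penultimate display.
M1 = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]])  # diag(1, S) with S = flipped I_3
M2 = np.array([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,1,0]])  # cyclic shift
assert np.array_equal(M1 @ D @ M2 @ M1, -D)

# Equation (1) with R1 = M1 and R2 = M2 @ M1.
R1, R2 = M1, M2 @ M1
left  = P.T @ Q.T @ R1 @ Q @ P
right = Q.T @ R2 @ Q
assert np.array_equal(left @ A @ right, -A)
```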


Not a solution, but a direction in which to go

Your idea of "difference of permutations" is a nice one for describing these "good" matrices, but as you observe, it doesn't, in its current form, seem to be leading you anywhere.

You've said that not every difference of permutations is "good", and that's true. What you want is a property that characterizes the good ones, and you've actually identified it: the two permutations never have a $1$ in the same position.

Now if you have a difference of permutations that's "good" and you left-multiply by a permutation, you STILL have a difference of permutations, i.e., $P_1(P-Q) = (P_1P) - (P_1 Q)$. The only question is: do the matrices $P_1P$ and $P_1Q$ still have the "no $1$s in the same position" property?

(You then have to do the same thing for right-multiplying, but that'll be easy if the left-multiply thing works out.)

So here's a lemma to prove:

If $A, B, P$ are permutations, and $A$ and $B$ have no $1$s in the corresponding positions, then $PA$ and $PB$ have no $1$s in corresponding positions either.

That should get you going.
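If you want to convince yourself of the lemma empirically before proving it, here is a quick randomized check in Python/NumPy (a sketch; the helper `pmat` is mine). The underlying reason it passes: left-multiplying by $P$ permutes the rows of $A$ and $B$ in the same way, so it can neither create nor destroy a shared $1$-position.

```python
import numpy as np

rng = np.random.default_rng(0)

def pmat(p):
    """Permutation matrix with a 1 at (i, p[i]) for each i."""
    n = len(p)
    M = np.zeros((n, n), dtype=int)
    M[np.arange(n), p] = 1
    return M

n, trials = 6, 200
for _ in range(trials):
    # Sample permutations a, b whose matrices share no 1-position
    # (equivalently: a[i] != b[i] for every i).
    while True:
        a, b = rng.permutation(n), rng.permutation(n)
        if np.all(a != b):
            break
    p = rng.permutation(n)
    A, B, P = pmat(a), pmat(b), pmat(p)
    # PA and PB share no 1-position either.
    assert not np.any((P @ A == 1) & (P @ B == 1))
```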


As said in the question, it is enough to work with $A=I-R$, where $R$ is a permutation matrix with no $1$ on the diagonal. Suppose that $R$ is the matrix of the permutation $p$. We will show below that every permutation is a product of two involutions, that is, we can write $p=fg$ where $f^2=g^2=\mathrm{id}$. (Here the product $fg$ maps $i$ to $f(g(i))$ for all $i$.) If $F,G$ are the matrices corresponding to $f,g$, then we have $R=FG$ and $F^2=G^2=I$, and the statement follows from the fact that $$F(I-R)G=FG-F^2G^2=R-I=-(I-R).$$

It remains to show that every permutation $p$ is a product of two involutions. We can write $p=c_1c_2\dots c_k$ as a product of disjoint cycles $c_j$ (see here), so it suffices to write each cycle as a product of two involutions. Moreover, it suffices to do this for the cycle corresponding to the mapping $c:i\mapsto i+1 \mod m$; here we can write $c=fg$ where $f:i\mapsto m+1-i\mod m$ and $g:i\mapsto m-i \mod m$. More explicitly, a cycle $c=(a_1\,a_2\,\dots\,a_m)$ is the product $c=fg$ of the involutions $$f=\begin{pmatrix}a_1&a_2&\dots &a_m\\a_m&a_{m-1}&\dots &a_1\end{pmatrix} \mbox{ and } g=\begin{pmatrix}a_1&a_2&\dots&a_{m-1} &a_m\\a_{m-1}&a_{m-2}&\dots &a_1&a_m\end{pmatrix}.$$ The factorisations of different cycles in the product $p=c_1c_2\dots c_k$ do not interfere with each other, as they act on disjoint sets.
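As a sanity check, this factorisation is easy to verify numerically. Below is a small Python/NumPy sketch (the helper names `pmat` and `two_involutions` are mine): it factors a sample derangement into two involutions cycle by cycle, exactly as above, and checks $F(I-R)G=-(I-R)$.

```python
import numpy as np

def pmat(p):
    """Permutation matrix M with M[p[j], j] = 1, so that FG represents f∘g."""
    n = len(p)
    M = np.zeros((n, n), dtype=int)
    M[p, np.arange(n)] = 1
    return M

def two_involutions(p):
    """Factor p = f∘g into two involutions by reversing each cycle, as in the answer."""
    n = len(p)
    f, g = np.empty(n, dtype=int), np.empty(n, dtype=int)
    seen = np.zeros(n, dtype=bool)
    for start in range(n):
        if seen[start]:
            continue
        cyc = [start]                      # collect the cycle through `start`
        seen[start] = True
        while p[cyc[-1]] != start:
            cyc.append(p[cyc[-1]])
            seen[cyc[-1]] = True
        m = len(cyc)
        for k in range(m):                 # f: a_{k+1} -> a_{m-k};  g: a_{k+1} -> a_{m-1-k}, g(a_m) = a_m
            f[cyc[k]] = cyc[m - 1 - k]
            g[cyc[k]] = cyc[(m - 2 - k) % m]
    return f, g

# Sample derangement: a 5-cycle and a transposition (no fixed points, so I - R qualifies).
p = np.array([1, 2, 3, 4, 0, 6, 5])
f, g = two_involutions(p)
assert np.array_equal(f[f], np.arange(7)) and np.array_equal(g[g], np.arange(7))  # involutions
assert np.array_equal(f[g], p)             # p = f∘g, since f[g][i] = f(g(i))

F, G, R = pmat(f), pmat(g), pmat(p)
I = np.eye(7, dtype=int)
assert np.array_equal(F @ G, R)
assert np.array_equal(F @ (I - R) @ G, -(I - R))
```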

This completes the proof.