Characterizing the dual cone of the squares of skew-symmetric matrices

Claim: if $D$ satisfies the criterion, then $D$ has at most one negative eigenvalue, and the absolute value of that negative eigenvalue is at most the next-smallest eigenvalue.

Proof: Let $E_{ij}$ denote the matrix with a $1$ in the $i,j$ entry and zeros elsewhere.

It suffices to show that if the $i$th and $j$th diagonal entries of $D$ have a negative sum, then $D$ cannot satisfy the criterion; since $D$ is diagonal, its eigenvalues are its diagonal entries, so the claimed condition fails exactly when some two diagonal entries have a negative sum. To that end, it suffices to note that $B = E_{ij} - E_{ji}$ is skew-symmetric with $B^2 = -(E_{ii} + E_{jj})$, so that $\langle D, B^2\rangle = -(d_i + d_j) > 0$. $\square$
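For concreteness, here is a quick numerical check of that computation (a numpy sketch; the dimension, the index pair, and the example diagonal are arbitrary choices, not part of the argument):

```python
import numpy as np

n, i, j = 4, 0, 2                      # hypothetical dimension and index pair
B = np.zeros((n, n))
B[i, j], B[j, i] = 1.0, -1.0           # B = E_ij - E_ji is skew-symmetric

E = np.zeros((n, n))
E[i, i] = E[j, j] = 1.0                # E_ii + E_jj

assert np.allclose(B @ B, -E)          # B^2 = -(E_ii + E_jj)

d = np.array([-3.0, 1.0, 1.0, 5.0])    # example diagonal with d_0 + d_2 = -2 < 0
D = np.diag(d)
val = np.trace(D.T @ (B @ B))          # <D, B^2> = -(d_i + d_j)
assert np.isclose(val, -(d[i] + d[j])) and val > 0  # positive, so D fails the criterion
```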

I am not sure whether this condition is equivalent to your inequality.


We can also prove that the condition above is sufficient, as follows. Suppose that $D$ has at most one negative eigenvalue, and that the absolute value of the negative eigenvalue (if any) is at most the next-smallest eigenvalue.

We first note that every matrix of the form $M = B^2$ for a skew-symmetric $B$ can be written in the form $$ M = -[a_1 \, (x_1x_1^T + y_1y_1^T) + \cdots + a_k \, (x_kx_k^T + y_ky_k^T)], $$ where the coefficients $a_i$ are non-negative and, for each $i$, $x_i$ and $y_i$ are orthonormal (see the decomposition of $B^2$ below). So, by linearity and the non-negativity of the $a_i$, it suffices to show that $\langle D,M\rangle \leq 0$ where $M = -(xx^T + yy^T)$ for some orthonormal pair $x,y$.

Now, let $v_1,\dots,v_n$ be an orthonormal basis for $\Bbb R^n$ such that $x = v_1$ and $y = v_2$. Let $V$ be the orthogonal matrix whose columns are $v_1,\dots,v_n$, and let $A = V^TDV$; note that $A$ is symmetric with the same eigenvalues as $D$. We now note that $$ \langle D, xx^T + yy^T \rangle = x^TDx + y^TDy = a_{11} + a_{22}. $$ From here, it suffices to apply the $(\implies)$ direction of the Schur-Horn theorem (the diagonal of a symmetric matrix is majorized by its eigenvalues) to $-A$ in order to conclude that $a_{11} + a_{22} \geq \lambda_{n}(D) + \lambda_{n-1}(D)$. The right-hand side is non-negative by hypothesis, so $\langle D, M\rangle = -(a_{11} + a_{22}) \leq 0$, as desired. $\square$
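For what it's worth, the Schur-Horn step is easy to check numerically (a sketch; the particular $D$, the seed, the number of trials, and the tolerance are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# a diagonal D satisfying the hypothesis: one negative entry, |-1.0| <= 1.5
d = np.array([-1.0, 1.5, 2.0, 3.0, 4.0])
D = np.diag(d)

for _ in range(1000):
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal V
    A = V.T @ D @ V                                   # same eigenvalues as D
    lam = np.linalg.eigvalsh(A)                       # eigenvalues in ascending order
    # any two diagonal entries of A dominate the sum of the two smallest eigenvalues
    assert A[0, 0] + A[1, 1] >= lam[0] + lam[1] - 1e-9
    assert lam[0] + lam[1] >= 0                       # the hypothesis on D
```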


About the squares of skew-symmetric matrices: by the spectral theorem, there exists a unitary $U$ with columns $u_1,u_2,\dots,u_n$ such that $$ B = U \pmatrix{i \lambda_1 \\ & - i\lambda_1 \\ && \ddots \\ &&& i \lambda_k \\ &&&& - i \lambda_k \\ &&&&& 0 } U^* \\ = i\lambda_1 \ [u_1u_1^* - u_2 u_2^*] + \cdots + i\lambda_{k}\ [u_{2k-1}u_{2k-1}^* - u_{2k}u_{2k}^*], $$ where each $\lambda_i$ is positive. Thus, squaring $B$ yields $$ B^2 = -(\lambda_1^2 \ [u_1u_1^* + u_2 u_2^*] + \cdots + \lambda_{k}^2\ [u_{2k-1}u_{2k-1}^* + u_{2k}u_{2k}^*]). $$ We could equivalently have used the real canonical form (with a real, orthogonal $U$) $$ B = U \pmatrix{0 & -\lambda_1 \\ \lambda_1 & 0 \\ && \ddots \\ &&& 0 & -\lambda_k \\ &&& \lambda_k & 0 \\ &&&&& 0 } U^T \\ = \lambda_1 \ [u_2u_1^T - u_1 u_2^T] + \cdots + \lambda_{k}\ [u_{2k}u_{2k-1}^T - u_{2k-1}u_{2k}^T], $$ which yields the same expression for $B^2$, now with real vectors $u_i$ and transposes in place of conjugate transposes.
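As a numerical illustration (a sketch; the dimension, seed, and tolerances are arbitrary): for a random skew-symmetric $B$, the matrix $B^2$ is symmetric negative semidefinite, and its nonzero eigenvalues $-\lambda_i^2$ occur in pairs, matching the rank-2 blocks above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n))
B = X - X.T                            # a random skew-symmetric matrix

M = B @ B                              # B^2
assert np.allclose(M, M.T)             # B^2 is symmetric
w = np.linalg.eigvalsh(M)              # eigenvalues in ascending order
assert np.all(w <= 1e-9)               # negative semidefinite: eigenvalues -lambda_i^2 and 0

nonzero = w[np.abs(w) > 1e-9]          # generically all nonzero here since n is even
assert np.allclose(nonzero[0::2], nonzero[1::2])  # the -lambda_i^2 occur in equal pairs
```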


Here is a slightly different proof of the sufficiency of the condition $d_i+d_j\geq 0$ for all $i\neq j,$ which is the same as the condition in Omnomnomnom's answer.

Note that

\begin{align*} (B^2)_{ii} &=\sum_{j} b_{i,j}b_{j,i}\\ &=-\sum_{j:i\neq j} b_{i,j}^2, \end{align*}

using $b_{j,i} = -b_{i,j}$ and $b_{i,i} = 0$.

So

$$\langle D, B^2\rangle = \sum_{i,j:i\neq j}-d_i b_{i,j}^2.\tag{1}$$ Swapping the roles of $i$ and $j$ (which leaves $b_{i,j}^2$ unchanged), $$\langle D, B^2\rangle = \sum_{i,j:i\neq j}-d_j b_{i,j}^2.\tag{2}$$ Averaging (1) and (2) gives $$\langle D, B^2\rangle = \sum_{i,j:i\neq j}-\tfrac12(d_i+d_j) b_{i,j}^2\leq 0,$$ since each $d_i+d_j\geq 0$. $\square$
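The averaged identity is also easy to verify numerically (a sketch with a random diagonal and a random skew-symmetric matrix; the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
d = rng.standard_normal(n)             # arbitrary diagonal entries of D
X = rng.standard_normal((n, n))
B = X - X.T                            # random skew-symmetric B

lhs = np.trace(np.diag(d) @ (B @ B))   # <D, B^2>
S = np.add.outer(d, d)                 # S[i, j] = d_i + d_j
np.fill_diagonal(S, 0.0)               # restrict the sum to i != j
rhs = -0.5 * np.sum(S * B**2)          # -(1/2) sum_{i != j} (d_i + d_j) b_ij^2
assert np.isclose(lhs, rhs)
```

When $d_i + d_j \geq 0$ for all $i \neq j$, every term in the right-hand sum is $\leq 0$, which is exactly the final inequality.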