Complex symmetric matrices over the field of Laurent series

The answer is yes -- this follows from the general theory of reduction of hermitian matrices.

A matrix $A$ over $K = \mathbb{C}((z))$ satisfying ${}^tA(z) = A(-z)$ is a hermitian matrix in the terminology of Bourbaki, Algèbre, Chap. 9, $\S$ 3, n°1. The reduction theory ($\S$ 6, n°1, Cor. 2 in loc. cit.) tells you that there exists an invertible matrix $P \in \mathrm{GL}_r(K)$ such that $A(z)= {}^tP(z) D(z) P(-z)$ with $D(z)=\operatorname{diag}(f_1(z), \ldots , f_m(z),0, \ldots, 0)$, where the $f_i$ are elements of $K$ such that $f_i(z)=f_i(-z)$ and $m$ is the rank of $A$.

It remains to check that any $f \in K$ fixed under the involution can be written $f(z)=g(z) g(-z)$ for some $g \in K$. Note that $f \in \mathbb{C}((z^2))$, and we may assume without loss of generality that $f(z)=1+\sum_{n\geq 1} a_n z^{2n}$: a general such $f$ is $c\,z^{2k}$ times a series of this form, and $c\,z^{2k}=h(z)h(-z)$ for $h(z)=\sqrt{(-1)^k c}\,z^k$. Using the Taylor expansion of the square root, we get $f(z) =g(z^2)^2$ for some $g \in K$, and thus $f(z)=g(z^2)g((-z)^2)$ as desired.
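This square-root construction can be carried out on truncated coefficient arrays; here is a minimal sketch in NumPy (the function names and the truncation order `N` are my own illustrative choices):

```python
import numpy as np

N = 16  # truncation order: every series is kept modulo z^N

def sqrt_series(f):
    """Square root of a truncated power series with f[0] = 1, via the
    recurrence s[n] = (f[n] - sum_{k=1}^{n-1} s[k] s[n-k]) / 2."""
    s = np.zeros(len(f))
    s[0] = 1.0
    for n in range(1, len(f)):
        s[n] = (f[n] - np.dot(s[1:n], s[n - 1:0:-1])) / 2.0
    return s

def even_factor(f):
    """Given an even series f with f[0] = 1, return h with
    h(z) h(-z) = f(z).  Write f(z) = F(z^2), take G = sqrt(F), and set
    h(z) = G(z^2); h is even, so h(z) h(-z) = h(z)^2 = f(z)."""
    h = np.zeros(len(f))
    h[0::2] = sqrt_series(f[0::2])
    return h

# Example: f(z) = 1 + z^2 + 3 z^4, which is fixed under z -> -z.
f = np.zeros(N)
f[0], f[2], f[4] = 1.0, 1.0, 3.0
h = even_factor(f)
print(np.allclose(np.convolve(h, h)[:N], f))  # True: h(z) h(-z) = f mod z^N
```

Since `h` is even by construction, checking `h(z) h(-z) = f(z)` reduces to checking `h**2 = f` modulo the truncation.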

The proof of the reduction result is entirely similar to the proof for classical hermitian matrices. Letting $\phi$ be the hermitian form associated to $A$, the starting point is to find a vector $x \in K^r$ such that $\phi(x,x) \neq 0$, and then to proceed by induction on $r$. To find $x$, choose vectors $x_0,y_0 \in K^r$ such that $\phi(x_0,y_0) \neq 0$; then at least one of the vectors $x_0,y_0,x_0+y_0,x_0+zy_0$ will work (writing $c=\phi(x_0,y_0)$, if $\phi(x_0,x_0)=\phi(y_0,y_0)=0$ then, up to the choice of convention, $\phi(x_0+y_0,x_0+y_0)=c(z)+c(-z)$ and $\phi(x_0+zy_0,x_0+zy_0)=z(c(-z)-c(z))$, and these cannot both vanish unless $c=0$). This provides an algorithm (but I don't claim it is optimal).
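This candidate search can be sketched over truncated series; the convention $\phi(x,y)={}^tx(z)\,A(z)\,y(-z)$ and all names below are my own illustrative choices (the opposite convention works symmetrically):

```python
import numpy as np

N = 8  # truncation order: series kept modulo z^N

def mul(a, b): return np.convolve(a, b)[:N]

def star(a):
    """Involution z -> -z on a coefficient array (negate odd terms)."""
    return a * (-1.0) ** np.arange(N)

def phi(x, y, A):
    """phi(x, y) = tx(z) A(z) y(-z): one convention for the hermitian
    form attached to A."""
    acc = np.zeros(N)
    for i in range(len(x)):
        for j in range(len(y)):
            acc = acc + mul(mul(x[i], A[i][j]), star(y[j]))
    return acc

def series(*c):
    a = np.zeros(N); a[:len(c)] = c
    return a

z, one, zero = series(0.0, 1.0), series(1.0), series(0.0)

# A = [[0, z], [-z, 0]] satisfies tA(z) = A(-z), yet phi vanishes on
# e1, e2, and e1 + e2 alike; only the fourth candidate x0 + z*y0 works.
A = [[zero, z], [-z, zero]]
x0, y0 = [one, zero], [zero, one]
candidates = [x0, y0,
              [x0[0] + y0[0], x0[1] + y0[1]],
              [x0[0] + mul(z, y0[0]), x0[1] + mul(z, y0[1])]]
x = next(v for v in candidates if np.any(np.abs(phi(v, v, A)) > 1e-12))
print(phi(x, x, A)[2])  # -2.0, i.e. phi(x, x) = -2 z^2
```

The example shows why all four candidates are genuinely needed: for this $A$ the first three all give $\phi(v,v)=0$.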


If that factorization is possible for scalar series ($1\times 1$ matrices), then you can try to use the same algorithm that is used to compute a Cholesky factorization (which is an LU factorization/Gaussian elimination with a different diagonal scaling), replacing:

  • all operations in $\mathbb{C}$ with the corresponding operations in $\mathbb{C}((z))$
  • the transpose-conjugate operation $M^*$ with $M(z)^\star := {^t M}(-z)$
  • the square root operation with taking the factor $B(z)$ from the factorization $A(z) = B(z) \, B(z)^\star = B(z) B(-z)$ for scalar power series (which is essentially the same as your relation).

In this way, the algorithm should produce a factorization of the form $A(z) = L(z) \, L(z)^\star$, with $L$ lower triangular, which is in the format that you require.
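The adapted elimination can be sketched for a $2\times 2$ matrix over truncated series. This is a minimal illustration, assuming the pivot $b$ has a nonzero (here positive real) constant term; the helper names are my own:

```python
import numpy as np

N = 16  # truncation order: all series are taken modulo z^N

def mul(a, b): return np.convolve(a, b)[:N]

def star(a):
    """Scalar involution z -> -z (negate odd coefficients)."""
    return a * (-1.0) ** np.arange(N)

def inv(a):
    """Multiplicative inverse of a series with a[0] != 0."""
    s = np.zeros(N); s[0] = 1.0 / a[0]
    for n in range(1, N):
        s[n] = -np.dot(a[1:n + 1], s[n - 1::-1]) / a[0]
    return s

def sqrt_series(f):
    """Square root of a series with f[0] = 1 (coefficient recurrence)."""
    s = np.zeros(len(f)); s[0] = 1.0
    for n in range(1, len(f)):
        s[n] = (f[n] - np.dot(s[1:n], s[n - 1:0:-1])) / 2.0
    return s

def scalar_factor(b):
    """Factor an even series b with b[0] > 0 as b = e * star(e), taking
    e even so e * star(e) = e^2; this replaces the square root step."""
    e = np.zeros(N)
    e[0::2] = sqrt_series(b[0::2] / b[0])
    return e * np.sqrt(b[0])

def chol2(b, c, d):
    """One Cholesky-style step for A = [[b, c], [star(c), d]] with
    star(b) = b, star(d) = d, b(0) != 0: returns lower-triangular
    entries l11, l21, l22 with A = L L^star."""
    l11 = scalar_factor(b)
    l21 = mul(star(c), inv(star(l11)))
    schur = d - mul(l21, star(l21))   # Schur complement of the pivot
    l22 = scalar_factor(schur)
    return l11, l21, l22

# Example: b = 1 + z^2, c = z, d = 2 + z^2 (b and d fixed by star).
b = np.zeros(N); b[0], b[2] = 1.0, 1.0
c = np.zeros(N); c[1] = 1.0
d = np.zeros(N); d[0], d[2] = 2.0, 1.0
l11, l21, l22 = chol2(b, c, d)
print(np.allclose(mul(l21, star(l21)) + mul(l22, star(l22)), d))  # True
```

The positivity assumption on the constant terms is only there to keep the sketch in real arithmetic; over $\mathbb{C}$ one would take a complex square root of the leading coefficient instead.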

There are still some issues to settle with non-invertible entries, though, to get a working proof.

Partition the matrix as $$ A = \begin{bmatrix} B & C\\ C^\star & D \end{bmatrix}, $$ with $B=B^\star \in \mathbb{C}((z))^{1\times 1}$, $D=D^\star \in \mathbb{C}((z))^{(r-1)\times (r-1)}$.

If $B$ is invertible, and we know how to factor $B=EE^\star$ and $D-C^\star B^{-1}C = F F^\star$, then $$ A = \begin{bmatrix} E & 0\\ C^\star (E^\star)^{-1} & I\\ \end{bmatrix} \begin{bmatrix} I & 0\\ 0 & D-C^\star B^{-1}C \end{bmatrix} \begin{bmatrix} E & 0\\ C^\star (E^\star)^{-1} & I\\ \end{bmatrix}^\star = \left(\begin{bmatrix} E & 0\\ C^\star (E^\star)^{-1} & I\\ \end{bmatrix} \begin{bmatrix} I & 0\\ 0 & F \end{bmatrix}\right) \left(\begin{bmatrix} E & 0\\ C^\star (E^\star)^{-1} & I\\ \end{bmatrix} \begin{bmatrix} I & 0\\ 0 & F \end{bmatrix}\right)^\star. $$ However, it is not clear to me at present how to reduce to a case where $B$ is invertible, in general. It looks like it can be done with some fiddling, but it is tricky. If $A(0) \neq 0$, then we can change variables to ensure that its 1,1 entry is nonzero (if ${^t x}A(0)x=0$ for all $x\in\mathbb{C}^r$, then ${^t y}A(0)x=0$ for all $x,y$ by the usual polarization identity trick, and hence $A(0)=0$). If $A = z^{2k}A'$ with $A'(0) \neq 0$, then we can pull out a factor $z^{2k} = z^k \cdot z^k$. But, as @Z.A.Z.Z. notes, it may be the case that $A = z^{2k+1}A'$ with $A'(0)$ nonzero, which is more problematic.
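The constant-coefficient step in the $A(0)\neq 0$ case can be made concrete: it suffices to test the finitely many vectors $e_i$ and $e_i+e_j$, since if all of those were isotropic, polarization for a symmetric matrix would force $2A(0)_{ij}=0$ for all $i,j$. A small sketch (all names are my own, and the completion of $x$ to an invertible $P$ is ad hoc for the $2\times 2$ example):

```python
import numpy as np

def nonisotropic_vector(A0):
    """Given a nonzero complex symmetric matrix A0, return x with
    x^T A0 x != 0.  If every e_i^T A0 e_i and (e_i+e_j)^T A0 (e_i+e_j)
    vanished, polarization for symmetric A0 would give 2*A0[i, j] = 0
    for all i, j, i.e. A0 = 0."""
    r = A0.shape[0]
    E = np.eye(r)
    for i in range(r):
        if abs(E[i] @ A0 @ E[i]) > 1e-12:
            return E[i]
    for i in range(r):
        for j in range(i + 1, r):
            v = E[i] + E[j]
            if abs(v @ A0 @ v) > 1e-12:
                return v
    raise ValueError("A0 must be nonzero")

# Zero diagonal, so no standard basis vector works on its own:
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])
x = nonisotropic_vector(A0)           # here x = e1 + e2, x^T A0 x = 2

# Complete x to an invertible constant P; under the change of variables
# A(z) -> tP A(z) P the new 1,1 entry has value x^T A(0) x != 0 at
# z = 0, hence is a unit of C((z)).
P = np.array([[x[0], 1.0], [x[1], 0.0]])
B0 = P.T @ A0 @ P
print(B0[0, 0])  # 2.0
```

Because $P$ is constant, it is fixed by the involution, so the transformed matrix is still hermitian in the sense above.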