Asymptotic question about time ordered exponentials

This question has a solution presented in this paper, albeit in the jargon and notation of theoretical physics. I will therefore use a somewhat different notation, making the change

$${\bf A}(t)\rightarrow -i{\bf A}(t).$$

Then, I will compute the eigenvalues and eigenvectors of ${\bf A}(t)$ through

$${\bf A}(t)|n;t\rangle=\lambda_n(t)|n;t\rangle.$$

One then gets a series whose leading-order term is

$${\bf B}(r)=\sum_n e^{i\gamma_n}e^{-ir\int_{-1}^1 dt\lambda_n(t)}|n;1\rangle\langle n;-1| \qquad r\rightarrow\infty$$

where $\gamma_n=\int_{-1}^1dt\,\langle n;t|i\partial_t|n;t\rangle$ is the geometric phase. An expansion in inverse powers of $r$ can then be obtained with the matrix

$$\tilde {\bf A}(t)=-\sum_{n,m,n\ne m}e^{i(\gamma_n(t)-\gamma_m(t))}e^{-ir\int_{t_0}^tdt[\lambda_m(t)-\lambda_n(t)]}\langle m;t|i\partial_t|n;t\rangle|m;t_0\rangle\langle n;t_0|$$

where, in this case,

$$\tilde {\bf B}(r)=\prod_{t=-1}^{1}e^{-i\tilde {\bf A}(t)\,dt}$$

(a time-ordered product),

so that

$${\bf B}(r)=\sum_n e^{i\gamma_n}e^{-ir\int_{-1}^1 dt\lambda_n(t)}|n;1\rangle\langle n;-1|\,\tilde {\bf B}(r).$$

This represents a solution of the Schroedinger equation

$$-ir{\bf A}(t)B(r;t,t_0)=\partial_tB(r;t,t_0)$$

in the interval $t\in [-1,1]$ and $r\rightarrow\infty$.
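Numerically, a time-ordered exponential of this kind can be approximated by an ordered product of short-time matrix exponentials over a grid. Here is a minimal sketch (the function name, grid size, and midpoint rule are my own implementation choices, not part of the method above):

```python
import numpy as np
from scipy.linalg import expm

def time_ordered_product(A, r, t0=-1.0, t1=1.0, steps=2000):
    """Approximate the time-ordered exponential B(r; t1, t0) solving
    dB/dt = -i r A(t) B with B(t0) = Id, as an ordered product of
    short-time matrix exponentials (midpoint rule on each slice)."""
    dt = (t1 - t0) / steps
    B = np.eye(A(t0).shape[0], dtype=complex)
    for k in range(steps):
        t = t0 + (k + 0.5) * dt
        B = expm(-1j * r * A(t) * dt) @ B  # later times act on the left
    return B
```

For a commuting family $A(t)=f(t)A_0$ this reduces to the ordinary exponential of $-ir\int f\,dt\,A_0$, which gives an easy consistency check.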

An example:

$$ A(t) = \frac{1}{1+t^2} \begin{pmatrix} 2 & t\\ -t & -2 \end{pmatrix} $$

and one has to solve the problem
$$ \dot U(t)=rA(t)U(t) $$
with $r\gg 1$. We want to apply the technique outlined above. We note that $A(t)$ is not Hermitian and so, solving the eigenvalue problem, we get $\lambda_{\pm}=\pm\frac{\sqrt{4-t^2}}{1+t^2}$ and
$$ v_+=\frac{1}{2}\begin{pmatrix} \sqrt{2+\sqrt{4-t^2}}\\ -\frac{t}{\sqrt{2+\sqrt{4-t^2}}}\end{pmatrix} \qquad v_-=\frac{1}{2}\begin{pmatrix}-\frac{t}{\sqrt{2+\sqrt{4-t^2}}} \\ \sqrt{2+\sqrt{4-t^2}}\end{pmatrix}. $$
But $v_+^Tv_-\ne 0$, so these vectors are not orthogonal. We therefore also need to solve the left eigenvalue problem $u^T(A-\lambda I)=0$, which produces the eigenvectors
$$ u_+=\frac{1}{2}\begin{pmatrix} \sqrt{2+\sqrt{4-t^2}}\\ \frac{t}{\sqrt{2+\sqrt{4-t^2}}}\end{pmatrix} \qquad u_-=\frac{1}{2}\begin{pmatrix} \frac{t}{\sqrt{2+\sqrt{4-t^2}}} \\ \sqrt{2+\sqrt{4-t^2}}\end{pmatrix}. $$
It is easy to see that $u_+^Tv_-=u_-^Tv_+=0$. It is important to note that $\lambda(t)=\lambda(-t)$, $u_+(-t)=v_-(t)$ and $u_-(-t)=v_+(t)$, so these eigenvectors just represent a backward evolution in time.

Now we want to study the time evolution of a generic vector
$$ \phi(t)=\begin{pmatrix}\phi_+(t) \\ \phi_-(t)\end{pmatrix}, $$
and this can be done by setting
$$ \phi(t)=c_+(t)e^{r\int_0^tdt'\frac{\sqrt{4-t'^2}}{1+t'^2}}v_+(t)+ c_-(t)e^{-r\int_0^tdt'\frac{\sqrt{4-t'^2}}{1+t'^2}}v_-(t), $$
which produces the set of equations
$$ \dot c_+=\gamma_+c_++e^{-2r\int_0^tdt'\frac{\sqrt{4-t'^2}}{1+t'^2}}\frac{u_+^T\frac{dv_-}{dt}}{u_+^Tv_+}c_- $$

$$ \dot c_-=\gamma_-c_-+e^{2r\int_0^tdt'\frac{\sqrt{4-t'^2}}{1+t'^2}}\frac{u_-^T\frac{dv_+}{dt}}{u_-^Tv_-}c_+, $$
where we have set $\gamma_+=\frac{u_+^T\frac{dv_+}{dt}}{u_+^Tv_+}$ and $\gamma_-=\frac{u_-^T\frac{dv_-}{dt}}{u_-^Tv_-}$. These equations are interesting because they show how the time evolution is built up in a non-Hermitian case. They also tell us that the two components may evolve quite differently in time: one can be much smaller than the other for $r\gg 1$. We can also understand the form of the higher-order corrections:

$$ c_+(t)=c_+(0)+\int_0^tdt'\,e^{\int_0^{t'}dt''(\gamma_+(t'')-\gamma_-(t''))}e^{-2r\int_0^{t'}dt''\frac{\sqrt{4-t''^2}}{1+t''^2}}\frac{u_+^T\frac{dv_-}{dt''}}{u_+^Tv_+}c_-(0)+\ldots. $$

Using a saddle-point technique, one can see that the correction is exponentially small; in the general case it cannot be claimed to behave like $e^{r}/r^k$.

Now consider the simple case $c_+(0)=1$ and $c_-(0)=0$. The approximate solution is

$$ \phi_+(t)=\frac{1}{2}\sqrt{2+\sqrt{4-t^2}}\,e^{r\int_0^{t}dt'\frac{\sqrt{4-t'^2}}{1+t'^2}} \qquad \phi_-(t)=-\frac{1}{2}\frac{t}{\sqrt{2+\sqrt{4-t^2}}}\,e^{r\int_0^{t}dt'\frac{\sqrt{4-t'^2}}{1+t'^2}} $$

and solving the set of differential equations numerically for $r=50$, we get the following:

[figure: comparison of the numerical and approximate solutions for $r=50$; original image hosted externally, recovered via the Wayback Machine]

The agreement is strikingly good.
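The comparison can be reproduced with a short script. This is a sketch of my own (the integrator and tolerances are arbitrary choices); it tests the ratio $\phi_-(t)/\phi_+(t)$, which the formulas above predict to be $-t/(2+\sqrt{4-t^2})$, a quantity insensitive to the common exponential factor and to overall normalization:

```python
import numpy as np
from scipy.integrate import solve_ivp

r = 50.0

def A(t):
    return np.array([[2.0, t], [-t, -2.0]]) / (1.0 + t * t)

# phi(0) = v_+(0) = (1, 0)^T corresponds to c_+(0) = 1, c_-(0) = 0.
sol = solve_ivp(lambda t, phi: r * (A(t) @ phi), (0.0, 1.0), [1.0, 0.0],
                method="DOP853", rtol=1e-10, atol=1e-10)
phi_p, phi_m = sol.y[:, -1]

ratio_numeric = phi_m / phi_p
# Asymptotic prediction at t = 1: phi_-/phi_+ = -1/(2 + sqrt(3)).
ratio_asym = -1.0 / (2.0 + np.sqrt(3.0))
print(ratio_numeric, ratio_asym)
```

For $r=50$ the numerical ratio at $t=1$ matches $-1/(2+\sqrt 3)\approx-0.268$ up to a discrepancy of the expected $O(1/r)$ size.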


I think I might see what was confusing me. This is really a comment, but it's too long for the comment thread. As my example, let's take $$A(t) = \frac{1}{1+t^2} \begin{pmatrix} 2 & t \\ -t & -2 \end{pmatrix}$$ So we want to solve the differential equation $U'(t) = r A(t) U(t)$, where $U$ is a $2 \times 2$ matrix with initial condition $U(-1) = \mathrm{Id}$.

We can actually compute the eigenvalues of $A(t)$ explicitly: They are $\pm\sqrt{4-t^2}/(1+t^2)$. We compute $\int_{-1}^1 \pm \sqrt{4-t^2}/(1+t^2)\, dt \approx \pm 3.03022$. So your formula, as I understand it, is $$U(1) = e^{3.03022 r} u_1 v_1^T + e^{-3.03022 r} u_2 v_2^T + \cdots$$ where $u_i$ and $v_i$ are the eigenvectors of $A(1)$ and $A(-1)$, respectively.
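The quoted value of the integral is easy to reproduce numerically:

```python
import numpy as np
from scipy.integrate import quad

# Integral of the eigenvalue sqrt(4-t^2)/(1+t^2) over [-1, 1].
val, err = quad(lambda t: np.sqrt(4 - t**2) / (1 + t**2), -1.0, 1.0)
print(round(val, 5))  # → 3.03022
```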

What I think was confusing me is that it is somewhat misleading to call these the leading terms. The later terms in the series look like $e^{3.03022 r} r^{-k} (\mbox{stuff})$, right? So they actually dominate the $e^{-3.03022 r}$ term.


I wish I weren't having so much trouble getting good numerical data; it would probably clear up my confusion a lot. In the meantime, here is why I am worried.

Let $A(t)$, $B(t)$ and $C(t)$ be three $2 \times 2$ matrix-valued functions as above, with $A(1)=B(1)=C(1)$ (and hence the same at $-1$). Let $X(r)$, $Y(r)$ and $Z(r)$ be the parallel transports from $-1$ to $1$ given by the differential equations $\phi'(t) = r A(t) \phi(t)$, $\phi'(t) = r B(t) \phi(t)$ and $\phi'(t) = r C(t) \phi(t)$. As I understand it, your method gives asymptotic expansions $$X(r) \approx U \begin{pmatrix} e^{x_1 r} & 0 \\ 0 & e^{x_2 r} \end{pmatrix} V \quad Y(r) \approx U \begin{pmatrix} e^{y_1 r} & 0 \\ 0 & e^{y_2 r} \end{pmatrix} V \quad Z(r) \approx U \begin{pmatrix} e^{z_1 r} & 0 \\ 0 & e^{z_2 r} \end{pmatrix} V \quad (1)$$ where I have the SAME matrices $U$ and $V$ in each case, because they depend only on the eigenvectors of $A(1)=B(1)=C(1)$ and of $A(-1)=B(-1)=C(-1)$.

Am I right about $(1)$?

If so, here is the issue. Look at the quadratic form $$\det(x X(r) + y Y(r) + z Z(r)) \approx \det(U) \left( e^{r x_1} x + e^{r y_1} y + e^{r z_1} z \right) \left( e^{r x_2} x + e^{r y_2} y + e^{r z_2} z \right) \det(V).$$

The matrix of this form has leading terms $$\begin{pmatrix} \exp(r(x_1+x_2)) & & \\ \exp(r\max(x_1+y_2, x_2+y_1)) & \exp(r(y_1+y_2)) & \\ \exp(r\max(x_1+z_2, x_2+z_1)) & \exp(r\max(y_1+z_2, y_2+z_1)) & \exp(r(z_1+z_2)) \\ \end{pmatrix}$$ as long as the approximations in $(1)$ are good enough that we don't get extra cross terms.

Unless I am very confused, I can construct $A(t)$, $B(t)$, $C(t)$ such that this quadratic form looks like $x^2+y^2+z^2 + (e^r+e^{-r}) (xy+xz+yz)$. And there are no real numbers $(x_1, x_2, y_1, y_2, z_1, z_2)$ with $x_1+x_2=y_1+y_2=z_1+z_2=0$ and $\max(x_1+y_2, x_2+y_1)=\max(x_1+z_2, x_2+z_1)=\max(y_1+z_2, y_2+z_1)=1$. So something is wrong...
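The final impossibility claim can be checked mechanically. With $x_1+x_2=y_1+y_2=z_1+z_2=0$ the three maxima reduce to $|x_1-y_1|$, $|x_1-z_1|$ and $|y_1-z_1|$, so one would need three reals that are pairwise at distance $1$. Enumerating the sign patterns (a small sketch with sympy; the variable names stand in for $x_1, y_1, z_1$):

```python
from itertools import product
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)  # stand-ins for x_1, y_1, z_1

# Require a-b = s1, a-c = s2, b-c = s3 with each s_i = ±1:
# these are the three pairwise-distance-1 conditions, up to signs.
solutions = []
for s1, s2, s3 in product([1, -1], repeat=3):
    solutions += sp.solve([a - b - s1, a - c - s2, b - c - s3],
                          [a, b, c], dict=True)
print(solutions)  # → [] : every sign pattern is inconsistent
```

Indeed $b-c=(a-c)-(a-b)\in\{-2,0,2\}$ whenever $|a-b|=|a-c|=1$, so no sign pattern can give $|b-c|=1$.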