Solution to at least one ODE in a family of ODEs

As I said, let's get rid of some junk like $i$ and conjugation first. Note that in the original equation we can replace $\eta$ by $\zeta\eta$ for any complex number $\zeta$ with $|\zeta|=1$ and can replace $t$ by $t+\tau$. These two degrees of freedom allow us to rotate the coefficients $r$ and $\varepsilon$ independently, so I will rotate the equation to $$ i\eta_t=(re^{it}+\varepsilon)\bar\eta\,. $$ Now we go to the Fourier side, as Chris suggested, and get the system $$ -ka_k=r\bar a_{1-k}+\varepsilon \bar a_{-k} $$ with real coefficients (write $\eta=\sum_k a_ke^{ikt}$; the left-hand side becomes $-\sum_k ka_ke^{ikt}$, the right-hand side becomes $r\sum_k\bar a_ke^{i(1-k)t}+\varepsilon\sum_k\bar a_ke^{-ikt}$, and we match the coefficients of $e^{ikt}$). The key point is that if this system has a complex solution $a_k\in\mathbb C$, then the real parts $\Re a_k$ solve exactly the same system, while the imaginary parts $\Im a_k$ solve the analogous system with $-r$ and $-\varepsilon$ instead of $r,\varepsilon$. Conversely, if we have a real solution of one of those systems, we can turn it into a real or a purely imaginary solution of the original system, respectively. Thus, we can forget about the conjugation altogether and just consider the linear system
$$ -ka_k=ra_{1-k}+\varepsilon a_{-k} $$ with small $\varepsilon$ of arbitrary sign and $r\in\mathbb R$. Rewrite it as the eigenvalue problem $Ta=ra$ where $$ (Ta)_m=(m-1)a_{1-m}-\varepsilon a_{m-1}\,. $$ The operator $T$ is easy to invert: if $b=Ta$, we get $$ b_{m+1}=ma_{-m}-\varepsilon a_m,\qquad b_{1-m}=-ma_m-\varepsilon a_{-m}\,, $$ whence $$ a_m=-\frac{m}{m^2+\varepsilon^2}b_{1-m} -\frac{\varepsilon}{m^2+\varepsilon^2}b_{m+1}\,. $$ So $a=Sb$ where $S=T^{-1}$ is a compact operator. Now we know that the eigenvectors of $S$ and $T$ are the same, and all we need is to figure out whether $S$ has any non-zero real eigenvalues for small $\varepsilon$.
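
Not that the algebra above is in doubt, but here is a quick numerical sanity check of the inversion formula on a truncated index range (plain numpy; the window $m\in[-N,N+1]$ and the tolerance are my ad hoc choices, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 50, 1e-3
ms = range(-N, N + 2)                      # truncated index window
a = {m: rng.standard_normal() for m in ms}

# b = Ta, where (Ta)_m = (m-1) a_{1-m} - eps * a_{m-1}
b = {m: (m - 1) * a[1 - m] - eps * a[m - 1] for m in ms if m - 1 >= -N}

# recover a_m from b via the claimed inverse:
# a_m = -m/(m^2+eps^2) b_{1-m} - eps/(m^2+eps^2) b_{m+1}
for m in ms:
    if 1 - m in b and m + 1 in b:          # skip the truncation edges
        am = -m / (m**2 + eps**2) * b[1 - m] - eps / (m**2 + eps**2) * b[m + 1]
        assert abs(am - a[m]) < 1e-9 * (1 + abs(a[m]))
print("inversion formula confirmed on interior indices")
```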

The next part is to look at the possible size of a real eigenvalue of $T$. Write $T-rI=P_r-\varepsilon Q$ where $(Qa)_m=a_{m-1}$. Note that $P_r$ is a block operator with blocks $P_{r,m}$ of size $2$ corresponding to the variables $A_m=(a_m,a_{1-m})$ ($m=0,2,3,4,\dots$). The matrix of the block $P_{r,m}$ is $\begin{bmatrix} -r& m-1\\-m&-r \end{bmatrix}$. It is easily seen that this block does not contract too much unless $r$ is small. An accurate computation of the inverse $2\times 2$ matrix shows that if $|r|\ge C\sqrt{|\varepsilon|}$ with some sufficiently large $C>0$, then $\|P_{r,m}A_m\|\ge 4|\varepsilon|\,\|A_m\|$ ($P_{r,0}$ is the only block that can contract that much; every other block has an inverse bounded by some constant independent of $r$). On the other hand, $\|Q\|=1$ and $Q$ acts only either within the block (again, that is possible only for $m=0$), or between adjacent blocks. Thus, if $|r|\ge C\sqrt{|\varepsilon|}$ and we have any non-trivial $\ell^2$ eigenvector, we can choose $m$ such that $\|A_m\|$ is the largest, in which case the norm of the $m$-th block in the image under $T-rI$ is at least $4|\varepsilon|\,\|A_m\|-3|\varepsilon|\,\|A_m\|>0$, which is a contradiction. Thus, our only chance is to have a real eigenvalue of $T$ less than $C\sqrt{|\varepsilon|}$ in absolute value. So we only need to look for large real eigenvalues $\rho$ of $S$ (recall $\rho=1/r$, so $|\rho|\ge C^{-1}|\varepsilon|^{-1/2}$).
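
For the record, the accurate computation alluded to is short (with my constants, which need not be the sharp ones): $$ P_{r,m}^{-1}=\frac{1}{r^2+m(m-1)}\begin{bmatrix} -r& -(m-1)\\ m&-r \end{bmatrix}\,. $$ For $m\ge 2$ the determinant is at least $m(m-1)$, so every entry of $P_{r,m}^{-1}$ is bounded by $1$ regardless of $r$. For $m=0$ the determinant is just $r^2$, so $\|P_{r,0}^{-1}\|\le 2/r^2$ when $|r|\le 1$, whence $\|P_{r,0}A_0\|\ge \tfrac12 r^2\|A_0\|\ge \tfrac12 C^2|\varepsilon|\,\|A_0\|$, which is at least $4|\varepsilon|\,\|A_0\|$ once $C^2\ge 8$.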

Now $S-\rho I$ is also an $\varepsilon$-perturbation of a block operator, call it $S_{\rho}$, with the same $2\times 2$ block structure. A typical ($m\ge 2$) block is now $\begin{bmatrix}-\rho &-\frac{m}{m^2+\varepsilon^2}\\ \frac{m-1}{(m-1)^2+\varepsilon^2} & -\rho \end{bmatrix}$, which has a bounded (and even small) inverse if $\rho$ is large and, say, the imaginary part of $\rho$ does not exceed the real part of $\rho$ in absolute value (this guarantees that $\Re(\rho^2)\ge 0$, so that the two terms in the determinant of the block essentially add up in absolute value). However, there is one exceptional block, corresponding to $m=0$, with the matrix $\begin{bmatrix}-\rho &-\varepsilon^{-1} \\ -\frac{1}{1+\varepsilon^2} & -\rho \end{bmatrix}$.
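
Explicitly, the determinant of a typical block is $$ \rho^2+\frac{m(m-1)}{(m^2+\varepsilon^2)\left((m-1)^2+\varepsilon^2\right)}\,, $$ with the second term positive, while the determinant of the exceptional block is $\rho^2-\frac{\varepsilon^{-1}}{1+\varepsilon^2}$, which for $\varepsilon>0$ vanishes near $\rho=\pm\varepsilon^{-1/2}$; this is where the disks in the next paragraph come from.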

The rest of the operator $S-\rho I$ is $-\varepsilon V_{\varepsilon}$ where $V_\varepsilon$ is a compact operator of norm at most $1$ (so we write $S-\rho I=S_\rho-\varepsilon V_{\varepsilon}$). Thus, we can apply perturbation theory to the continuous family $S_\rho-\delta V_{\varepsilon}$ with $0\le \delta\le \varepsilon$. Note that $S_\rho$ is invertible with bounded inverse as long as $\rho$ is large, $|\Im\rho|\le|\Re\rho|$, and $|\rho^2-\frac{\varepsilon^{-1}}{1+\varepsilon^2}|$ is comparable to the sum of the absolute values of the individual terms under the absolute value sign; the latter is always the case if $\varepsilon<0$, and if $\varepsilon>0$, it holds under the condition $|\rho\pm \varepsilon^{-1/2}|\ge 0.1\varepsilon^{-1/2}$, say. Thus $S_{\rho}-\delta V_{\varepsilon}$ is invertible for such $\rho$ for all $\delta\in[0,\varepsilon]$. This means that the only chance to get some real eigenvalues is to assume that $\varepsilon>0$, and they can lie only in the disks $|\rho\pm \varepsilon^{-1/2}|< 0.1\varepsilon^{-1/2}$. Moreover, since no eigenvalue can cross the boundaries of those disks as $\delta$ runs from $0$ to $\varepsilon$, we have the same number of (complex) eigenvalues in each of those disks for the operator $S$ we are interested in ($\delta=\varepsilon$) as for the block operator $S_{0}$ ($\delta=0$). The latter number is $1$: the eigenvalues of the exceptional block are $\pm\sqrt{\tfrac{\varepsilon^{-1}}{1+\varepsilon^2}}\approx\pm\varepsilon^{-1/2}$, one in each disk, while all other blocks have purely imaginary eigenvalues. Since the non-real eigenvalues of $S$ come in complex conjugate pairs and each disk is symmetric about the real axis, a non-real eigenvalue in a disk would bring its conjugate along; so the single eigenvalue of $S$ in each of those disks is real. The rest of the story about the existence and uniqueness of the solution should be clear. It is also clear that $r\approx \sqrt\varepsilon$. If you want more information, you should apply the perturbation theory more carefully.
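
If you want to see this conclusion numerically, truncating $T$ to a finite index window and computing eigenvalues shows exactly the predicted picture (numpy; the window size and thresholds are ad hoc choices of mine):

```python
import numpy as np

def T_mat(eps, N):
    """Truncation of (Ta)_m = (m-1) a_{1-m} - eps a_{m-1} to m = -N, ..., N+1."""
    ms = list(range(-N, N + 2))
    pos = {m: i for i, m in enumerate(ms)}
    T = np.zeros((len(ms), len(ms)))
    for m in ms:
        T[pos[m], pos[1 - m]] += m - 1
        if m - 1 >= -N:                     # eps-coupling, lost at the window edge
            T[pos[m], pos[m - 1]] -= eps
    return T

for eps in (1e-3, -1e-3):
    ev = np.linalg.eigvals(T_mat(eps, 80))
    real_ev = np.sort(ev[np.abs(ev.imag) < 1e-9].real)
    print(eps, real_ev)
# expected: eps = +1e-3 yields two real eigenvalues near +-sqrt(eps) ~ +-0.0316;
# eps = -1e-3 yields no real eigenvalues at all
```
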
Unfortunately, explicit formulae seem to be out of the question, so I cannot really answer the question "What is $\eta$?" in a meaningful way (though we can say that its Fourier coefficients decay fast enough, so it is real analytic, etc.).

Please accept my apologies for a) not posting earlier and b) changing my opinion about what the answer should be several times (my mental Algebra skills are totally dismal and that is one of the reasons why I prefer Analysis, where the correctness of a properly laid out proof is independent of the choices of signs and constant coefficients in the formulae :-) ).

The end.


This is an expanded version of my comments, to make them somewhat less cryptic. I'm not answering the question, and I don't really expect to ever make further progress.


Update: After I learned from fedja's answer that an $r\simeq \epsilon^{1/2}$ works (much to my surprise), I can now show this too, by a perturbation expansion of $D$ below; this is included in the new version.


If we write $u=(x,y)^t$, then we can rephrase the question as: Does the Dirac operator $L(\epsilon, r)$, $$ Lu = Ju'+\epsilon A u + r R(t) u, \quad J=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},\: A =\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\: R= \begin{pmatrix} \cos t & \sin t \\ \sin t & -\cos t \end{pmatrix} $$ on $L^2(0,2\pi)$ have $E=0$ as a periodic eigenvalue (that is, with boundary conditions $u(0)=u(2\pi)$), for suitable $r$?

The transfer matrix $T(t,E)$ is defined as the $2\times 2$ matrix solution of $Lu=Eu$ with initial value $T(0,E)=1$. Since the coefficient matrices $JA$ and $JR$ have trace zero, it follows that $\det T=1$, so the eigenvalues of $T(2\pi,E)$ are determined by its trace $D(E):=\textrm{tr }T(2\pi, E)$. In particular, the periodic eigenvalues of $L$ are the solutions of $D(E)=2$.
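
Since everything here is concrete, these quantities are easy to compute numerically. The following sketch (scipy; the tolerances are my ad hoc choices) integrates the $2\times 2$ matrix solution, and checks both $\det T=1$ and the value $D(0,\epsilon,0)=2\cosh 2\pi\epsilon$ used in the next paragraph:

```python
import numpy as np
from scipy.integrate import solve_ivp

J = np.array([[0., -1.], [1., 0.]])          # note J^{-1} = -J
A = np.array([[0., 1.], [1., 0.]])
R = lambda t: np.array([[np.cos(t), np.sin(t)],
                        [np.sin(t), -np.cos(t)]])

def transfer(E, eps, r):
    # Lu = Eu  <=>  u' = J (eps*A + r*R(t) - E) u; integrate with T(0) = identity
    def rhs(t, v):
        return (J @ (eps * A + r * R(t) - E * np.eye(2)) @ v.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0., 2. * np.pi), np.eye(2).ravel(),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(2, 2)

D = lambda E, eps, r: transfer(E, eps, r).trace()

eps = 1e-3
print(np.linalg.det(transfer(0., eps, 0.)))            # should be 1
print(D(0., eps, 0.), 2 * np.cosh(2 * np.pi * eps))    # should agree
```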

Now $D(E=0,\epsilon,r=0)=2\cosh 2\pi\epsilon>2$, so existence of an $r$ as desired could be established by showing that $D(0,\epsilon,r)<2$ for some $r$ (then $D=2$ somewhere in between, by continuity). Moreover, since we now know from fedja's answer that this happens at $r\simeq \epsilon^{1/2}$, we can obtain this from a Taylor expansion of (the entire function) $D$ about $r=0$.

Denote the transfer matrix of $L(\epsilon,0)u=0$ by $$ T_0(t) = \begin{pmatrix} e^{-\epsilon t} & 0 \\ 0 & e^{\epsilon t} \end{pmatrix} . $$ I'll do variation of constants about $r=0$, so write $T(t,0)=T_0 M$. Then $M$ solves $M'= r T_0^{-1}JRT_0 M$, and solving this by iteration delivers the Taylor coefficients one after the other. So $M(t,r)=\sum r^n M_n(t)$, with $M_0=1$, $$ M_{n+1}(t) = \int_0^t \begin{pmatrix} -\sin s & e^{2\epsilon s}\cos s \\ e^{-2\epsilon s}\cos s & \sin s \end{pmatrix} M_n(s)\, ds . $$ Everything can be worked out explicitly, though at some risk to one's sanity. We are ultimately interested in $$ D(0,\epsilon,r)=\textrm{tr }T(2\pi,0) = e^{-2\pi\epsilon}a(2\pi, r) + e^{2\pi\epsilon}d(2\pi, r) , \quad \quad M\equiv \begin{pmatrix} a & b\\ c & d \end{pmatrix} . $$ Expanding everything, this becomes $$ D(0) = 2 + 4\pi^2\epsilon^2 + 2r^2\pi\epsilon (d_2-a_2) + \sum_{n=1}^4 r^n (a_n+d_n) + O(r^5) + O(\epsilon^3) \quad\quad\quad\quad (1) $$ (everything taken at $t=2\pi$, of course). Brute force calculation will show that $d_2-a_2=0$, $a_1+d_1=0$ (in fact, $a_1=d_1=0$), and $a_2+d_2=O(\epsilon^2)$. As for $a_3+d_3$, maybe the best way is to check that it vanishes when $\epsilon=0$; we then pick up at least one extra factor of $\epsilon$ when $\epsilon$ is reintroduced, so the third order term is $O(r^3\epsilon)$.

The big surprise now (for me) is that, after everything else vanished at $\epsilon=0$ or went the wrong way, out of the blue comes $a_4+d_4=-4\pi^2$ at $\epsilon=0$. So the bottom line is that when $r=\epsilon^{1/2}$, this term cancels the $4\pi^2\epsilon^2$ exactly, and everything else in (1) is smaller.
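
Since this coefficient is the crux, here is a short symbolic check (sympy) of the values $a_n+d_n$ at $\epsilon=0$: there the integrand of the iteration reduces to the constant-free matrix below and $T_0=1$, so $a_n+d_n=\textrm{tr }M_n(2\pi)$.

```python
import sympy as sp

t, s = sp.symbols('t s')
# iteration integrand at eps = 0
K = sp.Matrix([[-sp.sin(s), sp.cos(s)],
               [sp.cos(s),  sp.sin(s)]])

M = sp.eye(2)                               # M_0 = 1
for n in range(1, 5):
    integrand = K * M.subs(t, s)            # K(s) M_{n-1}(s)
    M = integrand.applyfunc(lambda f: sp.integrate(f, (s, 0, t)))
    print(n, sp.simplify(M.subs(t, 2 * sp.pi).trace()))
# expected output: 0, 0, 0, -4*pi**2
```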

Conclusion: There is a solution $r=r(\epsilon)$ of $D(0,\epsilon,r)=2$, and $\epsilon^{-1/2}r\to 1$ as $\epsilon\to0$.
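
Numerically the root is easy to locate; a sketch (scipy, self-contained; the bracket $[\tfrac12\sqrt\epsilon,\,2\sqrt\epsilon]$ is suggested by the signs in (1), and the tolerances are again my ad hoc choices) in which the ratio $\epsilon^{-1/2}r(\epsilon)$ should drift towards $1$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

J = np.array([[0., -1.], [1., 0.]])
A = np.array([[0., 1.], [1., 0.]])
R = lambda t: np.array([[np.cos(t), np.sin(t)],
                        [np.sin(t), -np.cos(t)]])

def D(eps, r):
    # trace of the transfer matrix T(2*pi, E=0)
    def rhs(t, v):
        return (J @ (eps * A + r * R(t)) @ v.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0., 2. * np.pi), np.eye(2).ravel(),
                    rtol=1e-12, atol=1e-14)
    return sol.y[:, -1].reshape(2, 2).trace()

for eps in (1e-2, 1e-3, 1e-4):
    r_eps = brentq(lambda r: D(eps, r) - 2., 0.5 * np.sqrt(eps), 2. * np.sqrt(eps))
    print(eps, r_eps / np.sqrt(eps))        # expect values approaching 1
```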

Of course, with this perturbative approach, we cannot get the other half of fedja's result, namely that this is the only solution.