A variant of Kronecker's approximation theorem?

Your claim is true, and here is why.

Summary of proof. A compactness argument allows one to use a strengthened version of Kronecker's theorem that is "more uniform" in $x$, namely:

Main lemma. There is a constant $M$ (depending only on $\sigma,\tau$ and $\epsilon$ and not on $x$) such that for any $x \geq 0$, there are integers $(n,m)\in[0,M]\times {\mathbb N}$ with $|x+n\tau-m\sigma| \lt \epsilon$.

Detailed proof. Replacing $(x,\tau,\sigma,\epsilon)$ with $(\frac{x}{\sigma},\frac{\tau}{\sigma},1,\frac{\epsilon}{\sigma})$, we may assume without loss of generality that $\sigma=1$.

For $n,m\in {\mathbb N}$, let

$$A_{n,m}= \bigg\lbrace X\in {\mathbb R} \bigg| |X+n\tau-m| \lt\epsilon\bigg\rbrace.\tag{3}$$

Then, Kronecker's usual theorem says that whenever $\tau$ is irrational, there are nonnegative integers $n(x),m(x)$ with $x\in A_{n(x),m(x)}$.
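As a quick numerical illustration (not part of the proof), a brute-force search finds such a pair $(n(x),m(x))$ for a given irrational $\tau$; the values $\tau=\sqrt2$, $x=0.3$, $\epsilon=10^{-3}$ below are arbitrary choices:

```python
import math

def kronecker_nm(x, tau, eps):
    """Search for nonnegative integers (n, m) with |x + n*tau - m| < eps.
    Kronecker's theorem guarantees termination when tau is irrational."""
    n = 0
    while True:
        m = round(x + n * tau)  # nearest integer; m >= 0 since x, tau >= 0
        if abs(x + n * tau - m) < eps:
            return n, m
        n += 1

n, m = kronecker_nm(0.3, math.sqrt(2), 1e-3)
```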

Then $\bigcup_{x\in [0,1]} A_{n(x),m(x)}$ is an open covering of $[0,1]$. Since $[0,1]$ is compact, there is a finite subset $I\subseteq [0,1]$ such that $\bigcup_{x\in I} A_{n(x),m(x)}$ is still a covering of $[0,1]$. Denote by $M$ the maximum of the values $n(x)$ and $m(x)$ as $x$ ranges over the finite set $I$. Then

$$ [0,1] \subseteq \bigcup_{0 \leq n,m \leq M} A_{n,m}. \tag{4} $$

Statement (4) means that for any $x\in [0,1]$, we can find $n,m$ with $0 \leq n,m \leq M$ such that $$(*) : \quad |x+n\tau-m| \lt \epsilon.$$ Now, if $x\geq 1$, put $x'=x-\lfloor x \rfloor$ (the fractional part of $x$); then $x'\in [0,1]$, so $|x'+n'\tau-m'| \lt \epsilon$ for some $(n',m')=(n(x'),m(x'))$. But then ($*$) also holds for $(n',m'+\lfloor x \rfloor)$ in place of $(n,m)$. We deduce that

$$ {\mathbb R}^+ \subseteq \bigcup_{0 \leq n \leq M, m\geq 0} A_{n,m}. \tag{4'} $$
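(As an aside, one can estimate the constant $M$ empirically by scanning a fine grid of $[0,1]$. This is only a sketch: a grid scan suggests a value of $M$, whereas the actual uniformity is what the compactness argument above proves. The choices $\tau=\sqrt2$, $\epsilon=0.1$ are arbitrary.)

```python
import math

def estimate_M(tau, eps, grid=10_000):
    """For each grid point x in [0, 1], find the least n with
    |x + n*tau - m| < eps for some integer m, and return the
    largest such n over the grid."""
    M = 0
    for i in range(grid + 1):
        x = i / grid
        n = 0
        while abs(x + n * tau - round(x + n * tau)) >= eps:
            n += 1
        M = max(M, n)
    return M

M = estimate_M(math.sqrt(2), 0.1)
```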

This concludes the proof of the main lemma. Let us now prove (2). Applying the main lemma with $\frac{\epsilon}{2}$ in place of $\epsilon$, there is an $M>0$ such that for any $y \geq 0$, there are integers $(n(y),m(y))\in[0,M]\times {\mathbb N}$ with

$$|y+n(y)\tau-m(y)| \lt \frac{\epsilon}{2}.\tag{5}$$

Let $\delta >0$ be a positive constant to be chosen later. By hypothesis, there is a $k_0$ such that $x+\tau_1+\sum_{k=1}^{k_0-1}(\tau_{k+1}-\tau_k) \geq 0$ and $|\tau_{k+1}-\tau_k-\tau| \leq \delta$ for any $k\geq k_0$.

Let $y=x+\tau_1+\sum_{k=1}^{k_0-1}(\tau_{k+1}-\tau_k)=x+\tau_{k_0}$; we know that $y$ is nonnegative. By (5),

$$\bigg|x+\tau_{k_0}+n(y)\tau-m(y)\bigg| \lt \frac{\epsilon}{2}.\tag{6}$$

On the other hand, we have

$$ \bigg| \sum_{k=k_0}^{k_0+n(y)-1} (\tau_{k+1}-\tau_k-\tau) \bigg| \leq n(y)\delta \leq \delta M. \tag{7}$$

Adding (6) and (7) and using the triangle inequality, we obtain

$$ \bigg|x+\tau_{k_0+n(y)}-m(y)\bigg|=\bigg|x+\tau_1+\sum_{k=1}^{k_0+n(y)-1}(\tau_{k+1}-\tau_k)-m(y)\bigg| \lt \frac{\epsilon}{2}+\delta M. $$

Taking $\delta=\frac{\epsilon}{2M}$, we are done.
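As a sanity check of the full statement, here is a hedged numerical experiment with the made-up sequence $\tau_k=k\sqrt2+1/k$ (so $\tau_{k+1}-\tau_k\to\sqrt2$, with $\sigma=1$): a direct search over $k$ finds $x+\tau_k$ within $\epsilon$ of an integer, as the proof predicts.

```python
import math

tau = math.sqrt(2)
x, eps = 0.3, 1e-2  # illustrative values

def tau_k(k):
    # Increments tau_{k+1} - tau_k = tau + (1/(k+1) - 1/k) -> tau
    return k * tau + 1.0 / k

k = 1
while abs(x + tau_k(k) - round(x + tau_k(k))) >= eps:
    k += 1
```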


Thank you so very, very much, Ewan Delanoy! Your Lemma is exactly what I needed. Let me say that I'm only posting this as an answer because it is too long for a comment.

Your Lemma basically says that the number $n$ in (1) can in fact always be taken from the set $\{0,\ldots,M=M(\epsilon)\}$. This is indeed all I need to prove (2), since it allows one to argue in the same manner as if the series were convergent. Your proof of (2) is a bit off, though, as you seem to demand that $\tau_k$ converges to $\tau$, which is not what I assume. For the sake of completeness and clarity, let me rework that part as follows:

Let $\epsilon>0$. Without loss of generality we can assume that $x+\tau_0\ge 0$ and $$ |(\tau_{k}-\tau_{k-1})-\tau|<\frac{\epsilon}{2M(\epsilon/2)} \quad \text{for all $k\in\mathbb N$.}$$ Hence for all $n\in\{0,\ldots,M(\epsilon/2)\}$ and $m\in\mathbb N$ we have \begin{align*} |x+\tau_n-m\sigma|&=\left|x+\tau_0+\sum_{k=1}^{n}((\tau_{k}-\tau_{k-1})-\tau)+n\tau-m\sigma\right| \\ &\le \sum_{k=1}^{M(\epsilon/2)}|(\tau_{k}-\tau_{k-1})-\tau| + |x+\tau_0+n\tau-m\sigma| \\ &\le \frac{\epsilon}{2} + |x+\tau_0+n\tau-m\sigma|. \end{align*} Now, thanks to your Lemma, we can choose $n$ and $m$ such that the second summand is also smaller than $\frac{\epsilon}{2}$.
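(The displayed estimate is just the triangle inequality, which can be checked numerically on a made-up example; here $\sigma=1$, $\tau=\sqrt2$, $\tau_0=0.5$, and the increment errors are all illustrative values, not from the proof.)

```python
import math

tau, sigma = math.sqrt(2), 1.0
x, tau0 = 0.3, 0.5

# A perturbed arithmetic-like sequence: tau_k - tau_{k-1} = tau + e_k.
errors = [0.003, -0.002, 0.001, 0.004, -0.003]
taus = [tau0]
for e in errors:
    taus.append(taus[-1] + tau + e)

n, m = 3, 2  # arbitrary indices; the estimate holds for any choice
lhs = abs(x + taus[n] - m * sigma)
rhs = (sum(abs(taus[k] - taus[k - 1] - tau) for k in range(1, n + 1))
       + abs(x + tau0 + n * tau - m * sigma))
print(lhs <= rhs)  # prints True (triangle inequality)
```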

Once again thank you very much for providing me with this essential ingredient (and its neat proof).