One-dimensional random walk. Is the hitting time Lipschitz with respect to the target?

Could you elaborate more on your high-tech solution?

OK, but it is just a standard boring exercise giving one no intellectual pleasure whatsoever. Write $X=t+Y$ where $t=EX$ and $EY=0$. Note that $t\ge\frac 1{2M}$ (the pdf bound gives $P\{X<s\}\le Ms$). Let $f=f_1$ be the pdf of $Y$. Then the pdf $f_n$ of $Y+\dots+Y$ ($n$ independent copies) is $f*\dots*f$, so $\widehat {f_n}=(\widehat f)^n$. Now $\widehat f(z)$ is an analytic function that is bounded by $e^{-\delta}$ on $(\mathbb R\setminus[-\rho,\rho])\cup [-\rho-i\tau,-\rho+i\tau]\cup[\rho-i\tau,\rho+i\tau]$ and satisfies $|\widehat f(x+iy)|\le e^{-ax^2+by^2}$ whenever $|x|\le\rho, |y|\le\tau$, with some effective $a,b,\rho,\tau,\delta>0$ depending on $M$ only. Now, for $n\ge 2$, we can write $$ f_n(u)=\frac{1}{2\pi}\int_C (\widehat f(z))^n e^{iuz}dz $$ where $C$ is the contour $-\infty\to -\rho\to -\rho+iy \to \rho+iy\to\rho\to +\infty$ with any $y\in[-\tau,\tau]$ we want such that $uy>0$. Bounding $|\widehat f(z)|^n$ by $e^{-\delta(n-2)}|\widehat f(z)|^2$ on $C\setminus [-\rho+iy,\rho+iy]$, we see that that part of the contour contributes at most $Ce^{-\delta n}$ (recall that $\int_{\mathbb R}|\widehat f|^2=\int_{\mathbb R}|f|^2$ is controlled). The integral over the remaining interval is at most $e^{bny^2-uy}\int_\mathbb R e^{-anx^2}\,dx$. We would like to take $bny=\frac u2$, i.e., $y=cu/n$, to get $n^{-1/2}e^{-\gamma u^2/n}$ from here. That is fine as long as $|u|<c'n$ (so that $|y|\le\tau$). If $|u|>c'n$, we take $|y|=\tau$ (with $uy>0$) and only get $e^{-\tau|u|/2}$, but we have allowed ourselves an exponential error already, so the final bound $$ f_n(u)\le C(e^{-\delta n}+n^{-1/2}e^{-\gamma u^2/n}) $$ always holds. The pdf of the sum of $n$ independent copies of $X$ is just $f_n(u-nt)$. I leave the estimation of the sum to you.
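Since the final bound is explicit, it can at least be sanity-checked numerically. Here is a minimal sketch (Python), assuming uniform steps on $[0,1]$ (so $M=1$, $t=1/2$) and working directly with the uncentered sum, so the envelope sits at $nt=n/2$; the test constants $1.5$, $2.0$, $0.25$ are ad hoc, not the $C,\gamma,\delta$ of the argument:

```python
import numpy as np

# Sanity check of f_n(u) <= C(e^{-delta n} + n^{-1/2} e^{-gamma u^2/n}) for
# n-fold convolutions; steps are uniform on [0,1], so M = 1 and t = EX = 1/2.
# C = 2.0 and gamma = 0.25 below are ad-hoc test constants, not the proof's.
dx = 1e-3
f = np.ones(1000)                      # density of uniform[0,1] on a grid
fn = f.copy()
scaled_sups, envelope_ok = [], True
for n in range(1, 13):
    u = np.arange(len(fn)) * dx        # the support of f_n is [0, n]
    scaled_sups.append(fn.max() * np.sqrt(n))
    envelope = 2.0 / np.sqrt(n) * np.exp(-0.25 * (u - 0.5 * n) ** 2 / n)
    envelope_ok &= bool(np.all(fn <= envelope + 1e-6))
    fn = np.convolve(fn, f) * dx       # f_{n+1} = f_n * f
```

The scaled sups hover around $1.4$ (the CLT value for uniform summands is $\sqrt{6/\pi}\approx 1.38$), so the $n^{-1/2}$ rate is already visible at small $n$.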

Edit. OK, here is a cute (in my taste) way to do it. Note that the statement can be reformulated as follows: for every $y>0$ and $0<\varepsilon<\varepsilon_0$, the expected number of times the random walk $W_n=X_1+\dots+X_n$ hits $[y,y+\varepsilon]$ is at most $C\varepsilon$. Now, conditioning on $n$ such that $W_1,\dots,W_n<y-1$, $W_{n+1}\ge y-1$, we see that it is enough to get the bound for $y<1$. Now the probability to hit our interval at step $n$ is at most $P\{W_{n-1}<1+\varepsilon_0\}M\varepsilon$, so it will suffice to sum $P_n=P\{W_n<1+\varepsilon_0\}$. But now we obviously have $P_n\le \frac{(M(1+\varepsilon_0))^n}{n!}$ (the pdf of $W_n$ is at most $M^n$, and the simplex $\{x_1,\dots,x_n\ge 0,\ \sum x_i<s\}$ has volume $\frac{s^n}{n!}$), so the Lipschitz constant is at most $Me^{2M}$, say, when $\varepsilon_0\le 1$ (of course, this bound can be greatly improved, but, when doing it, try not to raise the level of complexity :-) ).
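For the curious, this bound is easy to probe by simulation. A minimal Monte Carlo sketch (Python; uniform steps, so $M=1$; the target $y=3$, $\varepsilon=0.01$, and the walk length $16$ are arbitrary experimental choices):

```python
import numpy as np

# Expected number of visits of the walk to [y, y+eps] with steps uniform on
# [0,1] (pdf bounded by M = 1, X < 1).  The argument above bounds the visit
# rate per unit length by a constant; renewal theory suggests the true rate
# for this step distribution is 1/EX = 2.
rng = np.random.default_rng(0)
y, eps, trials = 3.0, 0.01, 200_000
steps = rng.random((trials, 16))       # 16 steps: the walk passes y + eps
W = np.cumsum(steps, axis=1)           # with overwhelming probability
hits = np.sum((W >= y) & (W <= y + eps))
rate = hits / trials / eps             # empirical visits per unit target length
```

The empirical rate comes out near $2$, comfortably below the (lossy) constants of the argument.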

Next edit: now we shall remove the condition that $X<1$ (but we will still keep the iid assumption; the next question is whether we need the second "i" in it).

Let $t$ be the median of $X$. Split the pdf of $X$ into $f_-=f\chi_{[0,t]}$ and $f_+=f\chi_{(t,+\infty)}$. Now consider $F=f+f*f+\dots+f^{*N}$. It is a bounded function ($N$ is finite), so let $Z$ be its maximum. Write $$ F\le f+F*f=f+F*f_++F*f_-\le M+\frac Z2+F*f_- $$ (the middle term is at most $\frac Z2$ because $\int f_+\le\frac 12$). The last term has a clear meaning: it is the chance to hit the target from distance $t$ or less. Now, if our target is $[y,y+\varepsilon]$, as before, then we can condition on $n$ such that $W_1,\dots,W_{n-1}<y-t$, $W_n\ge y-t$. Now, even if $W_n$ hits the target, it is not the hit we are interested in, because in this case $X_n$ is a long shot and we count short shots only. However, the score from the short range cannot be too large because every time you have a $\frac 12$ chance to overshoot, which will terminate your attempts. Thus, the total probability to hit the target from the short range is at most $M\varepsilon(1+\frac 12+\frac 14+\dots)=2M\varepsilon$, so $$ Z=\max F\le M+\frac Z2+2M, $$ so $Z\le 6M$ regardless of $N$.
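Again, nothing stops us from checking $\max F\le 6M$ numerically in a concrete case. A sketch (Python, uniform $[0,1]$ steps, so $M=1$; for this particular $f$ the renewal density on $[0,1]$ is exactly $e^u$, so $\max F$ should land near $e\approx 2.72$, comfortably below $6$):

```python
import numpy as np

# F = f + f*f + ... + f^{*N} for f uniform on [0,1] (M = 1), on a grid of
# [0, 3); the claim is max F <= 6M no matter how large N is.  For this f the
# maximum sits just below u = 1, where F tends to e^u.
dx = 1e-3
grid = 3000                            # work on [0, 3)
f = np.zeros(grid); f[:1000] = 1.0     # density of uniform[0,1]
F = np.zeros(grid)
fk = f.copy()                          # current convolution power f^{*k}
for _ in range(12):                    # terms with k > 12 are negligible
    F += fk                            # on [0, 3)
    fk = np.convolve(fk, f)[:grid] * dx
Z = F.max()
```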

Final edit. I discussed the problem with Alexander Magazinov and we have finally managed to remove the identical distribution assumption, thus solving the problem in full generality. I shall also assume that $M=1$, which can always be achieved by scaling (multiply all steps by $M$).

Let us generalize the setup a bit. We shall consider a walking shooter who starts by firing a shot $Y_1$, then moves one step $X_1$, fires another shot $Y_2$, moves another step $X_2$, and so on. In other words, if $f_k$ and $g_k$ are the pdf's of $X_k$ and $Y_k$ respectively, then we want to bound the sum $$ g_1+f_1*g_2+f_1*f_2*g_3+f_1*f_2*f_3*g_4+\dots $$ As usual, we want to estimate the probability of hitting the target with one of the shots. The original setup is just the case $g_k=f_k$, when the shots and the steps are identical.

It will also be convenient to introduce the classes $\mathcal F_n$ of pdf's. $\mathcal F_1$ is just the class of pdf's supported on $[0,+\infty)$ and bounded by $1$. For $n>1$, $\mathcal F_n=\mathcal F_{n-1}*\mathcal F_1$, i.e., $\mathcal F_n$ consists of all possible convolutions of $n$ (possibly different) functions in $\mathcal F_1$.

The key high-tech (well, not too high, but still requiring some play with the Fourier transform) result we need is the bound $\|f\|_\infty\le cn^{-1/2}$ for all $f\in\mathcal F_n$ with some absolute constant $c>0$.
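I will not reproduce the Fourier proof here, but the statement itself is easy to probe numerically. A sketch (Python; the particular members of $\mathcal F_1$ used, uniform $[0,1]$ and $\mathrm{Exp}(1)$ densities, and the test constant $1.5$ are arbitrary choices):

```python
import numpy as np

# Probe ||f||_inf <= c n^{-1/2} for f in F_n: convolve n different members of
# F_1 (pdf's on [0, inf) bounded by 1) and watch sup(f) * sqrt(n) stay bounded.
dx = 5e-3
x = np.arange(0, 15, dx)
uniform = np.where(x < 1, 1.0, 0.0)    # uniform[0,1], sup = 1
expon = np.exp(-x)                     # Exp(1), sup = 1
fn = uniform.copy()
scaled_sups = [fn.max()]               # n = 1
for n in range(2, 13):
    g = expon if n % 2 == 0 else uniform
    fn = np.convolve(fn, g)[:len(x)] * dx
    scaled_sups.append(fn.max() * np.sqrt(n))
```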

Our assumptions on the non-negative functions $f_k$ and $g_k$ supported on $[0,+\infty)$ are the following:

1) $f_k\in \mathcal F_n$ for all $k$.

2) $\|g_k\|_\infty\le Gn^{-1/2}$ with some absolute $G>0$ that can (and will) be explicitly computed in terms of $c$ from the high-tech result.

3) $f_k$ dominates $g_k$ in the sense that for every $t\ge 0$, we have $\int_t^\infty g_k\le\int_t^\infty f_k$.

It is worth mentioning here that we don't restrict $g_k$ in any other way. We do not even require that $\int_{-\infty}^\infty g_k=1$ (that it is $\le 1$ follows directly from the domination property with $t=0$), i.e., we allow a positive probability of a misfire.

The claim is that then our infinite sum is bounded by $Zn^{-1/2}$ where $Z$ is some constant to be determined in terms of $c$ and $G$.

Proof

Split $g_k$ into $g_k^-+g_k^+$ where $g_k^-$ is the part lying to the left of the median of $X_k$ (the median depends on $k$ now; note also that $X_k$ is not a misprint: we, indeed, split the shot according to the median of the step) and $g_k^+$ is the part lying to the right of it. Split $f_k$ in the same way. It is easy to see that $g_k^+$ is dominated by $f_k^+$.

Our first relatively trivial (by now) observation is that the sum $$ g_1^-+f_1*g_2^-+f_1*f_2*g_3^-+\dots $$ (close-range shooting) is bounded by $2Gn^{-1/2}$ regardless of the length of the walk, for exactly the same reason as before: condition on the first time we are closer to the target than the median of the next step; the probability of a hit is then at most $Gn^{-1/2}\varepsilon$, but the chance that we shall still remain to the left of the target after the next step is only $1/2$, and if we do, the whole game starts over.

Another trivial observation is that if the length of the walk is $0$ (only the initial shot is fired), then we can bound the corresponding one-term sum by $Gn^{-1/2}$, so if $Z\ge G$, we have no trouble with walks of length $0$ for all $n$.

Now we shall run the induction on the walk length. Suppose that the bound for all shorter walks is already established and we want to investigate some walk of the currently suspicious length corresponding to some $n$. Chopping off the $g^-$ part, we see that our main task is to bound $$ Q=g_1^++f_1*g_2^++f_1*f_2*g_3^++f_1*f_2*f_3*g_4^++f_1*f_2*f_3*f_4*g_5^++f_1*f_2*f_3*f_4*f_5*g_6^++\dots $$ Write $$ \frac 32 Q= \\ (\frac {g_1^+}2+f_1*g_2^+)+(f_1*f_2)*(\frac{g_3^+}2+f_3*g_4^+)+(f_1*f_2)*(f_3*f_4)*(\frac{g_5^+}2+f_5*g_6^+)+\dots \\ +f_1*[(\frac {g_2^+}2+f_2*g_3^+)+(f_2*f_3)*(\frac{g_4^+}2+f_4*g_5^+)+(f_2*f_3)*(f_4*f_5)*(\frac{g_6^+}2+f_6*g_7^+)+\dots] \\ +\text{ a few ($\le 8$, say) individual endpoint terms due to the boundary effect and possibly wrong parity} $$ Estimate the individual terms by $Gn^{-1/2}$ each and note that each of the long lines represents a shooting walk of smaller length with steps $f_k*f_{k+1}\in \mathcal F_{2n}$ and the corresponding shots $\frac{g_k^+}2+f_k*g_{k+1}^+$ (the initial convolution with $f_1$ in the second long line is harmless for bounding the maximum and can be safely forgotten).

We would like to use the induction assumption now, but we need to check the conditions first. That the new steps are in $\mathcal F_{2n}$ is a no-brainer. Next, $$ \frac{g_k^+}2+f_k*g_{k+1}^+\le \frac{Gn^{-1/2}}2+\frac {cn^{-1/2}}2\le G(2n)^{-1/2} $$ provided that $\frac G2+\frac c2\le \frac{G}{\sqrt 2}$. Note that we paired the uniform norm of $f_k$ with the $L^1$ norm of $g_{k+1}^+$ (which is at most $\frac 12$), not the other way around, to bound the second term. Finally, $\frac {g_k^+}2$ is dominated by $\frac{f_k^+}2$, which, in turn, is dominated by $f_k^+*f_{k+1}^-$ (one half of any function is dominated by its convolution with any non-negative function supported on $[0,+\infty)$ and having integral $\frac 12$). Since $g_{k+1}^+$ is dominated by $f_{k+1}^+$, we get the whole shot dominated by $f_k^+*f_{k+1}^-+f_k*f_{k+1}^+\le f_k*f_{k+1}$, so the domination condition is satisfied as well.
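The last domination chain can also be verified mechanically in a toy case. A sketch (Python; $f=g=$ uniform on $[0,1]$ with median $\frac12$, purely illustrative):

```python
import numpy as np

# Check the domination step for f = g = uniform[0,1] (median 1/2): the new
# shot g^+/2 + f*g^+ should be dominated by the new step f*f in the tail
# sense, i.e. int_t^inf(shot) <= int_t^inf(step) for every t.
dx = 1e-3
n = 2000                               # grid for [0, 2)
f = np.zeros(n); f[:1000] = 1.0        # uniform[0,1]
gp = np.zeros(n); gp[500:1000] = 1.0   # g^+ = part of g above the median 1/2
shot = gp / 2 + np.convolve(f, gp)[:n] * dx
step = np.convolve(f, f)[:n] * dx
# tail integrals int_t^inf, accumulated from the right
tail_shot = np.cumsum(shot[::-1])[::-1] * dx
tail_step = np.cumsum(step[::-1])[::-1] * dx
dominated = bool(np.all(tail_shot <= tail_step + 1e-6))
```

Note that the two tails actually coincide on $[3/2,2]$, so the domination is sharp there.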

Thus, by the induction assumption, the final bound on our entire suspicious shooting walk is $$ 10Gn^{-1/2}+\frac 23[2Z(2n)^{-1/2}]=\left[10G+\frac{4}{3\sqrt 2}Z\right]n^{-1/2} $$ (note that it is $2n$ instead of $n$ that gives us the crucial $\sqrt 2$ improvement). Since $\frac {4}{3\sqrt 2}<1$, we can choose $Z$ to be an appropriate positive multiple of $G$ to get the sum in brackets equal to $Z$, thus finishing the story.
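For what it's worth, one can chase the constants through: the two constraints used above are $\frac G2+\frac c2\le\frac G{\sqrt 2}$ and $Z=10G+\frac{4}{3\sqrt 2}Z$. A back-of-envelope computation (Python, taking the high-tech constant $c$ as the unit):

```python
import math

# Smallest G with G/2 + c/2 <= G/sqrt(2), and the resulting fixed point
# Z = 10 G + (4 / (3 sqrt 2)) Z; everything is measured in units of c.
c = 1.0
G = c / (math.sqrt(2) - 1)             # ~ 2.414 c
Z = 10 * G / (1 - 4 / (3 * math.sqrt(2)))
```

This yields $Z$ of a few hundred in units of $c$, in the same ballpark as the "about $1000$" mentioned at the end once the remaining bookkeeping is done.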

I talked to a few probabilists about whether this is "well-known". The consensus seems to be that it is not, but also that it is of no interest because nobody cares about random walks with positive steps anyway. Still, I found it rather amusing, so I'm posting it here to preserve it for posterity. Yuval Peres also noted that the sharp constant should be about $2$ rather than the $1000$ or so this proof yields, but, I guess, I'll leave the question of finding the sharp bound to somebody else.

The End.