Some interesting observations on a sum of reciprocals

Both your claims are true.

If you set $$ f(x) = \frac1{x-1}+\frac1{x-2}+\cdots+\frac1{x-k}-\frac1{x-k-1}, $$ then $f(1^+) = +\infty$, $f(2^-) = -\infty$ and $f$ is continuous on $(1,2)$, so it has a root in $(1,2)$. The same can be said of $(2,3)$, $(3,4), \ldots, (k-1,k)$, so there are at least $k-1$ distinct real roots. Clearing denominators, $f$ has the same roots as a polynomial of degree $k$, and a degree-$k$ polynomial with at least $k-1$ real roots must in fact have $k$ real roots, since non-real roots come in conjugate pairs.

The last root lies in $(k+1,+\infty)$, since $f((k+1)^+) = -\infty$ while $f(x)\to 0^+$ as $x\to+\infty$ (for $k\ge2$, the $k$ positive terms eventually dominate the single negative one), so $f$ is eventually positive.
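
To see the interval argument in action, here is a small numerical sketch (not part of the original argument), isolating one root per interval $(i,i+1)$ with SciPy's `brentq` and one more to the right of $k+1$:

```python
# Numerical sketch of the interval argument: for a sample k, f changes sign on
# each (i, i+1), i = 1, ..., k-1, and once more just past k+1.
from scipy.optimize import brentq

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

k = 6
eps = 1e-9
roots = [brentq(f, i + eps, i + 1 - eps, args=(k,)) for i in range(1, k)]
roots.append(brentq(f, k + 1 + eps, k + 3, args=(k,)))  # last root, past k+1
print(len(roots), roots)  # k distinct real roots
```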

The least root $x_{\min}$ must lie in $(1,2)$, since $f(x)<0$ for every $x<1$. Moreover, $$ f(x) = 0\implies x = 1 + \frac{1}{\frac1{x-k-1}-\frac1{x-2}-\cdots-\frac1{x-k}}, $$ and since $1<x<2$ both $x-k-1$ and $x-2$ are negative with $x-k-1<x-2$, we infer $\frac1{x-k-1}>\frac1{x-2}$ and $$ 1<x = 1 + \frac{1}{\frac1{x-k-1}-\frac1{x-2}-\cdots-\frac1{x-k}} < 1 - \frac{1}{\frac1{x-3}+\cdots+\frac1{x-k}}\to 1, $$ because the sum $\frac1{x-3}+\cdots+\frac1{x-k}$ diverges to $-\infty$ as $k\to\infty$, uniformly on $(1,2)$. Hence $x_{\min}$ converges to $1$.
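
Here is a quick numerical illustration of that bound (a sketch, using the same $f$): the smallest root and the quantity $1 - \big(\frac1{x-3}+\cdots+\frac1{x-k}\big)^{-1}$ evaluated at it both sink toward $1$, although quite slowly:

```python
# Sketch: the smallest root of f and the bound 1 - 1/(1/(x-3)+...+1/(x-k)),
# evaluated at that root, both approach 1 as k grows.
from scipy.optimize import brentq

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

for k in (5, 10, 100, 1000):
    xmin = brentq(f, 1 + 1e-12, 2 - 1e-12, args=(k,))
    bound = 1 - 1.0 / sum(1.0 / (xmin - j) for j in range(3, k + 1))
    print(k, xmin, bound)  # xmin < bound, both decreasing toward 1
```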


About the third claim, notice that you may repeat the same argument for any root except the largest. Let us say that $x_r$ is the $r$-th root, with $r<k$, and we know that $r<x_r<r+1$. Then $$ f(x_r) = 0\implies x_r = r + \frac{1}{\frac1{x_r-k-1}-\sum_{j\le k,\,j\ne r}\frac1{x_r-j}}. $$ Since $r<x_r<r+1\le k$, both $x_r-k-1$ and $x_r-r-1$ are negative with $x_r-k-1<x_r-r-1$, so $\frac1{x_r-k-1}>\frac1{x_r-r-1}$ holds, and for $k$ large enough $$ r<x_r = r + \frac{1}{\frac1{x_r-k-1}-\sum_{j\le k,\,j\ne r}\frac1{x_r-j}} < r - \frac{1}{\sum_{j\le k,\,j\ne r,\,r+1}\frac1{x_r-j}}\to r, $$ because this last sum diverges to $-\infty$ as $k\to\infty$ (its finitely many positive terms, those with $j<r$, stay bounded). So $x_r$ converges to $r$.

For the biggest root, we know $k+1<x_k$ and $$ f(x_k) = 0\implies k+1 < x_k = k+1 + \frac{1}{\frac1{x_k-1}+\cdots+\frac1{x_k-k}} \to k+1, $$ because $\frac1{x_k-1}+\cdots+\frac1{x_k-k}\ge\frac{k}{x_k-1}$ forces $x_k\le k+2+\frac1{k-1}$, and with $x_k$ bounded in this way the sum in the denominator grows like $\log k$.
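
Both limits are easy to check numerically; a small sketch (same $f$ as before, roots isolated with `brentq`):

```python
# Sketch: for fixed r the r-th root approaches r, while the largest root stays
# barely above k+1, as k grows.
from scipy.optimize import brentq

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

r = 3
for k in (5, 20, 100, 1000):
    x_r = brentq(f, r + 1e-12, r + 1 - 1e-12, args=(k,))  # r-th root, in (r, r+1)
    x_k = brentq(f, k + 1 + 1e-9, k + 3, args=(k,))       # largest root
    print(k, x_r - r, x_k - (k + 1))  # both gaps shrink toward 0
```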


For the base case, $$\tag1f_2(x)=\frac1{x-1}+\frac1{x-2}-\frac1{x-3}, $$ one readily verifies (clearing the denominators leaves $x^2-6x+7=0$) that there is a root $3-\sqrt2$ in $(1,2)$ and a root $x^*=3+\sqrt2$ in $(3,+\infty)$.

If we multiply out the denominators of $$f_k(x)=\frac1{x-1}+\frac1{x-2}+\ldots+\frac1{x-k}-\frac1{x-k-1},$$ we obtain the equation $$\tag2(x-1)(x-2)\cdots(x-k-1)f_k(x)=0,$$ whose left-hand side is a polynomial of degree (at most) $k$, so we expect $k$ solutions, but some of these may be complex or repeated or happen to be among $\{1,2,\ldots, k+1\}$ and thus not allowed for the original equation. But $f_k(x)$ has simple poles with jumps from $-\infty$ to $+\infty$ at $1,2,3,\ldots, k$, and a simple pole with a jump from $+\infty$ to $-\infty$ at $k+1$, and is continuous otherwise. It follows that there is (at least) one real root in $(1,2)$, at least one in $(2,3)$, etc., up to $(k-1,k)$, so there are at least $k-1$ distinct real roots. Additionally, for $x>k+1$ and $k\ge2$, all the terms $\frac1{x-j}$ with $j\le k$ are positive, so $$f_k(x)\ge f_2(x-k+2).$$ It follows that there is another real root between $k+1$ and $x^*+k-2$. So indeed, we have $k$ distinct real roots.
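
Here is a sketch of that last bound (using the explicit value $x^*=3+\sqrt2$ from the base case): for each $k$ the largest root indeed falls in $(k+1,\,x^*+k-2]$.

```python
# Sketch: the largest root of f_k lies between k+1 and x* + k - 2,
# where x* = 3 + sqrt(2) is the root of f_2 in (3, +oo).
from math import sqrt
from scipy.optimize import brentq

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

x_star = 3 + sqrt(2)
for k in range(2, 11):
    x_big = brentq(f, k + 1 + 1e-9, k + 3, args=(k,))
    print(k, x_big, x_star + k - 2)  # k+1 < x_big <= x* + k - 2
```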

From the above, the smallest root is always in $(1,2)$. It follows from $f_{k+1}(x)<f_k(x)$ for $x\in(1,2)$ (indeed $f_{k+1}(x)-f_k(x)=\frac{2}{x-k-1}-\frac{1}{x-k-2}<0$ there) and the fact that all $f_k$ are strictly decreasing on that interval, that $x_\min$ decreases with increasing $k$. As a decreasing sequence bounded below by $1$, it has a limit.
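
A numerical check of both ingredients (a sketch): the difference $f_{k+1}-f_k$ is negative throughout $(1,2)$, and the smallest roots do form a decreasing sequence.

```python
# Sketch: f_{k+1}(x) - f_k(x) = 2/(x-k-1) - 1/(x-k-2) < 0 on (1,2), and the
# smallest roots decrease as k increases.
from scipy.optimize import brentq

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

print([f(1.5, k + 1) - f(1.5, k) for k in (2, 5, 10)])  # all negative
print([brentq(f, 1 + 1e-12, 2 - 1e-12, args=(k,)) for k in range(2, 9)])  # decreasing
```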


Considering that you look for the first zero of the function $$f(x)=\sum_{i=1}^k \frac 1{x-i}-\frac1 {x-k-1},$$ which can be written, using harmonic numbers, as $$f(x)=H_{-x}-H_{k-x}-\frac{1}{x-k-1},$$ remove the two asymptotes that bracket this zero by setting $$g(x)=(x-1)(x-2)f(x)=2x-3+(x-1)(x-2)\left(H_{2-x}-H_{k-x}-\frac{1}{x-k-1} \right).$$

You can approximate the solution using a Taylor expansion around $x=1$ and get $$g(x)=-1+(x-1) \left(-\frac{1}{k}+\psi ^{(0)}(k)+\gamma +1\right)+O\left((x-1)^2\right).$$ Ignoring the higher-order terms, this gives the approximation $$x_{est}=1+\frac{k}{k\left(\gamma +1+ \psi ^{(0)}(k)\right)-1},$$ which seems to be "decent" (and, for sure, confirms your claims). $$\left( \begin{array}{ccc} k & x_{est} & x_{sol} \\ 2 & 1.66667 & 1.58579 \\ 3 & 1.46154 & 1.46791 \\ 4 & 1.38710 & 1.41082 \\ 5 & 1.34682 & 1.37605 \\ 6 & 1.32086 & 1.35209 \\ 7 & 1.30238 & 1.33430 \\ 8 & 1.28836 & 1.32040 \\ 9 & 1.27726 & 1.30914 \\ 10 & 1.26817 & 1.29976 \\ 11 & 1.26055 & 1.29179 \\ 12 & 1.25403 & 1.28489 \\ 13 & 1.24837 & 1.27884 \\ 14 & 1.24339 & 1.27347 \\ 15 & 1.23895 & 1.26867 \\ 16 & 1.23498 & 1.26433 \\ 17 & 1.23138 & 1.26039 \\ 18 & 1.22810 & 1.25678 \\ 19 & 1.22510 & 1.25346 \\ 20 & 1.22233 & 1.25039 \end{array} \right)$$

For infinitely large values of $k$, the asymptotics of the estimate would be $$x_{est}=1+\frac{1}{\log \left({k}\right)+\gamma +1}.$$

For $k=1000$, the exact solution is $1.12955$ while the first approximation gives $1.11788$ and the second $1.11786$.
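
These numbers can be reproduced with a short sketch (SciPy's `digamma` for $\psi^{(0)}$ and `brentq` for the exact root in $(1,2)$):

```python
# Sketch: first-order estimate x_est = 1 + k/(k*(gamma + 1 + psi(k)) - 1)
# versus the root of f in (1,2), for a few rows of the table and for k = 1000.
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

gam = np.euler_gamma
for k in (2, 5, 10, 20, 1000):
    x_est = 1 + k / (k * (gam + 1 + digamma(k)) - 1)
    x_sol = brentq(f, 1 + 1e-12, 2 - 1e-12, args=(k,))
    print(k, round(x_est, 5), round(x_sol, 5))  # e.g. k=2 -> 1.66667, 1.58579
```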

Using such estimates as starting points makes Newton's method converge quite fast (shown below for $k=1000$).

$$\left( \begin{array}{cc} n & x_n \\ 0 & 1.117855442 \\ 1 & 1.129429575 \\ 2 & 1.129545489 \\ 3 & 1.129545500 \end{array} \right)$$
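
For reference, a minimal sketch of that iteration (Newton applied to $f$ itself, started from $x_{est}$), which shows the same fast convergence:

```python
# Sketch: Newton's method on f for k = 1000, started from the first-order
# estimate; a handful of steps reach the root ~1.1295455.
import numpy as np
from scipy.special import digamma

k = 1000
gam = np.euler_gamma

def f(x):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

def fp(x):  # derivative of f
    return -sum(1.0 / (x - i) ** 2 for i in range(1, k + 1)) + 1.0 / (x - k - 1) ** 2

x = 1 + k / (k * (gam + 1 + digamma(k)) - 1)  # starting guess ~1.117855
for n in range(4):
    print(n, x)
    x -= f(x) / fp(x)
```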

Edit

We can obtain much better approximations if, instead of using a Taylor expansion of $g(x)$ to $O\left((x-1)^2\right)$, we build the simplest $[1,1]$ Padé approximant (which is equivalent to an $O\left((x-1)^3\right)$ Taylor expansion). This would lead to $$x=1+ \frac{6 (k+k (\psi ^{(0)}(k)+\gamma )-1)}{\pi ^2 k+6 (k+\gamma (\gamma k+k-2)-1)-6 k \psi ^{(1)}(k)+6 \psi ^{(0)}(k) (2 \gamma k+k+k \psi ^{(0)}(k)-2)}$$ Repeating the same calculations as above, the results are $$\left( \begin{array}{ccc} k & x_{est} & x_{sol} \\ 2 & 1.60000 & 1.58579 \\ 3 & 1.46429 & 1.46791 \\ 4 & 1.40435 & 1.41082 \\ 5 & 1.36900 & 1.37605 \\ 6 & 1.34504 & 1.35209 \\ 7 & 1.32741 & 1.33430 \\ 8 & 1.31371 & 1.32040 \\ 9 & 1.30266 & 1.30914 \\ 10 & 1.29348 & 1.29976 \\ 11 & 1.28569 & 1.29179 \\ 12 & 1.27897 & 1.28489 \\ 13 & 1.27308 & 1.27884 \\ 14 & 1.26787 & 1.27347 \\ 15 & 1.26320 & 1.26867 \\ 16 & 1.25899 & 1.26433 \\ 17 & 1.25516 & 1.26039 \\ 18 & 1.25166 & 1.25678 \\ 19 & 1.24844 & 1.25346 \\ 20 & 1.24547 & 1.25039 \end{array} \right)$$

For $k=1000$, this would give as an estimate $1.12829$ for an exact value of $1.12955$.
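
The same comparison for the Padé-based estimate, as a sketch (SciPy's `polygamma(1, k)` for $\psi^{(1)}(k)$):

```python
# Sketch: the [1,1] Pade-based estimate versus the root of f in (1,2).
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, polygamma

def f(x, k):
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

gam = np.euler_gamma
for k in (2, 10, 20, 1000):
    p0, p1 = float(digamma(k)), float(polygamma(1, k))
    num = 6 * (k + k * (p0 + gam) - 1)
    den = (np.pi**2 * k + 6 * (k + gam * (gam * k + k - 2) - 1)
           - 6 * k * p1 + 6 * p0 * (2 * gam * k + k + k * p0 - 2))
    x_est = 1 + num / den
    x_sol = brentq(f, 1 + 1e-12, 2 - 1e-12, args=(k,))
    print(k, round(x_est, 5), round(x_sol, 5))  # e.g. k=2 -> 1.60000, 1.58579
```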

For infinitely large values of $k$, the asymptotics of the estimate would be $$x_{est}=1+\frac{6 (\log (k)+\gamma +1)}{6 \log (k) (\log (k)+2 \gamma +1)+\pi ^2+6 \gamma (1+\gamma )+6}$$