Expected number of terms needed to get a sum greater than $T$, for i.i.d. random variables uniformly distributed in $(0,1)$

Write $S_n=X_1+\cdots+X_n$; then $S_n$ follows the Irwin–Hall distribution. Therefore,

$$ \mathsf{P}(N> n)=\mathsf{P}(S_{n+1}\le T)=\frac{1}{(n+1)!}\sum_{k=0}^{\lfloor T\rfloor}(-1)^k\binom{n+1}{k}(T-k)^{n+1} $$

and, since $\mathsf{E}N=\sum_{n\ge 0}\mathsf{P}(N>n)$, reindexing $n+1\to n$ gives

\begin{align}
\mathsf{E}N &=\sum_{n\ge 1}\frac{1}{n!}\sum_{k=0}^{\lfloor T\rfloor}(-1)^k\binom{n}{k}(T-k)^n \\
&=\sum_{k=0}^{\lfloor T\rfloor}\frac{(-1)^k}{k!}\sum_{n\ge 1\vee k}\frac{(T-k)^n}{(n-k)!} \\
&=\left(\sum_{k=0}^{\lfloor T\rfloor}\frac{(-1)^k}{k!}(T-k)^ke^{T-k}\right)-1.
\end{align}

In particular, for $T\in [0,1]$, $\mathsf{E}N=e^T-1$.
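For a quick numerical sanity check of this closed form, here is a short Python sketch (the helper names are mine, not part of the derivation) that compares it with a truncation of $\sum_{n\ge 1}\mathsf{P}(S_n\le T)$, evaluating the Irwin–Hall CDF directly; for $T\le 1$ both should return $e^T-1$.

```python
import math

def irwin_hall_cdf(n, T):
    """P(S_n <= T) for the sum of n i.i.d. Uniform(0,1) variables (Irwin-Hall CDF)."""
    return sum((-1)**k * math.comb(n, k) * (T - k)**n
               for k in range(min(math.floor(T), n) + 1)) / math.factorial(n)

def expected_N_closed_form(T):
    """E[N] = (sum_{k=0}^{floor T} (-1)^k/k! (T-k)^k e^{T-k}) - 1, as derived above."""
    return sum((-1)**k / math.factorial(k) * (T - k)**k * math.exp(T - k)
               for k in range(math.floor(T) + 1)) - 1

def expected_N_series(T, n_max=200):
    """Truncation of E[N] = sum_{n>=0} P(N > n) = sum_{n>=1} P(S_n <= T)."""
    return sum(irwin_hall_cdf(n, T) for n in range(1, n_max + 1))

for T in (0.5, 1.0, 2.5):
    # e.g. T = 1.0 gives e - 1 in both columns
    print(T, expected_N_closed_form(T), expected_N_series(T))
```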


Alternatively, is there an easier way to compute $\mathbb E[N]$?

Indeed, there is. For every nonnegative $t$, consider $N_t=\inf\{ n : S_n>t\}$; then your $N$ is $N_T-1$, hence it suffices to compute every $E(N_t)$. Conditioning on $X_1$, one sees that, for $t<1$, $$E(N_t)=1+\int_0^tE(N_{t-s})\,ds=1+\int_0^tE(N_s)\,ds$$ while, for $t>1$, $$E(N_t)=1+\int_0^1E(N_{t-s})\,ds=1+\int_{t-1}^tE(N_s)\,ds.$$

Thus, the function $n(t)=E(N_t)$ solves the differential equation $$n'(t)=n(t)$$ on $(0,1)$, and the delay differential equation $$n'(t)=n(t)-n(t-1)$$ for $t>1$, with the initial condition $n(0)=1$. Equivalently, $m(t)=e^{-t}n(t)$ solves the differential equation $$m'(t)=0$$ on $(0,1)$, and the delay differential equation $$m'(t)=-e^{-t}n(t-1)=-e^{-1}m(t-1)$$ for $t>1$, with the initial condition $m(0)=1$.

Thus, $m(t)=1$ on $(0,1)$, $m(t)=1-e^{-1}(t-1)$ on $(1,2)$, and one can deduce recursively similar formulas for $m(t)$, hence for $n(t)$, on each interval $(k,k+1)$ with $k$ a nonnegative integer. In the end, on $(k,k+1)$, $m(t)$ is a polynomial of degree $k$, and $n(t)=e^tm(t)$. The result is related to the Irwin–Hall distribution.
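As a concrete illustration of this recursion, here is a small Python sketch (the function names are mine). Writing $m(t)=p_k(t-k)$ on $(k,k+1)$, the delay equation together with continuity at $t=k$ gives $p_k(s)=p_{k-1}(1)-e^{-1}\int_0^s p_{k-1}(u)\,du$, starting from $p_0=1$; the code builds these polynomial pieces and evaluates $n(t)=e^t m(t)$.

```python
import math

def m_polys(k_max):
    """Coefficient lists of p_k, where m(t) = p_k(t - k) on (k, k+1):
    p_0 = 1, and p_k(s) = p_{k-1}(1) - e^{-1} * integral_0^s p_{k-1}(u) du."""
    e_inv = math.exp(-1)
    polys = [[1.0]]                                            # p_0(s) = 1 on (0, 1)
    for _ in range(k_max):
        prev = polys[-1]
        integral = [c / (i + 1) for i, c in enumerate(prev)]   # coefficients of s^1, s^2, ...
        polys.append([sum(prev)] + [-e_inv * c for c in integral])  # constant term = p_{k-1}(1)
    return polys

def n_of_t(t, polys):
    """E[N_t] = e^t * m(t), evaluating the piecewise polynomial for m."""
    k = int(t)
    s = t - k
    m = sum(c * s**i for i, c in enumerate(polys[k]))
    return math.exp(t) * m

polys = m_polys(3)
print(polys[1])                              # [1.0, -0.367...] : m(t) = 1 - e^{-1}(t-1) on (1, 2)
print(n_of_t(0.5, polys), math.exp(0.5))     # on (0, 1): n(t) = e^t
print(n_of_t(2.5, polys))                    # ~5.67
```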

The case of the interval $(0,1)$ is especially pleasing: since $m(t)=1$ on $(0,1)$, one gets simply $$E(N_t)=e^t$$ and, in particular, the a priori surprising value $$E(N_1)=e.$$
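That value is easy to check by simulation; the following minimal Python sketch (not part of the argument, names are mine) estimates $E(N_1)$ as the average number of uniform draws needed for the running sum to exceed $1$.

```python
import math
import random

def draws_to_exceed_one(trials=500_000):
    """Monte Carlo estimate of E[N_1]: number of Uniform(0,1) draws
    needed for the running sum to exceed 1."""
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s <= 1.0:
            s += random.random()
            n += 1
        total += n
    return total / trials

print(draws_to_exceed_one(), math.e)   # both ~2.718
```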


The simpler version in the post you mentioned is based on the fact that, for a non-negative integer-valued random variable $N$, $$\mathbb{E}[N]=\sum_{k=0}^{\infty}k\,\Pr\{N=k\}=\sum_{i=1}^{\infty}\Pr\{N\geq i\}.$$ (Refer to this question for the proof: Find the Mean for Non-Negative Integer-Valued Random Variable.)

Take the simple case where $X_i\sim U(0,1)$ and $T\in [0,1]$ as an example: by induction one can prove $\Pr\{N\geq n+1\}=\Pr\{S_n\le T\}=\frac{T^n}{n!}$, so $$ \mathbb{E}[N]=\sum_{i=1}^\infty \Pr\{N\geq i\}=\sum_{n=0}^\infty \frac{T^n}{n!}=e^T. $$
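For completeness, here is a small Python sketch (variable and function names are mine) that checks the induction claim $\Pr\{N\geq n+1\}=T^n/n!$ by Monte Carlo and then sums the tails to recover $e^T$.

```python
import math
import random

def tail_prob_mc(n, T, trials=200_000):
    """Monte Carlo estimate of Pr{N >= n+1} = Pr{S_n <= T}."""
    hits = sum(sum(random.random() for _ in range(n)) <= T for _ in range(trials))
    return hits / trials

T = 0.8
for n in range(1, 5):
    print(n, tail_prob_mc(n, T), T**n / math.factorial(n))

# Tail-sum identity: E[N] = sum_{i>=1} Pr{N >= i} = sum_{n>=0} T^n/n! = e^T
print(sum(T**n / math.factorial(n) for n in range(30)), math.exp(T))
```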