Is there a mean value theorem for higher order differences?

Let $f(x)$ be a function defined on $[a, a+nh]$ that is continuously differentiable up to order $n$.
Let $P(x)$ be the polynomial of degree $n$ that coincides with $f(x)$ at the $n+1$ points $a, a+h, \ldots, a+nh$. It is clear that

$$\left. \Delta_h^n ( f(x) - P(x) )\right|_{x=a} =\sum_{k=0}^n (-1)^{n-k} \binom{n}{k}( f(a+kh)-P(a+kh) ) = 0 $$ This implies $$\left.\Delta_h^n f(x)\right|_{x=a} = \left.\Delta_h^n P(x)\right|_{x=a} = h^n P^{(n)}(a)\tag{*1}$$ where the last equality holds because the $n^{\text{th}}$ forward difference of a degree-$n$ polynomial equals $h^n\, n!$ times its leading coefficient, i.e. the constant value of $h^n P^{(n)}$.
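A quick numerical sanity check of $(*1)$, as a sketch in Python (the choices $f = \exp$, $n = 3$, $a = 0$, $h = 0.1$ and the use of `numpy.polyfit` to build $P$ are illustrative assumptions, not part of the argument):

```python
import numpy as np
from math import comb

f = np.exp                                   # test function (assumed for illustration)
n, a, h = 3, 0.0, 0.1
nodes = a + h * np.arange(n + 1)

# Interpolating polynomial P of degree n through the n+1 nodes.
P = np.polyfit(nodes, f(nodes), n)           # coefficients, highest degree first
Pn_at_a = np.polyval(np.polyder(P, n), a)    # P^(n)(a), a constant for degree-n P

# n-th forward difference of f at a, straight from the binomial sum.
delta_n_f = sum((-1) ** (n - k) * comb(n, k) * f(a + k * h) for k in range(n + 1))

print(delta_n_f, h ** n * Pn_at_a)           # the two values agree up to rounding
```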

On the other hand, $f(x) - P(x)$ vanishes at the $n+1$ points $a, a+h, a+2h, \ldots, a+nh$.
By Rolle's theorem, we can find $n$ points $x_1, x_2, \ldots, x_n$:

$$a < x_1 < a+h < x_2 < a+2h < \cdots < a + (n-1)h < x_n < a+nh$$

such that $f'(x) - P'(x)$ vanishes at each of these $x_i$. Applying Rolle's theorem $(n-1)$ more times, we can conclude that there is a $b \in (a, a+nh)$ such that

$$f^{(n)}(b) - P^{(n)}(b) = 0 \quad\iff\quad f^{(n)}(b) = P^{(n)}(b)\tag{*2}$$
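The first Rolle step can also be seen numerically. A sketch, assuming $f = \exp$ (so $f' = \exp$), $n = 3$, $a = 0$, $h = 0.1$, with $P$ built via `numpy.polyfit`; it locates the near-zero of $f' - P'$ in each subinterval by sampling a fine grid:

```python
import numpy as np

f, fprime = np.exp, np.exp               # assumed test function: f = exp, so f' = exp
n, a, h = 3, 0.0, 0.1
nodes = a + h * np.arange(n + 1)

P = np.polyfit(nodes, f(nodes), n)       # interpolating polynomial of degree n
dP = np.polyder(P)                       # coefficients of P'

def g(t):                                # g = f' - P'; Rolle gives a zero in each subinterval
    return fprime(t) - np.polyval(dP, t)

for k in range(n):
    ts = np.linspace(a + k * h, a + (k + 1) * h, 10001)
    i = np.argmin(np.abs(g(ts)))
    print(f"x_{k+1} ≈ {ts[i]:.6f},  (f' - P')(x_{k+1}) ≈ {g(ts[i]):.2e}")
```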

Since $P(x)$ is a polynomial of degree $n$, $P^{(n)}(x)$ is a constant. This means $P^{(n)}(a) = P^{(n)}(b)$ and by combining $(*1)$ and $(*2)$, we get:

$$\frac{1}{h^n}\left.\Delta_h^n f(x)\right|_{x=a} = f^{(n)}(b)\quad\text{ for some }b \in (a, a + nh)$$
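To see the statement in action, here is a small sketch that recovers such a $b$ by bisection, assuming $f = \exp$ (so $f^{(n)} = \exp$ is strictly increasing, which makes the bisection legitimate), with $n = 3$, $a = 0$, $h = 0.1$:

```python
import numpy as np
from math import comb

f = np.exp                                   # assumed: f = exp, hence f^(n) = exp
n, a, h = 3, 0.0, 0.1

# Scaled n-th forward difference of f at a.
target = sum((-1) ** (n - k) * comb(n, k) * f(a + k * h) for k in range(n + 1)) / h ** n

# Solve f^(n)(b) = target on (a, a + n*h) by bisection; exp is strictly increasing.
lo, hi = a, a + n * h
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < target:
        lo = mid
    else:
        hi = mid

print(f"b ≈ {lo:.10f}  with  f^(n)(b) = {f(lo):.10f}  vs  {target:.10f}")
```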


I refer to Wikipedia for the finite difference notation.

I will give you an exact representation formula for finite differences, in the same spirit as Taylor's theorem with the remainder in integral form.

The key point is that finite differences and derivatives commute: $D\Delta_h=\Delta_hD$.

For $f\in C^1(\mathbb R)$ you can compute $$ \frac1h\Delta_h[f](x) = \frac{f(x+h)-f(x)}h = \frac1h \int_0^h D[f](x+x_1) \,dx_1 $$
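A minimal numerical check of this base case, assuming `scipy` is available and picking $f = \sin$, $x = 0.3$, $h = 0.2$ purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

f, Df = np.sin, np.cos                 # assumed: f = sin, so D[f] = cos
x, h = 0.3, 0.2

lhs = (f(x + h) - f(x)) / h
integral, _ = quad(lambda x1: Df(x + x1), 0, h)
print(lhs, integral / h)               # both print (approximately) the same number
```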

For $f\in C^n(\mathbb R)$, iterating the above formula, you get $$ \begin{split} \frac1{h^n}\Delta_h^n[f](x) &= \frac{\frac1{h^{n-1}}\Delta_h^{n-1}[f](x+h) - \frac1{h^{n-1}}\Delta_h^{n-1}[f](x)} {h} \\ &= \frac1h \int_0^h \frac1{h^{n-1}}D\Delta_h^{n-1}[f](x+x_1) \,dx_1 \\ &= \frac1{h^2} \int_0^h \int_0^h \frac1{h^{n-2}} D^2\Delta_h^{n-2}[f](x+x_1+x_2) \,dx_1dx_2 \\ &= \,\cdots \\ &= \frac1{h^n} \int_0^h\dotsi\int_0^h D^n[f](x+x_1+\dotsb+x_n) \,dx_1\dotsm dx_n . \end{split} $$
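For concreteness, the case $n = 2$ can be checked with a double quadrature (a sketch under the same illustrative assumptions, $f = \sin$, $x = 0.3$, $h = 0.2$, using `scipy.integrate.dblquad`):

```python
import numpy as np
from scipy.integrate import dblquad

f = np.sin
D2f = lambda t: -np.sin(t)             # assumed: f = sin, so D^2[f] = -sin
x, h = 0.3, 0.2

lhs = (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h ** 2     # (1/h^2) * second forward difference

# dblquad integrates func(x2, x1) for x1 in [0, h] and x2 in [0, h].
integral, _ = dblquad(lambda x2, x1: D2f(x + x1 + x2), 0, h, lambda x1: 0, lambda x1: h)
print(lhs, integral / h ** 2)          # both print (approximately) the same number
```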

The last integral is a weighted average of $D^n[f]$ over the segment $[x,x+nh]$. More precisely, let $X_1,\dotsc,X_n\sim\mathrm{Uniform}(0,h)$ be i.i.d. and $S_n=X_1+\dotsb+X_n$. Then the previous expression can be viewed as $$ \frac1{h^n}\Delta_h^n[f](x) = \mathbb{E}\left[D^n[f](x+S_n)\right]. $$
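This probabilistic reading is easy to test by Monte Carlo. A sketch, again with the illustrative choices $f = \sin$, $n = 3$ (so $D^3[f] = -\cos$), $x = 0.3$, $h = 0.2$:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
f = np.sin
D3f = lambda t: -np.cos(t)             # assumed: f = sin, n = 3, so D^3[f] = -cos
n, x, h = 3, 0.3, 0.2

# Exact scaled n-th forward difference.
lhs = sum((-1) ** (n - k) * comb(n, k) * f(x + k * h) for k in range(n + 1)) / h ** n

# Monte Carlo estimate of E[ D^n[f](x + S_n) ], S_n a sum of n iid Uniform(0, h).
S = rng.uniform(0, h, size=(10 ** 6, n)).sum(axis=1)
print(lhs, D3f(x + S).mean())          # agree to a few decimal places
```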

This can also be written as $$ \frac1{h^n}\Delta_h^n[f](x) = \int_0^{nh} D^n[f](x+t) \sigma_n(t) \,dt $$ where $\sigma_n$ is the probability density function of $S_n$, which is a rescaling of the Irwin–Hall distribution (its PDF has a nice closed form).
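The same identity can be checked against the explicit Irwin–Hall density (a sketch, assuming the standard closed form of the Irwin–Hall PDF and the same illustrative choices $f = \sin$, $n = 3$, $x = 0.3$, $h = 0.2$):

```python
import numpy as np
from math import comb, factorial, floor
from scipy.integrate import quad

f = np.sin
D3f = lambda t: -np.cos(t)                      # assumed: f = sin, n = 3, so D^3[f] = -cos
n, x, h = 3, 0.3, 0.2

def irwin_hall_pdf(u, n):
    """PDF of the sum of n iid Uniform(0, 1) variables, supported on [0, n]."""
    if u < 0 or u > n:
        return 0.0
    return sum((-1) ** k * comb(n, k) * (u - k) ** (n - 1)
               for k in range(floor(u) + 1)) / factorial(n - 1)

sigma = lambda t: irwin_hall_pdf(t / h, n) / h  # density of S_n = h * (Irwin-Hall sum)

lhs = sum((-1) ** (n - k) * comb(n, k) * f(x + k * h) for k in range(n + 1)) / h ** n
rhs, _ = quad(lambda t: D3f(x + t) * sigma(t), 0, n * h, points=[h, 2 * h])
print(lhs, rhs)                                 # both print (approximately) the same number
```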

Since $\sigma_n$ is nonnegative (and integrates to $1$), the mean value theorem for integrals applies and tells us that there exists $x^*\in(x,x+nh)$ such that $$ \frac1{h^n}\Delta_h^n[f](x) = D^n[f](x^*). $$