Why does Taylor’s series “work”?

I had this same problem, too. The trick is realizing that there's an important difference between Taylor series and Taylor approximations, or polynomials, whose behavior is described by Taylor's theorem. I suspect a common mistake is that you first see Taylor polynomials and Taylor's theorem, then the Taylor series becomes the focus, and you forget about the rest.

But here, what we're actually doing when we "truncate" a Taylor series is going back to a Taylor polynomial, since that is what a truncated Taylor series is - or, looked at the other way, a Taylor series is the natural extension of the Taylor polynomial to infinite order. In that context, Taylor's theorem tells you exactly how it does or does not behave as an approximation, and - surprise - it doesn't require anything about analyticity at all. Analyticity only comes into play when you consider the full series: in fact, what Taylor's theorem tells you is that a finite Taylor polynomial will still work as an approximation even for a non-analytic function, so long as you get suitably close to the expansion point and the function is differentiable enough for a polynomial of the given degree to be formed at all.

Specifically, Taylor's theorem tells you that, analytic or not, if you cut the Taylor series so that the highest term has degree $N$, to form the Taylor polynomial (or truncated Taylor series) $T_N(a, x)$, where $a$ is the expansion point, you have

$$f(x) = T_N(a, x) + o(|x - a|^N),\ \ \ \ \ x \rightarrow a$$

where the last part describes the behavior of the remainder term: this is "little-o notation", and it means that the error shrinks faster than $|x - a|^N$ as $x \rightarrow a$, i.e. the ratio of the error to $|x - a|^N$ tends to $0$.
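
To see this behavior concretely, here is a small numerical check (my own illustration, with $f(x) = e^x$, $a = 0$, and $N = 3$ as arbitrary choices): the ratio of the truncation error to $|x - a|^N$ should shrink toward zero as $x$ approaches $a$.

```python
# Minimal numerical check of Taylor's theorem (an illustration, not a proof):
# the error of the degree-N Taylor polynomial should vanish faster than
# |x - a|**N as x -> a.  Function, expansion point, and degree are arbitrary.
import math

def taylor_poly_exp(x, a, N):
    """Degree-N Taylor polynomial of exp about the point a."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k) for k in range(N + 1))

a, N = 0.0, 3
for h in (1e-1, 1e-2, 1e-3):
    x = a + h
    ratio = abs(math.exp(x) - taylor_poly_exp(x, a, N)) / abs(x - a) ** N
    print(f"h = {h:.0e}   |f(x) - T_N(a, x)| / |x - a|^N = {ratio:.3e}")
# The printed ratios shrink toward 0, consistent with the o(|x - a|^N) statement.
```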

As an example in elementary mathematical physics, consider the analysis of the "pathological" potential in Newtonian mechanics given by

$$U(x) := \begin{cases} e^{-\frac{1}{x^2}},\ x \ne 0\\ 0,\ \mbox{otherwise} \end{cases}$$

which is smooth everywhere, but not analytic at $x = 0$. In particular, it is so bad that not only is it not analytic, but its Taylor series at $x = 0$ exists and even converges ... just to the wrong thing!:

$$U(x)\ "="\ 0 + 0x + 0x^2 + 0x^3 + 0x^4 + \cdots,\ \ \ \ \mbox{near $x = 0$}$$

... and yes, that is literally 0s on every term, so the right-hand expression equals $0$!


Nonetheless, while that is technically "wrong", the usual methods of analysis for this system will still tell you the "right thing", provided you're careful: in particular, we note that $x = 0$ looks like some kind of "equilibrium" since $U'$ is zero there, but we are also told - correctly! - that we should not apply the harmonic oscillator approximation, because the coefficient in front of $x^2$ is 0 as well.

We are justified in both conclusions because while this Taylor series is "bad", it is still A-OK by Taylor's theorem to write the truncated series, and thus Taylor polynomial,

$$U(x) \approx 0 + 0x + 0x^2,\ \ \ \ \mbox{near $x = 0$}$$

even though it "equals $0$", because this $U(x)$ is "so exquisitely approximated by the constant function $U^{*}(x) := 0$" that it is $o(|x|^N)$ for every order $N > 0$ and thus, in particular, also $N = 2$! Hence, the harmonic analysis and conclusion of failure thereof are still 100% justified!
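
If you want to see that "exquisite approximation" numerically, here is a quick sketch (my own illustration; the sample points and exponents are arbitrary choices): the ratio $U(x)/x^N$ tends to $0$ as $x \rightarrow 0$ for every fixed $N$, which is exactly the $o(|x|^N)$ statement.

```python
# Quick numerical illustration: U(x) = exp(-1/x^2) is flattened so hard at 0
# that U(x)/x^N -> 0 for every fixed N, i.e. U is o(|x|^N) at x = 0.
import math

def U(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for N in (2, 5, 10):
    for x in (0.3, 0.2, 0.1):
        print(f"N = {N:2d}, x = {x:.1f}:  U(x)/x^N = {U(x) / x**N:.3e}")
# For each N, the ratio collapses toward 0 as x -> 0.
```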


ADD (IE+1936.6817 Ms - 2018-05-16): Per a comment added below, there is an additional wrinkle in this story which I had been thinking of mentioning but didn't; in light of that comment, I thought I now should.

There are actually two different ways in which the Taylor series can fail when it is taken at a point where the function is not analytic. One is the way I showed above - the Taylor series converges, but it converges to the "wrong" thing, in that it does not equal the function on any non-trivial interval around that point (you might be able to have it equal the function on some weird dusty, broken-up set, but not on any interval), i.e. on no interval $[a - \epsilon, a + \epsilon]$ with $\epsilon > 0$. Such a point is called a Cauchy point, or C-point.

The other way is for the Taylor series to actually have radius of convergence $0$, i.e. it does not converge on any non-trivial interval of the same form with $\epsilon > 0$. This kind of point is called a Pringsheim point, or P-point.

This case was not demonstrated above, but even then, the Taylor series is still an asymptotic series: if you're close enough to the expansion point $a$, the partial sums first approach the function's value before eventually diverging, and the closer you are to $a$, the more terms you can take before they turn around and start to diverge. Since in physics we are usually interested - especially for the harmonic oscillator - in only a few low-order terms, the ultimate behavior of the series is not important, and we can still use it to get, say, the harmonic approximation near a point of equilibrium even if the function is not analytic there. For example, consider the potential $U_3(x) := U(x) + \frac{1}{2} kx^2$ with $k > 0$, built from the potential $U$ given above. This is not analytic at $x = 0$ either, but nonetheless the harmonic approximation will not only work, but work exquisitely well, with the frequency $\omega := \sqrt{\frac{k}{m}}$ as usual.
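
Here is a minimal sketch of that last claim (assuming, for illustration, $k = m = 1$ and a few arbitrary sample points): near $x = 0$ the non-analytic bump in $U_3$ is utterly negligible compared with the harmonic part, so $\frac{1}{2}kx^2$ approximates $U_3$ exquisitely and the usual $\omega = \sqrt{k/m}$ applies.

```python
# Sketch: near x = 0, U_3(x) = exp(-1/x^2) + (1/2) k x^2 is indistinguishable
# from its harmonic part, even though U_3 is not analytic at the equilibrium.
# k, m, and the sample points below are assumed values for illustration.
import math

k, m = 1.0, 1.0

def U3(x):
    bump = math.exp(-1.0 / x**2) if x != 0 else 0.0
    return bump + 0.5 * k * x**2

for x in (0.5, 0.3, 0.1):
    harmonic = 0.5 * k * x**2
    rel_err = abs(U3(x) - harmonic) / harmonic
    print(f"x = {x:.1f}:  relative error of harmonic approximation = {rel_err:.3e}")

print("omega = sqrt(k/m) =", math.sqrt(k / m))
```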

See:

https://math.stackexchange.com/questions/620290/is-it-possible-for-a-function-to-be-smooth-everywhere-analytic-nowhere-yet-tay


The Stone-Weierstrass theorem says that any continuous function on a compact interval is arbitrarily well approximated by polynomials. Thus, as long as we're only interested in explaining experimental results (and not in the exact solutions of theoretical models), series expansions are plenty good enough. That is, for whatever we'd like to describe, there is a model (at least in the statistics sense) which describes it to any good enough accuracy and which is analytic. So it doesn't seem obvious that we'll ever be able to tell whether the world is analytic or just $C^\infty$, or even $C^0$!
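
As a toy illustration of that approximation idea (my own sketch, not part of the argument above), Bernstein polynomials give a constructive proof of the Weierstrass theorem: they approximate any continuous function on $[0, 1]$ uniformly, even one as non-smooth as $|x - 1/2|$.

```python
# Illustration of the Weierstrass approximation theorem via Bernstein
# polynomials: the maximum error over [0, 1] shrinks as the degree grows,
# even though f(x) = |x - 0.5| is not differentiable at 0.5.
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f, evaluated at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1))

f = lambda x: abs(x - 0.5)
grid = [i / 200 for i in range(201)]
for n in (10, 50, 250):
    max_err = max(abs(f(x) - bernstein(f, n, x)) for x in grid)
    print(f"degree {n:3d}:  max |f - B_n f| on [0, 1] = {max_err:.4f}")
```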

Of course, in practice our theories make infinitely many predictions for the values of such functions: for instance, mechanics determines them as solutions to differential equations, field theory as certain integrals. Typically we cannot evaluate our theoretical predictions exactly, so we resort to numerics or asymptotic series methods. Things that come out of our models tend not to be analytic, so I think we're a bit spoiled in our physics education with all these analytic and solvable models.

The question of why so many (but not all!) exact solutions to theoretical models are real or even complex analytic is a whole different discussion, and much more mysterious, although causality does have some bearing on it. For instance, causal response functions, viewed as functions of frequency, always extend analytically to the upper half of the complex frequency plane.

But more mysteriously, there are things like the KdV equation, the first equation to describe solitons, whose integrability turned out to be closely related to elliptic curves. So integrability seems not just related to analyticity, but even to algebraicity! But it is a rather hidden connection, because the solutions to KdV themselves are transcendental. Anyway, I do recommend the book I linked. It's written for undergraduates and it's a lot of fun.


If we know the value of $f$ at $t$, and we want to know the value of $f(t+\Delta t)$ for small $\Delta t$, then the most basic thing to do is to just assume that $f(t+\Delta t) \approx f(t)$. This is known as the "zeroth order approximation". In calculus, you learned about tangent lines. With a tangent line, instead of approximating the function with a fixed value, you approximate it with a line whose slope is the derivative: $f(t+\Delta t) \approx f(t)+(\Delta t) f'(t)$. This is the "first order approximation". So this is treating the derivative as a constant, i.e. a first order approximation of the function is given in terms of a zeroth order approximation of the derivative.

We could instead calculate a first order approximation of the derivative, and use that to approximate the function. This would then be a second order approximation of the function. We have $f'(t+\Delta t) \approx f'(t) +(\Delta t)f''(t)$, and integrating that we get $f(t+\Delta t) \approx f(t)+(\Delta t)f'(t)+\frac {(\Delta t)^2}{2}f''(t)$. We can continue this process, and the $n$th order approximation will then simply be the terms of degree $0$ through $n$ of the Taylor series. This isn't assuming that the function is analytic; it's simply applying an intuitive strategy to approximate the function.
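
Here is that iterative process written out literally as a small sketch (my own illustration, using $f = \sin$, $t = 1$, $\Delta t = 0.2$ as arbitrary choices): start from a constant approximation of the highest derivative, integrate it, add the value of the next lower derivative at $t$, and repeat; the coefficients that fall out are exactly the Taylor ones.

```python
# The bootstrap described above, implemented literally: integrate the current
# polynomial approximation (in dt) and prepend the value of the next lower
# derivative at t.  The result is the Taylor polynomial.
import math

def bootstrap_taylor(deriv_values):
    """deriv_values = [f(t), f'(t), ..., f^(n)(t)]; returns coefficients in dt."""
    coeffs = [deriv_values[-1]]                    # constant approx of f^(n)
    for value in reversed(deriv_values[:-1]):
        coeffs = [c / (k + 1) for k, c in enumerate(coeffs)]  # integrate in dt
        coeffs = [value] + coeffs                  # add next lower derivative at t
    return coeffs                                  # coeffs[k] == f^(k)(t) / k!

t, dt = 1.0, 0.2
deriv_values = [math.sin(t), math.cos(t), -math.sin(t), -math.cos(t)]
coeffs = bootstrap_taylor(deriv_values)
approx = sum(c * dt**k for k, c in enumerate(coeffs))
print("bootstrapped 3rd order approx:", approx, "  exact:", math.sin(t + dt))
```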

So is it valid? Well, if we have a bound on the $n$th derivative of $f$ over the interval, then we can use that to bound the error in our approximation of the $(n-1)$th derivative, which in turn bounds the error in the $(n-2)$th, and so on. So even without knowing that $f$ is analytic, having a bound on the $n$th derivative gives a bound on the error of the $n$th order approximation.
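
As a concrete check (my own example, using $f = \sin$, for which every derivative is bounded by $1$): the standard Lagrange remainder bound $|\Delta t|^{n+1}/(n+1)!$ does indeed dominate the actual error at each order of approximation, with no appeal to analyticity.

```python
# Check of the error-bound idea for f = sin: since every derivative of sin is
# bounded by 1, the error of the nth order approximation is at most
# |dt|**(n+1) / (n+1)!  (the Lagrange remainder bound).
import math

t, dt = 1.0, 0.3
derivs = [math.sin, math.cos, lambda x: -math.sin(x), lambda x: -math.cos(x)]
exact = math.sin(t + dt)

for n in range(4):
    approx = sum(derivs[k](t) * dt**k / math.factorial(k) for k in range(n + 1))
    bound = dt ** (n + 1) / math.factorial(n + 1)
    print(f"n = {n}:  actual error = {abs(exact - approx):.2e}   bound = {bound:.2e}")
```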