When do Taylor series provide a perfect approximation?

Limits are exact

You have a misunderstanding about limits! A limit, when it exists, is just a value. An exact value.

It doesn't make sense to talk about the limit reaching some value, or there being some error. $\lim_{x \to 1} x^2$ is just a number, and that number is exactly one.

What you are describing — these ideas about "reaching" a value with some "error" — are descriptions of the behavior of the expression $x^2$ as $x \to 1$. Among the features of this behavior is that $x^2$ is "reaching" one.

By its very definition, the limit is the exact value that its expression is "reaching". $x^2$ may be "approximately" one, but $\lim_{x \to 1} x^2$ is exactly one.

Taylor polynomials

In this light, nearly everything you've said in your post is not about Taylor series, but instead about Taylor polynomials. When a Taylor series exists, a Taylor polynomial is obtained simply by truncating the series after finitely many terms. (Taylor polynomials can exist in situations where Taylor series don't.)

In general, the definition of the $n$-th order Taylor polynomial at a point $a$ for an $n$-times differentiable function $f$ is the sum

$$ \sum_{k=0}^n f^{(k)}(a) \frac{(x-a)^k}{k!} $$

Taylor polynomials, generally, are not exactly equal to the original function. The only time that happens is when the original function is a polynomial of degree less than or equal to $n$.

The sequence of Taylor polynomials, as $n \to \infty$, may converge to something. When it does, the Taylor series is exactly what the Taylor polynomials converge to: at each $x$ where the series converges, its value is the limit of the values of the Taylor polynomials at $x$.

The error in the approximation of a function by a Taylor polynomial is something people study. One often speaks of the "remainder term" or the "Taylor remainder", which is precisely the error term. There are a number of theorems that put constraints on how big the error term can be.
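To make this concrete, here is a small numerical sketch in Python (the function name `sin_taylor` and the sample point $x = 2$ are chosen purely for illustration). It compares the degree-$n$ Taylor polynomials of $\sin$ at $0$ with the true value: the error shrinks as $n$ grows, but it is never exactly zero.

```python
import math

def sin_taylor(x, n):
    """Degree-n Taylor polynomial of sin about 0:
    sum_{k=0}^{n} sin^{(k)}(0) * x**k / k!  (only odd k contribute)."""
    derivs_at_0 = [0.0, 1.0, 0.0, -1.0]  # derivatives of sin at 0 cycle with period 4
    return sum(derivs_at_0[k % 4] * x**k / math.factorial(k) for k in range(n + 1))

x = 2.0
for n in (1, 3, 5, 7, 9):
    p = sin_taylor(x, n)
    print(f"n={n}:  P_n({x}) = {p:.10f}   error = {abs(math.sin(x) - p):.2e}")
```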

Taylor series can have errors!

Despite all of the above, one of the big surprises of real analysis is that a function might not be equal to its Taylor series! There is a notorious example:

$$ f(x) = \begin{cases} 0 & x = 0 \\ \exp(-1/x^2) & x \neq 0 \end{cases} $$

You can prove that $f$ is infinitely differentiable everywhere. However, all of its derivatives have the property that $f^{(k)}(0) = 0$, so its Taylor series around zero is simply the zero function.
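As a numerical illustration (a small Python sketch; the sample points are arbitrary): the function is strictly positive away from $0$, while its Taylor series at $0$ predicts $0$ everywhere.

```python
import math

def f(x):
    """The classic smooth-but-not-analytic function: exp(-1/x^2) for x != 0, and 0 at x = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# Every Taylor coefficient of f at 0 vanishes, so the Maclaurin series is identically zero,
# yet f(x) > 0 whenever x != 0 -- the series equals f only at the single point x = 0.
for x in (0.5, 0.2, 0.1):
    print(f"x = {x}:  f(x) = {f(x):.3e},   Taylor series at 0 gives 0.0")
```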

However, we define

A function $f$ is analytic at a point $a$ if there is an interval around $a$ on which $f$ is (exactly) equal to its Taylor series.

"Most" functions mathematicians actually work with are analytic functions (e.g. all of the trigonometric functions are analytic on their domain), or analytic except for obvious exceptions (e.g. $|x|$ is not analytic at zero, but it is analytic everywhere else).


This is the fundamental question behind remainder estimates for Taylor's Theorem. Typically (meaning for sufficiently differentiable functions, and assuming without loss of generality that we are expanding at $0$), we have estimates of the form
$$ f(x) = \sum_{n = 0}^N f^{(n)}(0) \frac{x^n}{n!} + E_N(x), $$
where the error $E_N(x)$ is given explicitly by
$$ E_N(x) = \int_0^x f^{(N+1)}(t) \frac{(x-t)^N}{N!} \, dt, $$
though this is frequently replaced by the cruder bound
$$ |E_N(x)| \leq \max_{t \in [0,x]} |f^{(N+1)}(t)| \frac{|x|^{N+1}}{(N+1)!}. $$

The Taylor series converges to $f$ at $x$ if and only if $E_N(x) \to 0$ as $N \to \infty$. If the derivatives are well-behaved, this is relatively easy to check. But if the derivatives are hard to understand, then this question can be very hard to answer.
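As a quick sanity check, here is a small Python sketch taking $f(x) = e^x$ (chosen purely for illustration): every derivative of $e^t$ on $[0,x]$ is at most $e^x$, so the estimate above becomes $e^x |x|^{N+1}/(N+1)!$, and both the true error and the bound go to $0$ as $N$ grows.

```python
import math

# For f(x) = exp(x), every derivative is exp(t), so max_{t in [0,x]} |f^(N+1)(t)| = exp(x),
# and the estimate above reads  |E_N(x)| <= exp(x) * x**(N+1) / (N+1)!   (here x = 1).
x = 1.0
for N in (2, 5, 10, 15):
    partial_sum = sum(x**n / math.factorial(n) for n in range(N + 1))
    actual_error = abs(math.exp(x) - partial_sum)
    bound = math.exp(x) * x**(N + 1) / math.factorial(N + 1)
    print(f"N = {N:2d}:  |E_N| = {actual_error:.3e}   bound = {bound:.3e}")
```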

There are examples of infinitely differentiable functions whose Taylor series don't converge in any neighborhood of the central expansion point, and there are examples of functions with pretty hard-to-understand derivatives whose Taylor series converge everywhere to that function. Saying anything more precise requires examining each function individually.


Suppose we have a smooth function $f$. Its Taylor series at a point $a$ is the series
$$\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n.\tag1$$
So, what you want to know is this: when does this series converge to $f(x)$ for each $x$ in the domain of $f$? In order to determine that, we study the remainder of the Taylor series, which is
$$f(x)-\sum_{k=0}^n\frac{f^{(k)}(a)}{k!}(x-a)^k.$$
Given $x$ in the domain of $f$, the series $(1)$ converges to $f(x)$ if and only if the sequence of remainders converges to $0$. This is what happens (for every $x$) in the case of the exponential function, the sine function, or the cosine function.
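For the sine function, for example, every derivative is bounded by $1$ in absolute value, so the standard Lagrange estimate gives
$$\left|\sin x-\sum_{k=0}^n\frac{\sin^{(k)}(0)}{k!}x^k\right|\le\frac{|x|^{n+1}}{(n+1)!}\xrightarrow[n\to\infty]{}0$$
for every real $x$, which is exactly why the series converges to $\sin x$ everywhere.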

The worst case is when you have a function such as
$$f(x)=\begin{cases}e^{-1/x^2}&\text{ if }x\neq0\\0&\text{ if }x=0.\end{cases}$$
In this case, the Taylor series at $0$ is just the null series, which converges to $f(x)$ when (and only when) $x=0$.