What are the practical applications of the Taylor Series?

One application is approximating solutions to differential equations. For example, if we have

$$y''-x^2y=e^x$$

Solving this for $y$ in closed form would be difficult, if possible at all. But by writing $y$ as a Taylor series $\sum a_nx^n$, substituting it into the equation, and matching coefficients, we can determine the coefficients of this series one by one, which lets us approximate the solution around a desired point.
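Here's a rough sketch (my own, not from the original answer) of what that coefficient matching looks like in code. Matching the coefficient of $x^n$ on both sides gives $(n+2)(n+1)a_{n+2}-a_{n-2}=\frac1{n!}$, and the initial conditions $y(0)=1$, $y'(0)=0$ are purely illustrative choices:

```python
from math import factorial

def taylor_coeffs(a0, a1, N):
    """Coefficients a_0..a_N of y = sum a_n x^n solving y'' - x^2*y = e^x.

    Matching the coefficient of x^n on both sides gives
    (n+2)(n+1) a_{n+2} - a_{n-2} = 1/n!, so each new coefficient
    follows from earlier ones.
    """
    a = [0.0] * (N + 1)
    a[0], a[1] = a0, a1                      # initial conditions y(0), y'(0)
    for n in range(N - 1):
        prev = a[n - 2] if n >= 2 else 0.0   # the x^2*y term only contributes for n >= 2
        a[n + 2] = (prev + 1.0 / factorial(n)) / ((n + 2) * (n + 1))
    return a

def approx_y(x, coeffs):
    # Evaluate the truncated series at x.
    return sum(c * x ** k for k, c in enumerate(coeffs))

coeffs = taylor_coeffs(a0=1.0, a1=0.0, N=12)
print(approx_y(0.3, coeffs))                 # approximate y(0.3) near x = 0
```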

It's also useful for determining various infinite sums. For example:

$$\frac 1 {1-x}=\sum_{n=0}^\infty x^n$$ Replace $x$ with $-x$: $$\frac 1 {1+x}=\sum_{n=0}^\infty (-1)^nx^n$$ Integrate: $$\ln(1+x)=\sum_{n=0}^\infty \frac{(-1)^nx^{n+1}}{n+1}$$ Substituting $x=1$ gives

$$\ln 2=1-\frac12+\frac13-\frac14+\frac15-\frac16\cdots$$
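As a quick numerical aside (my own check, not part of the original argument), the partial sums do creep toward $\ln 2$, although this alternating series converges quite slowly:

```python
from math import log

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... approach ln(2), but slowly.
total = 0.0
for n in range(1, 10001):
    total += (-1) ** (n + 1) / n
print(total, log(2))   # ~0.69310 vs ~0.69315 after 10,000 terms
```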

There are also applications in physics. If a system under a conservative force (one with an energy function associated with it, like gravity or the electrostatic force) is at a stable equilibrium point $x_0$, then there is no net force and the energy function is concave upwards (the energy being higher on either side is essentially what makes it stable). In terms of Taylor series, the energy function $U$ centred around this point is of the form

$$U(x)=U_0+k_1(x-x_0)^2+k_2(x-x_0)^3\cdots$$

where $U_0$ is the energy at the minimum $x=x_0$; there is no linear term because the force, $-U'(x_0)$, vanishes at equilibrium. For small displacements the higher-order terms will be very small and can be ignored, so we can approximate $U$ by only the first two terms:

$$U(x)\approx U_0+k_1(x-x_0)^2$$

Now force is the negative derivative of energy, $F=-\frac{dU}{dx}$ (forces push you from high to low energy, in proportion to how steeply the energy drops). Applying this, we get

$$F=ma=mx''=-2k_1(x-x_0)$$

Rephrasing in terms of $y=x-x_0$:

$$my''=-2k_1y$$

This is the equation of a simple harmonic oscillator. Basically, for small displacements around any stable equilibrium the system behaves approximately like an oscillating spring, with sinusoidal behaviour. So under certain conditions you can replace a potentially complicated system with another one that is very well understood and well studied. You can see this in a pendulum, for example.
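To see this numerically, here is a small sketch (the values of $g$, $L$, the initial angle, and the time step are my own illustrative choices) comparing a pendulum with its small-angle harmonic approximation:

```python
from math import sin, cos, sqrt

# Pendulum: theta'' = -(g/L) sin(theta); small-angle SHM: theta'' = -(g/L) theta.
g, L = 9.81, 1.0
theta0, dt, steps = 0.1, 0.001, 2000   # small initial angle, 2 seconds of motion

th, w = theta0, 0.0                    # full pendulum: angle and angular velocity
for _ in range(steps):
    w += -(g / L) * sin(th) * dt       # semi-implicit Euler step
    th += w * dt

omega = sqrt(g / L)                    # SHM solution: theta(t) = theta0 * cos(omega * t)
t = steps * dt
print(th, theta0 * cos(omega * t))     # nearly identical for this small theta0
```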

As a final point, they're also useful in determining limits:

$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$ $$=\lim_{x\to0}\frac{\left(x-\frac16x^3+\frac 1{120}x^5\cdots\right)-x}{x^3}$$ $$=\lim_{x\to0}\left(-\frac16+\frac 1{120}x^2\cdots\right)$$ $$=-\frac16$$

which otherwise would have been relatively difficult to determine. Because polynomials behave so much more nicely than other functions, we can use Taylor series to extract useful information that would be very difficult, if not impossible, to obtain directly.
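If you want to check this by machine rather than by hand, SymPy will happily produce both the expansion and the limit (just a sanity check, not part of the original argument):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.sin(x), x, 0, 6))             # x - x**3/6 + x**5/120 + O(x**6)
print(sp.limit((sp.sin(x) - x) / x ** 3, x, 0))  # -1/6
```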

EDIT: I almost forgot to mention the granddaddy:

$$e^x=1+x+\frac12x^2+\frac16x^3+\frac1{24}x^4\cdots$$ $$e^{ix}=1+ix-\frac12x^2-i\frac16x^3+\frac1{24}x^4\cdots$$ $$=\left(1-\frac12x^2+\frac1{24}x^4\cdots\right)+i\left(x-\frac16x^3+\frac1{120}x^5\cdots\right)$$ $$e^{ix}=\cos x+i\sin x$$

This is probably the most important equation in complex analysis. It alone should be motivation enough; the others are really just icing on the cake.
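A quick numerical sanity check of the identity (the test point $x=1.2$ is arbitrary, and 20 terms is more than enough here):

```python
from cmath import exp
from math import cos, sin, factorial

# Sum the first 20 terms of the Taylor series of e^{ix} at a sample point
# and compare with cos(x) + i*sin(x).
x = 1.2
partial = sum((1j * x) ** n / factorial(n) for n in range(20))
print(partial)
print(exp(1j * x), complex(cos(x), sin(x)))   # all three agree closely
```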


In the calculator era, we often don't realize how deeply nontrivial it is to get an arbitrarily good approximation for a number like $e$, or better yet, $e^{\sin(\sqrt{2})}$. It turns out that in the grand scheme of things, $e^x$ is not a very nasty function at all: it's analytic, i.e. it's actually equal to its Taylor series, so if we want to compute its values we just sum the first few terms of its Taylor expansion at some point.

This makes plenty of sense for computing, say, $e^{1/2}$: the series $1+\frac12+\frac{1}{2!}\left(\frac12\right)^2+\frac{1}{3!}\left(\frac12\right)^3+\cdots$ is obviously going to converge very quickly: $\frac{1}{4!\,2^4}<\frac1{100}$ and $\frac{1}{5!\,2^5}<\frac1{1000}$, so we know, for instance, that we can get $e^{1/2}$ to $2$ decimal places by summing the first $5$ terms of the Taylor expansion.
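In code (just to illustrate the claim above):

```python
from math import exp, factorial

# Sum the first 5 terms of the Taylor series of e^x at x = 1/2.
x = 0.5
approx = sum(x ** n / factorial(n) for n in range(5))
print(approx, exp(x))   # 1.6484375 vs 1.6487212..., equal to 2 decimal places
```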

But why should this work for computing something like $e^{100}$? Now the expansion looks like $1+100+100^2/2+100^3/3!+...$, and initially it blows up incredibly fast. This is where analytic functions really show how special they are: the denominators $n!$ grow so fast that it doesn't matter what $x^n$ we have in the numerators, before too long the series will converge. That's the essence of the Taylor approximation: analytic functions are those that are unreasonably close to polynomials.
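A sketch of this for $x=100$ (the stopping tolerance is an arbitrary choice of mine): each term is the previous one times $x/n$, so once $n$ passes $100$ the terms shrink rapidly and the sum settles down.

```python
from math import exp

# Terms of the series for e^100 are huge at first, but each term is the
# previous one times 100/n, so they eventually shrink and the sum converges.
x = 100.0
total, term, n = 0.0, 1.0, 0
while term > 1e-12 * max(total, 1.0):   # stop once terms are negligible
    total += term
    n += 1
    term *= x / n                        # next term x^n / n! from the previous one
print(total, exp(100.0))                 # both roughly 2.688e43
```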

There are much faster methods for getting approximations like the one for $\sqrt{e}$, in theory: using Newton's method to solve $x^2-e=0$ gives an approximation to $\sqrt{e}$ whose number of correct digits roughly doubles with each iteration. But how do we apply Newton's method here? The iteration is $$x_1=x_0-\frac{x_0^2-e}{2x_0}$$ So, if we want a decimal expansion of $\sqrt{e}$, we'd better be able to get one of $x_0^2-e$. And how are we going to get that? The Taylor series.
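A rough sketch of the whole pipeline (the starting guess $x_0=1.5$ and the number of series terms are arbitrary choices of mine): a truncated Taylor series supplies $e$, and Newton's method then polishes $\sqrt{e}$.

```python
from math import exp, factorial, sqrt

# Get e from a truncated Taylor series, then refine sqrt(e) with Newton's method.
e_approx = sum(1.0 / factorial(n) for n in range(20))

x = 1.5                                   # arbitrary starting guess
for _ in range(5):
    x = x - (x * x - e_approx) / (2 * x)  # Newton step for f(x) = x^2 - e
    print(x)                              # correct digits roughly double each time

print(sqrt(exp(1)))                       # reference value, ~1.6487212707
```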


  • You might want to read this: Taylor Series as Definitions.

Taylor series are studied because polynomial functions are easy to work with: if one can represent a complicated function as a series (an "infinite polynomial"), then one can study the properties of that difficult function much more easily.

  1. Evaluating definite integrals: Some functions have no antiderivative that can be expressed in terms of familiar functions. This makes evaluating their definite integrals difficult, because the Fundamental Theorem of Calculus cannot be used. If we have a polynomial representation of a function, we can often use it to evaluate a definite integral (see the sketch after this list).

  2. Understanding asymptotic behaviour: Sometimes, a Taylor series can tell us useful information about how a function behaves in an important part of its domain.

  3. Understanding the growth of functions

  4. Solving differential equations
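For item 1, here is a small sketch of my own (using $\int_0^1 e^{-x^2}\,dx$, a standard example of a function with no elementary antiderivative): integrating the Taylor series of $e^{-x^2}$ term by term over $[0,1]$ gives $\sum_{n=0}^\infty \frac{(-1)^n}{n!\,(2n+1)}$.

```python
from math import erf, factorial, pi, sqrt

# Integrate the Taylor series of e^{-x^2} term by term over [0, 1].
series_value = sum((-1) ** n / (factorial(n) * (2 * n + 1)) for n in range(15))

exact_value = sqrt(pi) / 2 * erf(1)   # the same integral via the error function
print(series_value, exact_value)      # both ~0.746824
```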

I'm pretty sure this is not all, but with a little research you can find many more.