How is the Taylor series derived?

$\newcommand{\dd}{{\rm d}} \newcommand{\fermi}{\,{\rm f}} \newcommand{\half}{{1 \over 2}} \newcommand{\pars}[1]{\left( #1 \right)}$ Note that $$ \fermi\pars{x} = \fermi\pars{0} + \int_{0}^{x} \fermi'\pars{t}\,\dd t \,\,\,\stackrel{t\ \mapsto\ x - t}{=}\,\,\, \fermi\pars{0} + \int_{0}^{x}\fermi'\pars{x - t}\,\dd t $$

Integrating by parts: \begin{align} \color{#00f}{\fermi\pars{x}}&= \fermi\pars{0} + \fermi'\pars{0}x + \int_{0}^{x}t\fermi''\pars{x - t}\,\dd t \\[5mm] & = \fermi\pars{0} + \fermi'\pars{0}x + \half\,\fermi''\pars{0}x^{2} +\half\int_{0}^{x}t^{2}\fermi'''\pars{x - t}\,\dd t \\[8mm]& = \cdots = \color{#00f}{\fermi\pars{0} + \fermi'\pars{0}x + \half\,\fermi''\pars{0}x^{2} + \cdots + {\fermi^{{\rm\pars{n}}}\pars{0} \over n!}\,x^{n}} \\[2mm] & + \color{#f00}{{1 \over n!}\int_{0}^{x}t^{n} \fermi^{\rm\pars{n + 1}}\pars{x - t}\,\dd t} \end{align}
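
To make the first step explicit, integrate $\int_{0}^{x}\fermi'\pars{x - t}\,\dd t$ by parts with $u = \fermi'\pars{x - t}$ and $\dd v = \dd t$, choosing $v = t$ (note that $\dd u = -\fermi''\pars{x - t}\,\dd t$, which flips the sign of the remaining integral):

$$\int_{0}^{x}\fermi'\pars{x - t}\,\dd t = \Big[\, t\,\fermi'\pars{x - t}\,\Big]_{t = 0}^{t = x} + \int_{0}^{x} t\,\fermi''\pars{x - t}\,\dd t = \fermi'\pars{0}\,x + \int_{0}^{x} t\,\fermi''\pars{x - t}\,\dd t$$

Repeating the same step on the new integral, with $v = t^{2}/2$, produces the next term, and so on.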


This is the general formula for the Taylor series:

$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!} (x - a)^2 + \frac{f^{(3)}(a)}{3!} (x - a)^3 + \cdots + \frac{f^{(n)}(a)}{n!} (x - a)^n + \cdots$$
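
As a quick numerical illustration of this formula (not a formal argument), here is a minimal Python sketch that sums the first few terms of the Taylor series of $e^{x}$ about $a = 1$; the choice of function, center, and number of terms is just for the example.

```python
import math

def taylor_exp(x, a, n_terms):
    """Partial sum of the Taylor series of e^x about the point a.

    Every derivative of e^x is e^x, so f^(k)(a) = e^a for every k.
    """
    fa = math.exp(a)  # value of f and all of its derivatives at the center a
    return sum(fa * (x - a) ** k / math.factorial(k) for k in range(n_terms))

x, a = 2.0, 1.0
for n in (1, 2, 4, 8):
    approx = taylor_exp(x, a, n)
    print(f"{n:2d} terms: {approx:.10f}   error = {abs(approx - math.exp(x)):.2e}")
```

Each additional term improves the approximation near the center $a$, as the shrinking error shows.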

A detailed proof of Taylor's theorem is given in the last answer below.

The series you mentioned for $\sin(x)$ is a special form of the Taylor series, called the Maclaurin series, which is centered at $a=0$.
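
Setting $a=0$ in the general formula above gives the Maclaurin form:

$$f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^{2} + \frac{f^{(3)}(0)}{3!}\,x^{3} + \cdots$$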

The Taylor series is an extremely powerful tool because it shows that a wide class of functions can be represented as infinite polynomials (with a few disclaimers, such as the interval of convergence)! This means that we can differentiate such a function as easily as we can differentiate a polynomial, and we can compare functions by comparing their series expansions.

For instance, we know that the Maclaurin series expansion of $\cos(x)$ is $1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots$ and that the expansion of $\sin(x)$ is $x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+\cdots$. Differentiating the series for $\sin(x)$ term by term, we recover exactly the series for $\cos(x)$, confirming that the derivative of $\sin(x)$ is $\cos(x)$, as the short computation below shows.
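
Differentiating term by term:

$$\frac{d}{dx}\left( x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}+\cdots \right) = 1-\frac{3x^{2}}{3!}+\frac{5x^{4}}{5!}-\frac{7x^{6}}{7!}+\cdots = 1-\frac{x^{2}}{2!}+\frac{x^{4}}{4!}-\frac{x^{6}}{6!}+\cdots,$$

which is exactly the series for $\cos(x)$.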

We can also use the Maclaurin series to prove that $e^{i\theta}=\cos{\theta}+i\sin{\theta}$ and thus $e^{\pi i}+1=0$ by comparing their series:

$$\begin{align} e^{ix} &{}= 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots \\[8pt] &{}= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots \\[8pt] &{}= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right) \\[8pt] &{}= \cos x + i\sin x \ . \end{align}$$

Also, you can use the first few terms of the Taylor series expansion to approximate a function near the point at which you centered your series. For instance, in differential equations we often use the approximation $\sin(\theta)\approx \theta$ for very small values of $\theta$, obtained by taking the first term of the Maclaurin series for $\sin(x)$; the quick check below shows how small the error actually is.
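
As a rough illustration, here is a minimal Python sketch comparing $\sin(\theta)$ with the one-term approximation $\theta$; the sample angles are arbitrary.

```python
import math

# Compare sin(theta) with the one-term Maclaurin approximation theta.
# The size of the error is governed by the next term of the series, theta**3 / 3!.
for theta in (0.5, 0.1, 0.01):
    error = abs(math.sin(theta) - theta)
    print(f"theta = {theta:5.2f}: sin(theta) = {math.sin(theta):.6f}, "
          f"error = {error:.2e}, theta**3/6 = {theta**3 / 6:.2e}")
```

The error shrinks like $\theta^{3}/3!$, which is the first term dropped from the series.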


Taylor's theorem can be proved using only the Fundamental Theorem of Calculus, basic algebraic and geometric facts about integration, and some combinatorics. Although it's a little long to write out, the basic ideas are pretty simple.

The FTOC gives us: $$f(x) = f(a) + \int_a^x f'(x_1)\,dx_1$$ $$f'(x_1) = f'(a) + \int_a^{x_1} f''(x_2)\,dx_2$$ $$f''(x_2) = f''(a) + \int_a^{x_2} f'''(x_3)\,dx_3$$ $$\ldots$$ $$f^{(m)}(x_m) = f^{(m)}(a) + \int_a^{x_{m}} f^{(m+1)}(x_{m+1})\, dx_{m+1},$$ where $f^{(m)}$ denotes the $m$'th derivative of $f$. Substituting the second, third, ... expressions successively into the first gives: $$f(x) = f(a) + \int_{a<x_1<x} f'(a)\, dx_1 +\iint_{a<x_2<x_1<x} f''(a)\,dx_2\,dx_1 + \ldots + {\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} f^{(m)}(a)\,dx_{m} \ldots dx_1 + {\int \ldots \int}_{a<x_{m+1}< \ldots < x_1 < x} f^{(m+1)}(x_{m+1})\,dx_{m+1} \ldots dx_1 $$

For all the multiple integrals except the last one, the integrand is constant and can be pulled outside the integral. This gives us terms of the form: $$f^{(m)}(a){\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} \,dx_{m} \ldots dx_1$$

The ordering of variables $a<x_{m}< \ldots < x_1 < x$ is one of the $m!$ possible orderings of the variables $x_1,\ldots,x_m$. Each of these orderings corresponds to a region in $m$-dimensional space. These regions are disjoint, by symmetry (or a change of variables) they all have the same volume, and their union is (up to a set of measure zero) an $m$-cube with volume $(x-a)^m$. From this we conclude: $${\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} dx_{m} \ldots dx_1 = \frac{(x-a)^m}{m!}.$$

Hence we have $$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + \ldots + f^{(m)}(a)\frac{(x-a)^m}{m!} + {\int \ldots \int}_{a<x_{m+1}< \ldots < x_1 < x} f^{(m+1)}(x_{m+1})\,dx_{m+1} \ldots dx_1 $$

As for the last integral, we have bounds on the integrand: $$ \min_{a<y<x} f^{(m+1)}(y) \le f^{(m+1)}(x_{m+1}) \le \max_{a<y<x} f^{(m+1)}(y),$$ which gives us: $$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + \ldots + f^{(m)}(a)\frac{(x-a)^m}{m!} + R_{m+1}, $$ where $$\left(\min_{a<y<x} f^{(m+1)}(y) \right) \frac{(x-a)^{m+1}}{(m+1)!} \le R_{m+1} \le \left(\max_{a<y<x} f^{(m+1)}(y) \right) \frac{(x-a)^{m+1}}{(m+1)!}.$$

Note that this proof does not even require that $f^{(m+1)}$ be continuous. If $f^{(m+1)}(y)$ is continuous on $a \le y \le x$, then the more conventional (Lagrange) form of the remainder follows immediately from the intermediate value theorem.
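
To see why the ordered region has volume $(x-a)^{m}/m!$, it may help to write out the case $m=2$ explicitly:

$$\iint_{a<x_{2}<x_{1}<x} dx_{2}\,dx_{1} = \int_{a}^{x}\int_{a}^{x_{1}} dx_{2}\,dx_{1} = \int_{a}^{x} (x_{1}-a)\,dx_{1} = \frac{(x-a)^{2}}{2!}.$$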