How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?

Proof: Consider the function $f(t) = e^{-it}(\cos t + i \sin t)$ for $t \in \mathbb{R}$. By the product rule, \begin{eqnarray} f^{\prime}(t) = e^{-i t}(i \cos t - \sin t) - i e^{-i t}(\cos t + i \sin t) = 0 \end{eqnarray} for all $t \in \mathbb{R}$. Hence $f$ is constant. Since $f(0) = 1$, it follows that $f(t) = 1$ identically, that is, $e^{-it}(\cos t + i \sin t) = 1$. Multiplying both sides by $e^{it}$ and using $e^{it}e^{-it} = e^{0} = 1$ gives $e^{it} = \cos t + i \sin t$ for all $t \in \mathbb{R}$, as claimed.
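If you want a quick sanity check on the computation (not a substitute for the proof), here is a minimal SymPy sketch, assuming SymPy is available, that differentiates $f$ symbolically and confirms both $f' \equiv 0$ and $f \equiv 1$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
f = sp.exp(-sp.I * t) * (sp.cos(t) + sp.I * sp.sin(t))

# The derivative should simplify to 0, and f itself to 1.
print(sp.simplify(sp.diff(f, t).rewrite(sp.exp)))  # expected: 0
print(sp.simplify(f.rewrite(sp.exp)))              # expected: 1
```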


Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the Maclaurin series for sine and cosine, which are known (in a first-year calculus context) to converge for all real $x$, together with the Maclaurin series for $e^z$, taking it on faith that the latter converges for pure-imaginary $z$, since justifying that properly requires complex analysis.

The Maclaurin series: \begin{align} \sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots \\\\ \cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots \\\\ e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots \end{align}
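If you want to see these series in action, here is a short Python sketch (standard library only; the test point and number of terms are arbitrary choices) comparing truncated partial sums against `math.sin` and `math.cos`:

```python
import math

def sin_series(x, terms=20):
    # Partial sum of sum_n (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x, terms=20):
    # Partial sum of sum_n (-1)^n x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.3  # arbitrary test point
print(sin_series(x), math.sin(x))  # should agree to ~15 digits
print(cos_series(x), math.cos(x))
```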

Substitute $z=ix$ in the last series: \begin{align} e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots \\\\ &=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots \\\\ &=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right) \\\\ &=\cos x+i\sin x \end{align}
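The same substitution can be checked numerically. Below is a short sketch (standard library only; the test value of $x$ is arbitrary) that sums the series for $e^{ix}$ term by term and compares the result with $\cos x + i\sin x$:

```python
import cmath
import math

def exp_series(z, terms=40):
    """Partial sum of the Maclaurin series for e^z."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term        # add z^n / n!
        term *= z / (n + 1)  # advance to z^(n+1) / (n+1)!
    return total

x = 2.1  # arbitrary real test value
lhs = exp_series(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(lhs, rhs, abs(lhs - rhs))  # difference on the order of 1e-16
print(cmath.exp(1j * x))         # the library value agrees as well
```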


Let $\mathbf{A}$ be an $n \times n$ matrix. Recall that the system of differential equations

$$\mathbf{x}' = \mathbf{Ax}$$

has the unique solution $\mathbf{x} = e^{\mathbf{A}t} \mathbf{x}(0)$, where $\mathbf{x}$ is a vector-valued differentiable function and $e^{\mathbf{A}t}$ denotes the matrix exponential. In particular, let $\mathbf{J} = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right]$. Then the system of differential equations

$$x' = -y, \qquad y' = x$$

with initial conditions $x(0) = 1, y(0) = 0$ has the unique solution $\left[ \begin{array}{c} x \\ y \end{array} \right] = e^{\mathbf{J}t} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$. On the other hand, the above equations tell us that $x'' = -x$ and $y'' = -y$, and we know that the solutions of this differential equation are of the form $a \cos t + b \sin t$ for constants $a, b$. By matching initial conditions we in fact find that $x = \cos t$ and $y = \sin t$. Now verify that multiplying a vector by $\mathbf{J}$ has the same effect as multiplying the corresponding complex number $x + iy$ by $i$, and you obtain Euler's formula.
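To make this concrete, here is a small numerical sketch (assuming NumPy and SciPy are available; the value of $t$ is arbitrary) that computes $e^{\mathbf{J}t}$ directly and checks that it is the rotation matrix whose first column is $(\cos t, \sin t)$:

```python
import numpy as np
from scipy.linalg import expm

t = 0.7                                   # arbitrary test value
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])               # acts on (x, y) like multiplication by i on x + iy
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation by angle t

print(np.allclose(expm(J * t), R))         # expected: True
print(expm(J * t) @ np.array([1.0, 0.0]))  # ~ [cos t, sin t]
```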

This proof has the following attractive physical interpretation: a particle whose $x$- and $y$-coordinates satisfy $x' = -y, y' = x$ has the property that its velocity is always perpendicular to its displacement and proportional to it in magnitude. But from physics lessons you know that this uniquely describes particles moving in a circle.

Another way to interpret this proof is as a description of the exponential map from the Lie algebra $\mathbb{R}$ to the Lie group $\text{SO}(2)$. Euler's formula generalizes to quaternions, and this in turn can be thought of as describing the exponential map from the Lie algebra $\mathbb{R}^3$ (with the cross product) to $\text{SU}(2)$ (which can then be sent to $\text{SO}(3)$). This is one reason it is convenient to use quaternions to describe 3-d rotations in computer graphics; the exponential map makes it easy to interpolate between two rotations.
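To illustrate the quaternion version of the formula, here is a minimal self-contained Python sketch of the exponential map from $\mathbb{R}^3$ to unit quaternions, $\exp(\mathbf{v}) = \cos\lvert\mathbf{v}\rvert + \frac{\mathbf{v}}{\lvert\mathbf{v}\rvert}\sin\lvert\mathbf{v}\rvert$; the function name `quat_exp` and the $(w, x, y, z)$ ordering are my own conventions for illustration, not anything from the answer above:

```python
import math

def quat_exp(v):
    """Exponential of the pure quaternion v = (vx, vy, vz).

    Returns (w, x, y, z) = (cos|v|, sin|v| * v/|v|), the quaternion analogue of
    e^{i*theta} = cos(theta) + i*sin(theta); it represents a rotation by angle
    2|v| about the axis v/|v|.
    """
    theta = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    if theta < 1e-12:                      # near zero, exp(v) is essentially the identity
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(theta) / theta
    return (math.cos(theta), s * v[0], s * v[1], s * v[2])

# A rotation by pi/2 about the z-axis corresponds to v = (0, 0, pi/4):
print(quat_exp((0.0, 0.0, math.pi / 4)))   # ~ (0.7071, 0, 0, 0.7071)
```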

Edit: whuber's answer reminded me of an excellent animated graphic.

It shows what is happening geometrically in whuber's answer, and is essentially what happens if you apply Euler's method to the system of ODEs described above: each frame plots the powers of $1 + \frac{i}{N}$ up to the $N^{\text{th}}$ power.
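The connection to Euler's method can itself be checked in a few lines of Python (standard library only): one Euler step for $z' = iz$ with step size $1/N$ multiplies $z$ by $1 + \frac{i}{N}$, so $N$ steps starting from $z = 1$ give $\left(1 + \frac{i}{N}\right)^N$, which approaches $e^{i} = \cos 1 + i\sin 1$ as $N$ grows:

```python
import cmath

# Euler's method for z' = i z, z(0) = 1, on [0, 1] with N steps of size 1/N:
# each step multiplies z by (1 + i/N), so the result is (1 + i/N)**N.
for N in (10, 1_000, 100_000):
    z = 1 + 0j
    for _ in range(N):
        z *= 1 + 1j / N
    print(N, z)

print("exact:", cmath.exp(1j))  # cos(1) + i*sin(1)
```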