Fourier Transform: Understanding the change-of-basis property with ideas from linear algebra

If you have an orthonormal basis $\{ e_{k} \}_{k=1}^{N}$ on a finite-dimensional space, such as what you would obtain with Gram-Schmidt, then every vector $x$ is expressed as $$ x = \sum_{k} (x,e_{k})e_{k}. $$ This extends to $L^{2}[0,2\pi]$ using $e_{k} =\frac{1}{\sqrt{2\pi}}e^{ikx}$: $$ f = \sum_{k}(f,e_{k})e_{k}. $$ The importance of this basis is that it consists of eigenvectors of the operator $L=\frac{1}{i}\frac{d}{dx}$, because $Le_{k}=ke_{k}$. So this basis diagonalizes the differentiation operator.

Finally, the same thing holds in a continuous sense on $L^{2}(\mathbb{R})$ with \begin{align} f & = \int_{k} (f,e_{k})e_{k}\,dk \\ & = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}f(t)e^{-ikt}dt\right)e^{ikx}dk. \end{align} This is a generalization rather than a precise extension, because $e_{k}=\frac{1}{\sqrt{2\pi}}e^{ikx}$ is not square integrable, so $e_{k} \notin L^{2}(\mathbb{R})$ and it is not, strictly speaking, an eigenfunction of $L=\frac{1}{i}\frac{d}{dx}$ in that space. However, for every $\delta > 0$, the function $$ e_{k,\delta}=\int_{k-\delta}^{k+\delta}e_{\ell}\,d\ell = \frac{1}{\sqrt{2\pi}}\int_{k-\delta}^{k+\delta}e^{i\ell x}\,d\ell $$ is square integrable, and, in the norm of $L^{2}$, it becomes closer and closer to an eigenvector with eigenvalue $k$ as $\delta\downarrow 0$: $$ \|Le_{k,\delta}-ke_{k,\delta}\| < \delta\|e_{k,\delta}\|. $$ So the Fourier transform is the coefficient function, and the expansion of $f$ looks very much like a "continuous" (i.e., integral) expansion of $f$ in approximate eigenfunctions of the differentiation operator. (The $e_{k,\delta}$ are even mutually orthogonal if the intervals $(k-\delta,k+\delta)$ do not overlap.)
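To see where that bound comes from, here is a quick sketch of the computation (my own filling-in, using the continuous Parseval identity discussed in the next paragraph): the coefficient function of $e_{k,\delta}$ is the indicator of the interval $[k-\delta,k+\delta]$, so
$$ \|e_{k,\delta}\|^{2} = \int_{k-\delta}^{k+\delta} 1\,d\ell = 2\delta, \qquad \|Le_{k,\delta}-ke_{k,\delta}\|^{2} = \int_{k-\delta}^{k+\delta} (\ell-k)^{2}\,d\ell = \frac{2\delta^{3}}{3}, $$
which gives $\|Le_{k,\delta}-ke_{k,\delta}\| = \frac{\delta}{\sqrt{3}}\,\|e_{k,\delta}\| < \delta\|e_{k,\delta}\|$.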

As a final note, to make this generalization more precise, $$ \|x\|^{2} = \sum_{k}|(x,e_k)|^{2} $$ also holds for the continuous orthogonal expansion: $$ \|f\|^{2} = \int_{k} |(f,e_{k})|^{2}dk. $$ This is how Parseval (after whom Parseval's identity is named) saw it: $$ \int_{-\infty}^{\infty}|f(x)|^{2}dx = \int_{-\infty}^{\infty}|\hat{f}(k)|^{2}dk. $$
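A quick numerical illustration of the discrete analogue of this identity, using NumPy's FFT with its unitary ("ortho") normalization (the test signal and its length are arbitrary choices for the sketch):

```python
import numpy as np

# Discrete analogue of Parseval's identity: with the unitary ("ortho")
# normalization, the DFT preserves the squared l2 norm of a signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)        # arbitrary test signal
X = np.fft.fft(x, norm="ortho")      # unitary DFT

print(np.sum(np.abs(x) ** 2))        # energy in the "time" domain
print(np.sum(np.abs(X) ** 2))        # energy in the "frequency" domain
# The two printed values agree up to floating-point rounding.
```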


You can imagine the derivative as a matrix such as this, understood in the limit where the grid spacing goes to zero (the limit definition of the derivative):

[image: a matrix approximating the derivative]
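For concreteness, here is a small NumPy sketch of such a matrix (a forward-difference approximation on a uniform grid; the grid size and spacing are arbitrary choices for illustration):

```python
import numpy as np

def derivative_matrix(n, dx):
    """Forward-difference matrix: (D @ f)[i] ~ (f[i+1] - f[i]) / dx."""
    # The last row only carries the -1/dx entry, so it is a crude boundary row.
    return (-np.eye(n) + np.eye(n, k=1)) / dx

n, dx = 100, 0.01
x = np.arange(n) * dx
D = derivative_matrix(n, dx)

f = np.sin(x)
df = D @ f   # approximates cos(x); the approximation improves as dx -> 0
```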

Or integration as a matrix like this (a Riemann sum):

[image: a matrix approximating integration as a Riemann sum]
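And a matching NumPy sketch of integration as a matrix, a left-endpoint Riemann sum that accumulates a running integral (again, the grid is an arbitrary illustrative choice):

```python
import numpy as np

def integration_matrix(n, dx):
    """Riemann-sum matrix: (A @ f)[i] ~ dx * (f[0] + ... + f[i])."""
    return np.tril(np.ones((n, n))) * dx   # lower-triangular running sum

n, dx = 100, 0.01
x = np.arange(n) * dx
A = integration_matrix(n, dx)

f = np.cos(x)
F = A @ f   # approximates sin(x) up to O(dx), the running integral of cos
```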

Moreover, the Fourier transform already has a matrix representation in the discrete case:

https://en.wikipedia.org/wiki/DFT_matrix
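Tying this back to the first answer: the columns of the DFT matrix are eigenvectors of every circulant matrix, in particular of a periodic (wrap-around) difference operator, so the DFT is exactly the change of basis that diagonalizes a discrete version of differentiation. A small NumPy sketch (the size $n=8$ and the forward-difference operator are illustrative choices):

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)   # unitary DFT matrix

# Periodic (circulant) forward-difference operator: a discrete d/dx,
# up to a 1/dx factor, with wrap-around boundary conditions.
D = -np.eye(n) + np.roll(np.eye(n), 1, axis=1)

# Conjugating D by the DFT matrix yields a diagonal matrix of eigenvalues.
Lam = F.conj().T @ D @ F
print(np.allclose(Lam, np.diag(np.diag(Lam))))       # True: Lam is diagonal
print(np.allclose(np.diag(Lam),
                  np.exp(-2j * np.pi * np.arange(n) / n) - 1))   # True
```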

You need to extend this matrix to infinite size and shrink the corresponding interval to zero (a Riemann sum) to get the continuous transform.

$$\int_{-\infty}^{\infty}e^{0}dx=\infty$$

Think of this as summing up all the ones on the diagonal: each diagonal entry has the value 1 and corresponds to an infinitesimal interval $dx$, and when you sum infinitely many of them you get infinity.

Alternatively, think about what would happen if you multiplied the derivative matrix by the integration matrix: you would get the identity matrix. Integrating $e^{0}=1$ then amounts to summing the ones on that identity matrix's diagonal, as in the sketch below.
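Here is a minimal NumPy sketch of that claim, pairing a backward-difference matrix with a left-endpoint Riemann-sum matrix (with this particular pairing the product is exactly the identity; other discretizations agree only up to $O(dx)$):

```python
import numpy as np

n, dx = 50, 0.1

# Backward-difference matrix: (D @ f)[i] ~ (f[i] - f[i-1]) / dx
D = (np.eye(n) - np.eye(n, k=-1)) / dx

# Riemann-sum matrix: (A @ f)[i] ~ dx * (f[0] + ... + f[i]), a running integral
A = np.tril(np.ones((n, n))) * dx

# Differentiating the running integral recovers the original samples exactly,
# so D @ A is the identity. Its trace is n, which grows without bound as the
# grid is extended and refined: the discrete shadow of the divergent integral above.
print(np.allclose(D @ A, np.eye(n)))   # True
```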