Do we lose any solutions when applying separation of variables to partial differential equations?

Consider your purported solution $u(x,t)$ at fixed $t$, i.e., think of it as a function only of $x$. Such a function can be expanded in a complete set of functions $f_n(x)$, $$ u(x,t)=\sum_{n} u_n f_n(x) $$ What happens when you now choose a different fixed $t$? As long as the boundary conditions in the $x$ direction don't change (which is the case in your example), you can still expand in the same set $f_n(x)$, so the only place where the $t$-dependence enters is in the coefficients $u_n$: they are what changes when you expand a different function of $x$ in the same set of $f_n(x)$. So the complete functional dependence of $u(x,t)$ can be written as $$ u(x,t)=\sum_{n} u_n(t) f_n(x) $$

Thus, when we make a separation ansatz, we are not assuming that our solutions are products. We are merely stating that we can construct a basis of product form in which our solutions can be expanded. That is not a restriction for a large class of problems.

As is evident from the preceding argument, this goes wrong when the boundary conditions in the $x$ direction do depend on $t$: then we cannot expand in the same set $f_n(x)$ for each $t$. For example, if the domain were triangular such that the length of the $x$-interval depends on $t$, the frequencies in the sine functions in your example would become $t$-dependent.
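To make this concrete, here is a minimal numerical sketch (assuming, as in your example, the heat equation $u_t = k\,u_{xx}$ on $[0,L]$ with $u=0$ at both ends; the grid and constants are illustrative). A sum of two separable modes is itself not a product, yet at every fixed $t$ it expands in the same sine basis, with only the coefficients $u_n(t)$ changing:

```python
import numpy as np

L, k = 1.0, 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def u(x, t):
    # A sum of two separable solutions; the sum itself is not of product form.
    return (np.exp(-k * (np.pi / L)**2 * t) * np.sin(np.pi * x / L)
            + 0.5 * np.exp(-k * (2 * np.pi / L)**2 * t) * np.sin(2 * np.pi * x / L))

def u_n(n, t):
    # Fourier sine coefficient u_n(t) = (2/L) * integral of u(x,t) sin(n pi x/L) dx
    return (2.0 / L) * np.sum(u(x, t) * np.sin(n * np.pi * x / L)) * dx

for t in (0.0, 0.1, 0.5):
    # u_1 tracks exp(-k (pi/L)^2 t) and u_2 tracks 0.5 exp(-k (2 pi/L)^2 t)
    print(f"t={t}: u_1={u_n(1, t):.4f}, u_2={u_n(2, t):.4f}")
```

The basis functions $f_n(x) = \sin(n\pi x/L)$ never change with $t$; all of the time dependence sits in the coefficients, exactly as claimed.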


As you correctly noted, in the end we write our solution as a superposition of separable solutions, so the right question is really: 'can we express every solution to our PDE as a sum of separable solutions?'

A thorough answer to this question requires a little linear algebra. What we want to do is find a set of functions $\{\varphi_n(x): n \in \mathbb{N}\}$ so that for each time $t$ we can write our solution $f$ as $f = \sum_{n=0}^{\infty} \varphi_n(x) G_n(t)$, where the $G_n$ are just some coefficients which are allowed to depend on time. Not only does such a set of functions exist, we can actually find one through the process of separation of variables.

Let's consider the heat equation again. When we separate variables, we reduce the situation to two ODEs:

$$G'(t) = EG(t), \qquad \varphi''(x) = \frac{E}{k}\varphi(x) $$ where $E$ is some unknown constant.
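For completeness, here is the step that produces these two ODEs (assuming, consistent with the constants above, that the heat equation reads $u_t = k\,u_{xx}$). Substituting the product ansatz $f(x,t) = \varphi(x)G(t)$ gives $$\varphi(x)G'(t) = k\,\varphi''(x)G(t) \quad\Longrightarrow\quad \frac{G'(t)}{G(t)} = k\,\frac{\varphi''(x)}{\varphi(x)}.$$ The left side depends only on $t$ and the right side only on $x$, so both must equal the same constant, which we call $E$; this yields the two equations above.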

Remember that differentiation is linear: that is, for functions $f$ and $g$ and constants $a,b$ we have $\frac{d}{dx}(af(x)+bg(x)) = a\frac{df}{dx} + b \frac{dg}{dx}$. What this means is that our two ODEs are eigenvalue problems: we have an eigenvalue problem for the operator $\frac{d}{dt}$ with eigenvalue $E$, and an eigenvalue problem for the operator $\frac{d^2}{dx^2}$ with eigenvalue $\frac{E}{k}$.
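As a quick symbolic check of the second eigenvalue problem (a small illustrative script, not part of the argument), the sine modes from your example really are eigenfunctions of $\frac{d^2}{dx^2}$:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', positive=True, integer=True)

phi = sp.sin(n * sp.pi * x / L)

# phi'' / phi collapses to a constant: the eigenvalue -(n pi / L)^2 = E/k
eigenvalue = sp.simplify(sp.diff(phi, x, 2) / phi)
print(eigenvalue)  # -> -pi**2*n**2/L**2
```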

We need the eigenvectors of $\frac{d^2}{dx^2}$ (i.e. the solutions to our $\varphi$ ODE) to form a basis for our space of functions. Luckily, there is a theorem that does exactly this sort of thing for us.

Spectral Theorem:

Let $V$ be a Hilbert space and $T: V \to V$ a (sufficiently nice) self-adjoint map. Then there exists an orthonormal basis for $V$ which consists of eigenvectors for $T$.

In order to make sense of this, we need one final ingredient: an inner product. This is just something which generalises the familiar 'dot product' in three dimensions. The inner product of two functions $f$, $g$ on our interval $[0,L]$ is a real number, defined as $$\langle f,g\rangle := \int_{0}^{L} f(x)g(x)\, dx.$$

A basis of functions $\{f_n: n \in \mathbb{N}\}$ is called orthonormal if $\langle f_n, f_n \rangle = 1$ and $\langle f_n, f_m \rangle = 0$ when $n \neq m$.
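Here is a quick numerical illustration of both definitions (the interval $[0,L]$ with $L=1$ and the normalization $\sqrt{2/L}$ for the sine modes are standard choices, included for concreteness):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

def inner(u, v):
    # <u, v> = integral_0^L u(x) v(x) dx, approximated on the grid
    return np.sum(u * v) * dx

def f(n):
    # Normalized sine modes, so that <f_n, f_n> = 1
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in range(1, 4):
    for m in range(1, 4):
        # Approximately 1 when n = m and 0 otherwise
        print(f"<f_{n}, f_{m}> = {inner(f(n), f(m)):.4f}")
```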

Finally, we just need to check that the operator $\frac{d^2}{dx^2}$ is self-adjoint. What this means is that for any two functions $f$, $g$ we have that $\langle \frac{d^2 f}{dx^2},g\rangle = \langle f,\frac{d^2g}{dx^2} \rangle$. This can be done by integration by parts:

$$\int_{0}^{L} f''(x)g(x) dx = - \int_{0}^{L} f'(x)g'(x) dx = \int_{0}^{L} f(x)g''(x) dx$$ where we have thrown away the boundary terms because the boundary conditions (here, $f$ and $g$ vanish at $x=0$ and $x=L$) tell us that they are zero.
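If you want to see this identity in action, here is a numerical sanity check (the particular pair of functions is an illustrative choice; both vanish at $x=0$ and $x=L$ as the boundary conditions require):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

f = x * (L - x)                  # f(0) = f(L) = 0
f2 = -2.0 * np.ones_like(x)      # f''(x), computed by hand
g = np.sin(np.pi * x / L)        # g(0) = g(L) = 0
g2 = -(np.pi / L)**2 * g         # g''(x), computed by hand

lhs = np.sum(f2 * g) * dx        # <f'', g>
rhs = np.sum(f * g2) * dx        # <f, g''>
print(lhs, rhs)                  # both approximately -4L/pi = -1.2732...
```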

Hence, the operator $\frac{d^2}{dx^2}$ is self-adjoint, and so the spectral theorem tells us that its eigenvectors form a basis for our function space, so for any given $t$ we can express any chosen function as $$f = \sum_{n=0}^{\infty} \varphi_n(x) G_n(t)$$ Thus we haven't lost any solutions: every solution can be written in this form. I have skipped a few technical issues here: I haven't told you what the Hilbert space is, and when I say 'any' function, I really mean 'any square-integrable' function. But I don't think these technicalities are important for the understanding.
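To see the spectral theorem at work in a setting where everything is finite-dimensional, here is a sketch (the finite-difference discretization is standard; the grid size is illustrative): $\frac{d^2}{dx^2}$ with zero boundary values becomes a symmetric matrix, and its eigenvectors form an orthonormal basis approximating the sine modes:

```python
import numpy as np

L, N = 1.0, 200
h = L / (N + 1)
x = np.linspace(h, L - h, N)             # interior grid points

# Symmetric tridiagonal approximation of d^2/dx^2 with Dirichlet conditions
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

eigvals, eigvecs = np.linalg.eigh(A)     # eigh applies because A is symmetric
order = np.argsort(-eigvals)             # sort: least negative eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print(eigvals[:3])                       # approx -(n pi / L)^2 for n = 1, 2, 3
print(np.allclose(eigvecs.T @ eigvecs, np.eye(N)))  # True: orthonormal basis

# First eigenvector matches sin(pi x / L) up to sign and normalization:
mode = np.sin(np.pi * x / L)
print(np.abs(eigvecs[:, 0] @ mode) / np.linalg.norm(mode))  # approximately 1
```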


As a fun extra, now that we have our inner product, we can use it to derive the coefficients in our series solution directly. We write our solution as $$f(x,t) = \sum_{n=0}^{\infty} \varphi_n(x) G_n(t)$$ and now let's take the inner product of $f$ at $t=0$ with the basis element $\varphi_n(x)$. This gives us

$$\langle f(x,0), \varphi_n(x) \rangle = \left\langle \sum_{k=0}^{\infty} \varphi_k(x) G_k(0), \varphi_n(x) \right\rangle = \sum_{k=0}^{\infty} G_k(0) \langle \varphi_k(x) , \varphi_n(x) \rangle $$

Here we have interchanged integration and summation. Finally, the orthonormality of the basis $\{\varphi_k(x)\}$ means that all of the terms but one are zero, so we get $$ \langle f(x,0), \varphi_n(x) \rangle = G_n(0) $$ Recall that $G_n(t) = B_n e^{-k\left(\frac{n\pi}{L}\right)^2 t}$, so $B_n = G_n(0)$, and writing our inner product formula in terms of an integral, we get $$B_n = \int_{0}^{L} f(x,0)\, \varphi_n(x)\, dx = \int_{0}^{L} f(x,0) \sin\left(\frac{n\pi x}{L}\right) dx$$ which (up to the normalization constant of the eigenfunctions) is our usual expression for the series coefficients!
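Here is the recipe run end to end on a concrete initial profile (the choice $f(x,0) = x(L-x)$ is illustrative, and the code uses the plain sine basis with its usual $2/L$ normalization factor):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]
f0 = x * (L - x)                     # initial condition f(x, 0)

def B(n):
    # Coefficient from the inner-product integral (2/L normalizes the sines)
    return (2.0 / L) * np.sum(f0 * np.sin(n * np.pi * x / L)) * dx

series = sum(B(n) * np.sin(n * np.pi * x / L) for n in range(1, 30))
print(np.max(np.abs(series - f0)))   # small: the series recovers f(x, 0)

# The full solution then attaches the time dependence:
# f(x, t) = sum_n B(n) exp(-k (n pi / L)^2 t) sin(n pi x / L)
```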


The method of separation of variables derives from the symmetries of the equation; see e.g. W. Miller's book Symmetry and Separation of Variables (out of print, but available online).

Separation of variables for nonlinear equations is treated by Victor A. Galaktionov and Sergey R. Svirshchevskii in their book Exact Solutions and Invariant Subspaces of Nonlinear Partial Differential Equations, Chapman and Hall/CRC, 2007.