Significance of Sobolev spaces for numerical analysis & PDEs?

Sobolev spaces are useful because they are complete function spaces with a norm that

  1. reflects the differentiability of functions (unlike the $L^p$ norm)
  2. has nice geometry (unlike the $C^k$ norm)
  3. allows approximation by $C^\infty$ functions (unlike the $C^k$ norm)

"Nice geometry" means: a uniformly convex norm (often even an inner-product norm). Uniform convexity implies reflexivity (Milman–Pettis theorem), which in turn yields

  1. Concrete representation of linear functionals. This enables reformulation of problems using duality.
  2. Weak compactness of closed, bounded, convex sets. With compactness arguments one can show the existence of extremals in variational problems.
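A standard concrete instance of point 1 (my illustration, not spelled out above): the weak formulation of the Poisson problem $-\Delta u = f$ on a bounded domain $\Omega$. One seeks $u \in H^1_0(\Omega)$ with

$$\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f \, v \, dx \qquad \text{for all } v \in H^1_0(\Omega).$$

The left-hand side is precisely the $H^1_0$ inner product, so the Riesz representation theorem hands you existence and uniqueness of $u$ for every $f \in L^2(\Omega)$ (indeed for every $f$ in the dual space $H^{-1}(\Omega)$).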

Even problems that are not obviously variational can often be usefully treated as such (e.g. solving $Ax=b$ can be recast as minimizing $\|Ax-b\|^2$).
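As a toy numerical sketch of that remark (my illustration, with a made-up $2\times 2$ system, not part of the answer above): gradient descent on the functional $J(x)=\|Ax-b\|^2$ recovers the solution of $Ax=b$.

```python
# Recast the linear system A x = b as minimization of J(x) = ||A x - b||^2,
# solved here by plain gradient descent (step size small enough to converge).
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # an invertible 2x2 matrix
b = np.array([5.0, 5.0])                 # chosen so the exact solution is (1, 2)

x = np.zeros(2)
step = 0.05
for _ in range(2000):
    grad = 2.0 * A.T @ (A @ x - b)       # gradient of ||A x - b||^2
    x = x - step * grad

print(np.round(x, 6))                    # close to [1. 2.]
```

Note that the minimizer exists and is unique here because $J$ is a strictly convex quadratic; the variational viewpoint is what guarantees the iterates have something to converge to.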

Approximation by $C^\infty$ functions makes it possible to prove estimates for smooth functions first, using the machinery of classical derivatives, and then extend them to the whole space by density.
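A minimal example of this pattern (mine, added for illustration): for $u \in C_c^\infty(0,1)$, the fundamental theorem of calculus and the Cauchy–Schwarz inequality give

$$|u(x)| = \left| \int_0^x u'(t)\,dt \right| \le \left( \int_0^1 |u'(t)|^2 \,dt \right)^{1/2},$$

hence $\|u\|_{L^2(0,1)} \le \|u'\|_{L^2(0,1)}$ (a Poincaré inequality). Since $C_c^\infty(0,1)$ is dense in $H^1_0(0,1)$ and both sides are continuous in the $H^1$ norm, the inequality extends to all of $H^1_0(0,1)$, where the pointwise argument itself is unavailable.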


Suppose you want to find a number $r$ whose square $r^{2}$ is $2$. That has no meaning within the rationals: all numbers on a computer are rational, and $\sqrt{2}$ is not rational. It wasn't until the late 1800s that mathematicians found a logically consistent way to define a real number. But once such a beast could be defined, one can prove that various algorithms get you closer and closer to $r=\sqrt{2}$, knowing that there is something to converge to. The existence of such an object in the extended "real" number system became essential to the discussion.
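A small sketch of that point (my addition): the Babylonian/Newton iteration produces a sequence of rationals, and $\sqrt{2}$, a real number outside the rationals, is exactly the thing they converge *to*.

```python
# Newton's iteration for f(r) = r^2 - 2, carried out in exact rational
# arithmetic: every iterate is rational, yet the limit is irrational.
from fractions import Fraction

x = Fraction(1)                      # start from the rational number 1
for _ in range(6):
    x = (x + 2 / x) / 2              # Newton step: x_{n+1} = (x_n + 2/x_n)/2

print(float(x))                      # ~ 1.4142135623730951
print(x.denominator)                 # a large integer: x is still rational
```

Without the real numbers there is no object for this sequence to converge to; with them, the convergence (here, quadratic) can be proved.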

Sobolev spaces are to the ordinary differentiable functions what the real numbers are to the rational numbers. In the late 1800s it was discovered that problems in the calculus of variations need not have minimizing or maximizing functions among the classically differentiable ones. It was the same type of problem: a larger class of functions had to be considered, and the definition of the integral had to be extended, in order to make sense of the variational expressions and to find a minimizer or maximizer. So new function spaces emerged, Lebesgue integration extended the integral expressions to these new classes, and solutions could be found. Once minimizing or maximizing functions were known to exist, their properties could be deduced, and this validated algorithms whose approximations previously could not converge because there was nothing for them to converge to.