What is the purpose of defining a Hilbert Space?

The issue is that there are many different infinite-dimensional analogues of the familiar finite-dimensional vector spaces, and not all of them have nice properties.

Consider $C^\infty(\mathbb R)$, the space of infinitely-differentiable functions on $\mathbb R$. If $f,g$ are infinitely differentiable, then so is $f+g$, and for any $\lambda\in\mathbb R$ the function $\lambda f$ is infinitely differentiable as well, so this is a perfectly good vector space.

So, since we supposedly don't care about the nature of the space we're working in, let's just do some calculations, right? What's the inner product on this space? Is there one? Is there even a norm on this space?

The answer to both of those turns out to be no (at least, no inner product or norm compatible with the natural topology of the space), but unless you can simply say "because $C^\infty(\mathbb R)$ is a Fréchet space that is not normable", you may have to do a non-trivial amount of work to prove it. If you skip that work, you run into issues such as the obvious candidate $$\langle f,g\rangle = \int_{\mathbb R}f(x)\overline{g(x)}\,dx$$ being infinite for many pairs of functions. You may recall that inner product spaces (of which Hilbert spaces are examples) are a subset of normed spaces, since $\|f\| = \sqrt{\langle f,f\rangle}$ always defines a norm. So, can we put a norm on $C^\infty(\mathbb R)$ that induces its usual topology? Again, no: it is a Fréchet space, whose topology comes from a countable family of seminorms but from no single norm (whereas Banach spaces, and hence Hilbert spaces, are normed).
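
For a concrete illustration of how the obvious candidate fails, take the simplest smooth function of all: let $f = g \equiv 1$. This is certainly in $C^\infty(\mathbb R)$, yet $$\langle f,g\rangle = \int_{\mathbb R}1\cdot\overline{1}\,dx = \infty,$$ so the formula does not even assign a finite number to every pair of functions, let alone define an inner product on the whole space.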

So, without ensuring you're in a nice space beforehand, you can get results that really throw off some of your computations.

Additionally, Hilbert spaces have a property called the minimum principle: if $H$ is a Hilbert space, and $E$ is a "nice" subset of it (meaning closed, convex, and non-empty), then $E$ has a unique element of minimum norm. This is not true in a generic function space. Famously, while working on PDEs, Riemann assumed a statement of this kind (the Dirichlet principle) in a space that is not a Hilbert space, until Weierstrass produced a counterexample (see the history of the Dirichlet principle).
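
A classical example in the spirit of Weierstrass's counterexample (the details differ from his original one): try to minimize $$J(u) = \int_{-1}^{1}\bigl(x\,u'(x)\bigr)^2\,dx$$ over continuously differentiable $u$ with $u(-1) = -1$ and $u(1) = 1$. Smoothed step functions such as $u_\varepsilon(x) = \arctan(x/\varepsilon)/\arctan(1/\varepsilon)$ drive $J$ down to $0$, so the infimum is $0$; but $J(u) = 0$ would force $u' = 0$ away from the origin, making $u$ constant and contradicting the boundary conditions. The infimum is never attained, so "take the minimizer" is not a valid step in this setting.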

So, we say we work in a Hilbert space because not all spaces are created equal, and when we want our space to have certain nice properties, we have to state explicitly that we are assuming them.


There are many examples of Hilbert spaces that one might not immediately think of as vector spaces. Many famous examples are actually spaces of functions.

For example, $L^2[0,1]$, the space of square-integrable functions on $[0,1]$, is a Hilbert space. The elements of $L^2[0,1]$ are functions; they are not vectors in the "high-school" sense. Nonetheless, you can add them, and you can multiply them by scalars, just like you can do with ordinary vectors.

In fact, $L^2[0,1]$ has something else in common with more familiar spaces of vectors. It has a "basis": the functions $\exp(2\pi n i x)$ with $n \in \mathbb Z$. Indeed, every $f \in L^2[0,1]$ is equal (in a certain sense) to a sum of these exponentials of the form $$ f = \sum_{n \in \mathbb Z} c_n \exp (2\pi n i x).$$ Perhaps you recognise this kind of expansion: it is the Fourier series of $f$.
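
In case you have not seen it written this way: since the exponentials are orthonormal on $[0,1]$, the coefficients are recovered by $$ c_n = \int_0^1 f(x) \exp(-2\pi n i x)\, dx.$$ For instance, for $f(x) = x$ one finds $c_0 = \tfrac{1}{2}$ and $c_n = \tfrac{i}{2\pi n}$ for $n \neq 0$, and the partial sums of the series above converge to $f$ in the $L^2$ sense (that is the "certain sense" alluded to).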

Moreover, there is a notion of a "dot product": $$ \langle f , g \rangle = \int_0^1 f^\star g = \sum_{n \in \mathbb Z} c_n^\star d_n,$$ where $f = \sum_{n \in \mathbb Z} c_n \exp (2\pi n i x)$ and $g = \sum_{n \in \mathbb Z} d_n \exp (2\pi n i x)$. If you are indeed a physicist as Matt Samuel claims, you will recognise this as the pairing between "bra"s and "ket"s in quantum mechanics.
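
This identity is just the orthonormality of the exponentials in disguise: because $$\int_0^1 \exp(-2\pi m i x)\exp(2\pi n i x)\,dx = \delta_{mn},$$ substituting the two expansions into the integral kills every cross term, $$\int_0^1 f^\star g = \sum_{m,n \in \mathbb Z} c_m^\star d_n \int_0^1 \exp(-2\pi m i x)\exp(2\pi n i x)\,dx = \sum_{n \in \mathbb Z} c_n^\star d_n.$$ (Justifying the exchange of sum and integral is exactly the kind of step the Hilbert space framework takes care of.)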

So why should you define the formal notion of a vector space, or of a Hilbert space? It is to enable you to treat many different kinds of spaces on the same footing. Once you have checked that a certain space satisfies the definition of a Hilbert space, you are guaranteed that all the general theorems about Hilbert spaces apply to the space you are dealing with, whether the objects in your space are genuine vectors, or whether they are functions, or something else entirely.

For example, the equation I wrote down, $$ \langle f , g \rangle = \sum_{n \in \mathbb Z} c_n^\star d_n ,$$ looks very similar to the dot product of two vectors in $\mathbb C^3$, $$\vec{v}\cdot\vec{w} = \sum_i v_i^\star w_i.$$ One can prove that a formula of this kind holds for any (separable) Hilbert space, regardless of context.
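
As a sanity check with a concrete function, take $f = g$ with $f(x) = x$, whose coefficients are $c_0 = \tfrac{1}{2}$ and $c_n = \tfrac{i}{2\pi n}$ for $n \neq 0$. Then $\langle f, f\rangle = \int_0^1 x^2\,dx = \tfrac13$, while $$\sum_{n \in \mathbb Z} |c_n|^2 = \tfrac14 + 2\sum_{n \ge 1}\frac{1}{4\pi^2 n^2} = \tfrac14 + \frac{1}{2\pi^2}\cdot\frac{\pi^2}{6} = \tfrac13,$$ exactly as the general theorem (Parseval's identity) promises.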

As people have mentioned in the comments and in the other answer, the process of checking that a given space really does satisfy the definition of a Hilbert space is both important and non-trivial. Not every infinite dimensional inner-product space is a Hilbert space, and if you try to apply Hilbert space theorems to non-Hilbert spaces, you can get in trouble.


You can do computations without defining the vector space, and without dealing with the inner product. However, the notion of a complete space is indispensable to the general theory, just as the completion of the rational numbers to obtain the real numbers is critical.

The concept of a complete space comes into play when you begin performing iterative processes to solve an equation. Not having a limit for the sequence to converge to is crippling. And many clean theorems about complete spaces are false in incomplete spaces. This is arguably the most important concept of 19th century Mathematics. Before then, nobody had managed to come up with a consistent definition of a real number, and people had been trying unsuccessfully for over two thousand years. Simple equations have solutions over the real numbers that have no solutions over the rationals; so this is fundamental.

In the early 19th century, Cauchy looked at sequences of rational numbers that "converge on themselves" (which we now call Cauchy sequences), and he did not quite know what to do with them. By the end of the 19th century, Cantor presented axioms for basing Mathematics on sets, and used Cauchy's sequences to define a real number. This revolutionized Mathematics: it ushered in the age of rigor in the field, and it allowed Mathematicians to define a real number for the first time, in a way that extended very generally to other settings such as inner product spaces and normed spaces.
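
To see the phenomenon in its simplest form, run Newton's iteration for $x^2 = 2$ entirely inside the rationals: $$x_0 = 1, \qquad x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n},$$ which produces $1,\ \tfrac{3}{2},\ \tfrac{17}{12},\ \tfrac{577}{408},\ \dots$ Every term is rational and the sequence is Cauchy, but there is no rational number for it to converge to; only after completing $\mathbb Q$ to $\mathbb R$ does the iteration have an actual limit, namely $\sqrt 2$.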

The Cantor construction used to define a real number could easily be extended to vector spaces through a metric, such as one coming from a norm, and this soon led to the notion of a Hilbert space. Sequences of orthogonal functions obtained while solving the equations of Math-Physics could now have limits in an abstract sense, instead of merely converging on themselves with nothing to converge to. Principles laid out much earlier to solve problems involving the Laplacian, such as the Dirichlet principle, could now be made to give actual solutions in a very general context. Iterative schemes produced sequences that had something to converge to. Of course the objects that were considered to be solutions needed to be non-classical objects in a Hilbert or Banach space. This led to general existence results for the equations of Math-Physics for the first time. And it allowed us to think of spaces whose points could be complicated objects such as functions, and to define distance and topological closeness for such points. Modern Mathematics would be crippled without such abstraction.
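
A standard illustration of why completeness matters here: take the continuous functions on $[0,1]$ with the $L^2$ inner product, and let $f_n$ be the continuous piecewise-linear function that is $0$ on $[0,\tfrac12]$, climbs linearly to $1$ on $[\tfrac12, \tfrac12+\tfrac1n]$, and equals $1$ afterwards. The sequence $(f_n)$ is Cauchy in the $L^2$ norm, but its limit is the step function $\chi_{[1/2,1]}$, which is not continuous: the limit has escaped the space. Passing to the completion, which is precisely $L^2[0,1]$, restores a limit to every such sequence.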

The question I would pose to you is this: would you be willing to live without the real numbers when doing Calculus? You could certainly treat solutions as Cauchy sequences of rationals, but I suspect you would grow weary of that constraint very quickly if you could not work with a solution as a single point in a space. You are still left to solve equations numerically through iteration, but the assertion that an abstract solution exists as the limit of that iteration requires you to move to the completion, where topological techniques are available that incomplete spaces do not have. Dealing with rationals instead of real numbers would be bad enough, but the problems of dealing with incomplete inner product and normed spaces would be worse by an order of magnitude, leaving the general theory of solving equations very incomplete. Avoiding the abstraction, and focusing only on computation, could well have left us without the critically important topological tools of analysis.

Abstract vector spaces with a topology are the natural evolution of Mathematicians' efforts to solve the equations of Math/Physics. Abstract linearity came out of the principle of superposition. Norm, distance, topology were abstractions coming out of the solution process. Calculations without these abstractions are inadequate for the job of solving equations because, without these, existence is elusive, and we're taken backwards in time to where the notion of a real number remained horribly confusing, and where the process of finding solutions of important differential equations was incomprehensible.