Why is it useful to show the existence and uniqueness of solution for a PDE?

  1. Some existence proofs proceed by just writing down the solution. This isn't true of many, but when it is, the usefulness is clear. The routine examples where separation of variables works fall into this category.
  2. Other existence proofs are literally an approximation recipe, even though they don't come up with an explicit formula. The classic example is Picard–Lindelöf in ODE theory: Picard iteration is one way to construct a solution to an ODE. It's not a great method, simply because the integrals it asks for are in general expensive to compute, but it can be done.
  3. Other existence proofs suggest approximation techniques. For example, Lax–Milgram tells us that there is exactly one solution to an elliptic PDE (given certain boundary conditions) and that it satisfies an infinite collection of equations drawn from the test function space. In addition, elliptic regularity tells us how nice we should expect the solution to be. This hints at the idea of the finite element method: restrict attention to a $d$-dimensional subspace of the solution space which has the amount of regularity we should expect, and solve the system of equations given by the weak form of the equation with $d$ carefully chosen linearly independent test functions. Existence proofs for this finite-dimensional system, and even convergence proofs, use some of the same techniques as the proof of Lax–Milgram itself.
  4. As mentioned above, we often prove uniqueness by more or less directly constructing the solution. But sometimes we prove uniqueness in some other way (e.g. the solution satisfies a variational principle for a convex functional). Knowing uniqueness (even as a "black box" like this, where the proof does not directly help us solve the problem) can help us solve equations by allowing us to make additional assumptions and check them for self-consistency only at the end, without worrying that we have discarded solutions along the way. The simplest case of this that I can think of is solving separable ODEs: if a separable first-order ODE has unique solutions to its IVPs, then every solution is either "of separable type" or constant. But this comes up in PDE too.
  5. In some cases non-uniqueness is more interesting than uniqueness. Basically, when a problem arising in modeling has a non-unique solution, you must ask how the "real" system "selects" one solution out of the many available. This can reveal new problems of interest. The example that comes to mind from my field is the invariant measure problem. It can happen that a deterministic dynamical system has many invariant measures, but that the one which "really matters" is obtained by adding a small noise, selecting the unique invariant measure of this perturbed system, and then sending the noise intensity to zero. This idea of stochastic perturbation tells us something about the underlying deterministic system, since the answer is usually independent of the precise form of the noise we chose.
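To make point 2 concrete, here is Picard iteration for the toy IVP $y' = y$, $y(0) = 1$, where every iterate is a polynomial and the integrals are cheap. (A minimal sketch; storing polynomials as coefficient lists is just my choice for illustration.)

```python
def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1:
        y_{n+1}(t) = 1 + integral_0^t y_n(s) ds,
    with a polynomial stored as its coefficient list [c0, c1, c2, ...]."""
    integrated = [c / (k + 1) for k, c in enumerate(coeffs)]  # antiderivative
    return [1.0] + integrated  # constant term 1 from the initial condition

y = [1.0]  # y_0(t) = 1, the constant initial guess
for _ in range(5):
    y = picard_step(y)
# y is now [1, 1, 1/2, 1/6, 1/24, 1/120]: the degree-5 Taylor partial
# sum of e^t, converging to the true solution y(t) = e^t.
```

Each step simply tacks on the next Taylor coefficient of $e^t$, which is the convergence Picard–Lindelöf guarantees; for a general right-hand side the integrals would have to be done numerically, which is where the expense comes in.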
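In the spirit of point 3, the Galerkin recipe can be sketched in its simplest 1D incarnation: for $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$, restrict to the span of $n$ piecewise-linear "hat" functions and solve the resulting tridiagonal linear system. This is an illustrative toy rather than a production scheme; the function name and the one-point quadrature for the load vector are my choices.

```python
def fem_poisson_1d(n, f):
    """Galerkin/FEM sketch for -u'' = f on (0,1), u(0) = u(1) = 0,
    using n interior piecewise-linear hat functions on a uniform mesh."""
    h = 1.0 / (n + 1)
    # Stiffness matrix for hat functions: 2/h on the diagonal,
    # -1/h on the off-diagonals (the weak form integral of u' v').
    a = [2.0 / h] * n          # diagonal
    b = [-1.0 / h] * (n - 1)   # symmetric off-diagonals
    # Load vector by one-point quadrature: integral of f * hat_i ~ h f(x_i).
    rhs = [h * f((i + 1) * h) for i in range(n)]
    # Thomas algorithm (forward elimination, back substitution).
    for i in range(1, n):
        m = b[i - 1] / a[i - 1]
        a[i] -= m * b[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / a[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - b[i] * u[i + 1]) / a[i]
    return u  # approximate values of u at the interior nodes
```

For $f \equiv 1$ the exact solution is $u(x) = x(1-x)/2$, and in this 1D setting the scheme reproduces it exactly at the nodes.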

Other than that, I don't have much to say beyond "it's nice to know that when we run our numerics, we're looking for something that actually exists, and we're not completely missing solutions either".


There's an example I've stolen from here that might be of some use.

Let's say I have the equation $\sqrt{2x-1} = -x$ with $x \in \mathbb R$. Squaring both sides gives $2x-1=x^2$, which has a single, unique solution, $x=1$. But that solution was only a candidate: it does not satisfy the equation as originally posed, since $\sqrt{2 \cdot 1 - 1} = 1 \neq -1$. In fact the original equation has no real solutions at all.
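The candidate-versus-solution distinction is easy to check directly (a tiny Python sanity check):

```python
import math

x = 1.0  # the unique root of the squared equation
# x = 1 does solve the squared equation 2x - 1 = x^2 ...
assert 2 * x - 1 == x ** 2
# ... but not the original one: sqrt(2x - 1) = 1, while -x = -1.
lhs = math.sqrt(2 * x - 1)
rhs = -x
assert lhs != rhs
```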

When dealing with PDEs, there are many, many things to consider. The first is normally the equation itself (whether it involves the Laplacian or some other well-known operator), and the second is probably the boundary condition.

A lot of emphasis is placed on boundary conditions, and indeed the usual three have names: the Dirichlet, Neumann, and Robin boundary conditions.

Differential equations can be posed in a myriad of different ways. Add in the fact that they can be in multiple dimensions, and there's a world of problems that can come about: does a solution to my PDE even exist? I mean I can set it up very easily:

$-\Delta y + y^3 = B u \text{ in } \Omega$ - A semi-linear PDE

$ y = 0 \text{ on } \partial \Omega$ - Boundary conditions for the problem

This is a hard PDE that I just happened to have in front of me. Was it written down arbitrarily, or does it have a solution that we can meaningfully solve for? We want to prove that a solution exists, and that any solution we find is unique, before going through all the trouble of actually solving for it. Solving these things can get really hairy really fast. What if the equation had $y^2$ instead of $y^3$?

(Yes, the semi-linear PDE I posted does have a solution. My notes say Browder–Minty is the theorem that proves it; I forget the implications of having a $y$-term other than $y^3$, so I'm going back over my notes.)


PDEs are far from my area of expertise, but maybe it will nevertheless be useful to mention an aspect of existence and uniqueness theorems that I consider particularly important and useful. That aspect is not the conclusion of the theorem ("there exists a unique function such that $\dots$") but rather the hypotheses. These describe the sort of differential equation under consideration and the initial or boundary conditions that the solution is supposed to satisfy. The theorem is essentially saying that those initial or boundary conditions are the right ones for that particular sort of equation. In other words, they tell you what conditions, in addition to the differential equation, can safely be imposed (i.e., a solution still exists when you impose them) and suffice to pin down the solution completely (uniqueness). Intuitively, they tell us how much flexibility is available in the solutions of the differential equation.