Inverse functional derivatives: Find a functional whose functional derivative is a given function

I begin by adopting the strategy Radost suggests in a comment above: solve the linear case in the Fourier basis. Here's the answer I find. The functional

$$\mathscr{F}(\rho)=\frac{1}{2}(g\star\rho\star\rho_-)(0)+C$$

where $\rho_-(x)\equiv\rho(-x)$ and $C$ is a constant of integration, has functional derivative

$$\frac{\delta\mathscr{F}}{\delta\rho}=g\star\rho$$

I'm going to start with several assumptions. I don't think any of these is essential, but it is easier to give a specific answer to a more specific question (and these happen to match the application I have in mind). First, I will assume that all functions are defined on $\mathbb{R}$, but periodic with period 1. Each such periodic function $f$ has a Fourier series,

$$f(x)=\sum_{k\in2\pi\mathbb{Z}}\tilde{f}_k e^{i k x}$$

Integrals will generally be from 0 to 1 if I don't specify bounds. Thus, I define convolution as

$$(f\star g)(y)\equiv\int_0^1 f(x)g(y-x)dx$$

It is easy to verify the Convolution Theorem with these definitions: $\left(\tilde{f\star g}\right)_k=\tilde{f}_k\tilde{g}_k$. I also define an inner product,

$$\left<f,g\right>\equiv\int f(x)^*g(x) dx$$

using $f^*$ for complex conjugate. $\left<af,bg\right>=a^*b\left<f,g\right>$, and $\left<e^{i k x},e^{i j x}\right>=\delta_{kj}$. The functional derivative can be defined as

$$\lim_{\epsilon\rightarrow 0}\frac{\mathscr{F}(\rho+\epsilon\phi)-\mathscr{F}(\rho)}{\epsilon}=\left<\left(\frac{\delta\mathscr{F}}{\delta\rho}\right)^*,\phi\right>$$

I'm also going to assume that $g$, the kernel of the convolution below, is real and even, $g(x)=g(-x)\in\mathbb{R}$, which implies $\tilde{g}_k=\tilde{g}_{-k}\in\mathbb{R}$.
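Since everything below leans on these conventions, here is a minimal NumPy sketch of them on a uniform $N$-point grid over $[0,1)$. The grid size, the kernel $g$, and the test function $\rho$ are arbitrary choices for illustration, not anything forced by the problem.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx                     # uniform grid on [0, 1), period 1

def fourier_coeffs(f):
    """tilde{f}_k for k = 2*pi*m, approximated by the DFT: tilde{f}_{2 pi m} ~ fft(f)[m] / N."""
    return np.fft.fft(f) / N

def conv_direct(f, h):
    """(f * h)(y) = int_0^1 f(x) h(y - x) dx, as a Riemann sum on the grid."""
    return np.array([np.sum(f * h[(n - np.arange(N)) % N]) * dx for n in range(N)])

g   = np.exp(np.cos(2 * np.pi * x))       # a real, even kernel: g(x) = g(-x)
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

# evenness of g  =>  its Fourier coefficients are real
print(np.allclose(fourier_coeffs(g).imag, 0.0))            # True

# Convolution Theorem: coefficients of g * rho are products of coefficients
lhs = fourier_coeffs(conv_direct(g, rho))
rhs = fourier_coeffs(g) * fourier_coeffs(rho)
print(np.allclose(lhs, rhs))                                # True
```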

OK, enough throat-clearing. Now, let's try to find a functional $\mathscr{F}(\rho)$ whose functional derivative is $g\star\rho$. I'll begin by finding $\mathscr{F}(\phi)$, for $\phi(x)=a e^{ikx}+b e^{-ikx}$, $a,b\in\mathbb{C}$. I assume $\mathscr{F}(0)=0$. (This just amounts to the choice of a constant of integration.) I start at $\rho=0$ and integrate $d\mathscr{F}(\alpha\phi)$ from $\alpha=0$ to 1. In stepping from $\rho=\alpha\phi$ to $\rho=(\alpha+d\alpha)\phi$, I have

$$ \begin{align} d\mathscr{F}&=\left<g\star(\alpha\phi)^*,\phi d\alpha\right> \\ &=\left<\tilde{g}_k\alpha(a e^{ikx}+b e^{-ikx})^*,(a e^{ikx}+b e^{-ikx})d\alpha\right> \\ &=\tilde{g}_k(2ab)\alpha d\alpha \\ \mathscr{F}(\phi)=\int_{\alpha=0}^1d\mathscr{F}(\alpha) &=2ab\tilde{g}_k\int_0^1\alpha d\alpha \\ &=ab\tilde{g}_k \\ &=\sum_k\frac{1}{2}\tilde{g}_k\tilde{\phi}_k\tilde{\phi}_{-k} \end{align} $$

The factor of $\frac12$ is there because the product $ab$ appears twice in the sum, once for $k$ and once for $-k$. (Also, it is intuitive: this is the same $\frac12$ that appears when you integrate a linear function, as in $\int_0^x t\,dt=\frac{x^2}{2}$.)
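Here is a quick grid check of that single-mode result, $\mathscr{F}(\phi)=ab\tilde{g}_k$. The kernel and the complex amplitudes $a,b$ below are arbitrary illustration choices.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

m = 3                                     # k = 2*pi*m, one admissible wavenumber
a, b = 0.7 + 0.2j, -0.1 + 0.5j            # arbitrary complex amplitudes
phi = a * np.exp(2j * np.pi * m * x) + b * np.exp(-2j * np.pi * m * x)

g = np.exp(np.cos(2 * np.pi * x))         # real, even kernel
g_hat   = np.fft.fft(g) / N               # g_hat[m] ~ tilde{g}_{2 pi m}
phi_hat = np.fft.fft(phi) / N
phi_hat_neg = phi_hat[(-np.arange(N)) % N]   # tilde{phi}_{-k}

# F(phi) = (1/2) sum_k tilde{g}_k tilde{phi}_k tilde{phi}_{-k}  should equal  a b tilde{g}_k
F_sum = 0.5 * np.sum(g_hat * phi_hat * phi_hat_neg)
print(np.allclose(F_sum, a * b * g_hat[m]))               # True
```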

Now, suppose I have $\phi_1(x)=a e^{ikx}+b e^{-ikx}$ and $\phi_2(x)=c e^{ijx}+d e^{-ijx}$, with $j\ne k$. What is $\mathscr{F}(\phi_1+\phi_2)$? The obvious answer, $\tilde{g}_k a b + \tilde{g}_j c d$, is correct: because $\phi_1$ and $\phi_2$ are orthogonal, you get the same result whether you integrate from 0 to $\phi_1$ and then from $\phi_1$ to $\phi_1+\phi_2$, from 0 to $\phi_2$ and then from $\phi_2$ to $\phi_1+\phi_2$, or diagonally from 0 to $\phi_1+\phi_2$. So, this is the answer:

$$\mathscr{F}(\rho)=\sum_k\frac{1}{2}\tilde{g}_k\tilde{\rho}_k\tilde{\rho}_{-k}$$

That's a perfectly fine functional. Numerically, it could easily be evaluated using the FFT. However, it is also possible to write $\mathscr{F}$ in terms of the functions $g$ and $\rho$, as follows. First, define the function $\rho_-$ by $\rho_-(x)\equiv\rho(-x)$. It is easy to show that $\tilde{(\rho_-)}_k=\tilde{\rho}_{-k}$. Thus, by the Convolution Theorem, $\tilde{g}_k\tilde{\rho}_k\tilde{\rho}_{-k}$ is the $k^{th}$ Fourier coefficient of $g\star\rho\star\rho_-$. If you evaluate a function at 0, you get the sum of all its Fourier coefficients. Thus,

$$\mathscr{F}(\rho)=\frac12\left(g\star\rho\star\rho_-\right)(0)$$

I'll point out that $(\rho\star\rho_-)(y)=\int\rho(x)\rho(x-y)dx$ is a familiar object: it is the autocorrelation of $\rho$.
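Here is one way the two equivalent forms could be evaluated with the FFT, as a sketch; the kernel and density below are arbitrary test functions of my own choosing.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

def conv(f, h):
    """Periodic convolution (f * h)(y) = int_0^1 f(x) h(y - x) dx, via the FFT."""
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)) * dx

g   = np.exp(np.cos(2 * np.pi * x))                   # real, even kernel
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

g_hat   = np.fft.fft(g) / N
rho_hat = np.fft.fft(rho) / N
rho_hat_neg = rho_hat[(-np.arange(N)) % N]            # tilde{rho}_{-k}
rho_minus   = rho[(-np.arange(N)) % N]                # rho_-(x) = rho(-x)

# Fourier-space form: F = (1/2) sum_k g_k rho_k rho_{-k}
F_fourier = 0.5 * np.sum(g_hat * rho_hat * rho_hat_neg).real

# real-space form: F = (1/2) (g * rho * rho_-)(0), i.e. half the g-weighted
# autocorrelation of rho evaluated at zero lag
F_real = 0.5 * conv(conv(g, rho), rho_minus)[0].real

print(np.allclose(F_fourier, F_real))                 # True
```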

Now let's try a couple of sanity checks. First, let $g=\delta$ (the Dirac delta function). Then $g\star\rho=\rho$. And we know what functional has $\rho$ as its functional derivative: it is just $\int\frac{\rho^2(x)}{2}dx$. That indeed checks out:

$$ \begin{align} \frac12g\star\rho\star\rho_-&=\frac12\delta\star\rho\star\rho_- \\ &=\frac12\rho\star\rho_- \\ (\frac12g\star\rho\star\rho_-)(0)&=\frac12(\rho\star\rho_-)(0) \\ &=\int\frac12\rho(x)\rho(x-0)dx \\ &=\int\frac{\rho^2(x)}{2}dx \space \checkmark \\ \end{align} $$

Now let's test $\mathscr{F}$ in the delta function basis:

$$ \begin{align} \mathscr{F}(\rho+\epsilon\delta(x-y))-\mathscr{F}(\rho)&=\frac12\left(g\star\left(\rho+\epsilon\delta(x-y)\right)\star\left(\rho_-+\epsilon\delta(-x-y)\right)\right)(0)-\frac12\left(g\star\rho\star\rho_-\right)(0) \\ &\approx\frac{\epsilon}{2}\left(g\star\left(\delta(x-y)\star\rho(-x)+\rho(x)\star\delta(-x-y)\right)\right)(0) \\ \frac{\mathscr{F}(\rho+\epsilon\delta(x-y))-\mathscr{F}(\rho)}{\epsilon}&=\frac12\left(g\star(\rho(y-x)+\rho(y+x))\right)(0) \\ &=\frac12\left(\int\rho(y-x)g(0-x)dx + \int\rho(y+x)g(0-x)dx\right) \\ &=\frac12\left(\int\rho(y-x)g(x)dx + \int\rho(y-u)g(u)du\right) \\ &=(g\star\rho)(y) \space \checkmark \end{align} $$

In the second-to-last step, I used the evenness of $g$ to replace $g(-x)$ with $g(x)$, and in the second integral I substituted $u=1-x$ (using periodicity) and swapped the bounds of integration.
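Here is a numerical version of the same check: perturb $\rho$ by a grid "delta" (a spike of height $1/\Delta x$, which integrates to 1) at each point and compare the finite-difference quotient to $g\star\rho$. This is only a sketch; the grid, kernel, density, and step size $\epsilon$ are arbitrary illustration choices.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

def conv(f, h):
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real * dx

def F(rho, g):
    """F(rho) = (1/2) (g * rho * rho_-)(0) on the grid."""
    rho_minus = rho[(-np.arange(N)) % N]
    return 0.5 * conv(conv(g, rho), rho_minus)[0]

g   = np.exp(np.cos(2 * np.pi * x))
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

# finite-difference functional derivative: perturb rho by eps times a grid
# delta (a spike of height 1/dx) at each point y_j
eps = 1e-6
fd  = np.empty(N)
for j in range(N):
    spike = np.zeros(N)
    spike[j] = 1.0 / dx
    fd[j] = (F(rho + eps * spike, g) - F(rho, g)) / eps

print(np.max(np.abs(fd - conv(g, rho))))        # ~ eps, i.e. tiny
```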

OK, so I think that works for the linear case. What about the nonlinear case? By integrating $d\mathscr{F}(\alpha\rho)$ from $\alpha=0$ to 1, it is easy to see that if $(g\star\rho)^n$ has a functional antiderivative, it must be

$$ \begin{align} \mathscr{F}(\rho) & = \frac{1}{n+1}\sum_k\tilde{\left((g\star\rho)^n\right)}_k\tilde{\rho}_{-k} + C \\ & = \frac{1}{n+1}\left((g\star\rho)^n\star\rho_-\right)(0) + C \\ & = \frac{1}{n+1}\int\left((g\star\rho)(x)\right)^n\rho(x)dx + C \end{align} $$

There remain two problems (both really the same problem) with this that I have yet to nail down. First, I have only shown that this result is what you get if you integrate radially from 0 to $\rho$. I have not shown that the integral is independent of the path. The nice orthogonality that made this easy for the linear case has no direct counterpart for $(g\star\rho)^n$. Second, although $\left<\left((g\star\rho)^n\right)^*,\epsilon\phi\right>$ gives the correct $O(\epsilon)$ change in $\mathscr{F}(\rho+\epsilon\phi)$ when $\phi=\rho$, it doesn't work for arbitrary $\phi$. That is, $(g\star\rho)^n$ does not have a functional antiderivative for $n>1$.
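For what it's worth, the three forms of this candidate $\mathscr{F}$ are easy to check against each other numerically. Here is a sketch; $n$, the kernel, and the density are arbitrary illustration choices.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

def conv(f, h):
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real * dx

n   = 3
g   = np.exp(np.cos(2 * np.pi * x))
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

h = conv(g, rho) ** n                          # (g * rho)^n on the grid
h_hat   = np.fft.fft(h) / N
rho_hat = np.fft.fft(rho) / N
rho_hat_neg = rho_hat[(-np.arange(N)) % N]     # tilde{rho}_{-k}
rho_minus   = rho[(-np.arange(N)) % N]         # rho_-(x) = rho(-x)

F1 = np.sum(h_hat * rho_hat_neg).real / (n + 1)    # Fourier-sum form
F2 = conv(h, rho_minus)[0] / (n + 1)               # ((g*rho)^n * rho_-)(0) form
F3 = np.sum(h * rho) * dx / (n + 1)                # direct-integral form

print(np.allclose([F1, F2], F3))               # True
```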

Here is the functional derivative of the functional $\mathscr{F}$ just described:

$$ \frac{\delta\mathscr{F}}{\delta\rho}=\frac{1}{n+1}\left((g\star\rho)^n\right)+\frac{n}{n+1}\left(\left((g\star\rho)^{n-1}\rho\right)\star g\right) $$

The second term is the problem, of course. The convolution of a product is not the product of the convolutions, so this doesn't reduce to the desired derivative $(g\star\rho)^n$, except in the special cases $g(x)=\delta(x)$ or $n=1$. Nevertheless, some quick numerical experiments suggest that $\mathscr{F}$ might work as an approximate functional antiderivative of $(g\star\rho)^n$.
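Here is the sort of quick numerical comparison one could run, as a sketch with arbitrary choices of $n$, kernel, and density: a finite-difference functional derivative of $\mathscr{F}$, compared both to the exact expression above and to the desired $(g\star\rho)^n$. How close the second comparison comes depends on $g$ and $\rho$.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

def conv(f, h):
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real * dx

def F_n(rho, g, n):
    """Candidate: F = 1/(n+1) * int ((g*rho)(x))^n rho(x) dx."""
    return np.sum(conv(g, rho) ** n * rho) * dx / (n + 1)

n   = 2
g   = np.exp(np.cos(2 * np.pi * x))
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

# finite-difference functional derivative of F_n (grid delta of height 1/dx)
eps = 1e-6
fd  = np.empty(N)
for j in range(N):
    spike = np.zeros(N)
    spike[j] = 1.0 / dx
    fd[j] = (F_n(rho + eps * spike, g, n) - F_n(rho, g, n)) / eps

gr    = conv(g, rho)
exact = gr ** n / (n + 1) + (n / (n + 1)) * conv(gr ** (n - 1) * rho, g)

print(np.max(np.abs(fd - exact)))      # small: just finite-difference error
print(np.max(np.abs(fd - gr ** n)))    # generally not small: the second term matters
```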

If we accept that $\mathscr{F}=\frac{1}{n+1}\left((g\star\rho)^n\star\rho_-\right)(0)$ is an adequate approximate functional antiderivative of $(g\star\rho)^n$, it would follow that if $f(\rho)$ has a convergent power series, and if $F(\rho)$ is the antiderivative of $f$ with $F(0)=0$, i.e. $F^\prime(\rho)=f(\rho)$, then an approximate functional antiderivative of $f(g\star\rho)$ is

$$ \mathscr{F}(\rho) = \left(\frac{F(g\star\rho)}{g\star\rho}\star\rho_-\right)(0) + C $$
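As a closing sketch, here is how that formula could be evaluated on a grid for one particular choice, $f(u)=e^u$ with $F(u)=e^u-1$ (so $F(0)=0$), taking $C=0$. The special case $g=\delta$, where the formula should reduce exactly to $\int F(\rho(x))dx$, makes a convenient check. All of the concrete choices below are mine, just for illustration.

```python
import numpy as np

N = 256
dx = 1.0 / N
x = np.arange(N) * dx

def conv(f, h):
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real * dx

F_upper = np.expm1                        # F(u) = e^u - 1: antiderivative of f(u) = e^u with F(0) = 0

g   = np.exp(np.cos(2 * np.pi * x))
rho = 1.0 + 0.3 * np.sin(2 * np.pi * x)   # stays positive, so no division trouble
rho_minus = rho[(-np.arange(N)) % N]      # rho_-(x) = rho(-x)

# approximate functional antiderivative of f(g * rho), taking C = 0
u = conv(g, rho)
F_script = conv(F_upper(u) / u, rho_minus)[0]
print(F_script)

# special case g = delta: the formula reduces to int F(rho(x)) dx,
# the exact functional antiderivative of f(rho)
g_delta = np.zeros(N)
g_delta[0] = 1.0 / dx
u = conv(g_delta, rho)                    # equals rho
print(np.allclose(conv(F_upper(u) / u, rho_minus)[0], np.sum(F_upper(rho)) * dx))  # True
```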