Logical suite and inequalities

For any $x = (x_1,x_2,\ldots,x_n) \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$, let $$\Delta_\alpha(x) = \sum\limits_{1\le i < j\le n}(x_i - x_j)^\alpha$$ For functions other than $\Delta_\alpha(x)$ and the coordinates $x_k$, we will use subscripts to denote first-order partial derivatives, i.e. for any other function $p(x)$, $$p_k(x) \stackrel{def}{=} (\partial_k p)(x) \stackrel{def}{=}\frac{\partial p(x)}{\partial x_k}$$

The question at hand can be rephrased as:

When $x$ varies over $\mathbb{R}^n$, what is the minimum value of the function $$U(x) = \Delta_2(x) + \Delta_{-2}(x)\,?$$

Since $U(x)$ is invariant under translations, it is easy to see that the minimum value of $U(x)$ over $\mathbb{R}^n$ equals the minimum value of another function $$V(x) = U(x) + \left(\sum_{k=1}^n x_k\right)^2 = n\sum_{k=1}^n x_k^2 + \Delta_{-2}(x)$$ (indeed $V \ge U$ everywhere, while translating $x$ so that $\sum_k x_k = 0$ leaves $U$ unchanged and makes $V = U$). Let $M_n$ be the common minimum value of these two functions. I'm going to show that for $n > 1$, $$M_n \stackrel{def}{=} \min_{x\in\mathbb{R}^n} U(x) = \min_{x\in\mathbb{R}^n} V(x) = \frac{n(n-1)}{2}\sqrt{2n}$$
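Before proving this, here is a quick numerical sanity check of the closed form (a minimal sketch assuming NumPy and SciPy; the helper `V` simply mirrors the definition above, and the optimizer settings are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def V(x):
    """V(x) = n * sum_k x_k^2 + sum_{i<j} 1/(x_i - x_j)^2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    d = x[i] - x[j]
    return n * np.sum(x**2) + np.sum(1.0 / d**2)

for n in range(2, 7):
    x0 = np.linspace(-1.0, 1.0, n)   # distinct coordinates, away from the singularities
    res = minimize(V, x0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12,
                            "maxiter": 200000, "maxfev": 200000})
    print(n, res.fun, n * (n - 1) / 2 * np.sqrt(2 * n))
```

The two printed numbers should agree to within the solver's tolerance.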

Let $X \subset C^{\infty}_c(\mathbb{R}^n)$ be the collection of smooth functions on $\mathbb{R}^n$ with compact support such that for some $\delta > 0$, the function vanishes identically whenever any $|x_i - x_j| < \delta$. For any normalized $f \in X$, i.e. $\int f^2 dx = 1$, let $\Lambda(f)$ be the following quadratic functional: $$\Lambda(f) \stackrel{def}{=} \int \left[ \sum_{k=1}^n \left( \frac12 f_k^2 + \frac12\omega^2x_k^2 f^2 \right) + g^2\Delta_{-2} f^2 \right] dx\tag{*1} $$ where $\omega, g > 0$ are parameters to be chosen later. Define two auxiliary functions

$$\phi(x) = \sum_{1\le i < j \le n}\log|x_i - x_j| \quad\text{ and }\quad \Phi(x) = -\frac{\omega}{2}\sum_{k=1}^n x_k^2 + \lambda \phi(x) $$ where $\lambda$ is another parameter to be fixed later.
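For later use, note the first-order partials of these auxiliary functions: $$\phi_k(x) = \sum_{j\ne k}\frac{1}{x_k - x_j} \qquad\text{ and }\qquad \Phi_k(x) = -\omega x_k + \lambda\,\phi_k(x).$$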

Since $f(x)$ vanishes whenever any $|x_i-x_j|$ is sufficiently small, $h = f e^{-\Phi}$ also belongs to $X$.
In terms of $h$, we have

$$\begin{align} \Lambda(f) &= \int \left\{ \sum_{k=1}^n \left[\frac12\left(h_k+ \Phi_k h\right)^2 + \frac12\omega^2x_k^2 h^2 \right] + g^2\Delta_{-2} h^2 \right\} e^{2\Phi} dx\\ &= \int \left\{ \sum_{k=1}^n \left[\frac12 h_k^2 + h_k \Phi_k h + \frac12( \omega^2x_k^2 + \Phi_k^2) h^2 \right] + g^2\Delta_{-2} h^2 \right\} e^{2\Phi} dx\\ &= \int \left\{ \sum_{k=1}^n \left[\frac12 h_k^2 - \frac{h^2}{2} e^{-2\Phi}\partial_k( e^{2\Phi} \Phi_k ) + \frac{h^2}{2}( \omega^2x_k^2 + \Phi_k^2)\right] + g^2\Delta_{-2} h^2 \right\} e^{2\Phi} dx\\ &= \int \left\{ \sum_{k=1}^n \left[\frac12 h_k^2 - \frac{h^2}{2} \partial_k( \Phi_k ) + \frac{h^2}{2}( \omega^2x_k^2 - \Phi_k^2) \right] + g^2\Delta_{-2} h^2 \right\} e^{2\Phi} dx \end{align} $$ where the third line follows from the second by an integration by parts in $x_k$. Using the following algebraic identities

$$\sum_{k=1}^n (\partial_k \phi)^2 = -\sum_{k=1}^n \partial_k^2 \phi = 2\Delta_{-2}\quad\text{ and }\quad \sum_{k=1}^n x_k (\partial_k \phi) = \frac{n(n-1)}{2} $$ we find $$\begin{align} -\sum_{k=1}^n(\partial_k \Phi_k) &= \omega n - \lambda \sum_{k=1}^n\partial_k^2\phi = \omega n + 2\lambda \Delta_{-2}\\ \sum_{k=1}^n\left(\omega^2 x_k^2 - \Phi_k^2\right) &=\sum_{k=1}^n\lambda\phi_k(2\omega x_k - \lambda \phi_k) = \lambda\omega n(n-1) - 2\lambda^2 \Delta_{-2} \end{align} $$
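Both algebraic identities are elementary. Writing $\phi_k = \sum_{j\ne k}(x_k-x_j)^{-1}$, the diagonal terms of $\sum_k\phi_k^2$ give $\sum_k\sum_{j\ne k}(x_k-x_j)^{-2} = 2\Delta_{-2}$ (which also equals $-\sum_k\partial_k^2\phi$), while the cross terms cancel in triples thanks to the partial-fraction identity $$\frac{1}{(x_k-x_i)(x_k-x_j)} + \frac{1}{(x_i-x_j)(x_i-x_k)} + \frac{1}{(x_j-x_k)(x_j-x_i)} = 0, \qquad i,j,k \text{ pairwise distinct}.$$ For the second identity, the $(i,j)$ and $(j,i)$ terms of $\sum_k x_k\phi_k$ pair up as $\frac{x_i}{x_i-x_j}+\frac{x_j}{x_j-x_i} = 1$, and there are $\binom{n}{2}$ such pairs.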

When $g^2 = \lambda(\lambda - 1)$, i.e. $\lambda = \frac12\left(1+\sqrt{1+4g^2}\right)$ (choosing the root $\lambda > 1$), the $\Delta_{-2}$ terms cancel out: their total coefficient is $\left(g^2 + \lambda - \lambda^2\right)h^2 e^{2\Phi} = 0$.
The above mess simplifies to

$$\Lambda(f) = \int \left[ \sum_{k=1}^n \frac12 h_k^2 + \frac{\omega n}{2}\left( 1 + \lambda(n-1)\right) h^2 \right] e^{2\Phi} dx \ge \frac{\omega n}{2}\left( 1 + \lambda(n-1)\right) $$ where the last inequality uses $\int h^2 e^{2\Phi} dx = \int f^2 dx = 1$. Now set $\omega = \sqrt{2n}\,L$ and $g = L$, where $L > 0$ will be sent to $+\infty$.
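With this choice, $\frac{\omega^2}{2L^2} = n$ and $\frac{g^2}{L^2} = 1$, so dividing $(*1)$ by $L^2$ gives, term by term, $$\frac{\Lambda(f)}{L^2} = \int\left[\sum_{k=1}^n\left(\frac{1}{2L^2} f_k^2 + n\,x_k^2 f^2\right) + \Delta_{-2} f^2\right] dx \longrightarrow \int V(x)\,f(x)^2\, dx \quad (L \to \infty),$$ the gradient term being the only one suppressed in the limit. Combined with the lower bound above, we obtain: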

$$\begin{align}\int V(x)f(x)^2 dx = \lim_{L\to\infty} \frac{\Lambda(f)}{L^2} &\ge \lim_{L\to\infty} \frac{\sqrt{2n}n}{2L}\left( 1 + \frac12\left(1+\sqrt{1+4L^2}\right)(n-1)\right)\\ &= \frac{n(n-1)}{2}\sqrt{2n} \end{align} $$ For any $\epsilon > 0$, choose a normalized $f \in X$ such that $f(x)$ vanishes whenever $V(x) \ge M_n + \epsilon$ (such an $f$ exists because the set $\{V < M_n + \epsilon\}$ is open, nonempty, bounded, and bounded away from the hyperplanes $x_i = x_j$); we then find

$$M_n + \epsilon \ge \int V(x) f(x)^2 dx \ge \frac{n(n-1)}{2}\sqrt{2n}$$

Since $\epsilon$ can be arbitrarily small, we can deduce

$$M_n = \inf_{\epsilon > 0}( M_n + \epsilon )\ge\frac{n(n-1)}{2}\sqrt{2n}$$

For the other direction, restrict $L$ to even integers and adjust the parameters slightly: keep $\omega = \sqrt{2n}L$ but take $\lambda = L$ and $g = \sqrt{L(L-1)}$, so that $g^2 = \lambda(\lambda-1)$ still holds while $\frac{g^2}{L^2} \to 1$. Then $$e^\Phi = \left(e^{-\sqrt{2n}\sum\limits_{k=1}^n x_k^2 }\prod_{1\le i < j\le n}(x_i-x_j)^2\right)^\ell \quad\text{ where }\quad L = 2\ell$$ This is a smooth function over $\mathbb{R}^n$ which decays to $0$ rapidly as $|x| \to \infty$, so the quadratic functional and the derivation above extend to functions of this sort. Pick a constant $K$ such that $\tilde{f} = K e^\Phi$ is normalized; then $h = K$ is constant, the gradient terms drop out, and repeating the above derivation we find

$$ \int \left[ \frac{1}{2L^2} \sum_{k=1}^n\tilde{f}_k^2 + n\sum_{k=1}^n x_k^2\,\tilde{f}^2 + \left(1-\frac1L\right)\Delta_{-2}\,\tilde{f}^2 \right] dx = \frac{\Lambda(\tilde{f})}{L^2} = \frac{\omega n}{2L^2}(1 + \lambda(n-1)) $$ The LHS is at least $\left(1-\frac1L\right)\int V(x)\tilde{f}^2 dx \ge \left(1-\frac1L\right) M_n$ (drop the gradient term and shrink the coefficient $n$ to $\left(1-\frac1L\right)n$), so we have $$M_n \le \lim_{\ell\to\infty}\frac{1}{1-\frac1L}\cdot\frac{\omega n}{2L^2}(1 + \lambda(n-1)) = \lim_{\ell\to\infty}\frac{\sqrt{2n}\,n\,\big(1 + L(n-1)\big)}{2(L-1)} = \frac{n(n-1)}{2}\sqrt{2n}$$

Combining these, we conclude

$$ \bbox[border:1px solid blue,1em]{ M_n = \min_{x\in\mathbb{R}^n} U(x) = \min_{x\in\mathbb{R}^n} V(x) = \frac{n(n-1)}{2}\sqrt{2n}\;\; } $$


Notes

  1. The above proof is inspired by quantum mechanics.

    In $1971$, F. Calogero published a paper about a quantum one-dimensional $N$-body system.

    F. Calogero, Solution of the One‐Dimensional N‐Body Problems with Quadratic and/or Inversely Quadratic Pair Potentials, Journal of Mathematical Physics 12, 419 (1971)

    The Hamiltonian of the system has the form: $$ H = -\frac{\hbar^2}{2m}\sum_{i=1}^N \frac{\partial^2}{\partial x_i^2} + \sum_{i=2}^N\sum_{j=1}^{i-1}\left\{ \frac14 m\omega^2(x_i-x_j)^2+g(x_i-x_j)^{-2}\right\} $$ He found the model is exactly solvable, with energy levels given by the following formula: $$E_{2n+k} = \hbar\omega\sqrt{\frac{N}{2}}\left[ \frac12(N-1)+\frac12 N(N-1)\left(a+\frac12\right) + 2n+k \right]$$ where $n = 0,1,2,\ldots$, $k = 0,1,2,\ldots$ and $a = \frac12\sqrt{1 + 4mg\hbar^{-2}}$. If one sets $m = g = 1$, $\omega = 2$ and takes the classical limit (i.e. $\hbar \to 0$), the ground state energy of the system converges to the minimum of the potential $U(x)$: since $\hbar a \to 1$, the leading term of $E_0$ is $\hbar\omega\sqrt{\frac N2}\cdot\frac12 N(N-1)\,a \to \sqrt{2N}\cdot\frac{N(N-1)}{2}$. In the end, one finds $M_N = \frac{N(N-1)}{2}\sqrt{2N}$.

  2. Another interesting question is which configurations minimize $V(x)$.

    While doing a literature search on this topic, I came across a statement that the minimum of $V(x)$ is achieved at positions proportional to the roots of Hermite polynomials. I could not find the exact expression for the $x_i$ which minimize $V(x)$. However, by matching the roots of Hermite polynomials against results from my simulation, $V(x)$ should be minimized at the roots of $H_n(\sqrt[4]{2n}\,x)$, where $$H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2} = \left(2x-\frac{d}{dx}\right)^n \cdot 1$$ is the physicists' Hermite polynomial of order $n$.

    Look at Héhéhé's answer for a proof of this interesting fact.
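A minimal sketch of such a numerical check of the claim in note 2 (assuming NumPy; `hermgauss` returns the zeros of the physicists' $H_n$, and the helper `V` mirrors the definition above):

```python
import numpy as np

def V(x):
    """V(x) = n * sum_k x_k^2 + sum_{i<j} 1/(x_i - x_j)^2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    d = x[i] - x[j]
    return n * np.sum(x**2) + np.sum(1.0 / d**2)

for n in (2, 3, 5, 8, 13):
    z, _ = np.polynomial.hermite.hermgauss(n)   # zeros of the physicists' H_n
    x = z / (2 * n) ** 0.25                     # zeros of H_n((2n)^{1/4} x)
    print(n, V(x), n * (n - 1) / 2 * np.sqrt(2 * n))
```

The two values should coincide to floating-point accuracy, consistent with the proof in the answer below.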


To complete the answer of achille hui, I will show that the lower bound $$ \frac{n(n-1)}{2}\sqrt{2n}$$ is actually a minimum.

Let $z_j$ be the $n$ zeros of the Hermite polynomial of order $n$; then we have the so-called Stieltjes sums (see http://epubs.siam.org/doi/pdf/10.1137/0514028): $$ \sum_{j=1}^n \frac{1}{(z_k-z_j)^2} = \frac{1}{3}\left(2n-(z_k)^2-2\right)$$ (throughout, such sums are understood to omit the singular terms $j = k$).

Now define $$ x_j := \alpha\, z_j$$ for some $\alpha >0$. The above formula gives $$ \sum_{j=1}^n \frac{1}{(x_k-x_j)^2} = \frac{1}{3\alpha^2}\left(2n-\alpha^{-2}(x_k)^2-2\right).$$ Summing over $k$, we have $$\sum_{ 1 \leq i < j \leq n} \frac{1}{(x_i-x_j)^2} = \frac{1}{2} \sum_{k=1}^n \sum_{j=1}^n \frac{1}{(x_k-x_j)^2} = \frac{n(n-1)}{3\alpha^2} - \frac{1}{6\alpha^4} \sum_{k=1}^n (x_k)^2.$$

Notice that the $x_j$ are symmetric with respect to $x = 0$ (if $z$ is a zero of the Hermite polynomial of order $n$, then so is $-z$), hence $\sum_{k} x_k = 0$. Using this, it is not hard to show that $$ \sum_{ 1 \leq i < j \leq n} (x_i-x_j)^2 = n \sum_{k=1}^n (x_k)^2.$$ So one has $$ \sum_{ 1 \leq i < j \leq n} \left[(x_i-x_j)^2 + \frac{1}{(x_i-x_j)^2}\right] = \frac{n(n-1)}{3\alpha^2} + \left(n-\frac{1 }{6\alpha^4}\right) \sum_{k=1}^n (x_k)^2.$$

Now we use another "well-known" formula (see Why the sum of the squares of the roots of the $n$th Hermite polynomial is equal to $n(n-1)/2$?) $$\sum_{k=1}^n (z_k)^2 = \frac{n(n-1)}2 $$ so one has $$ \sum_{k=1}^n (x_k)^2 = \alpha^2\frac{n(n-1)}2 $$ and $$ \sum_{ 1 \leq i < j \leq n} \left[(x_i-x_j)^2 + \frac{1}{(x_i-x_j)^2}\right] = \frac{n(n-1)}{4}\, \frac{2 n \alpha^4 +1}{\alpha^2}.$$

The above quantity is minimal for $$ \alpha = (2n)^{-1/4}$$ and in this case we have $$ \sum_{ 1 \leq i < j \leq n} \left[(x_i-x_j)^2 + \frac{1}{(x_i-x_j)^2}\right] = \frac{n(n-1)}{2}\sqrt{2n} \quad \text{for } x_k = (2n)^{-1/4}\, z_k.$$
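The Stieltjes identity itself is easy to check numerically; here is a minimal sketch assuming NumPy (`hermgauss` returns the zeros of the physicists' $H_n$):

```python
import numpy as np

n = 7
z, _ = np.polynomial.hermite.hermgauss(n)   # zeros of the Hermite polynomial H_n

# Stieltjes sums: sum_{j != k} 1/(z_k - z_j)^2 = (2n - z_k^2 - 2) / 3
d = z[:, None] - z[None, :]
np.fill_diagonal(d, np.inf)                 # drop the singular j == k terms
lhs = np.sum(1.0 / d**2, axis=1)
rhs = (2 * n - z**2 - 2) / 3
print(np.max(np.abs(lhs - rhs)))            # should be of the order of 1e-13
```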