Minimum and maximum sum of squares given constraints

$f(x)=x^2$ is a convex function.

Also, suppose first that $x_n\geq x_1+x_2+...+x_{n-1}-(n-2)x_1.$ Note that $$(x_1+x_2+...+x_{n-1}-(n-2)x_1,\underbrace{x_1,...,x_1}_{n-2})\succ(x_{n-1},x_{n-2},...,x_1)$$ (both vectors have $n-1$ entries and the same sum, and the prefix-sum inequalities reduce to $x_1+...+x_{n-1-j}\geq(n-1-j)x_1$, which is obvious).

Thus, by Karamata $$(x_1+x_2+...+x_{n-1}-(n-2)x_1)^2+\underbrace{x_1^2+...+x_1^2}_{n-2}\geq x_{n-1}^2+...+x_1^2,$$ which, since $x_1+x_2+...+x_{n-1}=1-x_n,$ gives $$\max\sum_{k=1}^nx_k^2=(n-2)x_1^2+x_n^2+(1-x_n-(n-2)x_1)^2.$$
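
For a concrete instance of this case (the numbers are chosen only for illustration): take $n=4$, $x_1=0.1$ and $x_4=0.5$, so that $x_4=0.5\geq1-x_4-2x_1=0.3$. Then $$\max\sum_{k=1}^4x_k^2=2\cdot0.1^2+0.5^2+(1-0.5-2\cdot0.1)^2=0.02+0.25+0.09=0.36,$$ attained at $(0.1,\,0.1,\,0.3,\,0.5)$.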

Id est, it remains to solve our problem for $x_1\leq x_n<x_1+x_2+...+x_{n-1}-(n-2)x_1$ or, equivalently, for $$x_1\leq x_n<\frac{1-(n-2)x_1}{2}.$$

I hope it will help.

The minimum we can get by C-S (Cauchy-Schwarz): $$\sum_{k=1}^nx_k^2=x_1^2+x_n^2+\frac{1}{n-2}\left(\sum_{k=1}^{n-2}1^2\right)\left(\sum_{k=2}^{n-1}x_k^2\right)\geq x_1^2+x_n^2+\frac{1}{n-2}\left(\sum_{k=2}^{n-1}x_k\right)^2=$$ $$=x_1^2+x_n^2+\frac{(1-x_1-x_n)^2}{n-2}.$$ Equality occurs for $x_2=...=x_{n-1}=\frac{1-x_1-x_n}{n-2}$ (a feasible choice, since this common value is the average of $x_2,\dots,x_{n-1}$ and hence lies in $[x_1,x_n]$), which shows that the bound is attained and is indeed the minimum.
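
Neither closed form is hard to sanity-check numerically. Below is a rough sketch of such a check (not part of the proof; the name `check_bounds` and the test values `n`, `x1`, `xn` are my own choices, with `xn` large enough that the Karamata case above applies). It generates random feasible points by transferring mass between two middle coordinates while keeping every coordinate in $[x_1,x_n]$, and verifies that the sum of squares always lies between the two formulas.

```python
import random

def check_bounds(n=6, x1=0.05, xn=0.45, trials=500, steps=200):
    """Sample random points with x1 <= x_i <= xn, sum = 1, smallest entry x1
    and largest entry xn, and compare the sum of squares with both closed forms."""
    assert xn >= (1 - (n - 2) * x1) / 2, "test values must lie in the Karamata case"
    lo = x1**2 + xn**2 + (1 - x1 - xn)**2 / (n - 2)            # C-S minimum
    hi = (n - 2) * x1**2 + xn**2 + (1 - xn - (n - 2) * x1)**2  # Karamata maximum
    for _ in range(trials):
        # start from the "all middle coordinates equal" point, which is feasible
        mid = [(1 - x1 - xn) / (n - 2)] * (n - 2)
        for _ in range(steps):
            i, j = random.sample(range(n - 2), 2)
            cap = min(mid[i] - x1, xn - mid[j])   # keeps both entries inside [x1, xn]
            t = random.uniform(0, cap) if cap > 0 else 0.0
            mid[i] -= t
            mid[j] += t
        s = x1**2 + xn**2 + sum(v * v for v in mid)
        assert lo - 1e-9 <= s <= hi + 1e-9
    print(f"min = {lo:.6f}, max = {hi:.6f}: all {trials} samples stayed in range")

check_bounds()
```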


For the maximum: Suppose we have fixed values $x_1 \leq \frac{1}{n}$ and $x_n \geq \frac{1}{n}$. Then there is a point $x^*=(x_1, x_2, \dots, x_n)$, unique up to reordering the coordinates, satisfying $\sum x_i=1$ and $x_1\le x_i\le x_n$ for all $i$, with at most one index $j$ satisfying $x_1 < x_j < x_n$ (imagine starting with all the variables equal to $x_1$, then increasing them one by one to $x_n$). I claim the maximum of your function is attained exactly there.
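
The construction in parentheses can be made concrete; here is a small sketch of it (the name `extreme_point` is mine, not from the question, and it assumes, as above, $x_1\le\frac1n\le x_n$): start with every coordinate at $x_1$ and raise them one at a time towards $x_n$ until the total reaches $1$, so that at most one coordinate ends up strictly between $x_1$ and $x_n$.

```python
def extreme_point(n, x1, xn):
    """Greedy construction of x*: all coordinates start at x1, then are raised
    one by one to xn (last coordinates first) until the sum reaches 1."""
    x = [x1] * n
    remaining = 1 - n * x1              # mass still to be distributed
    for i in range(n - 1, -1, -1):
        step = min(xn - x1, remaining)  # raise x[i] as far as allowed
        x[i] += step
        remaining -= step
        if remaining <= 0:
            break
    return x

print(extreme_point(5, 0.0625, 0.375))  # [0.0625, 0.0625, 0.125, 0.375, 0.375]
```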

Consider any other point in the domain, and suppose it has $x_1<x_i\leq x_j<x_n$ for some $i \neq j$.

Let $\epsilon = \min\{x_i-x_1, x_n-x_j\}$. Replacing $x_i$ by $x_i'=x_i-\epsilon$ and $x_j$ by $x_j'=x_j+\epsilon$ maintains the $\sum x_i=1$ constraint, keeps both new values in $[x_1, x_n]$ by the choice of $\epsilon$, and decreases the number of "interior to $(x_1, x_n)$" variables by at least one. Furthermore, the new point is better for our objective function: in the sum of squares objective we've replaced $x_i^2+x_j^2$ by $$x_i'^2+x_j'^2=(x_i-\epsilon)^2+(x_j+\epsilon)^2 = x_i^2+x_j^2 + 2 \epsilon^2 + 2 \epsilon(x_j-x_i) > x_i^2+x_j^2,$$ since $\epsilon>0$ and $x_j\geq x_i$.

Repeatedly following this process, we'll eventually reach (a reordering of) the point $x^*$ from our arbitrary point, increasing the objective at every step; since each step removes at least one interior variable, only finitely many steps are needed.
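
To see the process in motion, here is a small illustrative sketch (the helper `smooth_once` and the starting point are my own, not from the argument above): it repeatedly applies the $\epsilon$-transfer to the smallest and largest interior coordinates, and the sum of squares strictly increases at every step until at most one coordinate remains strictly inside $(x_1, x_n)$.

```python
def smooth_once(x, x1, xn):
    """One epsilon-transfer: take the smallest and the largest coordinates that
    are strictly inside (x1, xn) and move eps from the smaller to the larger.
    Returns False once at most one interior coordinate is left."""
    interior = sorted((k for k, v in enumerate(x) if x1 < v < xn), key=lambda k: x[k])
    if len(interior) < 2:
        return False
    i, j = interior[0], interior[-1]     # so x[i] <= x[j]
    eps = min(x[i] - x1, xn - x[j])      # positive, keeps both values in [x1, xn]
    x[i] -= eps
    x[j] += eps
    return True

x1, xn = 0.0625, 0.375
x = [0.0625, 0.125, 0.1875, 0.25, 0.375]   # feasible: sums to 1, entries in [x1, xn]
print(x, sum(v * v for v in x))            # starting objective
while smooth_once(x, x1, xn):
    print(x, sum(v * v for v in x))        # the objective strictly increases each step
```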


The key idea hiding in the background here is that (as Michael Rozenberg noted) the function $x^2$ is convex. So if we want to maximize $\sum x_i^2$ given a fixed $\sum x_i$, we want to push the variables as far away from each other as possible. The $x_1$ and $x_n$ constraints place limits on this, so effectively what ends up happening is we push points out to the boundary until we can't push them out any further. The minimum you observed is the reverse of this: To minimize the sum of a convex function for fixed $\sum x_i$ we push all the inputs together as much as possible (this corresponds to Jensen's Inequality).