When can a vector field be rescaled to have constant divergence?

No, this fails in all dimensions $n \geq 1$: Consider any negative, strictly decreasing ($C^1$) function $f$ ($f(x) = -\exp x$ will do), and take $X := f \partial_{x^1}$. Then, the condition $\nabla \cdot (g X) = -1$ is $(g f)' = -1$, and integrating and rearranging gives that $$g = -\frac{x^1 - j(x^2, \ldots, x^n)}{f}$$ for some function $j$. But then $g(j(x^2, \ldots, x^n), x^2, \ldots, x^n) = 0$, contradicting the positivity of $g$.
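For the skeptical reader, this is easy to check with sympy (treating $j$ as a constant, since only the $x^1$-dependence matters here):

```python
import sympy as sp

x1, j = sp.symbols('x1 j')

# f negative and strictly decreasing, as in the counterexample
f = -sp.exp(x1)

# the forced form of g, with j(x^2, ..., x^n) treated as a constant j
g = -(x1 - j) / f

# (g f)' = -1, i.e. div(g X) = -1, is satisfied ...
assert sp.simplify(sp.diff(g * f, x1) + 1) == 0

# ... but g vanishes on the hyperplane x^1 = j, so g cannot stay positive
assert sp.simplify(g.subs(x1, j)) == 0
```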

The above example suggests, too, that the condition on the global sign of $g$ is not a very natural one. But if we drop this hypothesis, i.e., allow $g$ to take on nonpositive values, the statement still seems to be false. (It is true, though, in the case $n = 1$---see below.) In fact, the example from your answer to the linked question seems to be a counterexample here, too: Take $n := 2$ and $$X := \sin x \,\partial_x - 2 y \,\partial_y .$$ Then, expanding the equation $$\nabla \cdot (g X) = -1$$ in terms of the standard frame gives the p.d.e. $$g_x \sin x + g \cos x - 2 y g_y - 2 g = - 1 .$$ Evaluating at $y = 0$ and denoting $h(x) := g(x, 0)$ then gives the o.d.e. $$h' \sin x + h \cos x - 2 h = -1 .$$ This o.d.e. is singular wherever $\sin x$ vanishes, and the singularities at odd integer multiples of $\pi$ are the obstructive ones. The unique solution of this equation that is continuous at $x = \pi$ is $$h(x) = \frac{\sin(x) (x - \pi) + 2 (\cos x + 1)}{(\cos x + 1)^2} \sim \frac{1}{3} + \frac{1}{30} (x - \pi)^2 + O((x - \pi)^4)$$ (where we've implicitly removed the singularity on the r.h.s. of the equality), but this function does not extend continuously outside the interval $(-\pi, 3 \pi)$, so there is no function $g$ defined on all of $\Bbb R^2$ satisfying the condition.
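The claims about $h$ can be verified symbolically; a quick sympy check of the o.d.e. residual and of the removable singularity at $x = \pi$:

```python
import sympy as sp

x = sp.symbols('x')

# the claimed solution of h' sin x + h cos x - 2h = -1 continuous at x = pi
h = (sp.sin(x) * (x - sp.pi) + 2 * (sp.cos(x) + 1)) / (sp.cos(x) + 1) ** 2

# the residual of the o.d.e. should vanish identically
residual = sp.diff(h, x) * sp.sin(x) + h * sp.cos(x) - 2 * h + 1
assert sp.simplify(residual) == 0

# the removable singularity at x = pi has value 1/3 ...
assert sp.limit(h, x, sp.pi) == sp.Rational(1, 3)

# ... with quadratic coefficient 1/30, matching the stated expansion
assert sp.limit((h - sp.Rational(1, 3)) / (x - sp.pi) ** 2, x, sp.pi) == sp.Rational(1, 30)
```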

On the other hand, certain adjustments of the statement are true:

  1. If $n = 1$, then $X = f(x) \partial_x$ and by hypothesis $\nabla \cdot X = f'(x) < 0$. Then, the condition $$\nabla \cdot (g X) = -1$$ becomes $$(g f)' = -1,$$ and integrating gives $$g f = -x + C .$$ Since $f$ is strictly decreasing, it has at most one zero. If it does, say, at $x_0$, then setting $C = x_0$ we can take $g = -\frac{(x - x_0)}{f}$---since $f'(x_0) < 0$, the singularity of $g$ at $x_0$ is removable, with $g(x_0) = -1/f'(x_0) > 0$. If $f$ has no zero, we can take $g = -\frac{(x - C)}{f}$ for any $C$.
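Both branches of the $n = 1$ case can be checked mechanically; below, $f = x_0 - x$ is a strictly decreasing function with a zero at $x_0$, and $f = -e^x$ a zero-free one (both hypothetical choices, purely for illustration):

```python
import sympy as sp

x, x0, C = sp.symbols('x x0 C')

# case: f strictly decreasing with a zero at x0
f = x0 - x
g = -(x - x0) / f          # the singularity at x0 is removable (here g == 1)
assert sp.simplify(sp.diff(g * f, x) + 1) == 0

# case: f strictly decreasing with no zero
f2 = -sp.exp(x)
g2 = -(x - C) / f2
assert sp.simplify(sp.diff(g2 * f2, x) + 1) == 0
```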

  2. Then there are always local solutions $g$ around any point $p$ at which $X$ does not vanish. More precisely, if $X_p \neq 0$ for some $p \in \Bbb R^n$, then we can choose local coordinates $(y^a)$ near $p$ in which $X = \partial_{y^1}$. The volume form in these coordinates is $v \, dy^1 \wedge \cdots \wedge dy^n$ for some positive function $v$, so $\nabla \cdot (g X) = \frac{1}{v} \frac{\partial}{\partial y^1}(g v)$, and the condition $\nabla \cdot (g X) = -1$ becomes $$\frac{\partial}{\partial y^1}(g v) = - v,$$ which admits the solutions $$g = -\frac{V - j(y^2, \ldots, y^n)}{v} ,$$ where $V$ is any antiderivative of $v$ with respect to $y^1$ and $j$ is an arbitrary $C^1$ function.
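A quick sympy sanity check of the local solution in flow-box coordinates; the density $v$ below is an arbitrary positive choice for illustration, and $V$ denotes an antiderivative of $v$ in $y^1$ (which reduces to $y^1$ itself when $v \equiv 1$):

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

# a hypothetical positive volume density in flow-box coordinates
v = sp.exp(y1) * (2 + sp.sin(y2))

# V: an antiderivative of v with respect to y^1; j: arbitrary function of y^2
V = sp.integrate(v, y1)
j = sp.Function('j')(y2)

g = -(V - j) / v

# for X = d/dy^1, div(g X) = (1/v) d/dy^1 (g v); this should equal -1
div_gX = sp.diff(g * v, y1) / v
assert sp.simplify(div_gX + 1) == 0
```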


Since writing my initial answer, I've found some additional insight into the question which I have added at the end.

Travis has already pointed out two obstructions to finding a solution. Here's a third obstruction which is local and also does not require $g>0$.

Let's start with an example in dimension 2. Let $X(x,y)=[3y+1-e^x, 3x+1-e^y]$. This has $\nabla\cdot X=-e^x-e^y$.

Note first that at the origin $O=(0,0)$ we have $X(O)=[0,0]$ and $\nabla\cdot X\big|_O=-2$.

NB: I'll mostly be using the notation $\cdots\big|_O$ to indicate evaluation of $\cdots$ at the point $O$ instead of eg $X(O)$.

If we assume $\nabla\cdot(gX)=g\nabla\cdot X+X\cdot\nabla g=-1$ for some $g(x,y)$, evaluating at the origin (where $X$ vanishes) gives $-2g(O)=-1$, i.e. $g(O)=1/2$.

Now, compute the derivatives of $\nabla\cdot(gX)=-(e^x+e^y)g+(3y+1-e^x)g_x+(3x+1-e^y)g_y$ at the origin: $$ \frac{\partial}{\partial x}\nabla\cdot(gX)\big|_O = -g-3g_x+3g_y\big|_O,\quad \frac{\partial}{\partial y}\nabla\cdot(gX)\big|_O = -g+3g_x-3g_y\big|_O. $$ Since $\nabla\cdot(gX)$ is constant, both derivatives must be zero; but adding the two equations gives $-2g(O)=0$, i.e. $g(O)=0$, contradicting the value found above.
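One can let sympy redo this computation with a generic 2-jet of $g$ at the origin (the coefficients $g_0, g_{x0}, \ldots$ below parametrize an arbitrary $C^2$ function $g$ up to second order):

```python
import sympy as sp

x, y = sp.symbols('x y')
g0, gx0, gy0, gxx, gxy, gyy = sp.symbols('g0 gx0 gy0 gxx gxy gyy')

# generic second-order Taylor polynomial of g at the origin
g = g0 + gx0*x + gy0*y + (gxx*x**2 + 2*gxy*x*y + gyy*y**2) / 2

X1 = 3*y + 1 - sp.exp(x)
X2 = 3*x + 1 - sp.exp(y)
div_gX = sp.diff(g * X1, x) + sp.diff(g * X2, y)

at_O = {x: 0, y: 0}
eq0 = div_gX.subs(at_O)              # equals -1, forcing g(O) = 1/2
ex = sp.diff(div_gX, x).subs(at_O)   # -g - 3 g_x + 3 g_y at O
ey = sp.diff(div_gX, y).subs(at_O)   # -g + 3 g_x - 3 g_y at O

assert sp.expand(eq0) == -2 * g0
assert sp.expand(ex) == -g0 - 3*gx0 + 3*gy0
assert sp.expand(ey) == -g0 + 3*gx0 - 3*gy0
assert sp.expand(ex + ey) == -2 * g0   # both vanish only if g(O) = 0
```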

What caused the problem can be explained in general. Let's use coordinates $(x_i)$ and the notation $f_{,i}=\partial f/\partial x_i$. Let $X=(X_i)$ be the vector field; we require $\nabla\cdot X=\sum_i X_{i,i}<0$ (or just non-zero).

Assume that $X(P)=0$ at a point $P$: eg, the origin as in the example.

Now, we require that $g$ is a function so that $\nabla\cdot(gX)=-1$ (or any other constant). Written out, $$ \nabla\cdot(gX) = \sum_i gX_{i,i}+g_{,i}X_{i} = -1. $$ Evaluating at $P$ gives $g(P)=-1/a$ where $a=\nabla\cdot X\big|_P$.

Next, differentiate with respect to $x_j$: $$ \frac{\partial}{\partial x_j}\nabla\cdot(gX) = \sum_i gX_{i,ij}+g_{,j}X_{i,i}+g_{,i}X_{i,j}+g_{,ij}X_{i} = 0. $$ Evaluating at $P$ gives $$ ag_{,j}+\sum_i g_{,i}X_{i,j}\big|_P = \sum_i X_{i,ij}/a\big|_P. $$ This can be written as a matrix equation: $$ \nabla g\cdot(aI+DX)\big|_P = \nabla(\nabla\cdot X)/a\big|_P $$ where the matrix $DX$ is given by $DX_{ij} = X_{i,j}$. If the matrix $aI+DX$ is singular and the vector $\nabla(\nabla\cdot X)$ is not in its image, there is no solution for $\nabla g$ at $P$.
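For the example field above, the matrix equation is visibly inconsistent; a small numpy check, with the values read off from $X=[3y+1-e^x,\,3x+1-e^y]$ at the origin:

```python
import numpy as np

# Jacobian DX_ij = dX_i/dx_j of X = [3y+1-e^x, 3x+1-e^y] at the origin
DX = np.array([[-1.0, 3.0],
               [3.0, -1.0]])
a = np.trace(DX)                      # a = div X at O = -2
M = a * np.eye(2) + DX                # aI + DX, which here is singular
rhs = np.array([-1.0, -1.0]) / a      # grad(div X)/a at O

# M is symmetric here, so the row-vector equation (grad g) M = rhs
# is the linear system M z = rhs; check that it has no solution
assert abs(np.linalg.det(M)) < 1e-12
assert np.linalg.matrix_rank(M) == 1
assert np.linalg.matrix_rank(np.column_stack([M, rhs])) == 2
```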


Let's first write $\nabla\cdot(gX)=g\nabla\cdot X+X\cdot\nabla g$. If we take a path $\gamma(s)$ that follows the flow of $X$, ie $\gamma'(s)=X(\gamma(s))$, we can rewrite $\nabla\cdot(gX)$ along $\gamma$ as $$ g(\gamma(s))\,(\nabla\cdot X)(\gamma(s))+\frac{d}{ds}g(\gamma(s)). $$ Thus, if we let $f(s)=(\nabla\cdot X)(\gamma(s))$ and define $h(s)=g(\gamma(s))$, we get the differential equation $h(s)f(s)+h'(s)=-1$.

Look familiar? This is basically the same equation Travis had. Solve the differential equation for $h$, and you may find a positive solution, or you may find that the solution necessarily becomes negative, depending on $f(s)$ and on the interval of $s$ for which the curve $\gamma$ is defined: closed ($[a,b]$), half-closed, or open ($(-\infty,+\infty)$).
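As a sanity check on this reduction, sympy's dsolve recovers the expected behaviour of $h' + f h = -1$ in the simplest case, constant $f(s) = a$ (a hypothetical special case, just for illustration):

```python
import sympy as sp

s = sp.symbols('s')
a = sp.symbols('a', nonzero=True)     # a stands in for a constant f(s) = a
h = sp.Function('h')

# h'(s) + f(s) h(s) = -1 with f constant: solution C1*exp(-a*s) - 1/a
sol = sp.dsolve(sp.Eq(h(s).diff(s) + a * h(s), -1), h(s)).rhs

# the residual of the o.d.e. vanishes for the returned solution
assert sp.simplify(sol.diff(s) + a * sol + 1) == 0

# the constant particular solution is -1/a, which is positive when a < 0
# (the sign condition div X < 0 along the curve)
assert sp.simplify(sol.subs(sp.Symbol('C1'), 0) + 1 / a) == 0
```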

Note that the flow of $X$ may also give rise to periodic curves: I guess these would often be referred to as closed curves, but I'm avoiding that term so as not to confuse it with a curve defined over a closed interval. A periodic curve would put boundary conditions on $h$, although the equation would still be solvable.

As long as one can solve for $g$ along each curve, I can see four primary obstructions, three of which we have already touched upon.

First, the solution for $g$ may have to switch sign. This was the case with Travis' first counter-example in 1 dimension, and the same thing can happen along a flow-curve $\gamma$ in any higher dimension.

The second is related to the first, and I think to Travis' second example. You may have a solution along a flow-curve $\gamma$ where $h(s)$ goes to $\pm\infty$ as $s$ goes to $\pm\infty$, but where the curve itself has end-points: ie, $\gamma(s)$ approaches a point $P$ with $X(P)=0$ as $s\to\pm\infty$.

Both of these are global conditions: ie, you may find local solutions, but they may not combine into global solutions without either change of sign or infinities.

The third corresponds to my counter-example, which is a local condition. Solutions for different curves may meet at points $P$ where $X(P)=0$ without combining into a solution at $P$. (I haven't really sat down to work this thing through properly.)

If there are no problems at $X=0$ points, eg if there are no such points, and we don't worry about the sign of $g$, we can make solutions along flow-curves, and in local regions, which could glue together into a global solution. In the simplest case, if you could find a coordinate system $x=(x_1,\ldots,x_n)$ in which the flow curves are $\gamma(s)=(s,a_2,\ldots,a_n)$, all that should be needed is to solve along each flow-curve and make sure the solution varies smoothly with the full set of coordinates. There might still be global constraints that come back to bite you, but I can't see anything concrete.

This leads to the fourth potential obstruction, which is one I have only speculated on, and may not actually lead to any obstruction, but it's so interesting I thought I'd mention it anyway. If the flow of $X$ is chaotic (in some region of space), the set of periodic flow-curves will be dense. In fact, each non-periodic flow-curve within that region will also be dense. I suspect this would put severe restrictions on $h(s)$ (unless it is constant): eg there could be global restrictions imposed on a single flow-curve as it gets arbitrarily close to itself. However, this is just speculation: I haven't had time to look more closely at this.

I was also wondering about other local obstructions, but as Travis has already shown, away from points with $X=0$, there are local solutions.