When does $f(X) = g(X+1)-g(X)$ where $f, g \in \mathbb{C}(X)$?

Every rational fraction $f\in \mathbb{C}(X)$ can be written uniquely as a sum $$f(X) = P(X) + R(X)$$ where $P$ is a polynomial and $R$ is the polar part. Now the polar part is uniquely expressed as a finite sum $$R(X)= \sum_{k\ge 1, \alpha \in \mathbb{C}}\frac{c_{k,\alpha}}{(X-\alpha)^k}$$
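Both decompositions can be computed with a CAS; here is a small illustrative check with SymPy (the example fraction is mine, chosen for simplicity):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 + 2) / (x**2 - 1)
# polynomial part P (by polynomial division) and polar part R = f - P
P, rem = sp.div(sp.numer(f), sp.denom(f), x)
R = sp.cancel(f - P)
assert P == x
# the polar part is a sum of terms c/(x - alpha)^k
assert sp.simplify(R - (sp.Rational(3, 2)/(x - 1) - sp.Rational(1, 2)/(x + 1))) == 0
```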

The map $g(X) \mapsto g(X+1) -g(X)$ takes polynomial (resp. polar) part to polynomial (resp. polar) part. So $f$ is in the image of this map if and only if its polar part is. It is easy to see that the necessary and sufficient condition is $$\sum_{n\in \mathbb{Z}} c_{k,\alpha +n}=0$$ for all $k\ge 1$ and $\alpha \in \mathbb{C}$; that is, one condition for each $k$ and each coset $\alpha+\mathbb{Z}$ (finitely many linear conditions).

ADDED: Consider now the case $f\in \mathbb{Q}(X)$. The partial fraction decomposition of $f$ takes place in a finite (Galois) extension $K$ of $\mathbb{Q}$. Assume that there exists $g\in \mathbb{C}(X)$ with $f(X) = g(X+1) - g(X)$. Since the linear conditions above are satisfied, we can find $g\in K(X)$ with $f(X) = g(X+1)-g(X)$. Now average the equalities $f(X) =g^{\sigma}(X+1)-g^{\sigma}(X)$ over all $\sigma \in \operatorname{Gal}(K/\mathbb{Q})$. We get $\bar g\in \mathbb{Q}(X)$ with $f(X) = \bar g(X+1) - \bar g(X)$.

ADDED:

The linear necessary and sufficient conditions follow from the following combinatorial lemma: for a sequence $(a_n)_{n\in \mathbb{Z}}$ of finite support there exists a sequence of finite support $b= (b_n)_{n\in \mathbb{Z}}$ so that $a= \Delta b$ if and only if $\sum a_n =0$. Moreover, in that case $b$ is uniquely defined. This is the discrete analogue of the result: a function with compact support on $\mathbb{R}$ has an antiderivative with compact support if and only if its total integral is $0$. In our case we want $b_{n+1}-b_n =a_n$ for all $n$. The unique solution with finite support is given by $b_n=\sum_{k<n}a_k$ for all $n$.
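The lemma is easy to implement; here is a minimal sketch in plain Python (a sequence of finite support is represented as a dict `{index: value}`; the function name is mine):

```python
def delta_inverse(a):
    """Given a finitely supported sequence a with sum 0, return the unique
    finitely supported b with b[n+1] - b[n] = a[n], i.e. b_n = sum_{k<n} a_k."""
    if sum(a.values()) != 0:
        raise ValueError("no finitely supported solution: the sum is nonzero")
    lo, hi = min(a), max(a)
    b, s = {}, 0
    for n in range(lo, hi + 1):
        s += a.get(n, 0)          # s = sum of a_k for k <= n
        if s != 0:
            b[n + 1] = s          # b_{n+1} = sum_{k < n+1} a_k
    return b

a = {-1: 2, 0: -3, 2: 1}          # sum is 0
b = delta_inverse(a)
# check the difference equation on a window containing both supports
assert all(b.get(n + 1, 0) - b.get(n, 0) == a.get(n, 0) for n in range(-3, 5))
```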

ADDED: As @Steven Stadnicki noticed, if a solution $g$ exists, it is unique up to an additive constant. Moreover, if $f$ has coefficients in a subfield $K$, then $g$ can be taken in $K(X)$.

We now give some results in particular cases:

  1. Assume that $f(X) = \frac{P(X)}{Q(X)}$ where $Q$ has simple integral roots and $\deg Q \ge \deg P +2$. Then $f$ is a difference. Indeed, it is enough to check that the sum of residues of $f$ is $0$, and this follows from the residue theorem: integrating $f$ over a large circle picks up the sum of residues, and the integral tends to $0$ because $\deg Q \ge \deg P +2$.

  2. Assume that $\deg f =-1$ (the degree of a rational function being the degree of the numerator minus the degree of the denominator). Then $f$ is not a difference. This can be seen in two ways: first, the sum of residues of $f$ is not $0$; second, we may take $g$ with no polynomial part, so $\deg g \le -1$, and then $\deg f \le \deg g -1 \le -2$ (basically we look at the rate of decrease at $\infty$).
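Both cases can be checked symbolically; a sketch with SymPy, using $1/(x(x+1))$ for case 1 and $1/x$ for case 2 (examples chosen here for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# Case 1: simple integer poles, deg Q >= deg P + 2; the residues sum to 0
f1 = 1 / (x * (x + 1))
assert sp.residue(f1, x, 0) + sp.residue(f1, x, -1) == 0
g = -1 / x                      # and indeed f1(x) = g(x+1) - g(x)
assert sp.simplify(g.subs(x, x + 1) - g - f1) == 0

# Case 2: deg f = -1, e.g. f = 1/x; the sum of residues is 1 != 0,
# so 1/x is not a difference of rational functions
assert sp.residue(1 / x, x, 0) == 1
```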

ADDED:

If the equation $f(X)= h(X+2)-h(X)$ has a solution $h$, then the equation $f(X)=g(X+1)-g(X)$ has a solution $g(X)= h(X+1)+h(X)$.
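A quick symbolic check of this telescoping identity, with $h$ left as an arbitrary function:

```python
import sympy as sp

x = sp.symbols('x')
h = sp.Function('h')
g = h(x + 1) + h(x)
# g(x+1) - g(x) = h(x+2) + h(x+1) - h(x+1) - h(x) collapses to h(x+2) - h(x)
assert sp.expand(g.subs(x, x + 1) - g) == h(x + 2) - h(x)
```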


If $f(z)$ has a singularity at $0$ (WLOG) then we must have either $g(z)$ having a singularity at $0$ or at $1$ (or both.)

If $0$ is a singularity of $g$, then since $f(-1)=g(0)-g(-1)$, either $f$ has a singularity at $-1$ or $g$ does (or both). If $g$ does, then we can keep moving down and find that either $g$ has a singularity at $-k$ or $f$ has singularities at $0$ and $-k.$ This descent must stop at some point because $g$ has only finitely many singularities. So for some $k> 0$, $f$ has singularities at $0$ and $-k$, and $g$ has singularities at $-1,\dots,-(k-1).$

If $1$ is a singularity of $g$, then either $2$ is a singularity of $g$ or $1$ is a singularity of $f$. We then keep moving up, and find that $0$ and $k$ must both be singularities of $f$ for some integer $k>0.$

This means that if $f$ has a singularity at $z_0$ then there must be an integer $n\neq 0$ such that $f$ has a singularity at $z_0+n.$

An Algorithm

Here is an algorithm that doesn't require you to find the exact roots.

The general strategy: Each time through the loop, we will reduce the problem to another $f$ with fewer singularities.

Write $f(x)=\frac{p(x)}{q(x)}$ with $\deg p<\deg q,$ and $p,q$ are relatively prime. (Any polynomial part can be handled separately.) Compute $q_n(x)=\gcd(q(x),q(x+n))$ for $n=1,2,\dots$ until either:

  1. $q_n(x)\neq 1,$ or
  2. $n$ is greater than the diameter of the set of (complex) zeros of $q(x).$

If 2. is reached, and $f\neq 0,$ then there is no $g$. (We can at least compute an upper bound for the diameter, so we only have to check finitely many $n.$)
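These two ingredients (the gcds $q_n$ and a diameter bound) can be computed without finding roots. A sketch with SymPy, using the denominator from the worked example later in the post, and Cauchy's root bound as one possible (crude) diameter bound:

```python
import sympy as sp

x = sp.symbols('x')
q = sp.expand((x + 3) * (x + 2)**2 * x**2)   # denominator from the worked example below

# q_n(x) = gcd(q(x), q(x+n)) is nontrivial iff two roots of q differ by n
q1 = sp.gcd(q, q.subs(x, x + 1))
assert q1 == x + 3                           # roots -3 and -2 of q differ by 1

# Cauchy's bound: every root z of q satisfies |z| <= 1 + max|a_i/a_lead|,
# so twice that bounds the diameter of the root set
coeffs = sp.Poly(q, x).all_coeffs()
diameter_bound = 2 * (1 + max(abs(c / coeffs[0]) for c in coeffs[1:]))
assert diameter_bound >= 3                   # the actual diameter here is 3
```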

Otherwise, we've found a $q_n(x)\neq 1.$ Write $q(x)=a(x)b(x)$ where $q_n(x)\mid a(x)\mid q_n^k(x)$ for some $k$ and $\gcd(a(x),b(x))=1.$ Basically, let $a(x)=\gcd(q(x),q_n^k(x))$ for some large $k,$ so that $a(x)$ carries all repetitions of the roots $z_0$ of $q$ such that $z_0+n$ is also a root.

We know that $b(x)$ is non-constant, because any root with a maximal real part is not a root of $q_n(x).$

Then we can write:

$$f(x)=\frac{p(x)}{q(x)}=\frac{c(x)}{a(x)} + \frac{d(x)}{b(x)},$$ using standard partial fraction approaches.

Let $$\begin{align}f_1(x)&=\frac{c(x-n)}{a(x-n)}+\frac{d(x)}{b(x)}. \end{align}$$

Note that if $h(x)=\frac{c(x)}{a(x)},$ then $h(x)-h(x-n)=\Delta(h(x-1)+h(x-2)+\cdots+h(x-n))$, so $f_1$ and $f$ differ by an element in the range of $\Delta.$

So $f_1$ is in the range of $\Delta$ if and only if $f$ is.

If $f_1=0$, we are done.

Otherwise we apply the process to $f_1.$

We know that $f_1$ has a strictly smaller set of singularities: we have removed the singularities $z_0$ such that $z_0+n$ is also a singularity, and potentially added more repetitions of the singularities at $z_0+n$ (which were already singularities of $f$). This means that the process must eventually stop.
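The loop above can be sketched in SymPy. This is an illustrative implementation under the stated assumptions ($f$ a proper fraction in $\mathbb{Q}(x)$, exact arithmetic); the function names are mine, and the diameter bound used is Cauchy's crude root bound:

```python
import sympy as sp

x = sp.symbols('x')

def diameter_bound(q):
    """Upper bound on the diameter of the root set of q (Cauchy's bound, doubled)."""
    c = sp.Poly(q, x).all_coeffs()
    return 2 * (1 + max(abs(a / c[0]) for a in c[1:]))

def telescope(f):
    """Return g with f(x) = g(x+1) - g(x), or None if no rational g exists.
    Sketch of the algorithm above; assumes f is a proper fraction."""
    g = sp.Integer(0)
    f = sp.cancel(f)
    while f != 0:
        p, q = f.as_numer_denom()
        for n in range(1, int(diameter_bound(q)) + 2):
            qn = sp.gcd(q, q.subs(x, x + n))
            if qn != 1:
                break
        else:
            return None                      # no two roots of q differ by an integer
        # split q = a*b: a carries all repetitions of roots z0 with z0+n also a root
        a = sp.gcd(q, sp.expand(qn ** sp.degree(q, x)))
        b = sp.cancel(q / a)
        s, t, one = sp.gcdex(a, b, x)        # s*a + t*b = 1, so p/q = p*t/a + p*s/b
        c = sp.rem(sp.expand(p * t), a, x)   # c/a = the a-part of the partial fractions
        h = c / a
        # f - f_1 = h(x) - h(x-n) = Delta(h(x-1) + ... + h(x-n))
        g += sum(h.subs(x, x - j) for j in range(1, n + 1))
        f = sp.cancel(h.subs(x, x - n) + sp.cancel(f - h))
    return g
```

On the worked example below, `telescope` recovers the stated $g$, and it returns `None` for a fraction such as $1/x$ that is not a difference.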


For example, if $$f(x)=\frac{3x^3+16x^2+28x+12}{(x+3)(x+2)^2x^2}$$ then $q_1(x)=x+3,$ and you get:

$$f(x)=\frac{-1}{x+3}+\frac{x^3+4x^2+8x+4}{(x+2)^2x^2}$$

Then you get $$f_1(x)=\frac{-1}{x+2}+\frac{x^3+4x^2+8x+4}{(x+2)^2x^2}=\frac{2x^2+8x+4}{(x+2)^2x^2}.$$

For this $f_1$, we get $q_2(x)=(x+2)^2$ and we solve:

$$f_1(x)=\frac{-(x+3)}{(x+2)^2} + \frac{x+1}{x^2}.$$

Then we get:

$$f_2(x)=\frac{-(x+1)}{x^2}+\frac{x+1}{x^2}=0$$

So our resulting expression is:

$$\begin{align}f(x)&=\left(\frac{-1}{x+3}-\frac{-1}{x+2}\right)+\left(\frac{-(x+3)}{(x+2)^2}+\frac{x+1}{x^2}\right)\\ &=-\Delta\left(\frac{1}{x+2} + \frac{x+2}{(x+1)^2}+\frac{x+1}{x^2}\right) \end{align}$$
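The final identity can be double-checked symbolically; this SymPy verification is mine, not part of the original computation:

```python
import sympy as sp

x = sp.symbols('x')
f = (3*x**3 + 16*x**2 + 28*x + 12) / ((x + 3) * (x + 2)**2 * x**2)
g = -(1/(x + 2) + (x + 2)/(x + 1)**2 + (x + 1)/x**2)
# f(x) = g(x+1) - g(x), i.e. f = Delta g with the g read off above
assert sp.simplify(g.subs(x, x + 1) - g - f) == 0
```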

In this example I am obviously using knowledge of the roots to do the work, but you need not: all of the steps above can be computed without the roots. You can compute the $q_n,$ then $a,b,$ then $c,d,$ all using standard operations.

For example, if the singularities of $f$ were at $0,1,2\pm i,3\pm i,5\pm i$, then the first $a(x)$ found would have roots $0$ and $2\pm i.$ Then $f_1$ would have its singularities among $\{1,3\pm i,5\pm i\}.$ The next step would find $n=2$ and remove the singularities at $3\pm i.$ Then we'd find out whether $f_2=0.$ If it isn't, the next attempt to find an $n$ will fail.


This algorithm also works in any $k(x)$ where $k\subseteq \mathbb C$ is a field, because all the algorithmic steps work inside the field.

In particular, if $f(x)\in k(x)$ has a solution $g(x)\in\mathbb C(x),$ then it has a solution in $k(x).$