Why is it "easier" to work with function fields than with algebraic number fields?

One answer is that we can take formal derivatives. For example, Fermat's last theorem is rather difficult but the function field version is a straightforward consequence of the Mason-Stothers theorem, whose elementary proof crucially relies on the ability to take formal derivatives of polynomials.
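To make this concrete, here is a small sketch in Python (all helper names like `pgcd` and `radical` are my own, not any library's API) verifying the Mason-Stothers inequality $\max(\deg a, \deg b, \deg c) \le \deg \operatorname{rad}(abc) - 1$ on the coprime triple $a = x^2$, $b = 2x+1$, $c = (x+1)^2$ with $a + b = c$. The radical is computed as $f/\gcd(f, f')$, which is exactly where formal derivatives enter.

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists, lowest degree first; [] is the zero polynomial.

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def deg(p):
    return len(p) - 1

def padd(a, b):
    out = [Fraction(0)] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return trim(out)

def pmul(a, b):
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * y
    return trim(out)

def pdiff(p):
    # the formal derivative -- the tool unavailable for integers
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:])

def pdivmod(a, b):
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        k = len(a) - len(b)
        q[k] = a[-1] / b[-1]
        for i, c in enumerate(b):
            a[k + i] -= q[k] * c
        trim(a)
    return trim(q), a

def pgcd(a, b):
    a, b = trim([Fraction(c) for c in a]), trim([Fraction(c) for c in b])
    while b:
        a, b = b, pdivmod(a, b)[1]
    return [c / a[-1] for c in a]  # normalize to monic

def radical(f):
    # squarefree part of f: divide out gcd(f, f')
    return pdivmod(f, pgcd(f, pdiff(f)))[0]

# a + b = c with a, b, c coprime: x^2 + (2x + 1) = (x + 1)^2
a, b, c = [0, 0, 1], [1, 2], [1, 2, 1]
assert padd(a, b) == c

r = radical(pmul(pmul(a, b), c))          # rad(abc) = const * x(2x+1)(x+1)
assert deg(r) == 3
assert max(deg(a), deg(b), deg(c)) <= deg(r) - 1   # the Mason-Stothers bound
```

Since $\deg c = n \deg z$ for a Fermat triple $x^n + y^n = z^n$, plugging such a triple into this inequality immediately forces $n \le 2$, which is the two-line polynomial FLT proof mentioned below.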

There is no obvious way to extend this construction to integers in a way that preserves its good properties. If there were, then the abc conjecture (of which Mason-Stothers is the function field version) would be trivial, which it's not. There is a thing called the arithmetic derivative, but it is of course not linear, and it doesn't seem to me to be very easy to prove anything with it.
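For concreteness, the arithmetic derivative just mentioned is defined by $p' = 1$ for primes $p$, extended by the Leibniz rule $(ab)' = a'b + ab'$; for $n = \prod p^e$ this gives $n' = n \sum e/p$. A short sketch (the function name is mine) showing both the Leibniz rule and the failure of linearity:

```python
def arith_deriv(n):
    """Arithmetic derivative on nonnegative integers:
    0' = 1' = 0, p' = 1 for primes, (ab)' = a'b + ab'.
    For n = prod p^e this gives n' = n * sum(e/p); each prime
    factor p, counted with multiplicity, contributes n // p."""
    if n < 2:
        return 0
    deriv, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            deriv += n // p
            m //= p
        p += 1
    if m > 1:              # leftover prime factor
        deriv += n // m
    return deriv

assert arith_deriv(6) == 5                      # (2*3)' = 1*3 + 2*1
# The Leibniz rule holds...
assert arith_deriv(6 * 35) == arith_deriv(6) * 35 + 6 * arith_deriv(35)
# ...but additivity fails: (2+3)' != 2' + 3', so this is not linear
assert arith_deriv(2 + 3) != arith_deriv(2) + arith_deriv(3)
```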

The problem is that if we want to think of $\mathbb{Z}$ as analogous to a function field, then the "field" it is a function field over is the field with one element, so a reasonable notion of formal derivative here should be not $\mathbb{Z}$-linear but $\mathbb{F}_1$-linear, whatever that means. If we understood what that meant, perhaps we could construct the "correct" version of the arithmetic derivative and presumably prove the abc conjecture.


Arakelov theory addresses another difference between function fields and number fields, which is the existence of Archimedean places. Over a function field all places are non-Archimedean and I understand this makes various things easier, but I don't know much about this so someone else should chime in here.


The primary reason function field arithmetic is simpler than number field arithmetic is the existence of nontrivial derivations. With derivatives available, many things simplify.

E.g. for polynomials, derivatives yield easy algorithms for squarefree testing, computing the squarefree part, etc. Contrast this with the integer case: no feasible (polynomial-time) algorithm is currently known for recognizing squarefree integers or for computing the squarefree part of an integer. In fact this problem may be no easier than the general problem of integer factorization. The problem is important because one of the main tasks of computational algebraic number theory reduces to it in deterministic polynomial time: computing the ring of integers of an algebraic number field depends upon the squarefree decomposition of the polynomial discriminant when computing an integral basis.
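The contrast can be made explicit: for a polynomial $f$, squarefreeness is just the condition $\gcd(f, f') = 1$, a fast gcd computation, while for an integer every known method essentially factors $n$ first. A sketch of the integer side (trial division, so exponential in the bit length of $n$; illustrative only, and the function names are mine):

```python
def factor(n):
    """Trial-division factorization: returns {prime: exponent}."""
    fs, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fs[p] = fs.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def is_squarefree(n):
    # for polynomials a single gcd(f, f') suffices; here we must factor
    return all(e == 1 for e in factor(n).values())

def squarefree_part(n):
    """Largest squarefree divisor (the radical) of n."""
    out = 1
    for p in factor(n):
        out *= p
    return out

assert is_squarefree(30) and not is_squarefree(12)
assert squarefree_part(360) == 30    # 360 = 2^3 * 3^2 * 5
```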

From derivatives also come Wronskians and associated measures of independence (excerpted below). For example, this is what lies at the heart of Mason's trivial high-school-level proof of the abc theorem for polynomials, whose analogue for integers is a difficult and important open problem. From Mason's theorem follows immediately a trivial two-line proof of FLT for polynomials. If there existed some analogous "derivative for integers" that yielded the corresponding abc theorem, it would yield an analogous trivial proof of FLT for integers (more precisely, of asymptotic FLT, i.e. FLT for all sufficiently large exponents).

Such observations have motivated searches for "arithmetic analogues of derivations". For example, see Buium's paper of that name in J. Algebra 198 (1997), 290-299, and see his book Arithmetic Differential Equations.

poly FLT, abc theorem, Wronskian formalism [was: Entire solutions of f^2+g^2=1] Posted: Jul 17, 1996 12:13 AM


"Harold P. Boas" wrote to sci.math.research on 7/3/96:
:Robert Israel wrote:
:> Alan Horwitz writes:
:> |> I am interested in all entire solutions f and g to f^2+g^2=1.
:> |> I remember seeing this somewhere, but I cannot recall where.
:>
:> I've also seen this before, in fact I recall assigning it as homework
:> to one of my classes, but I don't recall the source. The solutions are ...
:
:Robert B. Burckel gives some history about this problem in his
:comprehensive book An Introduction to Classical Complex Analysis,
:volume 1 (Academic Press, 1979). In Theorem 12.20, pages 433-435,
:he shows that the equation f^n+g^n=1 has no nonconstant entire
:solutions when the integer n exceeds 2; when n=2, the solution
:is as given by R. Israel in his post. ... (papers of Fred Gross)

Note that the rational function case of FLT follows trivially from Mason's abc theorem, e.g. see Lang's Algebra, 3rd Ed., p. 195 for a short elementary (high-school level) proof of both. Chebyshev also gave a proof of FLT for poly's via the theory of integration in finite terms, e.g. see p. 145 of Shanks' "Solved and Unsolved Problems in Number Theory", or Ritt's "Integration in Finite Terms", p. 37. The Chebyshev result is actually employed as a subroutine of Macsyma's integration algorithm (implemented decades ago by Joel Moses). Via abc a related result of Dwork is also easily proved: if A, B, C are fixed poly's then all coprime poly solutions of A*X^a + B*Y^b + C*Z^c = 0 have bounded degrees provided 1/a + 1/b + 1/c < 1. Other applications in both number and function fields may be found in Lang's survey [3].

Mason's abc theorem may be viewed as a very special instance of a Wronskian estimate: in Lang's proof the corresponding Wronskian identity is c^3*W(a,b,c) = W(W(a,c),W(b,c)), thus if a,b,c are linearly dependent then so are W(a,c),W(b,c); the sought bounds follow upon multiplying the latter dependence relation through by N0 = r(a)*r(b)*r(c), where r(x) = x/gcd(x,x').
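The dependence of W(a,c) and W(b,c) is easy to check symbolically: if a + b = c, then bilinearity of the Wronskian gives W(a,c) = W(a, a+b) = W(a,b) and W(b,c) = W(b,a) = -W(a,b). A small sketch (coefficient-list polynomials, no library assumed; helper names are mine) verifying this for a = x^2, b = 2x+1, c = (x+1)^2:

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists, lowest degree first; [] is the zero polynomial.

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def pmul(a, b):
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * y
    return trim(out)

def pdiff(p):
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:])

def psub(a, b):
    out = [Fraction(0)] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] -= c
    return trim(out)

def wronskian(f, g):
    # W(f, g) = f*g' - f'*g
    return psub(pmul(f, pdiff(g)), pmul(pdiff(f), g))

a, b, c = [0, 0, 1], [1, 2], [1, 2, 1]     # x^2 + (2x+1) = (x+1)^2
assert wronskian(a, c) == wronskian(a, b)              # W(a,c) = W(a,b)
assert wronskian(a, c) == psub([], wronskian(b, c))    # W(a,c) = -W(b,c)
```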

More powerful Wronskian estimates with applications toward diophantine approximation of solutions of LDEs may be found in the work of the Chudnovskys [1] and C. Osgood [2]. References to recent work may be found (as usual) by following MR citations to these papers in the MathSci database.

I have not seen mention of this Wronskian view of Mason's abc theorem. Although elementary, it deserves attention since it connects the abc theorem with the general unified viewpoint of the Wronskian formalism as proposed by the Chudnovskys and others.

[1] Chudnovsky, D. V.; Chudnovsky, G. V. The Wronskian formalism for linear differential equations and Pade approximations. Adv. in Math. 53 (1984), no. 1, 28-54. MR 86i:11038

[2] Osgood, Charles F. Sometimes effective Thue-Siegel-Roth-Schmidt-Nevanlinna bounds, or better. J. Number Theory 21 (1985), no. 3, 347-389. MR 87f:11046

[3] Lang, Serge. Old and new conjectured Diophantine inequalities. Bull. Amer. Math. Soc. (N.S.) 23 (1990), no. 1, 37-75. MR 90k:11032


Let's consider an example to see why function fields are easier:

Let $q$ be a prime power, and consider the global function field $\mathbb F_q(T)$. Every nonzero ideal $\mathfrak a$ of $\mathbb F_q[T]$ is principal, generated by a unique monic polynomial: $\mathfrak a=(f)=(T^d+a_{d-1}T^{d-1}+\dots+a_0)$. The norm is $N\mathfrak a=q^d$, and since the $d$ coefficients $a_0,\dots,a_{d-1}$ can be chosen freely, there are exactly $q^d$ ideals of norm $q^d$.
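A quick enumeration confirms the count: a monic degree-$d$ polynomial over $\mathbb F_q$ is determined by its $d$ lower coefficients, so there are $q^d$ of them, hence $q^d$ ideals of norm $q^d$. A sketch (the function name is mine):

```python
from itertools import product

def ideals_of_norm(q, d):
    """Enumerate the nonzero ideals of F_q[T] of norm q^d via their
    monic degree-d generators, as coefficient tuples (a_0, ..., a_{d-1}, 1)."""
    return [coeffs + (1,) for coeffs in product(range(q), repeat=d)]

for q, d in [(2, 3), (3, 2), (5, 1)]:
    assert len(ideals_of_norm(q, d)) == q ** d    # q^d ideals of norm q^d
```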

Then the zeta function of this field is $$\zeta_{\mathbb F_q[T]}(s)=\sum_{\mathfrak a \neq 0}N\mathfrak a^{-s}=\sum_{d=0}^\infty q^d(q^d)^{-s}=\sum_{d=0}^\infty q^{d(1-s)}=\frac{1}{1-q^{1-s}}$$

That's a very simple expression for the zeta function. Note that it has no zeros, so it trivially satisfies the Riemann Hypothesis.
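The geometric series above can be checked numerically against the closed form; a quick sketch for real $s > 1$ (function name mine):

```python
def zeta_partial(q, s, terms):
    # Sum over ideals grouped by degree: q^d ideals, each of norm q^d,
    # so the degree-d block contributes q^d * (q^d)^(-s) = q^(d(1-s)).
    return sum(q ** d * (q ** d) ** (-s) for d in range(terms))

q, s = 3, 2.0
closed_form = 1 / (1 - q ** (1 - s))     # 1/(1 - q^(1-s)) = 1.5 for q=3, s=2
assert abs(zeta_partial(q, s, 60) - closed_form) < 1e-12
```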