Number of collinear ways to fill a grid

By the method I used to solve Counting "connected" edge orderings (shellings) of the complete graph, I can show the following: $$ g(m,n) = (mn-1)!\,m!\,n!\sum \frac{b_1 b_2\cdots b_{m+n-2}} {b_{m+n-2}(b_{m+n-3}+b_{m+n-2})\cdots(b_1+b_2+\cdots+b_{m+n-2})}, $$ where the sum is over all sequences $a_1 a_2\cdots a_{m+n-2}$ of $m-1$ $0$'s and $n-1$ $1$'s, and where $$ b_i=\#\{1\leq j<i\,\colon\, a_i\neq a_j\}+1. $$ Does anyone see why this is equal to $m!\,n!\,(mn)!/(m+n-1)!$?
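
For what it's worth, the identity is easy to confirm numerically. Here is a small script (the helper name `g` is ad hoc; it just enumerates the $0$-$1$ sequences in the sum with exact rational arithmetic) that verifies it for $2\leq m,n\leq 4$:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def g(m, n):
    # right-hand side of the displayed formula, with exact arithmetic
    L = m + n - 2
    total = Fraction(0)
    for seq in set(permutations([0]*(m - 1) + [1]*(n - 1))):
        b = [sum(seq[i] != seq[j] for j in range(i)) + 1 for i in range(L)]
        num = Fraction(1)
        for x in b:
            num *= x
        den = 1
        for k in range(L):
            den *= sum(b[k:])  # the suffix sums b_k + ... + b_{m+n-2}
        total += num / den
    return factorial(m*n - 1) * factorial(m) * factorial(n) * total

for m in range(2, 5):
    for n in range(2, 5):
        assert g(m, n) == Fraction(factorial(m) * factorial(n) * factorial(m*n),
                                   factorial(m + n - 1))
print("identity verified for 2 <= m, n <= 4")
```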


We managed to obtain a solution via Stanley's comment above and some manipulations of binomial coefficients. See https://arxiv.org/abs/1809.10263


Let me give an alternative proof that uses only finitely many variables. It also yields formulae for numbers of the following kind: "when the filling first stops being collinear, there are already $a$ columns and $b$ rows used in the filling".

Consider a random enumeration $e_1,e_2,\ldots,e_{mn}$ of the edges of the graph $K_{n,m}$ with parts $N=\{v_1,\ldots,v_n\}$ and $M=\{u_1,\ldots,u_m\}$. We view it as a process: the edges $e_1,e_2,\ldots$ are added consecutively. Denote by $G(k)$ the graph formed by $e_1,\ldots,e_k$. For positive integers $a\leqslant n$, $b\leqslant m$ denote by $f(a,b)$ the probability of the following event $E(a,b)$:

(i) there exists $k$ such that the graphs $G(1),\ldots,G(k)$ are connected and the vertices of $G(k)$ are $v_1,\ldots,v_a;u_1,\ldots,u_b$; and

(ii) the vertices $v_1,v_2,\ldots,v_a$ appear in the sequence of graphs $G(1),G(2),\ldots$ in this order (that is, the first appearance of $v_i$ is before the first appearance of $v_{i+1}$, for each $i=1,\ldots,a-1$); the same for $u_1,\ldots,u_b$.

Clearly the probability of event (i) itself (without assuming (ii)) equals $a!\,b!\,f(a,b)$: conditioned on (i), by symmetry all $a!\,b!$ relative orders of first appearances are equally likely. So we should prove $f(n,m)=\frac1{(n+m-1)!}$. The key point is the recursion below.
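
(A quick sanity check, not part of the argument: for $(a,b)=(n,m)$, event (i) just says that every prefix $G(k)$ is connected up to the first moment all $n+m$ vertices are present. The following script of mine, with the ad hoc name `prob_event_i`, brute-forces all $(nm)!$ edge orderings for tiny $n,m$ and confirms that this probability equals $n!\,m!/(n+m-1)!$.)

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def prob_event_i(n, m):
    # exact probability, by brute force over all (nm)! orderings of the edges of K_{n,m},
    # that every prefix graph is connected up to the first moment all n+m vertices appear
    edges = list(product(range(n), range(m)))   # edge (i, j) joins v_i and u_j
    good = 0
    for order in permutations(edges):
        seen_v, seen_u = {order[0][0]}, {order[0][1]}
        ok = True
        for (i, j) in order[1:]:
            if len(seen_v) + len(seen_u) == n + m:
                break                            # covering time reached, all prefixes were connected
            if i not in seen_v and j not in seen_u:
                ok = False                       # this prefix is disconnected before covering
                break
            seen_v.add(i)
            seen_u.add(j)
        if ok:
            good += 1
    return Fraction(good, factorial(n * m))

for n, m in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    assert prob_event_i(n, m) == Fraction(factorial(n) * factorial(m),
                                          factorial(n + m - 1))
print("P(event (i)) = n! m! / (n+m-1)! in the tested cases")
```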

Recursion. $f(1,1)=\frac1{nm}$; also define $f(a,0)=f(0,b)=0$ for $n\geqslant a\geqslant 2$, $m\geqslant b\geqslant 2$. Then for $n\geqslant a\geqslant 1$, $m\geqslant b\geqslant 1$ and $(a,b)\ne (1,1)$ we have $$ f(a,b)=\frac{a}{nm-a(b-1)}f(a,b-1)+\frac{b}{nm-(a-1)b}f(a-1,b). \,\quad(1) $$

Proof of recursion. If the vertex $v_a$ appears before $u_b$, then the event $E(a,b-1)$ must happen first. Consider the first moment after it happens at which a new vertex (one not among $v_1,\ldots,v_a,u_1,\ldots,u_{b-1}$) appears. This new vertex is $u_b$ with probability $\frac{a}{nm-a(b-1)}$: there are $a$ admissible edges among the $nm-a(b-1)$ edges using a new vertex. That gives the first summand; the second summand analogously corresponds to the case when $u_b$ appears before $v_a$.
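
Before solving (1) in closed form, here is a quick exact check (a short script of mine; the name `solve_recursion` is ad hoc) that the recurrence, with the base cases above, indeed yields $f(n,m)=\frac1{(n+m-1)!}$ for small $n$ and $m$:

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

def solve_recursion(n, m):
    # exact solution of (1) with f(1,1) = 1/(nm) and f(a,0) = f(0,b) = 0
    t = n * m

    @lru_cache(maxsize=None)
    def f(a, b):
        if a == 0 or b == 0:
            return Fraction(0)
        if (a, b) == (1, 1):
            return Fraction(1, t)
        return (Fraction(a, t - a*(b - 1)) * f(a, b - 1)
                + Fraction(b, t - (a - 1)*b) * f(a - 1, b))

    return f

for n in range(1, 7):
    for m in range(1, 7):
        f = solve_recursion(n, m)
        assert f(n, m) == Fraction(1, factorial(n + m - 1))
print("f(n,m) = 1/(n+m-1)! for 1 <= n, m <= 6")
```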

Now we solve recurrence (1). I do not know how this should be done systematically, so I guessed the answer and verified that it works.

Take a variable $t$ and call a double sequence of rational functions $\varphi(a,b)=\varphi(a,b)(t)$, defined for all non-negative integers $a,b$, appropriate if it satisfies the recurrence $$(t-ab)\varphi (a,b)=a\varphi (a,b-1)+b\varphi (a-1,b)\quad \text{for}\,\,a,b\geqslant 1.$$ So appropriate double sequences form a linear space.

Observation. For any $i=1,2,3,\ldots$ the function $$\psi_{i}(a,b):=(-1)^{b-i}\binom{a+b}{a+i}\frac1{(\frac{t}i+b)^{\underline{a+b+1}}}$$ is appropriate. Here we use Knuth's notation $x^{\underline{k}}=x(x-1)\ldots (x-k+1)$. The proof is straightforward.
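
For the record, here is the computation spelled out. Write $x=t/i$ and multiply the desired relation $(t-ab)\psi_i(a,b)=a\psi_i(a,b-1)+b\psi_i(a-1,b)$ by $(-1)^{i-b}(x+b)^{\underline{a+b+1}}$; using $(x+b)^{\underline{a+b+1}}=(x+b)\,(x+b-1)^{\underline{a+b}}=(x-a)\,(x+b)^{\underline{a+b}}$, it becomes $$(t-ab)\binom{a+b}{a+i}=-a(x+b)\binom{a+b-1}{a+i}+b(x-a)\binom{a+b-1}{a+i-1}.$$ Since $\binom{a+b-1}{a+i}=\frac{b-i}{a+b}\binom{a+b}{a+i}$ and $\binom{a+b-1}{a+i-1}=\frac{a+i}{a+b}\binom{a+b}{a+i}$, the right-hand side equals $\frac1{a+b}\binom{a+b}{a+i}\bigl(i(a+b)x-ab(a+b)\bigr)=(ix-ab)\binom{a+b}{a+i}=(t-ab)\binom{a+b}{a+i}$, as needed.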

Proposition. Consider the double sequence $$ h(a,b):=\sum_{i=1}^\infty \left(1+\frac{t}{i^2}\right)\psi_i(a,b) $$ (the summation is of course finite, only up to $i=b$, since for $i>b$ we have $\binom{a+b}{a+i}=0$). Then $h$ is an appropriate double sequence satisfying $h(1,1)=\frac1{t(t-1)}$ and $h(a,0)=h(0,b)=0$ for $a,b\geqslant 2$.

Proof of the proposition. $h$ is appropriate by the Observation and the linearity of the space of appropriate sequences. The relations $h(1,1)=\frac1{t(t-1)}$ and $h(a,0)=0$ are clear. To compute $h(0,b)$, view it as a rational function of $t$ of negative degree with only simple poles; it then suffices to check that all residues are equal to 0. The residue at $t=0$ equals, up to the factor $\frac1{b!}$, $$\sum_{i=1}^b(-1)^{b-i}\binom{b}i\cdot i=b\sum_{i=1}^b(-1)^{b-i}\binom{b-1}{i-1}=0\quad \text{for}\quad b\geqslant 2.$$ The potential pole of the $i$-th summand at $t=-i^2$ is removable, since the factor $1+\frac{t}{i^2}$ vanishes there (this is the reason for that choice of coefficients). Finally, the residues at $t=-ij$, $1\leqslant i,j\leqslant b$, $i\ne j$, of the functions $(1+\frac{t}{i^2})\psi_i(0,b)$ and $(1+\frac{t}{j^2})\psi_j(0,b)$ cancel out.
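
If one prefers to avoid chasing residues, the Proposition is also easy to verify symbolically for small $a,b$; here is a short sympy script of mine (the helper names `ff`, `psi`, `h` are ad hoc) that does exactly that:

```python
import sympy as sp

t = sp.symbols('t')

def ff(x, k):
    # falling factorial x^{underline k} = x(x-1)...(x-k+1)
    return sp.Mul(*[x - j for j in range(k)])

def psi(i, a, b):
    # psi_i(a,b) as in the Observation
    return (-1)**(b - i) * sp.binomial(a + b, a + i) / ff(t/i + b, a + b + 1)

def h(a, b):
    # the (finite) sum from the Proposition
    return sum(((1 + t/i**2) * psi(i, a, b) for i in range(1, b + 1)), sp.Integer(0))

# h satisfies the "appropriate" recurrence ...
for a in range(1, 5):
    for b in range(1, 5):
        assert sp.cancel((t - a*b)*h(a, b) - a*h(a, b - 1) - b*h(a - 1, b)) == 0

# ... and the stated boundary values
assert sp.cancel(h(1, 1) - 1/(t*(t - 1))) == 0
for b in range(2, 6):
    assert sp.cancel(h(0, b)) == 0
print("Proposition checked for small a, b")
```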

Main formula. $$f(a,b)=(t-ab)h(a,b)\quad \text{where} \quad t=mn$$ for $1\leqslant a\leqslant n$, $1\leqslant b\leqslant m$. (For $(a,b)\ne(n,m)$ the function $h(a,b)$ is regular at $t=mn$, since the positive poles of $\psi_i(a,b)$ lie at $t=i\ell$ with $i\leqslant b$, $\ell\leqslant a$, and $i\ell\leqslant ab<mn$. For $a=n$, $b=m$ the function $h(n,m)$ has a simple pole at $t=mn$, and the value of $(t-mn)h(n,m)$ at $t=mn$ is understood as ${\rm res}_{t=mn} h(n,m)$.)

Proof. Both sides satisfy the same initial conditions $f(1,1)=\frac1{mn}=\frac1{t}=(t-1)h(1,1)$ and $f(a,0)=f(0,b)=0$ for $a,b\geqslant 2$, and the same recurrence (1): indeed, if $\varphi$ is appropriate, then $(t-ab)\varphi(a,b)$ satisfies (1) with $nm$ replaced by $t$. Hence the two sides coincide, by induction on $a+b$.

Now for $a=n$, $b=m$ we get $$f(n,m)={\rm res}_{t=mn}\sum_{i=1}^m\left(1+\frac{t}{i^2}\right)\psi_i(n,m) ={\rm res}_{t=mn} \left(1+\frac{t}{m^2}\right) \psi_m(n,m)=\left(1+\frac{n}m\right){\rm res}_{t=mn}\psi_m(n,m),$$ since only the summand with $i=m$ has a pole at $t=mn$ (coming from the factor $\frac{t}m-n$ in its denominator). Finally, $\psi_m(n,m)=1/\left(\frac{t}m+m\right)^{\underline{n+m+1}}$, and the substitution $t=ms$ gives $${\rm res}_{t=mn}\psi_m(n,m)=m\cdot{\rm res}_{s=n}\frac1{(s+m)(s+m-1)\cdots(s-n)}=\frac{m}{(n+m)!},$$ so that $$f(n,m)=\left(1+\frac{n}m\right)\frac{m}{(n+m)!}=\frac{1}{(n+m-1)!},$$ as required.
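
Finally, a symbolic sanity check of the last step (again my own script, with the same ad hoc helpers as above, repeated to keep it self-contained): it recomputes ${\rm res}_{t=mn}h(n,m)$ via the simple pole at $t=mn$ and compares it with $\frac1{(n+m-1)!}$ for $1\leq n,m\leq 4$.

```python
import sympy as sp
from math import factorial

t = sp.symbols('t')

def ff(x, k):
    return sp.Mul(*[x - j for j in range(k)])

def psi(i, a, b):
    return (-1)**(b - i) * sp.binomial(a + b, a + i) / ff(t/i + b, a + b + 1)

def h(a, b):
    return sum(((1 + t/i**2) * psi(i, a, b) for i in range(1, b + 1)), sp.Integer(0))

for n in range(1, 5):
    for m in range(1, 5):
        # residue at the simple pole t = mn: cancel the factor (t - mn) and substitute
        res = sp.cancel((t - n*m) * h(n, m)).subs(t, n*m)
        assert res == sp.Rational(1, factorial(n + m - 1))
print("res_{t=mn} h(n,m) = 1/(n+m-1)! for 1 <= n, m <= 4")
```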