Returning Paths on Cubic Graphs Without Backtracking

Call a walk reduced if it does not backtrack. If $A=A(X)$ for a graph $X$, define $p_r(A)$ to be the matrix (of the same order as $A$) such that $(p_r(A))_{u,v}$ is the number of reduced walks of length $r$ in $X$ from $u$ to $v$. Observe that $$ p_0(A)=I,\quad p_1(A)=A,\quad p_2(A) = A^2-\Delta, $$ where $\Delta$ is the diagonal matrix of valencies of $X$. If $r\ge3$ we have the recurrence $$ Ap_r(A) = p_{r+1}(A) +(\Delta-I) p_{r-1}(A). $$ These calculations were first carried out by Norman Biggs, who observed the implication that $p_r(A)$ is a polynomial in $A$ and $\Delta$, of degree $r$ in $A$.
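
A quick sanity check of this recurrence is easy to run. The following sketch is not part of the original answer; assuming NumPy is available, it builds the matrices $p_r(A)$ from the recurrence on $K_4$ (any small graph would do) and compares them with brute-force counts of reduced walks.

```python
# Sketch (not from the answer): check the recurrence A p_r = p_{r+1} + (Delta - I) p_{r-1}
# numerically on K_4 by comparing its output with brute-force counts of reduced walks.
import numpy as np

A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)   # adjacency matrix of K_4
Delta = np.diag(A.sum(axis=1))                           # diagonal matrix of valencies
I = np.eye(4, dtype=int)

def reduced_walk_count(A, r):
    """Count reduced (non-backtracking) walks of length r between every pair of vertices."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=int)
    def extend(walk):
        if len(walk) == r + 1:
            P[walk[0], walk[-1]] += 1
            return
        for w in range(n):
            if A[walk[-1], w] and (len(walk) < 2 or w != walk[-2]):  # forbid backtracking
                extend(walk + [w])
    for v in range(n):
        extend([v])
    return P

# p_0, p_1, p_2 as above; the recurrence p_{r+1} = A p_r - (Delta - I) p_{r-1} is then
# used from r = 2 on, and the brute-force comparison confirms it is already valid there.
p = [I, A.copy(), A @ A - Delta]
for r in range(2, 8):
    p.append(A @ p[r] - (Delta - I) @ p[r - 1])

for r in range(8):
    assert np.array_equal(p[r], reduced_walk_count(A, r)), f"mismatch at r={r}"
print("recurrence matches brute force for r = 0..7")
```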

If $X$ is cubic, $\Delta=3I$ and we want the polynomials $p_r(t)$ satisfying the recurrence $$ p_{r+1}(t) = tp_r(t)-2p_{r-1}(t) $$ with $p_0=1$ and $p_1=t$. If my calculations are correct, then $2^{-r/2}p_r(t/\sqrt{2})$ is a Chebyshev polynomial.


Chris Godsil's beautiful answer is marred, I believe, by errors in the last two lines. Numerous other posts refer to that answer, there seems to be some reluctance to modify it, and there still seems to be some confusion and disagreement about what the correct result is; I have therefore written this community wiki answer to place what I believe to be a corrected version of Chris's answer on the record. If this correction is in error, I hope that people will downvote it mercilessly; if it is valid, I hope that some form of the corrections will be incorporated into the original post so that this answer can be deleted.

Here is Chris's answer, with correction appended:

Call a walk reduced if it does not backtrack. If $A=A(X)$ for a graph $X$, define $p_r(A)$ to be the matrix (of the same order as $A$) such that $(p_r(A))_{u,v}$ is the number of reduced walks of length $r$ in $X$ from $u$ to $v$. Observe that $$ p_0(A)=I,\quad p_1(A)=A,\quad p_2(A) = A^2-\Delta, $$ where $\Delta$ is the diagonal matrix of valencies of $X$. If $r\ge3$ we have the recurrence $$ Ap_r(A) = p_{r+1}(A) +(\Delta-I) p_{r-1}(A). $$ These calculations were first carried out by Norman Biggs, who observed the implication that $p_r(A)$ is a polynomial in $A$ and $\Delta$, of degree $r$ in $A$.

If $X$ is cubic, $\Delta=3I$ and we want the polynomials $p_r(t)$ satisfying the recurrence $$ p_{r+1}(t) = tp_r(t)-2p_{r-1}(t) $$

with initial conditions $p_1=t$ and $p_2=t^2-3$. Note that the recurrence does not hold when $r=1$ since $tp_1(t)-2p_0(t)=t^2-2$ is not equal to $p_2(t)=t^2-3$. The function $q_r(t)=2^{-r/2}p_r(2^{3/2}t)$ satisfies the recurrence of the Chebyshev polynomials, $$ q_{r+1}(t)=2tq_r(t)-q_{r-1}(t) $$ (substitute $2^{3/2}t$ for $t$ in the recurrence for $p_r$ and multiply through by $2^{-(r+1)/2}$), with initial conditions $$\begin{aligned} q_1(t)&=2t=U_1(t)=U_1(t)-\frac{1}{2}U_{-1}(t),\\ q_2(t)&=4t^2-\frac{3}{2}=4t^2-1-\frac{1}{2}=U_2(t)-\frac{1}{2}U_0(t). \end{aligned} $$ Here $U_r(t)$ are the Chebyshev polynomials of the second kind, which satisfy the initial conditions $$ \begin{aligned} U_0(t)&=1,\\ U_1(t)&=2t, \end{aligned} $$ and the further relations $$ \begin{aligned} U_{-1}(t)&=0,\\ U_2(t)&=4t^2-1, \end{aligned} $$ as implied by the recurrence. Since the recurrence is linear, we conclude that $$ q_r(t)=\begin{cases}1 & \text{if $r=0$,}\\ U_r(t)-\frac{1}{2}U_{r-2}(t) & \text{if $r\ge1$.}\end{cases} $$ From this it follows that $$ p_r(t)=\begin{cases}1 & \text{if $r=0$,}\\ 2^{r/2}U_r(t/2^{3/2})-2^{(r-2)/2}U_{r-2}(t/2^{3/2}) & \text{if $r\ge1$.}\end{cases} $$
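
For what it's worth, the corrected closed form can be checked mechanically. Here is a sketch of such a check (not part of the answer; it assumes SymPy, whose `chebyshevu` follows the same convention $U_{-1}=0$ used above): it generates the $p_r$ from the recurrence and initial conditions and compares them with the Chebyshev expression.

```python
# Sketch (not from the answer, assumes SymPy): verify that
#   p_r(t) = 2^{r/2} U_r(t/2^{3/2}) - 2^{(r-2)/2} U_{r-2}(t/2^{3/2}),  r >= 1,
# agrees with the recurrence p_{r+1} = t p_r - 2 p_{r-1}, p_1 = t, p_2 = t^2 - 3.
import sympy as sp

t = sp.symbols('t')

# generate p_r from the corrected initial conditions, using the recurrence for r >= 2
p = {0: sp.Integer(1), 1: t, 2: t**2 - 3}
for r in range(2, 10):
    p[r + 1] = sp.expand(t * p[r] - 2 * p[r - 1])

def closed_form(r):
    """Chebyshev (second kind) expression for p_r, r >= 1; note U_{-1} = 0 in SymPy too."""
    u = lambda k: sp.chebyshevu(k, t / 2**sp.Rational(3, 2))
    return sp.expand(2**sp.Rational(r, 2) * u(r) - 2**sp.Rational(r - 2, 2) * u(r - 2))

for r in range(1, 11):
    assert sp.simplify(p[r] - closed_form(r)) == 0, f"mismatch at r={r}"
print("closed form agrees with the recurrence for r = 1..10")
```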


You can do it with an adjacency matrix, but the states are now pairs consisting of a node and the node you came from. For a cubic graph that gives three states per vertex, plus one extra state for the starting vertex, because at the start you haven't come from anywhere. The number of length $n$ paths back to the start is the sum, in the row of the starting state of the $n^{\text{th}}$ power of this matrix, of the entries in the three columns whose states represent the start vertex.

Added: If your cubic graph is $K_4$ with nodes $1,2,3,4$ and you start at $1$, your states are $1(\text{start}), 1(\text{came from }2), \ldots, 2(\text{came from }1), 2(\text{came from }3), \ldots, 4(\text{came from }3)$ for a total of $13$ of them. You calculate an adjacency matrix on these states as usual, except that you omit the transition that goes straight back where you came from. Each non-start state then has two outgoing edges and two or three incoming edges; the start state has three outgoing edges and none incoming. You can then take powers of this matrix to find the number of paths to any state. If you want paths coming back to $1$ of length $n$, you add the $1(\text{came from }2)$, $1(\text{came from }3)$, and $1(\text{came from }4)$ entries in the starting state's row of the $n^{\text{th}}$ power of the matrix.
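
To make this concrete, here is a sketch of the construction (not part of the answer; it assumes NumPy and labels the vertices of $K_4$ as $0,1,2,3$ rather than $1,2,3,4$): it builds the $13\times13$ state matrix and reads the closed-walk counts off its powers.

```python
# Sketch (not from the answer): the 13-state transfer matrix for K_4 described above.
# States are ('start',) for the starting vertex with no history, and (v, u) for
# "at v, came from u"; transitions never step straight back where they came from.
import numpy as np

n = 4
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)    # K_4, vertices 0..3
start = 0

states = [('start',)] + [(v, u) for v in range(n) for u in range(n) if A[v, u]]
idx = {s: i for i, s in enumerate(states)}                # 13 states for K_4
T = np.zeros((len(states), len(states)), dtype=int)

for w in range(n):                                        # first step: any neighbour of start
    if A[start, w]:
        T[idx[('start',)], idx[(w, start)]] = 1
for (v, u) in states[1:]:                                 # later steps: no immediate backtrack
    for w in range(n):
        if A[v, w] and w != u:
            T[idx[(v, u)], idx[(w, v)]] = 1

def closed_nb_walks(r):
    """Non-backtracking walks of length r that start and end at the start vertex."""
    P = np.linalg.matrix_power(T, r)
    return int(sum(P[idx[('start',)], idx[(start, u)]] for u in range(n) if A[start, u]))

print([closed_nb_walks(r) for r in range(1, 7)])          # [0, 0, 6, 6, 6, 30] for K_4
```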

Tags:

Graph Theory