Time to reach a final state in a random dynamical system (answer known, proof unknown)

This is now a complete answer.

It turns out that Markus was right all along: with this particular starting configuration, the number of black balls is a Markov chain, and at any point in time, the distribution of the sequence, conditioned on the number of black balls, is uniform on the space of sequences starting with a black ball.


Let $\Omega$ be the set of sequences of $0$ and $1$ of length $n$ starting with $1$.

To show that the number of black balls is a Markov chain, it is enough to show that if we take a random sequence in $\Omega$ distributed uniformly conditioned on the number of black balls and apply one step of the process, we again get a sequence distributed uniformly conditioned on the number of black balls.

More formally, let $\Omega' = \{1, \ldots, n\}$ be the set of possible numbers of black balls and $f : \Omega \to \Omega'$ be the counting map.
Let $F,F'$ be the free vector spaces on the bases $\Omega$ and $\Omega'$ respectively.
Your process is described by a linear endomorphism $T : F \to F$, and $f$ induces a linear map $f : F \to F'$.
Let $g : F' \to F$ be the map sending $1.[k]$ for $k \in \Omega'$ to $\frac 1{|f^{-1}(k)|} \sum_{\omega \in f^{-1}(k)} 1.[\omega]$.

We obviously have that $f \circ g = id_{F'}$.
If we prove that the image of $g$ is stable under $T$, then there is a map $T' : F' \to F'$ such that $T \circ g = g \circ T'$.
Then we get (I will omit the compositions from now on)
$gfTgf = gf(Tg)f = gf(gT')f = g(fg)T'f = gT'f = Tgf$.

By induction it follows that $T^ngf = (Tgf)^n$ and $fT^ngf = (fTg)^nf = T'^n f$.

Since we are only interested in the behaviour of $fT^n(V)$ for the initial vector $V = 1.[10\ldots0] \in F$, and because we have $gfV=V$, we are interested in the behaviour of $fT^n(V) = fT^ngf(V) = T'^n f(V) = T'^n(1.[1])$.

That is, $T'$ is the transition matrix of a Markov chain on $\Omega'$, and we want to know the expected time to get from $1$ to $n$.


Intuitively, if you interpret $gf$ as a process that randomly permutes the $n-1$ rightmost balls, this says that the two processes
"randomize; apply step; randomize" and "randomize; apply step" are equivalent.

Then for any $k$, the process "initialize; randomize; (apply step; randomize) x $k$" is equivalent to the process "initialize; randomize; apply step x $k$".

And since the initial configuration is already randomized, "initialize; randomize" is equivalent to "initialize".

Therefore the original process is equivalent to "initialize; randomize; (apply step; randomize) x $k$", where the number of black balls is obviously a Markov chain.
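
Before the proof, here is a quick numerical sanity check (a sketch of my own, not part of the argument) that builds $T$, $f$ and $g$ explicitly for a small $n$ and verifies the stability condition $Tg = g(fTg)$:

from itertools import product, combinations
import numpy as np

def check_stability(n):
    # Omega: sequences of length n starting with 1
    omega = [(1,) + rest for rest in product((0, 1), repeat=n - 1)]
    index = {s: i for i, s in enumerate(omega)}
    pairs = list(combinations(range(n), 2))
    T = np.zeros((len(omega), len(omega)))  # T[t, s] = P(s -> t) in one step
    for s in omega:
        for (i, j) in pairs:
            t = list(s)
            t[j] = s[i]  # ball i copies its color onto ball j (i < j)
            T[index[tuple(t)], index[s]] += 1 / len(pairs)
    # f counts black balls; g spreads mass uniformly over each level set
    f = np.zeros((n, len(omega)))
    g = np.zeros((len(omega), n))
    for s, i in index.items():
        f[sum(s) - 1, i] = 1
    for k in range(1, n + 1):
        level = [i for s, i in index.items() if sum(s) == k]
        for i in level:
            g[i, k - 1] = 1 / len(level)
    T_prime = f @ T @ g
    return np.allclose(T @ g, g @ T_prime)

print(check_stability(5))  # True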


Now I will show that the image of $g$ is stable under $T$. Let $N = n(n-1)/2$ be the number of pairs $(i,j)$ with $1 \le i < j \le n$.

To prove this, you essentially have to count things up. The "surprising" part is that, to obtain a particular sequence as the result of a nontrivial copy move, the number of ways to obtain it does not depend on the order of the balls. If you want to get a sequence with $4$ ones by copying a $1$ onto a $0$, you have to choose two $1$s and then replace the right one with a $0$.

Even though you get some input sequences more times than others, since they are all equiprobable, the resulting probability for each output sequence doesn't depend on the order of the balls.

Likewise, some inputs have a tendency to increase the sum and others have a tendency to decrease it, but when you sum over every permutation of the inputs, you do get something uniform inside each equivalence class.


To compute $T(g(k))$, consider a uniformly distributed random sequence of sum $k$, so that every sequence of sum $k$ has probability $1/\binom{n-1}{k-1}$ of being the input.

For each possible output sequence of sum $k$, consider how many ways it can be reached.
To get a sequence of sum $k$ you need to make a move that does nothing, and for each output sequence there are $k(k-1)/2+(n-k)(n-k-1)/2$ moves that do this, so every sequence of sum $k$ is reached with total probability $\frac{k(k-1)+(n-k)(n-k-1)}{2N \binom{n-1}{k-1}}$; summed over the $\binom{n-1}{k-1}$ outputs, this contributes $\frac{k(k-1)+(n-k)(n-k-1)}{2N} \, g(k)$.

Now consider the output sequences of sum $k+1$ (if $k<n$). To get a sequence of sum $k+1$ you need to copy a $1$ onto a $0$.
For each output, there are $k$ sequences of sum $k$ that can produce it: one produces it in one way, the next in two ways, and so on, for a total of $1+2+\cdots+k = k(k+1)/2$ (input, move) combinations. Since every sequence of sum $k$ is equally probable, each output sequence is reached with equal probability $\frac{k(k+1)}{2N \binom{n-1}{k-1}}$; summed over the $\binom{n-1}{k}$ outputs, this contributes $\frac{k(k+1)\binom{n-1}{k}}{2N\binom{n-1}{k-1}} \, g(k+1) = \frac{(k+1)(n-k)}{2N} \, g(k+1)$.

Finally, consider the output sequences of sum $k-1$ (if $k>1$). To get a sequence of sum $k-1$ you need to copy a $0$ onto a $1$.
For each output, there are $n-k$ sequences of sum $k$ that can produce it: one produces it in one way, the next in two ways, and so on, for a total of $(n-k)(n-k+1)/2$ combinations. Since every sequence of sum $k$ is equally probable, each output sequence is reached with equal probability $\frac{(n-k)(n-k+1)}{2N \binom{n-1}{k-1}}$; summed over the $\binom{n-1}{k-2}$ outputs, this contributes $\frac{(n-k)(n-k+1)\binom{n-1}{k-2}}{2N\binom{n-1}{k-1}} \, g(k-1) = \frac{(n-k)(k-1)}{2N} \, g(k-1)$.

And $k(k-1)+(n-k)(n-k-1) + (k+1)(n-k) + (n-k)(k-1) = n(k-1)+(n-k)n = n^2-n = 2N$, so the three coefficients sum to $1$, as needed.
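
Given these three coefficients, $T'$ is an explicit birth-death chain, and the expected hitting time can be computed numerically. Here is a small sketch of my own (using only the rates just derived) that solves the standard first-step system $(I-Q)h = \mathbf{1}$ over the transient states and recovers the known answer $(n-1)^2$:

import numpy as np

def expected_time(n):
    # Birth-death rates of T' derived above, with N = n(n-1)/2:
    #   up(k)   = (k+1)(n-k) / (2N)
    #   down(k) = (n-k)(k-1) / (2N)
    # h[k] = expected time to reach state n from state k.
    N = n * (n - 1) / 2
    Q = np.zeros((n - 1, n - 1))  # transitions among transient states 1..n-1
    for k in range(1, n):
        up = (k + 1) * (n - k) / (2 * N)
        down = (n - k) * (k - 1) / (2 * N)
        Q[k - 1, k - 1] = 1 - up - down  # staying probability
        if k < n - 1:
            Q[k - 1, k] = up
        if k > 1:
            Q[k - 1, k - 2] = down
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return h[0]  # expected time starting from state 1

print(expected_time(10))  # ≈ 81.0 == (10 - 1) ** 2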


Finally, we can use the fact that the sequences are uniformly distributed conditioned on the number of black balls to prove that $E[T_n - T_{n-1}] = 1$, where $T_k$ denotes the first time the first $k$ balls are all black.

Consider, at each time $t$, the probability that we go from a particular sequence with $n-1$ $1$s to the final sequence with $n$ $1$s.

At every time $t$, the $n-1$ sequences with sum $n-1$ are equally probable (say with probability $p_t$ each); a simple counting argument then says that the final sequence is reached from $1\ldots10$ with probability $(n-1)p_t/N = 2p_t/n$, and from one of the other $n-2$ sequences with probability $(1+2+\ldots+(n-2))p_t/N = (n-2)(n-1)p_t/2N = (n-2)p_t/n$.

Summing those events over every time $t$, the probability that the last step comes from $1\ldots10$ is $2/n$, which means that $P(T_{n-1} < T_n) = 2/n$ and $P(T_{n-1} = T_n) = (n-2)/n$.

But if $T_{n-1} < T_n$, then the expected time for the last step is $N/(n-1) = n/2$ (because we are stuck on $1\ldots10$ until we move to $1\ldots 11$, which happens at each step with probability $(n-1)/N = 2/n$).

Thus $E[T_n - T_{n-1}] = \frac{2}{n} \times \frac{n}{2} + \frac{n-2}{n} \times 0 = 1$.


You can in fact easily prove @leonbloy's conjecture from the observation that $P(T_{n} > T_{n-1}) = 2/n$.

If you hide the last $n-k$ balls, then what you see is a mix of the same process on the first $k$ balls and steps that copy one of the visible balls onto one of the hidden balls, i.e. steps that visibly do nothing. So even though you see a slowed-down process, this doesn't influence the probabilities $P(T_{k} > T_{k-1}) = 2/k$.

From there, it's easy to get the expected time between steps: if we have equality then $T_{k} - T_{k-1} = 0$,
and if not, at every step in between you have a probability $(k-1)/N$ of coloring the $k$th ball black, so it takes $N/(k-1)$ steps on average.
Finally, the expected difference is $\frac{N}{k-1} \times \frac{2}{k} = \frac{n(n-1)}{k(k-1)}$.
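
For what it's worth, here is a quick simulation sketch of my own that estimates both $P(T_k > T_{k-1})$ and $E[T_k - T_{k-1}]$ for every $k$; the last entries also reproduce $P(T_n > T_{n-1}) = 2/n$ and $E[T_n - T_{n-1}] = 1$:

import random

def increment_stats(n, trials=20000):
    p_lag = [0.0] * (n + 1)  # estimates of P(T_k > T_{k-1})
    mean = [0.0] * (n + 1)   # estimates of E[T_k - T_{k-1}]
    for _ in range(trials):
        seq = [1] + [0] * (n - 1)
        hit = [0] * (n + 1)  # hit[k] = first time the first k balls are black
        t, k = 0, 1
        while k < n:
            i, j = sorted(random.sample(range(n), 2))
            seq[j] = seq[i]
            t += 1
            while k < n and seq[k] == 1:  # extend the black prefix
                k += 1
                hit[k] = t
        for m in range(2, n + 1):
            mean[m] += (hit[m] - hit[m - 1]) / trials
            p_lag[m] += (hit[m] > hit[m - 1]) / trials
    return p_lag[2:], mean[2:]

# For n = 6, expect p_lag ≈ [1, 2/3, 1/2, 2/5, 1/3]
# and mean ≈ [15, 5, 2.5, 1.5, 1], i.e. 30/(k(k-1)) for k = 2..6.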


@leonbloy pointed out an interesting observation:

Related conjecture (supported by simulations): Let $T_i$ be the incremental number of steps it takes to colour the first $i$ positions of the sequence over and above the number of steps it takes to colour the first $i-1$ positions. Then $$ E(T_i)=\frac{\binom{n}{2}}{\binom{i}{2}} = \frac{n(n-1)}{i(i-1)} $$ This agrees with the original result $E(T_2+T_3 + \cdots + T_n)=(n-1)^2$ and your conjecture $E(T_n)=1$.

I will try and "prove" this, but it relies on another conjecture (also supported by simulation). This may just be a way of rewriting @leonbloy's conjecture, but perhaps it is easier to prove.

We will do a sort of pseudo-induction. We can prove the claim directly for $T_2$, where our sequence is $BW\ldots W$. The only way to color the second character is to pick the pair $(1,2)$, which has probability $1/\binom{n}{2}$. We then have the following setup:

$$ E(T_2) = 1 + \frac{1}{\binom{n}{2}}\cdot 0 \ + \left(1-\frac{1}{\binom{n}{2}}\right)E(T_2) \\ E(T_2) = \binom{n}{2} = \frac{\binom{n}{2}}{\binom{2}{2}} $$

i.e. we must draw one pair, and there is a $1/\binom{n}{2}$ chance we are done and a $1 - 1/\binom{n}{2}$ chance we are back where we started.

Now consider a character sequence where the first $k$ characters have been colored and the rest are unknown, i.e. we have $B\ldots B*$. We will argue that the expected number of steps to color the first $k+1$ characters, $E(T_{k+1})$, should be:

$$E(T_{k+1}) = \frac{\binom{n}{2}}{\binom{k+1}{2}} = \frac{n(n-1)}{k(k+1)}$$

To do so we need some more information about the sequence besides the fact that the first $k$ characters are colored. In particular, if we can differentiate the cases where the $(k+1)$-th character happened to be colored or not in the process of coloring the first $k$ characters, we can calculate $E(T_{k+1})$. Let $p_k$ be the probability that the $(k+1)$-th character is already colored. Then we have a $p_k$ probability that our sequence looks like $B\ldots BB*$, i.e. it already has the first $k+1$ characters colored, and a $1 - p_k$ probability that it looks like $B\ldots BW*$, i.e. the $(k+1)$-th character is white (W) and we need to calculate the expected number of steps to paint it black. Let $W_k$ denote the number of steps needed to color the $(k+1)$-th character of the sequence $B\ldots BW*$ black. Then:

$$ E(T_{k+1}) = p_k \cdot 0 + (1 - p_k)E(W_k) = (1 - p_k)E(W_k) $$

We can easily calculate $E(W_k)$. The pairs we can draw that will color the $(k+1)$-th character are of the form $(i,k+1)$ where $i \in [1,k]$. Thus we have $k$ possible pairs that will paint the desired character, giving a probability of $k / \binom{n}{2}$ per step. Thus:

$$ E(W_k) = 1 + \frac{k}{\binom{n}{2}}\cdot 0 \ + \left(1-\frac{k}{\binom{n}{2}}\right)E(W_k) \\ E(W_k) = \frac{\binom{n}{2}}{k} = \frac{n(n-1)}{2k} $$

Thus substituting this back, we get:

$$ E(T_{k+1}) = (1 - p_k)\frac{n(n-1)}{2k} $$

Here is where we get stuck and need the conjecture.

Conjecture (verified by simulation): After following the steps until the first $k$ characters of the sequence are colored, the probability $p_k$ that the $(k+1)$-th character is already colored is given by: $$ p_k = \frac{k-1}{k+1} $$ Evidence is provided at the bottom of this answer.

Assuming it is true, we get:

$$ \begin{align} E(T_{k+1}) &= \left(1 - \frac{k-1}{k+1} \right)\frac{n(n-1)}{2k} \\ &= \frac{2}{k+1}\cdot \frac{n(n-1)}{2k} \\ &= \frac{n(n-1)}{k(k+1)} = \frac{\binom{n}{2}}{\binom{k+1}{2}} \\ \end{align} $$

as required. This would prove the OP's conjecture, since the expected number of steps until completion is given by:

$$ \begin{align} E(T_2 + T_3 + \cdots + T_n) &= \sum_{i=2}^n\frac{n(n-1)}{i(i-1)} \\ &= n(n-1) \cdot \sum_{i=2}^n\frac{1}{i(i-1)} \\ &= n(n-1) \cdot \sum_{i=2}^n\left(\frac{1}{i-1} - \frac{1}{i}\right) \\ &= n(n-1) \cdot \left( \frac{1}{1} -\frac{1}{2} + \frac{1}{2} -\frac{1}{3} + \frac{1}{3} - \cdots + \frac{1}{n-1} - \frac{1}{n} \right) \\ &= n(n-1) \cdot \left( 1 - \frac{1}{n} \right) \\ &= n(n-1) \cdot \frac{n-1}{n} \\ &= (n-1)^2 \end{align} $$
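
As a quick sanity check on the telescoping (a small sketch of my own, assuming the formula for $E(T_i)$), exact rational arithmetic gives the same total:

from fractions import Fraction

def total_expected(n):
    # Sum of E(T_i) = n(n-1)/(i(i-1)) for i = 2..n, computed exactly
    return sum(Fraction(n * (n - 1), i * (i - 1)) for i in range(2, n + 1))

print(total_expected(10))  # 81 == (10 - 1) ** 2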


Evidence that $p_k = \frac{k-1}{k+1}$:

The array below is a matrix $M$ where entry $i,j$ represents the probability that the $(j+1)$-th character of a sequence of length $i$ is colored after the first $j$ are colored. As you can see, the values do not depend on the length of the sequence (the rows). In actuality I ran larger tests but did not save the results. For tonight, my computer needs a break. I might post a bigger sample tomorrow.

     1        2        3        4        5        6       7        8       9   
1   0.0  0.00000  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
2   0.0  0.00000  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
3   0.0  0.32890  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
4   0.0  0.33455  0.49570  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
5   0.0  0.33545  0.49400  0.60520  0.00000  0.00000  0.0000  0.00000  0.0000
6   0.0  0.33605  0.49825  0.59395  0.66200  0.00000  0.0000  0.00000  0.0000
7   0.0  0.33055  0.50465  0.59865  0.67025  0.71760  0.0000  0.00000  0.0000
8   0.0  0.33335  0.50275  0.60105  0.66430  0.71635  0.7476  0.00000  0.0000
9   0.0  0.33005  0.50605  0.60035  0.66255  0.71310  0.7490  0.77605  0.0000
10  0.0  0.33480  0.49800  0.60330  0.66855  0.70855  0.7497  0.77940  0.8023

The program used to simulate the events is here (python3):

import random

import numpy as np
from pandas import DataFrame


def rand_pair(seq):
    # Draw a uniformly random pair of distinct positions, as (lower, higher).
    i, j = random.sample(range(len(seq)), 2)
    return (i, j) if i < j else (j, i)


def t_i(n, k, seq=None):
    # Run the process until the first k characters are colored, then report
    # whether the (k+1)-th character happens to be colored already (1 or 0).
    seq = list(seq) if seq else ['B'] + ['W'] * (n - 1)
    assert k < len(seq)
    while seq[:k] != ['B'] * k:
        i, j = rand_pair(seq)
        seq[j] = seq[i]  # the lower index colors the higher one
    return 1 if seq[k] == 'B' else 0


def ti_time(n, k, seq=None, iter_n=20000):
    """
    Estimates the probability of the (k+1)-th character being colored
    after coloring the first k characters.
    """
    rem_avg_sum = 0
    for _ in range(iter_n):
        rem_avg_sum += t_i(n, k, seq)
    return rem_avg_sum / iter_n


def main():
    N_LIM = 7
    arr = np.zeros((N_LIM, N_LIM))
    for n in range(3, N_LIM + 1):
        for k in range(2, n):
            arr[n - 1][k - 1] = ti_time(n, k)
    df = DataFrame(arr)
    df.index += 1    # label rows (lengths) and columns (k) from 1, as above
    df.columns += 1
    print(df)
    return df


if __name__ == '__main__':
    main()

This is not an answer, just an idea.

Suppose we have $n$ things we need to colour black. Every time we pick two objects from these $n$, call the one with the lower index the 'colorer' and the one with the higher index the 'receiver' (the 'colorer' colours the 'receiver'). The last object can never be the colorer; it only receives. So we can sort of ignore it for now, concentrate on colouring the first $n-1$ objects black, and worry about the last one later.

Define $E(n)$ as the expected number of moves to color $n$ objects black starting from BWW...WW, and $E'(n-1)$ as the expected time to colour the first $(n-1)$ objects black if you have a total of $n$ objects (ignoring the last object).

Every time we pick two objects, we say the colour 'leaks' if the receiver is the last object. To find $E'(n-1)$, we need $E(n-1)$ non-leaking colouring operations, plus the leaks. Each time we colour, we have a $(n-1)/\binom{n}{2}$ chance of leaking. So we have

$$E'(n-1) \cdot \frac{\binom{n}{2} - (n-1)}{\binom{n}{2}} = E(n-1)$$

And so if we assume (in hope of induction) that $E(n-1) = (n-2)^2$, then we have

$$E'(n-1) = (n-2)^2 \cdot \frac{\binom{n}{2}}{\binom{n}{2} - (n-1)} = (n-2)^2 \cdot \frac{n}{n-2} = n(n-2) = (n-1)^2 - 1$$

After colouring the first $n-1$ objects black, we need to fix the last one. Notice that the leaks 'overwrite' each other, in the sense that only the last leak counts towards what colour the last object ends up as. For this reason, it's highly likely that the last object is already black by the time we are done colouring the first $n-1$ objects. So we could conjecture that

$$E(n) = E'(n-1) + 1$$

i.e. that on average it takes 1 step to colour the last object. If this is true, we have our desired relationship.
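
Here is a rough simulation sketch of my own (same process as above) comparing $E(n)$ and $E'(n-1)$; the gap does look close to $1$:

import random

def colour_times(n, trials=20000):
    # Returns (mean time to colour all n, mean time to colour the first n-1)
    total_all, total_first = 0, 0
    for _ in range(trials):
        seq = [1] + [0] * (n - 1)
        t, t_first = 0, None
        while sum(seq) < n:
            i, j = sorted(random.sample(range(n), 2))
            seq[j] = seq[i]
            t += 1
            if t_first is None and all(seq[:n - 1]):
                t_first = t
        total_all += t
        total_first += t_first
    return total_all / trials, total_first / trials

# colour_times(8) ≈ (49.0, 48.0), consistent with E(n) = E'(n-1) + 1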

Sorry if this is incohesive; I'm on the train and just thought I'd put some ideas out there.