Probability that $A$ needs more coin tosses to get two consecutive heads than $B$ needs to get three consecutive heads

Let $A_{m,n}$ be the probability that player $A$ flips two consecutive heads before player $B$ flips three consecutive heads, given that $A$'s (resp. $B$'s) current run of heads has length $m$ (resp. $n$). Then $$ A_{m,n}=\frac{1}{4}\left(A_{m+1,n+1} + A_{m+1,0} + A_{0,n+1}+A_{0,0}\right); $$ the boundary conditions are that $A_{m,n}=1$ for $m\ge 2$ and $n < 3$, and $A_{m,n}=0$ for $m \le 2$ and $n\ge 3$ (in particular, a simultaneous finish, $m=2$ and $n=3$, counts as $0$, since "before" is strict). You want to find $A_{0,0}$. The relevant six equations are: $$ \begin{eqnarray} A_{0,0} &=& \frac{1}{4}A_{1,1} + \frac{1}{4}A_{1,0} + \frac{1}{4}A_{0,1} + \frac{1}{4}A_{0,0}\\ A_{0,1} &=& \frac{1}{4}A_{1,2} + \frac{1}{4}A_{1,0} + \frac{1}{4}A_{0,2} + \frac{1}{4}A_{0,0} \\ A_{1,0} &=& \frac{1}{2} + \frac{1}{4}A_{0,1} + \frac{1}{4}A_{0,0} \\ A_{1,1} &=& \frac{1}{2} + \frac{1}{4}A_{0,2} + \frac{1}{4}A_{0,0} \\ A_{0,2} &=& \frac{1}{4}A_{1,0} + \frac{1}{4}A_{0,0} \\ A_{1,2} &=& \frac{1}{4} + \frac{1}{4}A_{0,0}, \end{eqnarray} $$ or $$ \left(\begin{matrix}3/4 & -1/4 & -1/4 & -1/4 & 0 & 0 \\ -1/4 & 1 & -1/4 & 0 & -1/4 & -1/4 \\ -1/4 & -1/4 & 1 & 0 & 0 & 0 \\ -1/4 & 0 & 0 & 1 & -1/4 & 0 \\ -1/4 & 0 & -1/4 & 0 & 1 & 0 \\ -1/4 & 0 & 0 & 0 & 0 & 1 \end{matrix}\right)\times\left(\begin{matrix} A_{0,0} \\ A_{0,1} \\ A_{1,0} \\ A_{1,1} \\ A_{0,2} \\ A_{1,2} \end{matrix}\right) = \left(\begin{matrix} 0 \\ 0 \\ 1/2 \\ 1/2 \\ 0 \\ 1/4 \end{matrix}\right), $$ assuming no typos. Further assuming no typos in entering this into WolframAlpha, the result is $$ A_{0,0} = \frac{1257}{1699} \approx 0.7398, $$ which at least looks reasonable.
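In place of WolframAlpha, the system can also be checked in exact rational arithmetic; here is a short Python sketch (unknowns ordered as in the matrix above) that reproduces the same value:

```python
from fractions import Fraction

# Exact solution of the 6x6 system above; unknowns ordered as
# (A00, A01, A10, A11, A02, A12), entries copied from the matrix.
F = Fraction
M = [
    [F(3, 4),  F(-1, 4), F(-1, 4), F(-1, 4), F(0),     F(0)],
    [F(-1, 4), F(1),     F(-1, 4), F(0),     F(-1, 4), F(-1, 4)],
    [F(-1, 4), F(-1, 4), F(1),     F(0),     F(0),     F(0)],
    [F(-1, 4), F(0),     F(0),     F(1),     F(-1, 4), F(0)],
    [F(-1, 4), F(0),     F(-1, 4), F(0),     F(1),     F(0)],
    [F(-1, 4), F(0),     F(0),     F(0),     F(0),     F(1)],
]
b = [F(0), F(0), F(1, 2), F(1, 2), F(0), F(1, 4)]

def solve(M, b):
    """Gauss-Jordan elimination over the rationals."""
    n = len(b)
    A = [row[:] + [rhs] for row, rhs in zip(M, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)  # pivot row
        A[i], A[p] = A[p], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(n):
            if r != i and A[r][i] != 0:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [row[-1] for row in A]

x = solve(M, b)
print(x[0])  # 1257/1699
```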


Update: As pointed out in a comment, the above calculation finds the probability that $A$ gets two consecutive heads sooner than $B$ gets three consecutive heads; the original problem asks for the opposite. The correct boundary conditions for the original problem are that $A_{m,n}=1$ for $m<2$ and $n\ge 3$, and $A_{m,n}=0$ for $m\ge 2$ and $n\le 3$. The matrix equation becomes $$ \left(\begin{matrix}3/4 & -1/4 & -1/4 & -1/4 & 0 & 0 \\ -1/4 & 1 & -1/4 & 0 & -1/4 & -1/4 \\ -1/4 & -1/4 & 1 & 0 & 0 & 0 \\ -1/4 & 0 & 0 & 1 & -1/4 & 0 \\ -1/4 & 0 & -1/4 & 0 & 1 & 0 \\ -1/4 & 0 & 0 & 0 & 0 & 1 \end{matrix}\right)\times\left(\begin{matrix} A_{0,0} \\ A_{0,1} \\ A_{1,0} \\ A_{1,1} \\ A_{0,2} \\ A_{1,2} \end{matrix}\right) = \left(\begin{matrix} 0 \\ 0 \\ 0 \\ 0 \\ 1/2 \\ 1/4 \end{matrix}\right). $$ The result is $A_{0,0}=\frac{361}{1699}\approx 0.2125$. The two results add to slightly less than one because there is a nonzero probability that both players hit their goals at the same time; this probability is $81/1699\approx 0.0477$.
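Both numbers can be checked exactly in Python. The right-hand side for the tie probability is my own derivation: the same recursion holds with boundary value $1$ only at the simultaneous state $m=2$, $n=3$, which leaves a constant term $1/4$ only in the $A_{1,2}$ equation:

```python
from fractions import Fraction

# Same matrix as above; solve it against the corrected right-hand side
# (B strictly first) and against the tie right-hand side.
F = Fraction
M = [
    [F(3, 4),  F(-1, 4), F(-1, 4), F(-1, 4), F(0),     F(0)],
    [F(-1, 4), F(1),     F(-1, 4), F(0),     F(-1, 4), F(-1, 4)],
    [F(-1, 4), F(-1, 4), F(1),     F(0),     F(0),     F(0)],
    [F(-1, 4), F(0),     F(0),     F(1),     F(-1, 4), F(0)],
    [F(-1, 4), F(0),     F(-1, 4), F(0),     F(1),     F(0)],
    [F(-1, 4), F(0),     F(0),     F(0),     F(0),     F(1)],
]

def solve(M, b):
    """Gauss-Jordan elimination over the rationals."""
    n = len(b)
    A = [row[:] + [rhs] for row, rhs in zip(M, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)  # pivot row
        A[i], A[p] = A[p], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(n):
            if r != i and A[r][i] != 0:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [row[-1] for row in A]

b_loss = [F(0), F(0), F(0), F(0), F(1, 2), F(1, 4)]  # B strictly first
b_tie  = [F(0), F(0), F(0), F(0), F(0),   F(1, 4)]  # simultaneous finish
loss, tie = solve(M, b_loss)[0], solve(M, b_tie)[0]
print(loss, tie)  # 361/1699 and 81/1699
```

As a consistency check, $\frac{1257}{1699}+\frac{361}{1699}+\frac{81}{1699}=1$.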


I can provide answers from three perspectives: the Markov way, the naive way, and the programming/simulation way.

The Markov way is almost the same as the first answer (the author did an excellent job except for some small errors regarding the boundary conditions), so I am basically rephrasing his setup. The point is to identify the transitions between different "states"; the probabilities associated with those states then satisfy corresponding relations. Denote by the ordered pair $(m, n)$ the state in which A currently has a run of $m$ Heads and B has a run of $n$ Heads. On the next toss, the pair moves to state $(0, 0)$ if both players toss Tails, to $(m+1, n+1)$ if both toss Heads, to $(m+1, 0)$ if A tosses Heads and B Tails, and to $(0, n+1)$ if A tosses Tails and B Heads, each with probability $\frac{1}{4}$. This gives a transition matrix, and the probabilities associated with the states are related through these transition probabilities. The equation he built up, $$ P(m,n)=\frac{1}{4}\bigl(P(m+1, n+1)+P(m+1,0)+P(0,n+1)+P(0,0)\bigr), $$
says that starting from state $(m,n)$, the probability of success is the sum over the possible next states of the probability of moving there times the probability of succeeding from there. Notice that we also have the corner cases (boundary conditions): if $n\geq3$ and $m<2$, then $P(m,n)=1$ because B has already encountered HHH; if $m\geq2$ and $n\leq3$, then $P(m, n)=0$ because A encountered HH no later than B encountered HHH. Combining these you obtain a linear system that can be solved for the result.
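The linear system can even be assembled mechanically from the recursion and boundary conditions rather than written out by hand; here is one possible Python sketch (exact rational arithmetic, transient states $m<2$, $n<3$):

```python
from fractions import Fraction

# Build the linear system directly from the recursion
#   P(m,n) = (1/4)[P(m+1,n+1) + P(m+1,0) + P(0,n+1) + P(0,0)]
# with the boundary conditions stated above (B gets HHH strictly first).
states = [(m, n) for m in range(2) for n in range(3)]  # transient states
idx = {s: i for i, s in enumerate(states)}

def boundary(m, n):
    if n >= 3 and m < 2:
        return Fraction(1)   # B already has HHH, A does not yet have HH
    if m >= 2:
        return Fraction(0)   # A got HH no later than B got HHH
    return None              # transient: still unknown

N = len(states)
A = [[Fraction(0)] * N for _ in range(N)]
b = [Fraction(0)] * N
for (m, n), i in idx.items():
    A[i][i] += 1
    for succ in [(m + 1, n + 1), (m + 1, 0), (0, n + 1), (0, 0)]:
        v = boundary(*succ)
        if v is None:
            A[i][idx[succ]] -= Fraction(1, 4)
        else:
            b[i] += Fraction(1, 4) * v

# Exact Gauss-Jordan elimination; afterwards b holds the solution.
for i in range(N):
    p = next(r for r in range(i, N) if A[r][i] != 0)
    A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
    inv = 1 / A[i][i]
    A[i] = [x * inv for x in A[i]]; b[i] *= inv
    for r in range(N):
        if r != i and A[r][i] != 0:
            f = A[r][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
            b[r] -= f * b[i]

print(b[idx[(0, 0)]])  # 361/1699
```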

The naive way is rather straightforward. Denote by $X$ the number of tosses A needs to get HH, and by $Y$ the number B needs to get HHH. Then $$ P(X>Y) = \sum_{k=1}^{\infty} P(X>k\mid Y=k)\,P(Y=k). $$ Since $X$ and $Y$ are independent, we have $$ P(X>k\mid Y=k)=P(X> k)=\sum_{m = k+1}^{\infty}P(X = m), $$ so $$ P(X>Y) = \sum_{k=1}^{\infty}P(Y=k)\sum_{m=k+1}^{\infty}P(X=m). $$

It suffices to find the probability distributions of $X$ and $Y$. These can again be obtained by the Markov chain method, or by conditioning on the first tosses. Take $X$ as an example: conditioning on whether the sequence starts with T or with HT, $$ p_n = P(X = n) = P(T)\cdot P(X = n-1)+P(HT)\cdot P(X = n-2)=\frac{1}{2}p_{n-1}+\frac{1}{4}p_{n-2}, $$ with $p_1 = P(X = 1)=0$ and $p_2 = P(X = 2) = \frac{1}{4}$.

It then remains to compute $p_n$, and similarly $q_k = P(Y = k)$, and to evaluate the double sum.
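For completeness, here is a short numerical check of the double sum in Python. The three-term recursion for $q_n$ below is my own, obtained the same way by conditioning on a first T, HT, or HHT:

```python
# Numerically evaluate P(X > Y) from the distributions of X and Y.
# p_n = P(X = n) satisfies p_n = p_{n-1}/2 + p_{n-2}/4 (first toss T or HT);
# q_n = P(Y = n) satisfies q_n = q_{n-1}/2 + q_{n-2}/4 + q_{n-3}/8
# (first tosses T, HT, or HHT).
K = 500  # truncation point; both tails decay geometrically
p = [0.0] * (K + 1)
q = [0.0] * (K + 1)
p[2] = 0.25   # p_1 = 0, p_2 = 1/4
q[3] = 0.125  # q_1 = q_2 = 0, q_3 = 1/8
for n in range(3, K + 1):
    p[n] = p[n - 1] / 2 + p[n - 2] / 4
for n in range(4, K + 1):
    q[n] = q[n - 1] / 2 + q[n - 2] / 4 + q[n - 3] / 8

# P(X > Y) = sum_k P(Y = k) * P(X > k)
cdf_p = 0.0
ans = 0.0
for k in range(1, K + 1):
    cdf_p += p[k]               # P(X <= k)
    ans += q[k] * (1.0 - cdf_p)  # q_k * P(X > k)
print(ans)  # ≈ 0.212478 = 361/1699
```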

Finally, the above results can be verified either by exact computation or by simulation in C++ or Python/MATLAB/Julia. I won't paste my code here but can provide it to anyone interested.
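For the curious, a minimal Python sketch of such a simulation might look like this (the helper name and seed are mine):

```python
import random

# Monte Carlo sketch: X = number of tosses A needs for HH,
# Y = number of tosses B needs for HHH; estimate P(X > Y).
def tosses_until_run(target, rng):
    count = run = 0
    while run < target:
        count += 1
        run = run + 1 if rng.random() < 0.5 else 0  # Heads extends the run
    return count

rng = random.Random(2023)  # fixed seed for reproducibility
trials = 200_000
wins = sum(tosses_until_run(2, rng) > tosses_until_run(3, rng)
           for _ in range(trials))
print(wins / trials)  # close to 361/1699 ≈ 0.2125
```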

PS: I forgot to mention that the result should be $\frac{361}{1699}\approx 0.2125$; simulation gives $0.2124$.

Tags:

Probability