Example of a concrete statement which requires a probabilistic algorithm

First of all, there is a conjecture in computer science that BPP = P, i.e., anything that can be done in randomized polynomial time can also be done in deterministic polynomial time. The hope is that true-random trials can be replaced by trials driven by a cryptographically secure pseudo-random number generator. No one can prove that such PRNGs exist, but there are many candidates that are secure as far as anyone knows. A more refined version of the conjecture suggests that randomness buys at most a quadratic speedup, so the derandomized algorithm is still polynomial time, just not of the same degree.

The situation with primality testing is in keeping with this same general picture. The time complexity of Miller-Rabin is $O(d^{2+\epsilon})$ in the number of digits $d$, while the time complexity of the rigorous ECPP test is (I am told) $O(d^{4+\epsilon})$. Now ECPP is not Miller-Rabin with repeated trials, but the extended Riemann hypothesis implies that Miller-Rabin with repeated trials would also give you an $O(d^{4+\epsilon})$-time deterministic algorithm. My guess is that ECPP is a little slower than ERH-enhanced Miller-Rabin, but it has the advantage that its proof of primality is completely unconditional. In fact, ECPP is also a randomized algorithm, but its answer is always rigorous; the only uncertainty is in the running time, which is only probably fast. (This is more formally called ZPP and is also called "Las Vegas randomized".)
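To make the comparison concrete, here is a minimal sketch of a Miller-Rabin round in Python; the function name and structure are my own illustration, not taken from any particular library. Each round costs one modular exponentiation (the Fermat-style core step), and $k$ independent rounds push the error probability on a composite input below $4^{-k}$.

```python
import random

def miller_rabin(n, k=20):
    """Probabilistic primality test: returns False if n is composite,
    True if n is probably prime (error probability at most 4**-k)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # Write n - 1 = 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a^d mod n, the Fermat-style core step
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True
```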

This page shows current (or current as of a few years ago?) records for finding probable primes using Miller-Rabin. Meanwhile this page shows current records for proving primality using ECPP. A convenient comparison case is given by Wagstaff primes, which are primes of the form $(2^n + 1)/3$ with $n$ odd. These primes are similar to Mersenne primes in some ways, except that there is no known deterministic primality test for them analogous to the Lucas-Lehmer test used in the Mersenne case. (In fact Lucas-Lehmer is itself similar to Miller-Rabin and is about as fast; both are simply refinements of Fermat's little theorem.)

So, comparing records: $(2^{13,372,531}+1)/3$ was witnessed as probably prime in September 2013, while $(2^{83,339}+1)/3$ was proven prime by ECPP one year later, in September 2014. I don't have a specific comparison of the computational resources used, but qualitatively it's moot. Taking the $O(d^{4+\epsilon})$ time estimate, it would presumably take about a billion times as much computational power to prove that $(2^{13,372,531}+1)/3$ is prime as it took to prove that $(2^{83,339}+1)/3$ is prime. In the nearly four years since the smaller number was shown to be prime, the number of digits in ECPP records has grown by a factor of 1.4, which is roughly a factor of 4 in computational resources. (Curiously, there are 2 or 3 more Wagstaff probable primes in this range, and I'm not sure why they haven't yet been ECPP certified.) I suppose that rigorously proving that $(2^{13,372,531}+1)/3$ is prime is within reach of the US federal budget. But that too is surely moot, because a small fraction of that money could instead be spent on finding yet bigger probable Wagstaff primes.
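As a rough sanity check on the arithmetic above (my own illustration, reusing the `miller_rabin` sketch), the following snippet finds the small Wagstaff probable primes and recomputes the $d^{4+\epsilon}$ cost ratio between the two record numbers, using the fact that the digit count $d$ is proportional to the exponent $n$.

```python
# Small Wagstaff candidates (2**n + 1)/3 for odd n; the known small
# Wagstaff primes occur at n = 3, 5, 7, 11, 13, 17, 19, 23, 31, 43, ...
for n in range(3, 50, 2):
    w = (2**n + 1) // 3
    if miller_rabin(w):
        print(n, w)

# Ratio of the O(d^4) work estimates for the two record numbers:
# the exponents 13_372_531 and 83_339 are proportional to the digit counts d.
print((13_372_531 / 83_339) ** 4)   # roughly 6.6e8, i.e. on the order of a billion
```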


Take two binary vectors $x$ and $y$, each of bit length $n$. Party X has $x$ and party Y has $y$. The two parties trust each other and are separated by a one-way reliable communication channel. Let $n$ be arbitrarily large.

The goal is for party X to send information about $x$ to party Y so that party Y can decide whether $x$ is indeed equal to $y$.

Any deterministic protocol requires $\Omega(n)$ bits to be transmitted in the worst case. The randomized protocol below requires only $O(\log n)$ bits to be transmitted.

Party X chooses a prime $p$ uniformly at random from $[2,n^2]$. By the prime number theorem there are $\sim \frac{n^2}{2\log n}$ such primes, so rejection sampling works: draw a random odd number from $[2,n^2]$ using $O(\log n)$ random bits (setting the last bit to 1 ensures the candidate is odd), test it for primality, and repeat until a prime is found. Each primality test costs $O((\log n)^c)$ bit operations for some $c\leq 6$.
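A minimal sketch of this sampling step in Python (the names are my own; trial division stands in for the $O((\log n)^c)$ polynomial-time primality test, which is fine for a small demo range):

```python
import random

def is_prime(q):
    """Deterministic trial-division test, adequate for the small demo range;
    in the protocol this would be a polynomial-time primality test."""
    if q < 2:
        return False
    d = 2
    while d * d <= q:
        if q % d == 0:
            return False
        d += 1
    return True

def random_prime(n):
    """Choose a prime roughly uniformly at random from [3, n**2] by rejection
    sampling: draw odd candidates (last bit forced to 1) and test each for
    primality.  About n**2 / (2 log n) numbers in the range are prime, so the
    expected number of draws is O(log n).  (The prime 2 is skipped for simplicity.)"""
    m = n * n
    while True:
        candidate = random.randrange(3, m + 1) | 1   # random odd number
        if candidate <= m and is_prime(candidate):
            return candidate
```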

Party X now encodes $p$ and $N(x) \bmod p$, where $N(x)$ is the integer whose binary expansion is the bit vector $x$, in binary and transmits these $O(\log n)$ bits to party Y. Party Y reconstructs the prime and the remainder, computes $N(y) \bmod p$, and checks whether it equals the received remainder.
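Putting the protocol together, here is a hedged sketch building on the `random_prime` helper above (the function names are mine, not Hromkovič's):

```python
def fingerprint(bits, p):
    """N(bits) mod p, where N(bits) is the integer whose binary expansion is `bits`."""
    return int(bits, 2) % p

def party_x_message(x, n):
    """Party X: choose the random prime and send (p, N(x) mod p) -- O(log n) bits."""
    p = random_prime(n)
    return p, fingerprint(x, p)

def party_y_decides(y, message):
    """Party Y: recompute N(y) mod p and compare with the received residue."""
    p, residue_x = message
    return fingerprint(y, p) == residue_x
```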

If $x \neq y$ then the residues differ with high probability: party Y detects the inequality with probability at least $$1-\frac{\log n^2}{n}=1-\frac{2\log n}{n},$$ which for large $n$ is essentially $1$.

This can be argued by observing that the residues mod $p$ agree for distinct numbers only if $p$ divides $|N(x)-N(y)|$. Since $0 < |N(x)-N(y)| < 2^n$, this difference has fewer than $n$ distinct prime factors, while there are about $\frac{n^2}{2\log n}$ primes in $[2,n^2]$; hence a uniformly chosen prime divides it with probability at most $\frac{n}{n^2/(2\log n)}=\frac{2\log n}{n}$.

So the probability that party Y accepts $x$ as equal to $y$ when $x\neq y$ can be made as small as desired by repeating the protocol with several independently chosen primes instead of just one.
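A short usage sketch of this amplified version, again with my own made-up names: with $k$ independently chosen primes, a false acceptance requires all $k$ fingerprints to collide, so the error probability drops to at most $\left(\frac{2\log n}{n}\right)^k$.

```python
import random

def equality_test(x, y, n, k=5):
    """Accept only if the fingerprints agree for k independently chosen primes;
    for x != y the acceptance probability is at most (2 log n / n)**k."""
    return all(party_y_decides(y, party_x_message(x, n)) for _ in range(k))

n = 64
x = ''.join(random.choice('01') for _ in range(n))
y = x[:-1] + ('1' if x[-1] == '0' else '0')   # flip the last bit of x
print(equality_test(x, x, n))   # True: equal strings are always accepted
print(equality_test(x, y, n))   # almost surely False for x != y
```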

Reference: This is from Hromkovič's book Algorithmics for Hard Problems, Section 5.2.