Why is Google's quantum supremacy experiment impressive?
To elaborate on my intuition here, the thing I consider "impressive" about classical computers is their ability to simulate other systems, not just themselves. When setting up a classical circuit, the question we want to answer is not "which transistors will be lit up once we run a current through this?" We want to answer questions like "what's 4+1?" or "what happens when Andromeda collides with the Milky Way?"
There isn't a real distinction here. Both quantum and classical computers only do one thing: compute the result of some circuit. A classical computer does not fundamentally know what $4+1$ means. Instead current is made to flow through various transistors, as required by the laws of classical physics. We then read off the final state of the bits and interpret it as $5$.
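To make the "interpretation" point concrete, here is a toy sketch (my own illustration, not anything from the actual hardware): a 3-bit ripple-carry adder built purely from boolean gates. The circuit never knows what "4+1" means; it just propagates signals, and *we* read the final bits off as 5.

```python
# Toy ripple-carry adder from boolean gates. The "hardware" only moves
# bits around; the meaning "4 + 1 = 5" lives entirely in our encoding.

def full_adder(a: bool, b: bool, carry: bool):
    """One full adder: two XORs, two ANDs, one OR."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition over little-endian bit lists."""
    carry = False
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)          # final carry becomes the top bit
    return out

def to_bits(n, width):
    """Encode an integer as little-endian bits (our chosen convention)."""
    return [bool((n >> i) & 1) for i in range(width)]

def from_bits(bits):
    """Interpret the final bit pattern back as an integer."""
    return sum(1 << i for i, b in enumerate(bits) if b)

result = add_bits(to_bits(4, 3), to_bits(1, 3))
print(from_bits(result))  # -> 5, but only because we decode it that way
```

The point of the sketch: `to_bits` and `from_bits` are where all the "meaning" lives; the gate layer in between is just physics.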
The real distinction, which holds in both cases, is whether you can program it or not. For example, a simple four-function calculator is a classical system involving lots of transistors, but the specific things it can compute are completely fixed, which is why we don't regard it as a classical computer. And a pudding is a quantum system involving lots of qubits, but it can't do anything but be a pudding, so it's not a quantum computer.
Google can control the gates they apply in their quantum circuit, just like loading a different program can control the gates applied in a classical CPU. That's the difference.
The big difference between the quantum supremacy experiment and your pudding experiment is that the quantum supremacy experiment solved an unambiguous, well-posed mathematical problem. While people sometimes describe the computational task as "simulating the physical Sycamore computer", that's not right. The actual task was calculating the output of an abstract quantum logical circuit, of which the Sycamore computer was an approximate physical instantiation. The difference is subtle but crucial. From a computational perspective, the math came first and the physics came second.
Crucially, the quantum supremacy problem was mathematically well-specified, and so it could be checked on a classical computer. The parallel classical computation wasn't just there to provide a time benchmark, but also - crucially - to check the quantum computation for accuracy.
There's no such "slower but equivalent" computation for the pudding experiment. In the pudding experiment, you need to specify exactly which of these two problems you're trying to solve:
- Simulate the pattern that will result if you knock a generic pudding cup off the table.
- Simulate the pattern that will result if you knock a particular pudding cup off the table [where you specify the initial conditions in enough detail to model its fall].
The first variant is obviously massively underspecified and doesn't have a unique answer. The second variant does in principle have a unique answer, but crucially, you can't actually capture all the necessary details about the initial condition in practice. So neither variant can actually be framed as a mathematically well-posed question.
In the quantum supremacy experiment, the abstract problem to be solved (which was solving the abstract logical circuit, not simulating the physical hardware) was simple enough to pose that you could (very slowly) solve it exactly on a classical computer as well as on a quantum one.
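To illustrate why the abstract task is well-posed, here is a minimal brute-force state-vector simulation of a small random circuit (a hypothetical 4-qubit example in the spirit of random-circuit sampling, not Google's 53-qubit circuit). Given a fixed gate sequence, the output probabilities are exact mathematical quantities any classical computer can compute, which is exactly what makes the quantum result checkable:

```python
# Sketch: exact classical simulation of a small random quantum circuit.
# The gate sequence fully specifies the problem; the answer is unique.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # number of qubits
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |0...0>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z entangling gate to qubits q1 and q2."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [q1, q2], [0, 1])
    psi = np.tensordot(CZ.reshape(2, 2, 2, 2), psi,
                       axes=([2, 3], [0, 1]))
    psi = np.moveaxis(psi, [0, 1], [q1, q2])
    return psi.reshape(-1)

# A few layers of random single-qubit rotations plus entangling gates.
for layer in range(3):
    for q in range(n):
        theta = rng.uniform(0, 2 * np.pi)
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]], dtype=complex)
        state = apply_1q(state, H @ R, q)
    for q in range(n - 1):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2              # exact output distribution
print(probs.sum())                      # normalisation check: ~1.0
```

This brute-force approach costs memory and time exponential in the qubit count, which is why it is "very slow" at 53 qubits but trivial at 4; the pudding, by contrast, has no gate list to hand to the simulator in the first place.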
I co-run an experimental research group in which, among other things, we develop the ability to control quantum bits so as to do (one day) quantum computing. In our lab we have some of the most precise quantum bits and operations, but we have only ever performed (or tried to perform) operations on two or three bits at a time. This is partly because we have taken an interest in other aspects of the problem, and partly because if we were to put ten or more qubits into our experiment (which would be easy to do), we would merely reproduce small-scale circuits rather than learn how to build a large computer.
I would say that the question asked by Bridgeburners is a fair one, and it correctly characterises the limited nature of Google's calculation. However, one can look at Google's result from an experimental perspective, and then it does become, I think, very impressive.
From an experimental point of view, Google's achievement is that they have achieved sufficient experimental control over a circuit containing 53 qubits that it can generate entangled states involving all or most of the qubits, in such a way that the fidelity of the state is not immediately lost before the state can even be measured in some way. We would certainly not be able to achieve this in our lab today. If we devoted all our effort to doing the same thing, I think it would take us a year or more to implement the required extensions to our experimental equipment with the required precision. So it is indeed very impressive. (Meanwhile with our trapped ion methods we can also do some things which Google's machine could not do.)
Looking now to the near future, there are two main technologies showing promise for quantum computing. These are atomic ions confined in high vacuum, and superconducting circuits of the type employed by Google. A few years ago, a rough 'competition' was held, involving computations requiring only 10 or so qubits, and the trapped ion methods won because they could take advantage of the greater inter-connectivity of their qubits, and good general precision in operations and measurements. If such a competition were to be held now, it is not so clear which technology would win. Nor is it clear which is the better bet for complete control of 50 qubits in such a way that general-purpose computing could be done, answering questions people really want to know (as opposed to computer-science abstractions). But what is clear, I think, is that this stage will be reached by either or both paths, and this will happen on a timescale of years not decades. What John Martinis and his colleagues have done is to show that the superconducting circuit approach is a very strong contender, and they have displayed great expertise and mastery in overcoming many severe technical challenges to get this far.