Are "intelligent" systems able to bypass the uncertainty principle?

Now that's a nice question. I've only browsed the paper you submitted, so correct me if I misinterpreted it. But the idea there is to use machine learning methods to identify which features of a collection of quantum data best characterize the processes at play. So this is about analyzing data. Simply put, as long as the data doesn't violate the uncertainty principle, neither will the AI the paper talks about.

But I think your question is a bit more ambitious. Any AI trained on a data set can in principle be asked to make predictions about data it has not yet seen, and your question is what prevents the AI from making arbitrarily accurate predictions, thus giving both the momentum and the position of a particle to arbitrary accuracy. Now, this is much more about physics than it is about AI.

I think the key aspect here is to ask what it means to know a particle's position and momentum. We don't need AI for that; we can look at something as simple as the ground state of a particle in a 1D box. We consider this problem in ordinary Schrödinger QM (which, as commenters correctly pointed out, is only a fraction of all of QM). This state can be described by the wave function $$\psi_1(x,t) = e^{-i\omega_1 t} \cos\left(\frac{\pi}{L}x\right),$$ for $x \in (-L/2,L/2)$, where $L$ is the size of the box and $\omega_1 = \pi^2\hbar/(2mL^2)$. This is, as far as the Schrödinger picture of quantum mechanics is concerned, the exact state the particle is in. Let me repeat that: this is everything we can know about the state of a particle in a box. When someone gives us this wave function, we have solved the problem of finding the particle's ground state.

A naive way to look at this wave function is to use Born's rule to find the probability distribution (which happens to be stationary because we chose an eigenstate of the Hamiltonian $H$) $$\rho(x) = |\psi(x,t)|^2 = \cos^2\left(\frac{\pi}{L}x\right)$$ and argue that the particle it describes just wiggles around between $-L/2$ and $L/2$ with this given probability, and that once we measure position, we pick up the particle's position at an instant, losing momentum information. But this is just one way of looking at it, and it is problematic, although there are ways to make it mathematically sound. This picture invites a confusion: it suggests that there is something like the particle's position that exists independently of measurement, and likewise something like the particle's velocity independent of measurement, and that measurement merely discards some of that information, so one could try to get a smart AI to track them both.
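As a quick sanity check on that picture: even with the exact wave function in hand, the position and momentum spreads of this ground state obey the uncertainty bound. A small numerical sketch (my own illustration, in units with $\hbar = L = 1$ and using the normalized version of the wave function above):

```python
import numpy as np

# Units with hbar = L = 1; normalized ground state of the 1D box,
# psi_1(x) = sqrt(2/L) * cos(pi x / L) on (-L/2, L/2).
hbar, L = 1.0, 1.0
N = 20001
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
psi = np.sqrt(2 / L) * np.cos(np.pi * x / L)

rho = psi**2                      # Born-rule probability density
mean_x = np.sum(x * rho) * h
mean_x2 = np.sum(x**2 * rho) * h
dx = np.sqrt(mean_x2 - mean_x**2)

# <p> = 0 for a real wave function; <p^2> = hbar^2 * integral of |psi'|^2
dpsi = np.gradient(psi, x)
dp = np.sqrt(hbar**2 * np.sum(dpsi**2) * h)

print(dx * dp / hbar)  # ~0.568, safely above the Heisenberg bound of 0.5
```

The product comes out at about $0.568\,\hbar$, above the bound $\hbar/2$, and no amount of cleverness in analyzing $\psi_1$ can push it below that.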

But this is not really the data you have. The wave function, encapsulating the full knowledge of the state of the particle, contains no information about the particle's exact position or momentum at an instant. The history of QM has shown that it is hopeless to try to maintain our classical intuition about what position and momentum are. Yes, you can get well-defined relations between them for each path in a path-integral formalism, but then you suddenly find the particle tracing out multiple paths. Or you add global hidden variables (as in, e.g., Bohmian mechanics) to recover a well-defined concept of position and momentum, but then those are not measurable, so they come back to haunt you whenever you perform a measurement. There really isn't a way around this: a clear concept of position and momentum cannot be maintained at the quantum level. The AI cannot be "smarter" or "more observant" than the maximum information available, which is encoded in the wave function. The information you desire to trace with your AI just does not exist in the way you would need it to.

If you are interested, there is a nice 3Blue1Brown video about the mathematical origins of uncertainty in Fourier analysis, which also addresses another aspect of this question, one that reaches beyond quantum physics. I can recommend it.
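The Fourier-analytic version of that statement can be seen directly, with no quantum mechanics at all: squeezing a signal in time necessarily spreads its spectrum. A small numerical illustration of my own (Gaussian pulses, which saturate the time-bandwidth bound):

```python
import numpy as np

# Pure Fourier analysis, no quantum mechanics: the narrower a Gaussian
# pulse is in time, the wider its power spectrum, and the product of
# the two RMS widths stays pinned at 1/2 (Gaussians saturate the bound).
N = 4096
t = np.linspace(-50, 50, N)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)  # angular frequencies

def width_product(s):
    g = np.exp(-t**2 / (2 * s**2))       # pulse of width s
    spec = np.abs(np.fft.fft(g))**2      # power spectrum
    pt = g**2 / np.sum(g**2)             # normalized distributions
    pw = spec / np.sum(spec)
    wt = np.sqrt(np.sum(pt * t**2))      # RMS width in time
    ww = np.sqrt(np.sum(pw * omega**2))  # RMS width in frequency
    return wt * ww

for s in (1.0, 2.0, 4.0):
    print(s, width_product(s))  # always ~0.5
```

This is the same trade-off that becomes the position-momentum uncertainty relation once you identify momentum space with the Fourier transform of position space.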

If an AI could do this, it would be a violation of the uncertainty principle. So one of two things must be true:

  • The AI cannot violate the uncertainty principle, or...
  • The uncertainty principle is wrong

So if we start from the assumption that the current model of QM is perfect in every way, then the AI could not beat the odds, because it would not have the physical tools needed to do so.

However, where AI tools like neural nets are powerful is in their ability to detect patterns that we did not see before. It is plausible that an AI could come across some more fundamental law of nature which yields more correct results than the uncertainty principle does. This would invite us to develop an entirely new formulation of microscopic physics!

As a very trivial example, let me give you a series of numbers.

293732 114329 934700 172753 489332 85129 759100 61953 644932 335929 623500 671153 760532 866729 527900 353 836132 677529 472300 49553 871732 768329 456700 818753 867332 139129 481100 307953 822932 789929 545500 517153 738532 720729 649900 446353 614132 931529 794300 95553 449732 422329 978700 464753 245332 193129 203100 553953 932 243929 467500 363153 716532 574729 771900 892353 392132 185529 116300 141553 27732 76329 500700 110753 623332 247129 925100 799953 178932 697929 389500 209153 694532 428729 893900 338353 170132 439529 438300 187553 605732 730329 22700 756753 1332 301129 647100 45953 356932 151929 311500 55153 672532 282729 15900 784353 948132 693529 760300 233553 183732 384329 544700 402753 379332 355129 369100 291953 534932 605929 233500 901153 650532 136729 137900 230353 726132 947529 82300 279553 761732 38329 66700 48753 757332 409129 91100 537953 712932 59929 155500 747153 628532 990729 259900 676353 504132 201529 404300 325553 339732 692329 588700 694753 135332 463129 813100 783953 890932 513929 77500 593153 606532 844729 381900 122353 282132 455529 726300 371553 917732 346329 110700 340753 513332 517129 535100 29953 68932 967929 999500 439153 584532 698729 503900 568353 60132 709529 48300 417553 495732 329 632700 986753 891332 571129 257100 275953 246932 421929 921500 285153 562532 552729 625900 14353 838132 963529 370300 463553

These numbers appear highly random. Upon seeing them in a physical setting, one might assume these numbers actually are random, and invoke statistical laws like those at the heart of the uncertainty principle. But if you were to throw an AI at this, you'd notice that it could predict the results with frustrating regularity.

Once a neural network, like the one described in the journal article, has shown that there is indeed a pattern, we can try to tease it apart. And, lo and behold, you would find that the sequence was $\{X_1, X_2, X_3, ...\}$ where $X_i = 2175143 \cdot X_{i-1} + 10653 \pmod{1000000}$, starting with $X_0 = 3553$. I used a linear congruential PRNG to generate those numbers.
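For the curious, the generator is trivial to reproduce; here is a minimal sketch using exactly the constants given above:

```python
# The linear congruential generator described above (all constants
# are the ones given in the text).
def lcg(x, a=2175143, c=10653, m=1000000):
    while True:
        x = (a * x + c) % m
        yield x

g = lcg(3553)
print([next(g) for _ in range(6)])
# [293732, 114329, 934700, 172753, 489332, 85129]
```

Six lines of code fully determine a sequence that, statistically, looks like noise.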

If the universe actually used that sequence as its "source" for drawing the random values predicted in QM, then an AI could pick up on it, and start using this more-fundamental law of nature to do things that the uncertainty principle says are impossible.

On the other hand, if the universe actually has randomness in it, the AI cannot do any better than the best statistical results it can come up with.

In the middle is a fascinating case. Permit me to give you another series of numbers, this one in binary (because the tool I used outputs in binary) 1111101101100110111010101101010001000101111100101011111110000110100010010001110010010011101010000010101001111001100011100110001010011110100100010001000111110000010100101101111101011111000001011101011110110100000000000101010110100001101101001100111111000110000101000110000000110001100101001011000110101111011011101011011101110010111101111001111110010110011000000101110010010010111111001110101101111100110100111010010001011101101111110001111111011010111000101000001011001011010010011111000000110011100000001110000011000101110111100001100010111010111101010101000011010111010011011010101000111110110011100111000011101101110011111100011100101111101110100111001101011000000000110000111001010000001011100100100010111100101101101111011110000011110100010100011000011110010000001100011001110111011010001100010000011101011011011001011001100110100101001011001000101101000110010010010000110100110010111010001111001000111000100100100100111011001101011111001110011100100001001010001011110101001010000010100010111010

I will not tell you whether this series is random or pseudorandom. I will not tell you whether it was generated using the Blum Blum Shub algorithm. And I certainly won't tell you the key I used, if I used the Blum Blum Shub algorithm.

It is currently believed that, to tell the difference between a truly random stream and the output of Blum Blum Shub, one must solve a problem we do not believe is solvable in any practical amount of time. So, hypothetically, if the universe actually used the stream of numbers I just provided as part of the underlying physics that appears to be random per quantum mechanics, we would not be able to tell the difference.
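To make the construction concrete, here is a toy sketch of a Blum Blum Shub generator. The tiny modulus $n = 11 \cdot 23$ is purely for demonstration; a real instance uses large secret primes, and the indistinguishability claim above rests on the hardness of factoring $n$:

```python
# Toy Blum Blum Shub generator. The modulus n = 11 * 23 = 253 is a
# deliberately tiny demonstration value (both primes are 3 mod 4, as
# the algorithm requires); a real instance uses large secret primes.
def blum_blum_shub(seed, n=11 * 23, nbits=16):
    x = seed * seed % n
    bits = []
    for _ in range(nbits):
        x = x * x % n           # repeated squaring modulo n
        bits.append(x & 1)      # emit the least-significant bit
    return bits

print(blum_blum_shub(3))
```

With a modulus this small the pattern is easy to crack by brute force; the whole point of the real construction is that, at cryptographic sizes, no known feasible analysis (human or AI) can distinguish the output from true randomness.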

But an AI might be able to detect a pattern that we didn't even know we could detect. It could latch onto the pattern, and start predicting things that are "impossible" to predict.

Or could it? Nobody is saying that that string of binary numbers is actually the result of an algorithm. It might truly be random...

Neural networks like the one described in the paper can find patterns that we did not observe with our own two eyes and our own squishyware inside our skull. However, they cannot find a pattern if one does not exist (or worse: they can find a false pattern that leads one astray).

tl;dr Yes, a physicist might come up with a model that supersedes models with an uncertainty principle, and such a physicist might be an AI.

This is a pretty simple topic overall. In short:

  1. A better model of reality could find the uncertainty-principle to be emergent, dispensing with it in favor of describing what it emerges from.

  2. The better model could be found by a sufficient intelligence, be it human, AI, or otherwise.

  3. So, yes, an AI could potentially dispense with the uncertainty-principle.

That said, the linked paper discusses simulated systems:

Machine learning models are a powerful theoretical tool for analyzing data from quantum simulators, in which results of experiments are sets of snapshots of many-body states.

A god-like AI with infinite computation could, in principle, find the ultimate truth of anything it analyzes. So if it analyzes real experimental data, then it could find deeper models of physics.

But, if it's looking at simulated data, then the ultimate truth that it'd find would be an exact description of the simulation – not necessarily what the simulation was attempting to emulate. And if the simulation respects something like an uncertainty-principle, then a perfect analysis of it would reflect that.

Likewise, a god-like AI analyzing a simulation of Newtonian physics wouldn't discover quantum-mechanics nor relativity. But the AI could discover quantum-mechanics and relativity from looking at real data, even if the folks who made the AI thought the world was perfectly Newtonian.