In particle colliders, according to QM, how are two particles able to "collide"?

The answer is basically the one you've suggested. When we collide particles in, for example, the LHC, we are not colliding point particles; we are colliding two wavefunctions that look like semi-localised plane waves. The collision would look something like:

[Figure "Collision": two delocalised wavepackets whose centres pass by each other, with a green squiggle indicating the interaction]

So classically the two particles would miss each other, but in reality their positions are delocalised, so there is some overlap even though their centres (i.e. their average positions) miss each other.

I've drawn a green squiggle to vaguely indicate some interaction between the two particles, but you shouldn't take this too literally. What actually happens is that both particles are described as states of a quantum field. When the particles are far from each other they are approximately Fock states i.e. plane waves.

However when the particles approach each other they become entangled, and the state of the quantum field can no longer be separated into the states of two particles. In fact we don't have a precise description of the state of the field when the particles are interacting strongly; we have to approximate the interaction using perturbation theory, which is where those Feynman diagrams come in.
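To make "perturbation theory" a little more concrete (this is the standard textbook organisation of the calculation, sketched in outline): the probability amplitude to go from an initial two-particle state $|i\rangle$ to a final state $|f\rangle$ is an S-matrix element, $$ \mathcal{A}_{fi} = \langle f|S|i\rangle, \qquad S = T\exp\left(i\int \textrm{d}^4x\, \mathcal{L}_{\textrm{int}}(x)\right), $$ where $T$ denotes time ordering and $\mathcal{L}_{\textrm{int}}$ is the interaction part of the Lagrangian density. Expanding the exponential in powers of the coupling gives a series of terms, and each term is what a Feynman diagram represents.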

So to summarise: we should replace the verb "collide" with "interact". The interaction occurs because the two particles overlap even when their centres are separated; we calculate that interaction using quantum field theory, and its strength depends on the distance of closest approach.
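As a toy illustration of that last point (a minimal 1D sketch, assuming Gaussian wavepackets; the width `sigma` and the separations `b` are made-up illustrative numbers, not anything measured at the LHC):

```python
import numpy as np

# Toy 1D illustration of the overlap argument above: two Gaussian
# wavepackets whose centres miss each other by a separation b.
# The overlap falls off smoothly with b rather than being hit-or-miss.
sigma = 1.0                      # packet width (arbitrary units)
x = np.linspace(-10, 10, 2001)   # 1D grid
dx = x[1] - x[0]

def density(x, centre, sigma):
    """Normalised Gaussian probability density |psi|^2."""
    return np.exp(-(x - centre) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

for b in [0.0, 1.0, 2.0, 4.0]:   # separation between the two centres
    # Overlap integral of the two densities: 1 at b = 0, and
    # exp(-b^2 / (8 sigma^2)) analytically for equal-width Gaussians.
    overlap = np.sum(np.sqrt(density(x, -b / 2, sigma) * density(x, b / 2, sigma))) * dx
    print(f"b = {b:.1f}: overlap = {overlap:.4f}")
```

The printed overlap falls off like $\exp(-b^2/8\sigma^2)$ rather than dropping to zero the moment the centres miss, which is the sense in which two particles can interact without their centres ever coinciding.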

The OP asks in a comment:

So, that interaction causes two particles to "blow up", and disintegrate into its more elementary particles?

I mentioned above that the particles are a state of the quantum field and that when far apart that state is separable into the two Fock states that describe the two particles. When the particles are close enough to interact strongly the state of the field cannot be separated into separate particle states. Instead we have some complicated state that we cannot describe exactly.

This intermediate state evolves with time, and depending on the energy it can evolve in different ways. It could, for example, just evolve back into the two original particles, which head off with the same total energy. But if the energy is high enough the intermediate state could evolve into states with different numbers of particles, and this is exactly how particles get created in colliders. We can't say in advance what will happen, but we can calculate the probabilities for all the possible outcomes using quantum field theory.

The key point is that the intermediate state does not simply correspond to a definite number of specific particles. It is a state of the field not a state of particles.


Besides what other people have said, it's worth looking at the numbers that enter the uncertainty relation $$ \Delta x\cdot\Delta p \gtrsim \hbar. $$ A quick web search tells me that the momentum in the LHC is adjusted to ppm precision, i.e. $\Delta p = 10^{-6}\times 13\,\textrm{TeV}\approx 10^7\,\textrm{eV}$. Since we need to localize the particles in the transverse plane to have them pass through the same point (using naïve language), we have to insert transverse momentum into the uncertainty relation. The crossing angle of the two beams at the interaction point of the CMS detector is $285\,\mu\textrm{rad}$, so the transverse momentum fraction and its uncertainty are roughly 0.014% of the respective longitudinal numbers, giving us $\Delta p_t \approx 10^{-4} \times 10^7\,\textrm{eV} = 10^3\,\textrm{eV}$.

In useful units $\hbar \approx 2\times 10^{-7}\, \textrm{eV}\,\textrm{m}$. With this we find that we can localize the beam particles to a precision of $$ \Delta x \gtrsim \frac{\hbar}{\Delta p_t} \approx 10^{-10}\,\textrm{m}$$ in the transverse plane without running into any difficulties with the uncertainty principle. This is significantly larger than a proton (whose radius is approximately $1\,\textrm{fm}$), but that's where the other answers complete the picture.
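A quick numeric check of this estimate (a Python sketch; every input is a value quoted above):

```python
# Transverse localization allowed by the uncertainty relation,
# using the numbers quoted in the text.
hbar = 2e-7              # eV*m (hbar*c with c = 1)
p = 13e12                # eV, LHC momentum scale
dp = 1e-6 * p            # ppm momentum precision -> ~1e7 eV
half_angle = 285e-6 / 2  # rad, half the CMS crossing angle (~0.014%)

dp_t = half_angle * dp   # transverse momentum uncertainty, ~1e3 eV
dx = hbar / dp_t         # localization limit in the transverse plane

print(f"dp   ~ {dp:.1e} eV")
print(f"dp_t ~ {dp_t:.1e} eV")
print(f"dx  >= {dx:.1e} m")  # ~1e-10 m, vs. a proton radius of ~1e-15 m
```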

To relate this number to reaction probabilities, I have to expand a bit: what is the task in a particle physics calculation? Typically, we set up two colliding beams. So we have a certain number of particles (of a certain kind, with a certain energy, polarization, you name it) passing through a certain area per unit time. This is called the luminosity $L$, with $[L] = \textrm{cm}^{-2}\,\textrm{s}^{-1}$. What we want to know from the fundamental laws of physics, and what we want to compare to the data, are the numbers of reactions of a certain type per unit time, which are proportional to the luminosity since we can assume the reactions are independent. The proportionality constant is called the cross section $\sigma_X$ ("sigma"), and this is the quantity we actually have to calculate: $$ \frac{\textrm{d}N}{\textrm{d}t} = L\cdot\sigma_X. $$

We see that the interesting quantity $\sigma_X$ has the dimensions of an area. Given its additive nature between different reactions, we can think of $\sigma_X$ as the effective transverse size of the beam particle, if it happens to undergo reaction $X$. So in this sense interacting particles aren't actually point-like: they have an area which depends on the specific reaction. To relate this to the previous number: a relatively rare process such as Higgs production at the LHC has a cross section of approximately $20\cdot 10^{-40}\,\textrm{m}^2$, which corresponds to a length scale of $10^{-20}\,\textrm{m}$.
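As a usage example of $\frac{\textrm{d}N}{\textrm{d}t} = L\cdot\sigma_X$ (the instantaneous luminosity of $10^{34}\,\textrm{cm}^{-2}\,\textrm{s}^{-1}$ is an assumed, LHC-design-scale figure supplied for illustration; only the cross section comes from the text):

```python
# Event-rate estimate dN/dt = L * sigma_X.
L = 1e34                 # cm^-2 s^-1, assumed LHC-scale luminosity
sigma_higgs = 20e-40     # m^2, Higgs cross section quoted above
sigma_cm2 = sigma_higgs * 1e4   # 1 m^2 = 1e4 cm^2

rate = L * sigma_cm2     # Higgs bosons per second
print(f"dN/dt ~ {rate:.2f} per second")
print(f"      ~ {rate * 86400:.0f} per day of stable beams")
```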

Now you may ask: how can reactions happen when the length scales are so different? That's where large numbers come into play: for each particle, we cannot know its transverse coordinate better than $10^{-10}\,\textrm{m}$, but give me lots of particles focused to this precision, and one pair in $10^{10}\cdot 10^{10}$ will be within the cross-sectional area (the ratio of the two areas is $(10^{-20}\,\textrm{m}/10^{-10}\,\textrm{m})^2 = 10^{-20}$). The same applies to the actual beam spot sizes used in experiments.
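And the same large-numbers estimate in code (the two length scales are the ones derived above):

```python
# Chance that one particular pair lands within the cross-sectional area.
dx = 1e-10    # m, transverse localization from the uncertainty estimate
r_x = 1e-20   # m, length scale of the Higgs cross section

p_pair = (r_x / dx) ** 2    # ratio of areas = 1e-20
print(f"p ~ {p_pair:.0e}  ->  ~{1 / p_pair:.0e} candidate pairs per reaction")
```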


To get particles to actually collide in a collider, many, many particles are formed into a high-speed beam that is separated into clumps (bunches) circulating one way around the collider, while other particles circulate similarly in the opposite direction. When both beams have been given the right amount of energy, they are aimed at one another so the clumps intersect inside a sensor array that detects the products of any collisions taking place there.

This process involves millions upon millions of particles each time the clumps are steered together, and the collisions are set up in this way millions upon millions of times. This means the experimenters rely on probability to furnish enough collision opportunities to make the experiment worthwhile, even though in any given crossing they have no precise control over, or knowledge of, the position of every single particle in the beam clumps as they pass through the detector.

Instead, they rely on the detector to track the products of all the collisions that do occur, as those products get knocked out of the beam and spray outwards. The trajectories of those new particles can be traced backwards to infer the location of each collision and, among other things, to verify that the collision products actually originated inside the detector and were not just background noise that the detector array responded to.