Differences between gravitational wave detectors

All three types of direct gravitational wave (GW) detectors, pulsar timing arrays, space-based interferometers, and terrestrial interferometers, use the same basic principle: measure the change in distance between two objects caused by a passing GW. The amplitude of a GW is characterized by the strain $h = \Delta L / L$, the change in length divided by the total length.

The key difference between the three experiments is the $L$. LIGO and other terrestrial interferometers like Virgo and KAGRA are kilometer scale, with arm lengths of $L\sim 10^3$ m. LISA, a proposed space-based interferometer in an Earth-trailing solar orbit, has a planned arm length on the gigameter scale, $L\sim 10^9$ m. Pulsar timing arrays (PTAs) like the International Pulsar Timing Array (IPTA) monitor the distances between the solar system and millisecond pulsars in our galaxy. Typical PTA distances are kiloparsecs, $L\sim 10^{19}$ m.

Since strain is $\Delta L/L$, each experiment needs a different $\Delta L$ sensitivity to measure the same strain. With longer arms you can measure a much smaller strain, but only if you can achieve the same $\Delta L$ sensitivity.
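Just to put numbers on it (the strain value and arm lengths below are representative round numbers, not official specs), here's the $\Delta L$ each experiment would need to resolve to measure the same strain:

```python
# Rough illustration: the displacement sensitivity Delta L needed to
# measure the same strain h with different arm lengths L.
# The numbers are representative orders of magnitude, not official specs.

h = 1e-20  # example strain amplitude (illustrative only)

arm_lengths_m = {
    "LIGO (terrestrial)": 4e3,    # ~kilometer-scale arms
    "LISA (space-based)": 2.5e9,  # ~gigameter-scale arms
    "PTA (Earth-pulsar)": 1e19,   # ~kiloparsec-scale baselines
}

for name, L in arm_lengths_m.items():
    dL = h * L  # Delta L = h * L
    print(f"{name}: L = {L:.1e} m -> Delta L = {dL:.1e} m")
```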

Noise

The limiting factor for any detector is the random noise that competes with the signals you want to detect. Each of the three experiments has different limitations affecting the smallest $\Delta L$ they can observe. The noise level is different at each possible GW frequency, so the noise dictates which GW frequencies a particular experiment can detect.

These limitations are summarized in this plot of GW sensitivity curves from http://gwplotter.com/.

[Figure: GW sensitivity curves and sources]

The black curves show the strain sensitivity of each experiment. Any GW source producing a strain greater than the curve is detectable.

Ground- and space-based interferometers

The interferometers, both ground- and space-based, have the same sorts of noise limitations.

Notice how the slope of the black line for LIGO and LISA is the same at the right end of each curve? This is because each experiment is limited by photon shot noise at high GW frequency. Basically, it comes down to how many photons you can catch as one period of the GW passes. Higher frequency GWs give you less time to collect photons, so you get fewer of them and thus a less accurate measure of the distance. You can combat this effect by starting with more photons, i.e. using a higher power laser. That is one of the improvements made during the upgrade from initial to Advanced LIGO.
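Here's a back-of-the-envelope sketch of that photon counting argument. The laser power and wavelength are assumed values, chosen just to show the scaling:

```python
import math

# How many photons arrive during one GW period, and how the Poisson
# counting error 1/sqrt(N) grows with GW frequency.
# Power and wavelength are assumed values for illustration.

h_planck = 6.626e-34   # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
wavelength = 1.064e-6  # laser wavelength, m
power = 1.0            # laser power reaching the photodetector, W (assumed)

energy_per_photon = h_planck * c / wavelength  # ~1.9e-19 J

for f_gw in (10.0, 100.0, 1000.0):  # GW frequencies in Hz
    period = 1.0 / f_gw              # time available to collect photons
    n_photons = power * period / energy_per_photon
    fractional_error = 1.0 / math.sqrt(n_photons)  # Poisson counting error
    print(f"f_GW = {f_gw:6.0f} Hz: N = {n_photons:.2e} photons, "
          f"1/sqrt(N) = {fractional_error:.2e}")
```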

A further contribution to photon shot noise is that the laser beam spreads out as it travels, so fewer of the initially emitted photons hit the final detector. As an example, a $1$ micron wavelength laser with an emitted beam width of $1$ cm will spread out to a radius of roughly $100$ km over LISA's gigameter arms. That's an enormous loss of power. For the same GW frequency LISA can collect far fewer photons, so it is much less sensitive to high frequency GWs than LIGO.
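A rough diffraction estimate shows how bad the spreading gets. The receiving telescope diameter and arm length are assumed round numbers:

```python
# Order-of-magnitude estimate of diffraction losses over a LISA-like arm.
# Divergence angle ~ lambda / D for an aperture of diameter D; the received
# power fraction is roughly (receiver area) / (spot area at the far end).
# All numbers are assumed round values for illustration.

wavelength = 1.0e-6   # m, ~1 micron laser
d_emit = 0.01         # m, 1 cm emitted beam width (as in the text)
d_receive = 0.3       # m, assumed receiving telescope diameter
arm_length = 1.0e9    # m, gigameter-scale arm

theta = wavelength / d_emit       # diffraction divergence angle, rad
spot_radius = theta * arm_length  # beam radius at the far spacecraft, m
power_fraction = (d_receive / (2 * spot_radius)) ** 2  # photons caught

print(f"divergence angle ~ {theta:.1e} rad")
print(f"spot radius      ~ {spot_radius / 1e3:.0f} km")  # ~100 km
print(f"power fraction   ~ {power_fraction:.1e}")         # tiny!
```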

At low GW frequencies the two interferometers are limited by acceleration noise of their test masses. Basically, non-GW forces cause the masses to bounce around.

For LIGO the limiting factor is seismic motion. People sometimes refer to the steep slope at the low frequency end of LIGO's sensitivity curve as the "seismic wall". The terrestrial detectors put in Herculean effort to achieve their current levels of seismic isolation, but to observe lower and lower GW frequencies, at some point you just have to get off the Earth. The spike in LIGO's sensitivity curve is due to a mechanical resonance in the seismic isolation system: small vibrations at that frequency are amplified, effectively blinding LIGO at that particular frequency.

In space you don't have seismic motion to compete with, but other effects can still shake your test masses, particularly electromagnetic couplings between the test mass and the spacecraft that shields it, which cause low frequency noise. LISA's low frequency slope is much more gradual because space provides a much cleaner low frequency environment.

LIGO can't detect low GW frequencies because of seismic motion, and LISA can't detect high GW frequencies because it has too few photons to count.

Pulsar timing arrays

To use a PTA to detect GWs you need to compare the expected time of arrival of a radio pulse from a pulsar with its actual time of arrival. If the radio telescope and the pulsar were perfectly at rest with respect to one another, the radio pulses were emitted perfectly regularly, and the pulses traveled through a perfect vacuum, this would be easy. In practice it's not that easy.

The Earth is moving around the Sun, and many millisecond pulsars have binary companions. The center of mass of the solar system is also moving through the galaxy relative to the center of mass of the pulsar system. The model for the expected time of arrival of the pulses needs to take all of this into account. The pulses also propagate through the interstellar medium, which slightly changes the speed of the radio waves. The interstellar medium is itself moving, so this dispersion effect changes over time. Finally, the intrinsic brightness of a pulsar affects one's ability to accurately measure the time of arrival of a pulse.
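As one concrete example of these effects, the interstellar dispersion delay can be estimated with the standard cold-plasma formula, $\Delta t \approx 4.15\,\mathrm{ms}\,(\mathrm{DM}/\mathrm{pc\,cm^{-3}})\,(f/\mathrm{GHz})^{-2}$. The dispersion measure and observing frequencies below are assumed example values:

```python
# Interstellar dispersion delays a radio pulse by an amount that depends on
# the observing frequency and the integrated electron column density along
# the line of sight (the dispersion measure, DM). If DM changes over time,
# the arrival times shift. DM and frequencies below are example values.

K_DISP_MS = 4.15  # dispersion constant, ms GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm_pc_cm3: float, freq_ghz: float) -> float:
    """Delay (in ms) of a pulse at freq_ghz relative to infinite frequency."""
    return K_DISP_MS * dm_pc_cm3 / freq_ghz**2

dm = 50.0  # pc cm^-3, assumed value for a pulsar a few kpc away
for f in (0.8, 1.4, 3.0):  # GHz observing bands
    print(f"{f:.1f} GHz: delay = {dispersion_delay_ms(dm, f):.1f} ms")
```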

This is not to say that PTAs don't work; they do. They just present a fundamentally different noise problem than the interferometers. Luckily, people much smarter than me have been working on it for years.

The sensitivity curve for the IPTA in the plot is not terribly detailed, but it does show two important limitations (although there are others not shown).

At the low frequency end it goes straight up. This represents the finite length of the observations: to measure a signal with a period of one year, you need to watch for at least one year. Since PTAs have only been gathering dedicated, high-precision pulsar timing data systematically for about 15 years (NANOGrav's data go back to 2004), there is a hard low frequency cutoff at $f \sim 1/(15\,\mathrm{yr}) \approx 2\times 10^{-9}$ Hz (the second "N" in NANOGrav stands for nanohertz).
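A quick sketch of the frequency band this implies, using a Nyquist-style limit for the high end (the observing span and cadence are assumed round numbers):

```python
# The lowest GW frequency a PTA can probe is set by the total observing span,
# and the highest by how often the pulsars are observed (Nyquist-style limit).
# Span and cadence below are assumed round numbers.

SECONDS_PER_YEAR = 3.156e7

t_span_yr = 15.0     # total observing span in years (assumed)
cadence_days = 14.0  # roughly biweekly observations (assumed)

f_low = 1.0 / (t_span_yr * SECONDS_PER_YEAR)
f_high = 1.0 / (2.0 * cadence_days * 86400.0)

print(f"low-frequency cutoff  ~ {f_low:.1e} Hz")   # ~2e-9 Hz, nanohertz
print(f"high-frequency cutoff ~ {f_high:.1e} Hz")  # ~4e-7 Hz
```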

At the high GW frequency end the slope is determined by white radiometer noise in the radio telescopes that observe the pulsars. This sensitivity curve assumes the models for the expected time of arrival of the pulses are perfect, and that any deviations from the actual times of arrival are caused entirely by measurement uncertainty in the radio telescopes (or GWs). This is similar to photon shot noise. The brightness of the individual pulses and the regularity of their shape are the key factors for this effect.
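A common rule of thumb is that the uncertainty of a single time of arrival scales roughly as the pulse width divided by the signal-to-noise ratio of the folded pulse profile. The numbers below are assumed example values, just to show why bright, sharp pulsars time better:

```python
# Rough scaling for the timing uncertainty of a single time of arrival (TOA):
# sigma_TOA ~ pulse width / (signal-to-noise ratio of the folded profile).
# Brighter pulsars and sharper pulses give more precise arrival times.
# The numbers are assumed example values.

def toa_uncertainty_us(pulse_width_us: float, profile_snr: float) -> float:
    return pulse_width_us / profile_snr

# A bright pulsar with a sharp pulse vs a fainter one with a broader pulse.
print(f"bright, sharp pulsar: sigma ~ {toa_uncertainty_us(100.0, 500.0):.2f} us")
print(f"faint, broad pulsar:  sigma ~ {toa_uncertainty_us(500.0, 20.0):.2f} us")
```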

In reality the expected time of arrival predictions are not perfect, so we would expect the sensitivity to bottom out and curve up again, like the LIGO and LISA curves, before hitting the low frequency cutoff. Mismodeling the motion of the Earth or the pulsar is equivalent to having an unknown force shaking the detector test masses, adding low frequency noise.

This paper by Hazboun et al. does the messy work of calculating a much more realistic sensitivity curve for NANOGrav's 11-year dataset. It takes into account the individual timing model details and noise properties of more than 30 pulsars.

[Figure: NANOGrav 11-year sensitivity, Hazboun et al. 2019, figure 15]

The green curve here is a more realistic version of the black IPTA curve in the original sensitivity plot above. The spike in the curve occurs at a frequency of $f = 1\,\mathrm{yr}^{-1}$. The Earth's motion around the Sun limits a PTA's ability to measure that particular frequency.

Sources of GWs

Since each experiment targets a different GW frequency band, each one has different potential sources. The second part of that sensitivity plot shows the expected strain from those sources. It doesn't matter that the experiments are not all equally sensitive, because their sources produce different strains. In particular, the lower frequency binary sources are more massive, and so they are louder.
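A rough comparison using the leading-order strain amplitude of a circular binary, $h \approx \frac{4}{d}\,\frac{(G\mathcal{M})^{5/3}(\pi f)^{2/3}}{c^4}$, where $\mathcal{M}$ is the chirp mass, makes the point. The source parameters below are representative values, not measured systems, and order-unity orientation factors are ignored:

```python
import math

# Leading-order GW strain amplitude from a circular binary,
#   h ~ (4/d) * (G*Mc)**(5/3) * (pi*f)**(2/3) / c**4,
# where Mc is the chirp mass, f the GW frequency, and d the distance.
# Orientation-dependent factors of order unity are ignored, and the
# source parameters are representative values, not measured systems.

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 3.0e8         # m/s
M_SUN = 1.989e30  # kg
MPC = 3.086e22    # m

def strain(chirp_mass_msun: float, f_gw_hz: float, dist_mpc: float) -> float:
    mc = chirp_mass_msun * M_SUN
    d = dist_mpc * MPC
    return 4.0 / d * (G * mc) ** (5 / 3) * (math.pi * f_gw_hz) ** (2 / 3) / C**4

# Stellar-mass binary in the LIGO band vs a supermassive binary in the PTA band.
print(f"30 Msun binary,  f = 100 Hz,  d = 400 Mpc:  h ~ {strain(30, 1e2, 400):.1e}")
print(f"1e9 Msun binary, f = 10 nHz,  d = 1000 Mpc: h ~ {strain(1e9, 1e-8, 1000):.1e}")
```

The heavier, lower frequency source comes out several orders of magnitude louder in strain, which is why PTAs can get away with much poorer $\Delta L$ sensitivity than the interferometers.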