What is the highest achievable update rate for a civilian GPS receiver?

The constraining factor is the lowpass filtering after despreading. If we assume a noise power density of -204 dBW/Hz (~290 K, i.e. ~17 °C noise temperature), we can only allow around 25 kHz of noise bandwidth before the integrated noise power reaches the nominal L1 signal power of -160 dBW. Our integration time must therefore be at least 1/25,000 s (40 µs) to detect the signal against the noise background (assuming an omnidirectional antenna). This is the theoretical limit for a full-strength signal.
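To make the arithmetic concrete, here is a minimal sketch of that link budget in Python (the 290 K noise temperature, 25 kHz bandwidth, and -160 dBW signal power are the assumed figures from above, not measurements):

```python
import math

k = 1.380649e-23                           # Boltzmann constant, J/K
T_noise = 290.0                            # assumed noise temperature, K (~17 C)
N0_dBW_Hz = 10 * math.log10(k * T_noise)   # ~ -204 dBW/Hz

P_L1_dBW = -160.0                          # nominal received L1 C/A signal power
B = 25e3                                   # noise bandwidth, Hz
N_dBW = N0_dBW_Hz + 10 * math.log10(B)     # noise power in that bandwidth

print(f"N0     = {N0_dBW_Hz:6.1f} dBW/Hz")
print(f"Noise  = {N_dBW:6.1f} dBW in {B/1e3:.0f} kHz")  # ~ -160 dBW
print(f"Signal = {P_L1_dBW:6.1f} dBW")
print(f"Minimum integration time = {1/B*1e6:.0f} us")   # 40 us
```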

The product of integration time $T$ and tracking loop bandwidth $B_n$ must be significantly less than unity for the loop to be stable, so at most 25 kHz of loop bandwidth is possible (in real-world receivers, you will often find $T = 10^{-3}\,\mathrm{s}$ and $B_n \le 18\,\mathrm{Hz}$). The relative timing of the received signal and the local replica can only change meaningfully at a rate of $B_n/2$, making more frequent position fixes useless.
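A quick sanity check of the $T \cdot B_n \ll 1$ rule with the numbers above (purely illustrative):

```python
# Real-world values quoted above:
T, Bn = 1e-3, 18.0
print(f"T*Bn = {T*Bn:.3f}  (well below 1, so the loop is stable)")

# Theoretical extreme from the link budget: T = 40 us, Bn up to 25 kHz.
# Fixes only carry new information at roughly Bn/2:
Bn_max = 25e3
print(f"Upper bound on useful fix rate ~ Bn/2 = {Bn_max/2:.0f} Hz")
```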

You can cheat by using a directional antenna, but in order to compute azimuth and elevation, your antenna's position needs to be fixed, and that kind of contradicts the purpose of a navigation system.

Now back to reality: shortening the integration period makes the position fixes noisier. Given the link budget of an off-the-shelf unit, more than 50 fixes/s is a waste; unless you have a really strong signal, all you get is (phase) noise. And there's a high computational burden; it will eat battery like hell.
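To see why short integration gets noisy: coherent integration improves the post-correlation SNR in proportion to $T$. A rough sketch, assuming the nominal C/N0 of 44 dB-Hz implied by the figures above:

```python
import math

cn0_dBHz = 44.0   # -160 dBW signal over -204 dBW/Hz noise density
for T in (40e-6, 1e-3, 20e-3):
    snr_dB = cn0_dBHz + 10 * math.log10(T)   # coherent integration gain
    print(f"T = {T*1e3:5.2f} ms -> post-correlation SNR = {snr_dB:5.1f} dB")
```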


A GPS receiver operates by maintaining an internal software "model" of the receiver's position (and derivatives of the position). A Kalman filter is typically used to keep this model in sync with reality, based on raw data coming from the satellites.

The signal from each satellite is normally integrated for 20 ms at a time, because this is the bit period of the PSK data coming from the satellite. This means that the model gets a raw update on the distance from each satellite 50 times a second. However, note that the updates from different satellites are essentially asynchronous (they don't all occur at the same time), because the difference in path length between a satellite overhead and one on the horizon also corresponds to a delay on the order of 20 ms. As each new satellite measurement comes in, the internal model is updated with the new information.
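A toy illustration of this update scheme: a one-dimensional constant-velocity Kalman filter whose state is predicted forward and corrected each time a single asynchronous measurement arrives. A real receiver estimates 3-D position, velocity, and clock terms from pseudoranges; all numbers here are made up:

```python
import numpy as np

x = np.array([0.0, 0.0])           # state: [position m, velocity m/s]
P = np.diag([100.0**2, 10.0**2])   # initial state covariance
q = 1.0                            # process noise spectral density
r = 5.0**2                         # measurement variance (5 m sigma)
t_last = 0.0

def measurement_update(t, z):
    """Predict the model to time t, then fuse one scalar range-like
    measurement z (each satellite's update is applied as it arrives)."""
    global x, P, t_last
    dt = t - t_last
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity model
    Q = q * np.array([[dt**3/3, dt**2/2],
                      [dt**2/2, dt]])                # process noise
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    H = np.array([[1.0, 0.0]])                       # measure position only
    K = P @ H.T / (H @ P @ H.T + r)                  # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                # correct
    P = (np.eye(2) - K @ H) @ P
    t_last = t

# Updates from different satellites arrive asynchronously:
for t, z in [(0.020, 12.0), (0.027, 12.3), (0.035, 12.1), (0.040, 12.6)]:
    measurement_update(t, z)
    print(f"t = {t*1e3:4.0f} ms  pos = {x[0]:6.2f} m  vel = {x[1]:6.2f} m/s")
```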

When the GPS receiver puts out an update message, the data in the message comes from the model. The receiver can update the model as often as it likes, and output position messages as often as it likes, too. However, the result is simple interpolation — no new information is contained in the extra output messages. The information bandwidth is constrained by the rate at which the raw satellite measurements are fed to the filter.
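Continuing the toy filter above: producing output faster than the measurements arrive amounts to running only the predict step, so the extra messages are extrapolations of the same state (the output times are hypothetical):

```python
def predicted_position(t):
    """Extrapolate the model to time t without adding new information."""
    dt = t - t_last
    return x[0] + x[1] * dt        # the covariance would only grow here

for t_out in (0.045, 0.050, 0.055):   # 200 Hz output between raw updates
    print(f"t = {t_out*1e3:4.0f} ms  reported pos = {predicted_position(t_out):6.2f} m")
```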

As Andreas notes, having a high output message rate does NOT mean that you can track higher receiver dynamics. If you must track high receiver dynamics, you must use other sources of information such as an IMU. In a "tightly-coupled" system, the IMU data updates the same internal model that the GPS receiver is using, which allows the IMU to "assist" the tracking of the individual GPS signals.
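A rough sketch of what "tightly-coupled" means for the same toy model: between satellite updates, measured IMU acceleration propagates the shared state instead of the constant-velocity guess (hypothetical interface; covariance propagation and IMU bias estimation are omitted for brevity):

```python
def imu_propagate(t, accel):
    """Advance the shared filter state using a measured acceleration (m/s^2)."""
    global x, t_last
    dt = t - t_last
    x = np.array([x[0] + x[1] * dt + 0.5 * accel * dt**2,
                  x[1] + accel * dt])
    t_last = t

imu_propagate(0.060, 3.0)    # 3 m/s^2 of real dynamics between GPS updates
print(f"pos = {x[0]:.2f} m  vel = {x[1]:.2f} m/s")
```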

There's also an economic side to the question. Most "civilian" GPS receivers are highly cost-constrained, so only enough CPU power (and battery power) is provided to meet the update-rate requirements of the application at hand (e.g., car or cellphone navigation). An update rate of once a second (or less) is more than enough for most such applications. "Military" applications that need higher update rates have bigger budgets for materials and power, and the receivers are priced accordingly, even though the actual receiver hardware is essentially the same, with the possible exception of a more powerful CPU.

Tags: gps, gnss