Why is the carbon dating limit only 40,000 years?

Carbon-14 makes up about 1 part per trillion of the carbon atoms around us, and this proportion remains roughly constant due to the continual production of carbon-14 by cosmic rays. The half-life of carbon-14 is about 5,700 years, so if we measure the proportion of C-14 in a sample and find it's half a part per trillion, i.e. half the original level, we know the sample is around one half-life, or 5,700 years, old.

So by measuring the C-14 level we work out how many half-lives old the sample is, and therefore how old it is. The trouble is that after 40,000 years less than 1% of the original C-14 is left, and it becomes too hard to measure accurately. This isn't a fundamental limit, as more accurate measurements could reach further back, but at some point you'd simply run out of C-14 atoms. With current equipment, 40,000-50,000 years is about the limit.
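The "count half-lives backwards" idea above can be sketched in a few lines of Python. The function name and the 5,730-year half-life figure are just illustrative choices, not anything standardised:

```python
import math

HALF_LIFE = 5730.0  # carbon-14 half-life in years (approximate)

def age_from_fraction(f):
    """Age of a sample whose measured C-14 level is a fraction f of the
    original atmospheric level: t = t_half * log2(1/f)."""
    return HALF_LIFE * math.log2(1.0 / f)

# Half the original level -> about one half-life old
print(age_from_fraction(0.5))    # 5730.0 years
# Under 1% of the original level -> near the ~40,000-year practical limit
print(age_from_fraction(0.008))  # roughly 40,000 years
```

Note how slowly the age grows with each further halving of the C-14 level: going from 50% to 0.8% remaining only takes you from ~5,700 to ~40,000 years, while the signal shrinks by a factor of 60.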


There is no exact date beyond which carbon-14 decay is or is not useful. However, given that the half-life of carbon-14 is 5730 years, there really isn't much carbon-14 left in a sample that is 40,000 years old. The decay constant is $\lambda = \ln 2/t_{1/2}$, so the fraction of carbon-14 remaining is $\exp[-\lambda t]$, which, for $t =$ 40,000 years, is $0.79\%$.
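That $0.79\%$ figure follows directly from the decay law; a minimal numerical check (Python, variable names are arbitrary):

```python
import math

t_half = 5730.0                # carbon-14 half-life in years
lam = math.log(2) / t_half     # decay constant lambda = ln2 / t_half, per year
frac = math.exp(-lam * 40000)  # fraction of C-14 left after 40,000 years
print(f"{100 * frac:.2f}%")    # prints 0.79%
```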

Of course, these small traces probably could be found with modern techniques, with some uncertainty, but then you have to factor in systematic uncertainties, for example those associated with present-day contamination (the air contains carbon-14!). Any small uncertainty in the measurements, in the amount of contamination, or in any other source of small error (such as fluctuations in the naturally occurring $^{14}$C/$^{12}$C ratio) could easily be magnified into a huge age error in an old sample with a very small amount of carbon-14 present.

For example, let's say you can measure the $^{14}$C/$^{12}$C ratio to be $f \pm \delta f$ (in a system of units where the original ratio was expected to be 1). Crudely speaking, what you do next is extrapolate a decay curve back in time to see how long ago the sample would have had $f=1$. Thus $$ f = \exp[-\lambda \tau]$$ $$ \ln f = -\lambda \tau$$ $$ \frac{\delta f}{f} = |-\lambda\, \delta\tau |$$ $$ \delta \tau = \frac{t_{1/2}}{\ln 2} \frac{\delta f}{f}$$

So say your ability to measure $f$ is limited to $\pm 0.02$ because of potential contamination or other complications; then $$ \delta \tau = \frac{165}{f}\ {\rm years} \tag{1}$$

If $f=0.5$ (i.e. something that is just 5730 years old), then your uncertainty would be a perhaps tolerable $\pm 330$ years.

However, if $f=0.0079$ (for 40,000 years old), then the uncertainty would be a less-than-useful $\pm 20,800$ years.
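Both numbers come straight from the propagation formula $\delta\tau = (t_{1/2}/\ln 2)\,(\delta f/f)$. A quick sketch reproducing them (the function name is just illustrative; small rounding differences from the quoted figures are expected):

```python
import math

t_half = 5730.0  # carbon-14 half-life in years
df = 0.02        # assumed measurement uncertainty on the ratio f

def age_uncertainty(f, df=df):
    """Propagated age uncertainty: dtau = (t_half / ln 2) * (df / f)."""
    return (t_half / math.log(2)) * (df / f)

print(age_uncertainty(0.5))     # ~330 years for a 5730-year-old sample
print(age_uncertainty(0.0079))  # ~21,000 years for a 40,000-year-old sample
```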

In fact, the latter example is worse (more asymmetric) than that, because formula (1) is not valid when $\delta f > f$. In reality, the measurement is consistent with anywhere from no carbon-14 at all (and so an infinite age) to $f \sim 0.028$, which would imply $\tau \sim 30,000$ years.
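The asymmetric limits can be seen by converting each end of the measured interval $f \pm \delta f$ back into an age (a sketch under the same assumed values of $f$ and $\delta f$ as above):

```python
import math

t_half = 5730.0
lam = math.log(2) / t_half  # decay constant, per year

f, df = 0.0079, 0.02  # assumed measured fraction and its uncertainty

def age(fraction):
    """Invert f = exp(-lambda * tau) to get the age tau."""
    return -math.log(fraction) / lam

# The upper end of the interval (f + df) gives the youngest consistent age:
print(age(f + df))  # roughly 30,000 years
# The lower end (f - df) is negative, i.e. consistent with no C-14 at all,
# so the oldest consistent age is unbounded:
print(f - df < 0)   # True
```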