Is a "shift in the meaning" of Accuracy and Precision occurring?

The "shift in the meaning" refers to some attempts to reinterpret the terminology that were made by a metrological document, ISO 5725, in 2008. That may be described as a bureaucratic effort by a few officials – really bureaucrats of a sort – and as far as I know, the "shift in the meaning" hasn't penetrated to the community of professionals. The people behind the standards have certain limited tools to assure that their new interpretation of the words would be taught to students of physics and engineering but I don't think that they are succeeding so far.

Before 2008, people would agree that "precision" refers to the typical spread between individual repeated measurements: the precision is good if the "statistical error" is low, i.e. if the measurements reproducibly yield "many significant figures" for the result.

"Precision" didn't – and still doesn't – discuss whether the results of the measurements are actually clustered around the true value. They may be separated by a "systematic error" – which goes in the same direction in each repetition of the measurement and can't be removed by averaging many measurements. The absence (or low magnitude) of this "true" systematic error – the difference between the true value and the average of many measurements – would be summarized by the word "accuracy".

One wants results that are both "precise" and "accurate" in the senses above, and a word like "valid" or "satisfactory" – some similarly neutral non-technical word – would be reserved for results that achieve both.

The "shifted" 2008 proposal is to use the word "accurate" for what would previously be called "valid" etc. – i.e. for measurements that suppress both kinds of errors, systematic and statistical, i.e. for measurements that are both accurate (in the pre-2008 sense) and precise (in the pre-2008 sense which is the same as post-2008 sense).

Even if this "shift" succeeded in the language of professionals, it won't make much difference. The reason is that the word "accurate" has pretty much implicitly included "precise" even before 2008. If you make a small number of measurements (repetitions of a measurement) and you want to determine whether the measurements are "accurate", you have to calculate the difference between the true value and the measured values. But if you have just one or several measured values, the difference is affected by the "statistical error", anyway, so you can't quantify the "systematic error" well, anyway. To be sure that the systematic error is low, after just a few measurements, the statistical error has to be low, too (the precision has to be good).

So whether the word "accuracy" already included "precision" before 2008 is debatable. All these things are just changes at the level of language. When one is quantitative, things have to be described by actual quantities – systematic errors and statistical errors – and nothing was changed about the meaning of those quantities in the 2008 document.
