The famous drop of $c$

  1. The speed of light was defined at its present value in 1983, not 1972.
  2. We would know if $c$ had changed because the fine structure constant $\alpha\propto 1/c$, and we have better ways of determining $\alpha$ than of determining $c$ directly (see the expression after this list).
    1. Not actually true: we cannot determine whether physical constants change, cf. this Physics.SE Q&A.
  3. "Official" science uses error bars when measuring things, while "unofficial" scientists ignore these crucial components.

[Plot: historical measurements of the speed of light over time, with their reported error bars]

(based on data from Wikipedia and Henrion & Fischhoff 1986 (NB: PDF)). The relevant section of Henrion & Fischhoff reads,

A related measure [to the chi-squared statistic], the Birge ratio, $R_B$, assesses the compatibility of a set of measurements by comparing the variability among experiments to the reported uncertainties. It may be defined as the standard deviation of the normalized residuals: $$R_B^2=\sum_i h_i^2/(N-1)$$ Alternatively, the Birge ratio may be seen as a measure of the appropriateness of the reported uncertainties... If $R_B$ is much greater than one, then one or more of the experiments has underestimated its uncertainty and may contain unrecognized systematic errors... If $R_B$ is much less than one, then either the uncertainties have, in the aggregate, been overestimated or the errors are correlated.
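
For concreteness, here is a minimal sketch of how such a Birge ratio can be computed from a set of measurements and their reported uncertainties. It assumes the conventional choice $h_i=(x_i-\bar{x})/\sigma_i$ with $\bar{x}$ the inverse-variance-weighted mean; the paper's exact conventions may differ in detail, and the numbers in the demo are invented purely for illustration.

```python
import numpy as np

def birge_ratio(values, sigmas):
    """Birge ratio R_B of measurements `values` with reported 1-sigma
    uncertainties `sigmas`, using the conventional choice of normalized
    residuals taken about the inverse-variance-weighted mean."""
    x = np.asarray(values, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    w = 1.0 / s**2                      # inverse-variance weights
    x_bar = np.sum(w * x) / np.sum(w)   # weighted mean, used as reference value
    h = (x - x_bar) / s                 # normalized residuals h_i
    return np.sqrt(np.sum(h**2) / (len(x) - 1))

# Invented numbers, purely for illustration: the scatter between these three
# "measurements" exceeds their reported error bars, so R_B comes out > 1.
print(birge_ratio([299792.5, 299793.1, 299792.2], [0.2, 0.2, 0.2]))  # ~2.3
```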

According to Henrion & Fischhoff, the Birge ratio in the range 1875-1941 was 1.47, while the range 1947-1958 had a ratio of 1.32; the combined ranges give $R_B = 1.42$. This means that pretty much all of the data taken prior to the 1960s did not account for errors correctly. Since then, we have improved (a) our experiments, reducing the errors, and (b) our ability to correctly account for errors.
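
A quick gloss on these numbers (mine, not the paper's): each $h_i$ scales as $1/\sigma_i$, so multiplying every reported uncertainty by a common factor $R_B$ brings the ratio back to $1$; $R_B = 1.42$ therefore means the pre-1960s error bars would, in aggregate, have had to be roughly 40% larger to be consistent with the observed scatter.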


We fixed the speed of light by definition in 1972.

Already by 1960, J.L. Synge (Relativity: The General Theory, Ch. III §2) taught:

"For us time [or rather, duration] is the only basic measure. Length (or distance [or, indeed, quasi-distances]), in so far as it is necessary or desirable to introduce it, is strictly a derived concept, and will be dealt with later in that spirit."

In 1983, on the other hand, the 17th CGPM defined (effectively) the SI unit of "speed", i.e. the ratio of SI base units "m/s", as $1/299792458$ of the speed of light (in vacuum). However, the value of the speed of light (in vacuum) itself is of course unaffected by any particular definition of units.
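
Spelled out (this merely restates the 1983 definition): the metre is the length of the path travelled by light in vacuum during $1/299\,792\,458$ of a second, so that, exactly and by construction,

$$c = 299\,792\,458\ \mathrm{m/s}.$$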

It [the speed of light (in vacuum)] might still change,

That appears doubtful. As long as reference is made to the same definition of "light (in vacuum)", and as long as "length" is strictly understood as a derived measure (with the same derivation or definition used consistently), the notion "speed of light (in vacuum)" plainly remains unchanged.

A note regarding the already published answer by Kyle Kanos:
Of course, the speed of light (in vacuum) being constant (by definition of "length", and thus of "speed") does not preclude the electromagnetic coupling (referenced to vacuum) of some particular given charged particles from being found to have a different value in different trials;
nor, for instance, the length of some particular given "platinum-iridium bar" from being found to have a different value in different trials.

And how does official science really explain the famous drop in the measurements of $c$?

My own assessment (which is hereby public and open for comments/responses):
It appears doubtful that people who claim to have "measured the speed of light (in vacuum)" (rather than, for instance, having measured distances between different identifiable parts of experimental equipment, or whether these were at rest with respect to each other in the first place; or having measured the index of refraction in a particular experimental region) were able to assign any finite range of systematic uncertainty (or confidence intervals) to their reported results at all. Thus any such "drop" appears insignificant; and one may not strictly speak of such reports as "measurements of $c$" in the first place.