Where is Unix Time / Official Time Measured?

Your headline question doesn't have a real answer; Unix time isn't a real timescale, and isn't "measured" anywhere. It's a representation of UTC, albeit a poor one because there are moments in UTC that it can't represent. Unix time insists on there being 86,400 seconds in every day, but UTC deviates from that due to leap seconds.

As to your broader question, there are four important timescales of interest:

  1. UT1 (Universal Time), which is calculated from observations made by observatories around the world that measure the rotation of the Earth with respect to the fixed stars. With these observations and a little math, we get a more modern version of the old Greenwich Mean Time, which was based on the moment of solar noon at the Royal Observatory in Greenwich. Universal Time is calculated by an organization called the IERS (the International Earth Rotation and Reference Systems Service, formerly the International Earth Rotation Service).

  2. TAI (International Atomic Time), which is kept by hundreds of atomic clocks around the world, maintained by national standards bodies and such. The keepers of the clocks that contribute to TAI use time transfer techniques to steer their clocks towards each other, canceling out any small errors of individual clocks and creating an ensemble time; that ensemble is TAI, published by the International Bureau of Weights and Measures (BIPM), the stewards of the SI system of units. To answer your question about time dilation, TAI is defined to be atomic time at sea level (actually, at the geoid, which is a fancier version of the same idea), and each clock corrects for the effects of its own altitude.

  3. UTC (Coordinated Universal Time). UTC was set equal to ten seconds behind TAI on 1 January 1972, and since that date it has ticked forward at exactly the same rate as TAI, except when a leap second is added or subtracted. The IERS announces leap seconds in order to keep the difference between UTC and UT1 within 0.9 seconds (in practice, within about 0.6 seconds; an added leap second causes the difference to go from -0.6 to +0.4). In theory, leap seconds can be both positive and negative, but because the rotation of the Earth is slowing down relative to the standard established by the SI second and TAI, a negative leap second has never been necessary and probably never will be.

  4. Unix time, which does its best to represent UTC as a single number. Every Unix time that is a multiple of 86,400 corresponds to midnight UTC. Since not all UTC days are 86,400 seconds long, but all "Unix days" are, there is an irreconcilable difference that has to be patched over somehow. There is no Unix time corresponding to an added leap second. In practice, systems either act as though the previous second occurred twice (with the Unix timestamp jumping backwards one second and then proceeding forwards again), or apply a technique like leap smearing, which warps time for a longer period on either side of a leap second. There is some inaccuracy in either case, although at least the smearing approach is monotonic. In both cases, the amount of time that passes between two distant Unix timestamps a and b isn't b-a; it's b-a plus the number of intervening leap seconds, as the sketch after this list illustrates.
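
As a minimal sketch of that arithmetic in C, assuming a hand-maintained leap second table (the three insertion dates below are real; everything else is illustrative):

    #include <stdio.h>
    #include <time.h>

    /* Unix timestamps at which a positive leap second was inserted (only the last
       three; a real program would keep this table current from the IERS bulletins). */
    static const time_t leap_insertions[] = {
        1341100800,   /* 2012-07-01 */
        1435708800,   /* 2015-07-01 */
        1483228800,   /* 2017-01-01 */
    };

    /* Real (SI) seconds elapsed between Unix timestamps a and b, for a <= b. */
    static long long real_elapsed(time_t a, time_t b)
    {
        long long leaps = 0;
        for (size_t i = 0; i < sizeof leap_insertions / sizeof leap_insertions[0]; ++i)
            if (a < leap_insertions[i] && leap_insertions[i] <= b)
                ++leaps;
        return (long long)(b - a) + leaps;
    }

    int main(void)
    {
        /* One calendar year spanning the 2016-12-31 leap second: 366 Unix days of
           86,400 seconds each, but one extra SI second really elapsed. */
        printf("%lld\n", real_elapsed(1451606400, 1483228800));   /* prints 31622401 */
        return 0;
    }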

Since UT1, TAI, UTC, and the IERS are all worldwide, multinational efforts, there is no single "where". If you want one answer anyway, IERS bulletins are published from the Paris Observatory, and the BIPM is based just outside Paris, in Sèvres. An organization that requires precise, traceable time might state its timebase as something like "UTC(USNO)", meaning that its timestamps are in UTC as realized by the US Naval Observatory. Given the problems mentioned above, though, Unix time is basically incompatible with that level of precision; anyone dealing with really precise time will have an alternative to Unix time.


The adjustments to the clock are co-ordinated by the IERS. They schedule the insertion of a leap second into the time stream as required.

From "The NTP Timescale and Leap Seconds":

The International Earth Rotation Service (IERS) at the Paris Observatory uses astronomical observations provided by USNO and other observatories to determine the UT1 (navigator's) timescale corrected for irregular variations in Earth rotation.

To the best of my knowledge, 23:59:60 (the leap second) and 00:00:00 the next day are considered the same second in Unix Time.
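
One way to check that on a typical system, assuming the non-standard but widely available timegm() extension (glibc, the BSDs) and the ordinary posix/ interpretation of the counter:

    #define _DEFAULT_SOURCE             /* glibc: expose timegm() */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm leap  = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31,
                            .tm_hour = 23,  .tm_min = 59, .tm_sec  = 60 };  /* 2016-12-31 23:59:60 UTC */
        struct tm after = { .tm_year = 117, .tm_mon = 0,  .tm_mday = 1 };   /* 2017-01-01 00:00:00 UTC */

        /* timegm() normalises the out-of-range tm_sec, so on a system using the
           ordinary (posix/) rules both lines should print 1483228800. */
        printf("%lld\n%lld\n", (long long)timegm(&leap), (long long)timegm(&after));
        return 0;
    }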


UNIX time is measured on your computer, running UNIX.

This answer is going to expect you to know what Coördinated Universal Time (UTC), International Atomic Time (TAI), and the SI second are. Explaining them is well beyond the scope of Unix and Linux Stack Exchange. This is not the Physics or the Astronomy Stack Exchange.

The hardware

Your computer contains various oscillators that drive clocks and timers. Exactly what it has varies from computer to computer depending on its architecture. But usually, and in very general terms:

  • There is a programmable interval timer (PIT) somewhere, that can be programmed to count a given number of oscillations and trigger an interrupt to the central processing unit.
  • There is a cycle counter on the central processor that simply counts 1 for each instruction cycle that is executed.

The theory of operation, in very broad terms

The operating system kernel makes use of the PIT to generate ticks. It sets up the PIT to free-run, counting the right number of oscillations for a time interval of, say, one hundredth of a second, generating an interrupt, and then automatically resetting the count to go again. There are variations on this, but in essence this causes a tick interrupt to be raised with a fixed frequency.

In software, the kernel increments a counter every tick. It knows the tick frequency, because it programmed the PIT in the first place. So it knows how many ticks make up a second. It can use this to know when to increment a counter that counts seconds. This latter is the kernel's idea of "UNIX Time". It does, indeed, simply count upwards at the rate of one per SI second if left to its own devices.
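
As a caricature of that bookkeeping in user-space C, with a made-up HZ and starting count, and no resemblance to how a real kernel is structured:

    #include <stdio.h>

    #define HZ 100                      /* assumed tick frequency: 100 interrupts per second */

    static unsigned long ticks;                   /* incremented by the (simulated) tick interrupt */
    static long long unix_seconds = 1700000000;   /* arbitrary starting "seconds since the Epoch" */

    static void tick_interrupt(void)    /* what the real interrupt handler's bookkeeping amounts to */
    {
        ++ticks;
        if (ticks % HZ == 0)
            ++unix_seconds;             /* HZ ticks make one second */
    }

    int main(void)
    {
        for (int i = 0; i < 5 * HZ; ++i)     /* simulate five seconds' worth of tick interrupts */
            tick_interrupt();
        printf("seconds advanced: %lld\n", unix_seconds - 1700000000);
        return 0;
    }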

Four things complicate this, which I am going to present in very general terms.

Hardware isn't perfect. A PIT whose data sheet says that it has an oscillator frequency of N Hertz might instead have a frequency of (say) N.00002 Hertz, with the obvious consequences.

This scheme interoperates very poorly with power management, because the CPU is waking up hundreds of times per second to do little more than increment a number in a variable. So some operating systems have what are known as "tickless" designs. Instead of making the PIT send an interrupt for every tick, the kernel works out (from the low-level scheduler) how many ticks are going to go by with no thread quanta running out, and programs the PIT to count that many ticks into the future before issuing a tick interrupt. It knows that it then has to record the passage of N ticks at the next tick interrupt, instead of 1 tick.
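
Again as a user-space caricature with made-up numbers, the tickless variation amounts to telling the handler how many ticks went by since it last ran:

    #include <stdio.h>

    #define HZ 100                      /* same assumed tick frequency as before */

    static unsigned long long total_ticks;
    static long long seconds;

    static void oneshot_interrupt(unsigned long elapsed_ticks)
    {
        total_ticks += elapsed_ticks;   /* record N ticks at once, not 1 */
        seconds = (long long)(total_ticks / HZ);
    }

    int main(void)
    {
        /* Pretend the scheduler decided that nothing needed to run for 250 ticks,
           then 150, then 100, so the timer was programmed that far ahead each time. */
        oneshot_interrupt(250);
        oneshot_interrupt(150);
        oneshot_interrupt(100);
        printf("%lld seconds have elapsed\n", seconds);   /* 500 ticks -> 5 seconds */
        return 0;
    }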

Application software has the ability to change the kernel's current time. It can step the value or it can slew the value. Slewing involves adjusting the number of ticks that have to go by to increment the seconds counter. So the seconds counter does not necessarily count at the rate of one per SI second anyway, even assuming perfect oscillators. Stepping involves simply writing a new number into the seconds counter, which usually won't happen until 1 SI second after the last second ticked over.
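
A sketch of both kinds of adjustment, using the adjtime() and clock_settime() interfaces mentioned later in this answer; both need privileges, so an ordinary user should just see EPERM, and the stepping half really does move the clock if you run it as a superuser:

    #define _DEFAULT_SOURCE             /* glibc: expose adjtime() */
    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>

    int main(void)
    {
        /* Slew: ask the kernel to absorb a 50 ms correction gradually, by making
           its ticks slightly longer or shorter for a while. */
        struct timeval delta = { .tv_sec = 0, .tv_usec = 50000 };
        if (adjtime(&delta, NULL) != 0)
            perror("adjtime (slew)");

        /* Step: overwrite the seconds (and nanoseconds) counter outright.
           Purely a demonstration; this really does move the clock if permitted. */
        struct timespec now;
        clock_gettime(CLOCK_REALTIME, &now);
        now.tv_sec += 1;                /* jump one second into the future */
        if (clock_settime(CLOCK_REALTIME, &now) != 0)
            perror("clock_settime (step)");

        return 0;
    }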

Modern kernels not only count seconds but also count nanoseconds. But it is ridiculous and often outright unfeasible to have a once-per-nanosecond tick interrupt. This is where things like the cycle counter come into play. The kernel remembers the cycle counter value at each second (or at each tick) and can work out, from the current value of the counter when something wants to know the time in nanoseconds, how many nanoseconds must have elapsed since the last second (or tick). Again, though, power and thermal management plays havoc with this as the instruction cycle frequency can change, so kernels do things like rely on additional hardware like (say) a High Precision Event Timer (HPET).
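
The interpolation arithmetic, with made-up values standing in for the calibrated cycle frequency and the two cycle counter readings:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* All three numbers are made up; a kernel gets them from calibration and
           from reading the cycle counter at the last tick and right now. */
        uint64_t cycles_per_second   = 3000000000ULL;    /* assumed 3 GHz */
        uint64_t cycles_at_last_tick = 123456789012ULL;
        uint64_t cycles_now          = 123458289012ULL;

        uint64_t elapsed_cycles = cycles_now - cycles_at_last_tick;
        uint64_t nanoseconds    = elapsed_cycles * 1000000000ULL / cycles_per_second;

        printf("%llu ns since the last tick\n", (unsigned long long)nanoseconds);
        return 0;
    }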

The C language and POSIX

The standard library of the C language describes time in terms of an opaque type, time_t, a structure type tm with various specified fields, and various library functions like time(), mktime(), and localtime().

In brief: the C language itself merely guarantees that time_t is one of the available numeric data types and that the only reliable way to calculate time differences is the difftime() function. It is the POSIX standard that provides the stricter guarantees that time_t is in fact one of the integer types and that it counts seconds since the Epoch. It is also the POSIX standard that specifies the timespec structure type.
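
In code, that C-level picture looks like this; the only POSIX-specific liberty taken is adding 60 directly to a time_t, which leans on the POSIX guarantee that it counts seconds:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);
        time_t later = start + 60;      /* fine on POSIX, where time_t counts seconds */

        /* difftime() is the only portable way, in plain ISO C, to subtract two time_t values. */
        printf("difference: %.0f seconds\n", difftime(later, start));

        /* gmtime() produces the broken-down time that merely resembles UTC. */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&start));
        printf("broken down: %s\n", buf);
        return 0;
    }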

The time() function is sometimes described as a system call. In fact, on many systems it hasn't been the underlying system call for quite a long time now. On FreeBSD, for example, the underlying system call is clock_gettime(), which has various "clocks" available that measure in seconds or seconds+nanoseconds in various ways. It is this system call by which applications software reads UNIX Time from the kernel. (A matching clock_settime() system call allows them to step it and an adjtime() system call allows them to slew it.)
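
A minimal example of reading the kernel's clocks this way, using the standard POSIX clock IDs: CLOCK_REALTIME is the seconds-and-nanoseconds "UNIX Time" counter, and CLOCK_MONOTONIC is a counter that is never stepped, which is what you want for measuring intervals:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec rt, mono;

        clock_gettime(CLOCK_REALTIME, &rt);     /* the kernel's "UNIX Time", seconds + nanoseconds */
        clock_gettime(CLOCK_MONOTONIC, &mono);  /* a clock that is never stepped; good for intervals */

        printf("CLOCK_REALTIME : %lld.%09ld\n", (long long)rt.tv_sec,   (long)rt.tv_nsec);
        printf("CLOCK_MONOTONIC: %lld.%09ld\n", (long long)mono.tv_sec, (long)mono.tv_nsec);
        return 0;
    }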

Many people wave the POSIX standard around with very definite and exact claims about what it prescribes. Such people have, more often than not, not actually read the POSIX standard. As its rationale sets out, the idea of counting "seconds since the Epoch", which is the phrase that the standard uses, intentionally doesn't specify that POSIX seconds are the same length as SI seconds, nor that the result of gmtime() is "necessarily UTC, despite its appearance". The POSIX standard is intentionally loose enough so that it allows for (say) a UNIX system where the administrator goes and manually fixes up leap second adjustments by re-setting the clock the week after they happen. Indeed, the rationale points out that it's intentionally loose enough to accommodate systems where the clock has been deliberately set wrong to some time other than the current UTC time.

UTC and TAI

The interpretation of UNIX Time obtained from the kernel is up to library routines running in applications. POSIX specifies an identity between the kernel's time and a "broken-down time" in a struct tm. But, as Daniel J. Bernstein once pointed out, the 1997 edition of the standard got this identity embarrassingly wrong, messing up the Gregorian calendar's leap year rule (something that schoolchildren learn) so that the calculation was in error from the year 2100 onwards. "More honour'd in the breach than the observance" is a phrase that comes readily to mind.
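
For reference, the identity in the current wording of "Seconds Since the Epoch" amounts to the following (transcribed from memory, so treat it as a sketch rather than a quotation); the last two terms are the century corrections that the 1997 edition omitted:

    #include <stdio.h>
    #include <time.h>

    /* Seconds since the Epoch from a broken-down UTC time, per the current POSIX wording. */
    static long long seconds_since_epoch(const struct tm *t)
    {
        return t->tm_sec
             + t->tm_min  * 60LL
             + t->tm_hour * 3600LL
             + t->tm_yday * 86400LL
             + (t->tm_year - 70) * 31536000LL
             + ((t->tm_year - 69) / 4) * 86400LL      /* leap days every four years ... */
             - ((t->tm_year - 1) / 100) * 86400LL     /* ... except century years ...   */
             + ((t->tm_year + 299) / 400) * 86400LL;  /* ... except every fourth century.  The
                                                         1997 edition lacked these last two
                                                         terms, hence the year-2100 error. */
    }

    int main(void)
    {
        time_t t = 1000000000;                                /* an arbitrary timestamp */
        printf("%lld\n", seconds_since_epoch(gmtime(&t)));    /* should print 1000000000 again */
        return 0;
    }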

And indeed it is. Several systems nowadays base this interpretation upon library routines written by Arthur David Olson that consult the infamous "Olson timezone database", usually encoded in database files under /usr/share/zoneinfo/. The Olson system had two modes (a small demonstration of the difference follows the list):

  • The kernel's "seconds since the Epoch" is considered to count UTC seconds since 1970-01-01 00:00:00 UTC, except for leap seconds. This uses the posix/ set of Olson timezone database files. All days have 86400 kernel seconds and there are never 61 seconds in a minute, but those kernel seconds aren't always the length of an SI second, and the kernel clock needs slewing or stepping when leap seconds occur.
  • The kernel's "seconds since the Epoch" is considered to count TAI seconds since 1970-01-01 00:00:10 TAI. This uses the right/ set of Olson timezone database files. Kernel seconds are 1 SI second long and the kernel clock never needs slewing or stepping to adjust for leap seconds, but broken down times can have values such as 23:59:60 and days are not always 86400 kernel seconds long.
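
Here is a small demonstration of the difference between the two modes; it assumes that both an ordinary UTC zone and a right/UTC zone are installed under the default TZDIR, which is not true of every system (many distributions ship the right/ files separately):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Convert 2017-01-01 00:00:00 in the named zone to a time_t. */
    static time_t convert(const char *zone)
    {
        struct tm tm = { .tm_year = 117, .tm_mon = 0, .tm_mday = 1 };
        setenv("TZ", zone, 1);
        tzset();
        return mktime(&tm);
    }

    int main(void)
    {
        time_t posix_count = convert("UTC");        /* first mode: leap seconds not counted */
        time_t right_count = convert("right/UTC");  /* second mode: leap seconds included */

        /* The difference should be the number of leap seconds inserted between
           1972 and 2017: 27, if the installed database is current. */
        printf("difference: %lld seconds\n", (long long)(right_count - posix_count));
        return 0;
    }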

M. Bernstein wrote several tools, including his daemontools toolset, that required right/ because they simply added 10 to time_t to get TAI seconds since 1970-01-01 00:00:00 TAI. He documented this in the manual page.

This requirement was (perhaps unknowingly) inherited by toolsets such as daemontools-encore and runit and by Felix von Leitner's libowfat. Use Bernstein multilog, Guenter multilog, or Pape svlogd with an Olson posix/ configuration, for example, and all of the TAI64N timestamps will be (at the time of writing this) 26 seconds behind the actual TAI second count since 1970-01-01 00:00:10 TAI.

Laurent Bercot and I addressed this in s6 and nosh, albeit that we took different approaches. M. Bercot's tai_from_sysclock() relies on a compile-time flag. nosh tools that deal in TAI64N look at the TZ and TZDIR environment variables to auto-detect posix/ and right/ if they can.

Interestingly, FreeBSD documents time2posix() and posix2time() functions that allow the equivalent of the Olson right/ mode, with time_t as TAI seconds. They are apparently not enabled, however.

Once again…

UNIX time is measured on your computer running UNIX, by oscillators contained in your computer's hardware. It doesn't use SI seconds; it isn't UTC even though it may superficially resemble it; and it intentionally permits your clock to be wrong.

Further reading

  • Daniel J. Bernstein. UTC, TAI, and UNIX time. cr.yp.to.
  • Daniel J. Bernstein. tai64nlocal. daemontools. cr.yp.to.
  • "seconds since the epoch". Single UNIX Specification Version 2. 1997. The Open Group.
  • "Seconds Since the Epoch (Rationale)". Base Specifications Issue 6. 2004. IEEE 1003.1. The Open Group.
  • David Madore. The Unix leap second mess. 2010-12-17.
  • time2posix. FreeBSD 10.3 manual. § 3.
  • https://physics.stackexchange.com/questions/45739/
  • https://astronomy.stackexchange.com/questions/11840/