Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?

The problem is that there are several different time functions available in C and C++, and some of them vary in behavior between implementations. There are also a lot of half-answers floating around. Compiling a list of clock functions together with their properties would answer the question properly. For starters, let's ask what the relevant properties are that we're looking for. Looking at your post, I suggest:

  • What time is measured by the clock? (real, user, system, or, hopefully not, wall-clock?)
  • What is the precision of the clock? (s, ms, µs, or faster?)
  • After how much time does the clock wrap around? Or is there some mechanism to avoid this?
  • Is the clock monotonic, or will it change with changes in the system time (via NTP, time zone, daylight savings time, by the user, etc.)?
  • How do the above vary between implementations?
  • Is the specific function obsolete, non standard, etc.?

Before starting the list, I'd like to point out that wall-clock time is rarely the right time to use, because it changes with time zone changes, daylight savings time changes, or when the wall clock is synchronized by NTP. None of these things are good if you're using the time to schedule events or to benchmark performance. It's only really good for what the name says: a clock on the wall (or desktop).

Here's what I've found so far for clocks in Linux and OS X:

  • time() returns the wall-clock time from the OS, with precision in seconds.
  • clock() seems to return the sum of user and system time. It is present in C89 and later. At one time this was supposed to be the CPU time in cycles, but modern standards like POSIX require CLOCKS_PER_SEC to be 1000000, giving a maximum possible precision of 1 µs. The precision on my system is indeed 1 µs. This clock wraps around once it tops out (this typically happens after ~2^32 ticks, which is not very long for a 1 MHz clock). man clock says that since glibc 2.18 it is implemented with clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) in Linux.
  • clock_gettime(CLOCK_MONOTONIC, ...) provides nanosecond resolution, is monotonic. I believe the 'seconds' and 'nanoseconds' are stored separately, each in 32-bit counters. Thus, any wrap-around would occur after many dozen years of uptime. This looks like a very good clock, but unfortunately it isn't yet available on OS X. POSIX 7 describes CLOCK_MONOTONIC as an optional extension.
  • getrusage() turned out to be the best choice for my situation. It reports the user and system times separately and does not wrap around. The precision on my system is 1 µs, but I also tested it on a Linux system (Red Hat 4.1.2-48 with GCC 4.1.2) and there the precision was only 1 ms.
  • gettimeofday() returns the wall-clock time with (nominally) µs precision. On my system this clock does seem to have µs precision, but this is not guaranteed, because "the resolution of the system clock is hardware dependent". POSIX.1-2008 says that "applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function", so you should stay away from it. Linux x86 implements it as a system call.
  • mach_absolute_time() is an option for very high resolution (ns) timing on OS X. On my system, this does indeed give ns resolution. In principle this clock wraps around, however it is storing ns using a 64-bit unsigned integer, so the wrapping around shouldn't be an issue in practice. Portability is questionable.
  • I wrote a hybrid function based on this snippet that uses clock_gettime when compiled on Linux, or a Mach timer when compiled on OS X, in order to get ns precision on both Linux and OS X.

All of the above exist in both Linux and OS X except where otherwise specified. "My system" in the above is an Apple running OS X 10.8.3 with GCC 4.7.2 from MacPorts.

Finally, here is a list of references that I found helpful in addition to the links above:

  • http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time
  • How to measure the ACTUAL execution time of a C program under Linux?
  • http://digitalsandwich.com/archives/27-benchmarking-misconceptions-microtime-vs-getrusage.html
  • http://www.unix.com/hp-ux/38937-getrusage.html

Update: for OS X, clock_gettime has been implemented as of 10.12 (Sierra). Also, both POSIX and BSD based platforms (like OS X) share the rusage.ru_utime struct field.


C11 timespec_get

Usage example at: https://stackoverflow.com/a/36095407/895245

The maximum possible precision returned is nanoseconds, but the actual resolution is implementation defined and could be coarser.

It returns wall time, not CPU usage.

glibc 2.21 implements it under sysdeps/posix/timespec_get.c, and it forwards directly to:

clock_gettime (CLOCK_REALTIME, ts)

clock_gettime and CLOCK_REALTIME are POSIX (http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html), and man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs.

C++11 chrono

Since we're at it, let's cover them as well: http://en.cppreference.com/w/cpp/chrono

GCC 5.3.0 (C++ stdlib is inside GCC source):

  • high_resolution_clock is an alias for system_clock
  • system_clock forwards to the first of the following that is available:
    • clock_gettime(CLOCK_REALTIME, ...)
    • gettimeofday
    • time
  • steady_clock forwards to the first of the following that is available:
    • clock_gettime(CLOCK_MONOTONIC, ...)
    • system_clock

Asked at: Difference between std::system_clock and std::steady_clock?

CLOCK_REALTIME vs CLOCK_MONOTONIC: Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?