Why real time can be lower than user time

The output you show is a bit odd, since real time is usually larger than the other two.

  • Real time is wall clock time (what we could measure with a stopwatch).
  • User time is the amount of CPU time spent in user mode within the process.
  • Sys is the amount of CPU time spent in the kernel on behalf of the process.

So if the work was done by several processors concurrently, the total CPU time (user + sys) can exceed the elapsed wall clock time.

Was this a concurrent/multi-threaded/parallel type of application?

Just as an example, this is what I get on my Linux system when I issue the time find . command. As expected, the elapsed real time is much larger than the others for this single-threaded, mostly I/O-bound process.

real    0m5.231s
user    0m0.072s
sys     0m0.088s

The rule of thumb is:

  • real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs.
  • real ≈ user: The process is CPU bound and takes no advantage of parallel execution.
  • real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage.

Just to illustrate what has been said, here is a two-threaded process doing some calculation.

    #include <pthread.h>

    static void *dosomething(void *arg) {
        unsigned long a, b = 1;
        for (a = 1000000000; a > 0; a--) b *= 3;
        return NULL;
    }

    int main(void) {
        pthread_t one, two;
        pthread_create(&one, NULL, dosomething, NULL);
        pthread_create(&two, NULL, dosomething, NULL);
        pthread_join(one, NULL);
        pthread_join(two, NULL);
        return 0;
    }
    /* end of a.c */


gcc a.c -lpthread

(This is just to illustrate; in real life I should have added the -D_REENTRANT flag.)

$ time ./a.out

real    0m7.415s
user    0m13.105s
sys     0m0.032s

(Times are on an Intel Atom that has two slow cores :) Note that user is almost twice real, since both cores were busy for nearly the whole run.)