Measuring execution time of code

This question might be considered a duplicate. It is closely related to these:

  • Considerations when determining efficiency of Mathematica code
  • Difference between AbsoluteTiming and Timing
  • Benchmarking expressions
  • Profiling from Mathematica

However, one simple reading of this question, which I do not believe is covered in the answers above, is answered by this Front End option:

SetOptions[$FrontEndSession, EvaluationCompletionAction -> "ShowTiming"]

This will display the total evaluation time for each cell, as it completes, in the lower-left window margin.

To make the setting persistent between sessions use $FrontEnd in place of $FrontEndSession.
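
For example, evaluating this once keeps the timing display across sessions:

SetOptions[$FrontEnd, EvaluationCompletionAction -> "ShowTiming"]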


Update: "ShowTiming" was already covered in Brett's answer to:

  • Does AbsoluteTiming slow the evaluation time?

Here's a TL;DR answer. For more details, follow Mr. Wizard's links.

  • Timing measures the computation time consumed by the kernel process, thus

    • On a 4-core machine, internally parallelized functions such as LinearSolve can report up to 4 times the elapsed (wall-clock) time, i.e. the sum of the CPU time used by each core (illustrated in the example after this list)

    • Pause doesn't use CPU time, so it's not included in the Timing.

    • LibraryLink functions run in the kernel process so they're included

    • Anything that doesn't run in the kernel process, such as other processes called (Run, Import["!..."], etc.), is not included.

    • MathLink programs' run time is not included. Subkernel run times (when using the Parallel Tools) are not included either.

    • Rasterize and Export might not be fully measured because part of the job is done by the front end

    • Does not include the time required to send expressions to the front end and display them (e.g. rendering graphics)

  • AbsoluteTiming measures elapsed wall time, thus

    • If other processes are running and slow down your machine, the AbsoluteTiming that you measure will be increased. Consequently, the results fluctuate more than with Timing.

    • It can be used with computations that use the Parallel Tools or external processes

    • Does not include the time required to send expressions to the front end and display them (e.g. rendering graphics)
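
A quick sketch of the differences noted above (the timings in the comments are indicative only and will vary by machine):

(* Pause sleeps without consuming CPU time, so Timing barely registers it *)
Timing[Pause[2]]          (* -> {~0., Null} *)
AbsoluteTiming[Pause[2]]  (* -> {~2., Null} *)

(* internally parallelized linear algebra: Timing sums the CPU time of all cores,
   so it can exceed the wall-clock time reported by AbsoluteTiming *)
m = RandomReal[1, {2000, 2000}]; b = RandomReal[1, 2000];
Timing[LinearSolve[m, b];]
AbsoluteTiming[LinearSolve[m, b];]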

Other things to note:

Both functions have a finite time resolution that is system dependent. On Windows XP this is around 15 ms.
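
A common workaround for the coarse resolution is to repeat a fast computation many times and divide by the repetition count, so that the total measurement is well above the timer resolution (newer versions also provide RepeatedTiming, which averages repeated runs for you). A minimal sketch:

(* average wall time of one evaluation, measured over 10^5 repetitions *)
n = 10^5;
First[AbsoluteTiming[Do[Total[Range[100]], {n}]]]/n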

Realize that a computation will typically have a part that doesn't depend on input length, and a part that does. To measure the latter accurately you need to time long computations. Anything below 1 second is likely to be inaccurate and unusable for extrapolation to longer inputs.

One of the most common benchmarking mistakes is running benchmarks that are too short, letting both fluctuations and the constant part influence the result.

A good benchmark would measure how the timing depends on the length (and possibly type) of the input. For example, sorting a long list takes more time than sorting a short one. Doing this type of benchmarking will more readily reveal any benchmarking errors.
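
Here is a minimal sketch of such a length-dependent benchmark (the helper timeSort is just an illustrative name):

(* average wall time of Sort on a random list of length n, over 5 runs *)
timeSort[n_] := With[{list = RandomReal[1, n]},
  First[AbsoluteTiming[Do[Sort[list], {5}]]]/5]

data = Table[{n, timeSort[n]}, {n, 10^Range[4, 7]}];
ListLogLogPlot[data, AxesLabel -> {"length", "time (s)"}]

On a log-log plot the slope gives a rough idea of how the cost scales with input length, and outliers make benchmarking errors easy to spot.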


Finally, be aware that if you have a modern, SpeedStep-enabled laptop CPU, frequency scaling may influence the results. I don't have a good solution for this yet, but it has bitten me before. Specifically, when comparing several versions of a function, the last one I tested was penalized because by the time it ran, the CPU frequency had been scaled down. This tends not to be a problem for short benchmarks. However, short benchmarks tend not to be accurate ...

Tags:

Timing