Why is DateTime based on Ticks rather than Milliseconds?

  • TimeSpan and DateTime use the same tick unit, which makes operations like adding a TimeSpan to a DateTime trivial (a short sketch after this list illustrates this together with the next point).
  • More precision is good. It mainly matters for TimeSpan, but the reason above carries that precision over to DateTime as well.

    For example, Stopwatch measures short time intervals, often shorter than a millisecond, and it can return the result as a TimeSpan.
    In one of my projects I used TimeSpan to address audio samples. 100 ns is short enough for that; milliseconds wouldn't be.

  • Even with millisecond ticks you would need an Int64 to represent DateTime, but then you would be wasting most of the range, since years outside 0001 to 9999 aren't really useful. So they chose ticks as small as possible while still allowing DateTime to reach the year 9999.

    There are about 2^61.5 ticks of 100 ns between the years 0001 and 9999. Since DateTime needs two bits for timezone-related tagging, 100 ns ticks are the smallest power-of-ten interval that fits in an Int64.
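
A minimal C# sketch of the first two points (the 48 kHz sample rate, the sample index and the timed loop are just illustrative assumptions, not anything DateTime prescribes):

    using System;
    using System.Diagnostics;

    class TickSketch
    {
        static void Main()
        {
            // Adding a TimeSpan to a DateTime is plain 64-bit tick addition.
            DateTime start = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            TimeSpan offset = TimeSpan.FromTicks(1);       // one tick = 100 ns
            DateTime later = start + offset;
            Console.WriteLine(later.Ticks - start.Ticks);  // 1

            // Stopwatch can report sub-millisecond intervals as a TimeSpan.
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 1000; i++) { }             // something very quick
            sw.Stop();
            Console.WriteLine(sw.Elapsed.TotalMilliseconds);  // usually well below 1 ms

            // Addressing audio samples with a TimeSpan: at an assumed 48 kHz
            // sample rate one sample lasts about 208 ticks (20,833 ns), so
            // 100 ns resolution is fine, while whole milliseconds (48 samples)
            // would be far too coarse.
            const long sampleRate = 48_000;
            const long sampleIndex = 12_345;               // arbitrary example sample
            TimeSpan samplePosition =
                TimeSpan.FromTicks(TimeSpan.TicksPerSecond * sampleIndex / sampleRate);
            Console.WriteLine(samplePosition);             // 00:00:00.2571875
        }
    }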

So using longer ticks would decrease precision without gaining anything, and using shorter ticks wouldn't fit into 64 bits => 100 ns is the optimal value given the constraints.
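
A quick numeric check of that claim, using only the public DateTime.MaxValue constant (a sketch, not a statement about DateTime's internal layout):

    using System;

    class TickRange
    {
        static void Main()
        {
            // Ticks from 0001-01-01 up to the very end of year 9999.
            long maxTicks = DateTime.MaxValue.Ticks;       // 3,155,378,975,999,999,999

            Console.WriteLine(Math.Log(maxTicks, 2));      // ~61.45, i.e. roughly 2^61.5
            Console.WriteLine(maxTicks < (1L << 62));      // True: fits in 62 bits, leaving
                                                           // 2 bits spare in an Int64
            Console.WriteLine(long.MaxValue / maxTicks);   // 2, so 10 ns ticks (ten times as
                                                           // many) would already overflow
        }
    }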


From MSDN:

A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond.

DateTime.Ticks represents the number of ticks that have elapsed since midnight on January 1st in the year 0001. A tick is also the smallest unit of TimeSpan. Ticks is an Int64; if milliseconds were used instead of ticks, sub-millisecond information would be lost.
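
A small sketch of those numbers, using only the public DateTime/TimeSpan API:

    using System;

    class TickEpoch
    {
        static void Main()
        {
            // Tick 0 is midnight on January 1st of year 0001.
            Console.WriteLine(new DateTime(1, 1, 1).Ticks);          // 0

            // 10,000 ticks per millisecond, as the MSDN quote says.
            Console.WriteLine(TimeSpan.TicksPerMillisecond);         // 10000

            // Rounding a timestamp down to whole milliseconds discards the
            // sub-millisecond part: the information loss mentioned above.
            DateTime t = new DateTime(2024, 1, 1).AddTicks(12_345);  // 1.2345 ms past midnight
            long wholeMs = t.Ticks / TimeSpan.TicksPerMillisecond;
            DateTime rounded = new DateTime(wholeMs * TimeSpan.TicksPerMillisecond);
            Console.WriteLine(t.Ticks - rounded.Ticks);              // 2345 ticks lost
        }
    }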

It could also simply be the default CLS implementation choice.