How can a file have been 'created' in 1641?

Why is a date from the 1600s possible?

Windows does not store file modification timestamps like Unix systems do. According to the Windows Dev Center (emphasis mine):

A file time is a 64-bit value that represents the number of 100-nanosecond intervals that have elapsed since 12:00 A.M. January 1, 1601 Coordinated Universal Time (UTC). The system records file times when applications create, access, and write to files.

So, by setting a wrong value here, you can easily get dates from the 1600s.
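To make the format concrete, here is a small illustrative Python sketch (not the Windows API itself) that interprets a raw 64-bit tick count the way FILETIME is defined; a zero or tiny value immediately lands you in 1601:

```python
from datetime import datetime, timedelta, timezone

# FILETIME: a 64-bit count of 100-nanosecond ticks since 1601-01-01 00:00 UTC.
WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ticks: int) -> datetime:
    """Interpret a raw 64-bit tick count as a UTC date (10 ticks = 1 microsecond)."""
    return WINDOWS_EPOCH + timedelta(microseconds=ticks // 10)

print(filetime_to_datetime(0))             # 1601-01-01 00:00:00+00:00
print(filetime_to_datetime(10_000_000))    # 1601-01-01 00:00:01+00:00 (one second)
```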

Of course, another important question is: how was this value set? What was the date actually supposed to be? I think you'll never be able to find out, as it could simply have been a calculation error in the file system driver. Another answer hypothesizes that the date is actually a Unix timestamp interpreted as a Windows timestamp, but the two are counted in different units (seconds vs. 100-nanosecond intervals).
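As a quick illustrative check of that unit mismatch (borrowing the 1,266,705,294-second figure quoted in the last answer below as a stand-in Unix timestamp): reading a Unix timestamp's raw value directly as FILETIME ticks lands you only a couple of minutes past 1601, nowhere near 1641.

```python
from datetime import datetime, timedelta, timezone

WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

# A Unix timestamp from early 2010, read directly as 100-ns FILETIME ticks:
unix_seconds = 1_266_705_294
print(WINDOWS_EPOCH + timedelta(microseconds=unix_seconds // 10))
# -> 1601-01-01 00:02:06.670529+00:00  (about two minutes after the epoch)
```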

How does this relate to the Year 2038 problem?

The use of a 64-bit data type means that Windows is (generally) not affected by the Year 2038 problem that traditional Unix systems have: Unix initially used a 32-bit integer, which overflows much sooner than Windows' 64-bit value, even though Unix counts seconds and Windows counts 100-nanosecond intervals.

Windows is still affected when using 32-bit programs that were compiled with old versions of Visual Studio, of course.

Newer Unix operating systems have already expanded the data type to 64 bits, thus avoiding the issue. (In fact, since Unix timestamps operate in seconds, the new wraparound date will be 292 billion years from now.)
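Here is a short illustrative sketch of both limits (the 292-billion-year figure is only a rough estimate, as above):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Signed 32-bit time_t runs out here -- the Year 2038 problem:
print(UNIX_EPOCH + timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07+00:00

# A signed 64-bit count of seconds lasts far longer than datetime can express,
# so just print the rough span in years:
print((2**63 - 1) / (86_400 * 365.25) / 1e9, "billion years")   # ~292
```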

What is the maximum date that can be set?

For the curious ones – here's how to calculate that:

  • The maximum value of a signed 64-bit integer is 2⁶³ – 1 = 9223372036854775807.
  • Each tick represents 100 nanoseconds, which is 0.1 µs or 0.0000001 s.
  • The maximum time span would be 9223372036854775807 ⨉ 0.0000001 s, which is about 922 billion seconds.
  • One hour has 3600 seconds, one day has 86400 seconds, and one year has 365 days, so there are 86400 ⨉ 365 s = 31536000 s in a year. This is, of course, only an average, ignoring leap years, leap seconds, or any calendar changes that future postapocalyptic regimes might dictate on the remaining earthlings.
  • 9223372036854775807 ⨉ 0.0000001 s / 31536000 s ≈ 29247 years
  • @corsiKa explains how to account for leap years: they add roughly one extra day every four years, which over this span amounts to 29247 / 365 / 4 ≈ 20 years to subtract.
  • So your maximum year is 1601 + 29247 – 20 = 30828.

Some folks have actually tried to set this and came up with the same year.
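If you want to double-check the arithmetic, here is the same back-of-the-envelope calculation as a few lines of Python (illustrative only; the exact day depends on real calendar rules, which this rough estimate ignores):

```python
TICKS_PER_SECOND = 10_000_000            # one tick = 100 ns
SECONDS_PER_YEAR = 86_400 * 365          # same rough average as above

max_ticks = 2**63 - 1                    # largest signed 64-bit value
years = max_ticks / TICKS_PER_SECOND / SECONDS_PER_YEAR
leap_years = years / 365 / 4             # ~1 extra day every 4 years, in years

print(round(years))                                  # 29247
print(round(leap_years))                              # 20
print(1601 + round(years) - round(leap_years))        # 30828
```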


If you don't mind a bit of guessing, let me offer an explanation. And I don't mean "someone set the value to nonsense"; that's obviously always possible :)

Unix time usually counts the number of seconds since 1970. Windows, on the other hand, uses 1601 as its starting year. So if we assume (and that's a big assumption!) that the problem is a wrong conversion between the two time formats, we can imagine that the date that was supposed to be represented is actually sometime in 2011 (1970 + 41), which got incorrectly converted to 1641 (1600 + 41). EDIT: Actually, I made a mistake about the Windows starting year: it's 1601, not 1600. So it's possible that the actual creation time was in 2010, or that there was another error involved (off-by-one errors are pretty common in software :D).
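The year arithmetic in that guess, written out (illustrative only; 1600 is the mistaken epoch year from the original reasoning, 1601 the corrected one):

```python
file_year = 1641   # the creation year Windows displays (from the question)

# Original reasoning, which mistakenly used 1600 as the Windows epoch year:
print(1970 + (file_year - 1600))   # -> 2011

# With the correct Windows epoch year of 1601:
print(1970 + (file_year - 1601))   # -> 2010
```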

Given that this year happens to be another of the tracking dates associated with the file in question, I think it's a pretty plausible explanation :)


As has been written by others, the Windows epoch is at 1601-01-01 00:00.

The number of seconds between that epoch and the file time displayed is 1,266,705,294.
If we add that many seconds to the Unix epoch instead, we arrive at 2010-02-20 23:34:54 CET, a Saturday. This is about a year before the last access date, which makes it somewhat plausible. So it may have been a Unix timestamp interpreted against the wrong epoch.
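For completeness, here is that conversion as a small Python sketch (the 1,266,705,294-second offset is the figure quoted above):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Seconds between the Windows epoch (1601-01-01) and the displayed file time:
seconds_since_1601 = 1_266_705_294

# Read that same number of seconds against the Unix epoch (1970-01-01) instead:
print(UNIX_EPOCH + timedelta(seconds=seconds_since_1601))
# -> 2010-02-20 22:34:54+00:00, i.e. 23:34:54 CET
```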
