What does Linus Torvalds mean when he says that Git "never ever" tracks a file?

In CVS, history was tracked on a per-file basis. A branch might consist of various files, each with its own revisions and its own version number. CVS was based on RCS (Revision Control System), which tracked individual files in a similar way.

On the other hand, Git takes snapshots of the state of the whole project. Files are not tracked and versioned independently; a revision in the repository refers to a state of the whole project, not one file.
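
A quick way to see this in any repository you have handy is to ask Git what a single revision contains: the answer is always the whole project, never one file. A small illustration (the file list is whatever your project happens to contain):

    git ls-tree -r --name-only HEAD      # every file in the project, as of the latest commit
    git ls-tree -r --name-only HEAD~5    # every file in the project, five commits ago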

When Git refers to tracking a file, it means simply that it is to be included in the history of the project. Linus's talk was not referring to tracking files in the Git context, but was contrasting the CVS and RCS model with the snapshot-based model used in Git.


I agree with brian m. carlson's answer: Linus is indeed distinguishing, at least in part, between file-oriented and commit-oriented version control systems. But I think there is more to it than that.

In my book, which is stalled and might never get finished, I tried to come up with a taxonomy for version control systems. In my taxonomy, the term for what we're interested in here is the atomicity of the version control system. See what is currently page 22. When a VCS has file-level atomicity, there is in fact a history for each file: the VCS must remember the name of the file and what occurred to it at each point.

Git doesn't do that. Git has only a history of commits—the commit is its unit of atomicity, and the history is the set of commits in the repository. What a commit remembers is the data—a whole tree-full of file names and the contents that go with each of those files—plus some metadata: for instance, who made the commit, when, and why, and the internal Git hash ID of the commit's parent commit. (It is this parent, and the directed acyclic graph formed by reading all commits and their parents, that is the history in a repository.)
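
All of this is visible with a plumbing command; for instance (the hashes below are placeholders, yours will differ):

    git cat-file -p HEAD
    # tree a7c3e9f1...                                 <- the whole-project snapshot
    # parent 5b2d8c40...                               <- the link that forms the history DAG
    # author A. U. Thor <author@example.com> 1718000000 +0000
    # committer A. U. Thor <author@example.com> 1718000000 +0000
    #
    # the commit message follows here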

Note that a VCS can be commit-oriented, yet still store data file-by-file. That's an implementation detail, though sometimes an important one, and Git does not do that either. Instead, each commit records a tree, with the tree object encoding file names, modes (i.e., is this file executable or not?), and a pointer to the actual file content. The content itself is stored independently, in a blob object. Like a commit object, a blob gets a hash ID that is unique to its content—but unlike a commit, which can only appear once, the blob can appear in many commits. So the underlying file content in Git is stored directly as a blob, and then indirectly in a tree object whose hash ID is recorded (directly or indirectly) in the commit object.
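
Here is roughly what that looks like if you pretty-print a commit's tree (hashes and file names are placeholders):

    git cat-file -p 'HEAD^{tree}'
    # 100644 blob 2c3f1a9e...    README        <- mode, object type, content hash, name
    # 100755 blob 91b7d44c...    build.sh      <- the executable bit lives in the mode, not the blob
    # 040000 tree 7e0a5b12...    src           <- a subdirectory: another tree object
    git cat-file -p 2c3f1a9e...                # would print the file content stored in that blob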

When you ask Git to show you a file's history using:

git log [--follow] [starting-point] [--] path/to/file

what Git is really doing is walking the commit history, which is the only history Git has, but not showing you any of these commits unless:

  • the commit is a non-merge commit, and
  • the parent of that commit also has the file, but the content in the parent differs, or the parent of the commit doesn't have the file at all

(but some of these conditions can be modified via additional git log options, and there's a very difficult-to-describe side effect called History Simplification that makes Git omit some commits from the history walk entirely). The file history you see here does not, in some sense, exactly exist in the repository: it's just a synthetic subset of the real history, and you'll get a different "file history" if you use different git log options!
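
For instance, these three commands can report three different "file histories" for the very same path (the path is just an example):

    git log --oneline -- src/main.c                   # default: with History Simplification
    git log --oneline --full-history -- src/main.c    # keep commits the simplification would drop
    git log --oneline --follow -- src/main.c          # additionally follow the file across a rename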


"git does not track files" basically means that git's commits consist of a file tree snapshot connecting a path in the tree to a "blob" and a commit graph tracking the history of commits. Everything else is reconstructed on-the-fly by commands like "git log" and "git blame". This reconstruction can be told via various options how hard it should look for file-based changes. The default heuristics can determine when a blob changes place in the file tree without change, or when a file is associated with a different blob than before. The compression mechanisms Git uses don't care a whole lot about blob/file boundaries. If the content is somewhere already, this will keep the repository growth small without associating the various blobs.

Now that is the repository. Git also has a working tree, and in this working tree there are tracked and untracked files. Only the tracked files are recorded in the index (also known as the staging area or cache), and only what is tracked there makes it into the repository.

The index is file-oriented and there are some file-oriented commands for manipulating it. But what ends up in the repository is just commits in the form of file tree snapshots and the associated blob data and the commit's ancestors.
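
For example, you can list and edit the index one file at a time, even though what gets committed is a whole-tree snapshot (paths are illustrative):

    git ls-files --stage            # one line per tracked file: mode, blob hash, stage, path
    git add src/new-module.c        # start tracking a file, i.e. record it in the index
    git rm --cached notes.txt       # stop tracking it, while keeping the working-tree copy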

Since Git does not track file histories and renames, and since its efficiency does not depend on them, you sometimes have to try a few times with different options before Git produces the history/diffs/blames you are interested in for non-trivial histories.

That's different from systems like Subversion, which record histories rather than reconstruct them. If it's not on record, you don't get to hear about it.

I actually built a differential installer at one time that just compared release trees by checking them into Git and then producing a script duplicating their effect. Since sometimes whole trees were moved, this produced much smaller differential installers than overwriting/deleting everything would have produced.
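
A rough sketch of that trick, assuming the two release trees are unpacked in ../old and ../new:

    mkdir cmp && cd cmp && git init -q
    cp -a ../old/. . && git add -A && git commit -qm 'old release'
    git rm -r -q -- . && cp -a ../new/. . && git add -A && git commit -qm 'new release'
    git diff --name-status -M -C HEAD^ HEAD   # moves, copies, adds and deletes to turn into an installer script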


The confusing bit is here:

Git never ever sees those as individual files. Git thinks everything as the full content.

Git uses 160-bit (SHA-1) hashes in place of objects in its own repo. A tree of files is basically a list of names and, for each name, the hash associated with its content (plus some metadata).

But the 160-bit hash uniquely identifies the content (within the universe of the git database). So a tree with hashes as content includes that content in its state.

If you change the content of a file, its hash changes. But if its hash changes, then the hash stored alongside the file's name in its tree entry also changes, which in turn changes the hash of the containing "directory tree", and so on all the way up.
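
You can poke at this cascade with git rev-parse (the subdirectory names are just examples):

    git rev-parse HEAD:src      # hash of the src/ subtree in the latest commit
    git rev-parse HEAD~1:src    # differs from the above if anything under src/ changed in between
    git rev-parse HEAD:docs     # an untouched subtree prints the same hash for both commits...
    git rev-parse HEAD~1:docs   # ...so the two snapshots share that tree object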

When a git database stores a directory tree, that directory tree implies and includes all of the content of all of the subdirectories and all of the files in it.

It is organized in a tree structure with (immutable, reusable) pointers to blobs or other trees, but logically it is a single snapshot of the entire content of the entire tree. The representation in the git database isn't the flat data contents, but logically it is all of its data and nothing else.

If you serialized the tree to a filesystem, deleted all .git folders, and told git to add the tree back into its database, you'd end up with adding nothing to the database -- the element would already be there.
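
The same idempotence is easy to see on a single object: identical bytes always get the identical hash, so writing them a second time stores nothing new:

    echo 'same bytes' | git hash-object -w --stdin   # writes one blob and prints its hash
    echo 'same bytes' | git hash-object -w --stdin   # prints the very same hash; nothing is added
    git count-objects                                # object count is unchanged by the second write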

It may help to think of git's hashes as reference-counted pointers to immutable data.

If you built an application around that model, a document would be a bunch of pages, which have layers, which have groups, which have objects.

When you want to change an object, you have to create a completely new group for it. If you want to change a group, you have to create a new layer, which needs a new page, which needs a new document.

Every time you change a single object, it spawns a new document. The old document continues to exist. The new and old document share most of their content -- they have the same pages (except 1). That one page has the same layers (except 1). That layer has the same groups (except 1). That group has the same objects (except 1).

And by same, I mean logically a copy, but implementation-wise it is just another reference counted pointer to the same immutable object.

A git repo is a lot like that.

This means that a given git changeset contains its commit message, it contains its work tree (as a hash code), and it contains its parent changes (as hash codes).

Those parent changes contain their parent changes, all the way back.
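
In plumbing terms (these are standard git log format placeholders):

    git log -n 3 --format='%H %T %P'
    # each line: the commit's own hash, the hash of its whole work tree, and its parent hash(es)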

The part of the git repo that contains history is that chain of changes. That chain of changes is at a level above the "directory" tree -- from a "directory" tree alone, you cannot uniquely get back to a changeset and the chain of changes.

To find out what happened to a file, you start with that file in a changeset. That changeset has a history. Often in that history, a file with the same name exists, sometimes with the same content. If the content is the same, there was no change to the file. If it is different, there was a change, and work needs to be done to figure out exactly what changed.

Sometimes the file is gone; but the "directory" tree might have another file with the same content (same hash code), so we can track it that way (note: this is why you want a commit that moves a file to be separate from a commit that edits it). Or there is a file with the same name which, after checking, turns out to be similar enough.
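
You can watch that reconstruction happen by asking the diff machinery to pair deletions with additions by content (paths and similarity scores below are made up for illustration):

    git diff --name-status -M HEAD~1 HEAD
    # R100  old/dir/util.c  new/dir/util.c    <- pure move: identical blob, matched trivially
    # R087  lib/parse.c     src/parser.c      <- move plus edit: matched because 87% similar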

So git can patchwork together a "file history".

But this file history comes from efficient parsing of the "entire changeset", not from a link from one version of the file to another.