Disk slowly filling up but no visible file size changes

If there's invisible growth in disk space, a likely culprit is deleted files that are still held open. On Windows, if you try to delete a file opened by something, you get an error. On Linux, the file is unlinked from its directory, but the data is retained on disk until every process holding the file open lets go. In some cases, this can be used as a neat way to clean up after yourself: open a temporary file, delete it immediately, and keep using it - then even an application crash won't leave the temporary file behind.
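
You can watch this happen with a minimal sketch (assuming bash; /tmp/demo is a hypothetical file name):

exec 3> /tmp/demo                                 # open a file on fd 3
dd if=/dev/zero bs=1M count=10 >&3 2>/dev/null    # write 10 MB to it
rm /tmp/demo                                      # unlink it; the space is still in use
lsof -b 2>/dev/null | grep /tmp/demo              # the shell still holds it, marked (deleted)
exec 3>&-                                         # close the fd; only now is the space freed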

To look at deleted, still-used files:

lsof -b 2>/dev/null | grep deleted
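
Since it's the size that matters, you can sort on the SIZE/OFF column to put the biggest deleted files last (a sketch; column 7 assumes lsof's default output format):

lsof -b 2>/dev/null | grep deleted | sort -n -k 7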

You may have a large number of deleted files - that in itself is not a problem. A single deleted file getting large is a problem.

A reboot should fix this, but if you don't want to reboot, check the applications involved (the first column in the lsof output) and restart or close the reasonable-looking ones.
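
If a process can't be restarted safely, you can often reclaim the space by truncating the deleted file through /proc instead (a sketch; 1234 and 42 are hypothetical values taken from lsof's PID and FD columns, dropping the mode letter from the FD). The process loses that data, so only do this for files it can afford to lose, like logs:

: > /proc/1234/fd/42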

If you ever see something like:

zsh   1724   muru   txt   REG   8,17   771448   1591515  /usr/bin/zsh (deleted)

Where the application and the deleted file are one and the same, that probably means the application was upgraded while it was running: the old executable was deleted on disk, but the running process still uses it. You can ignore those as a source of large disk usage (but you should still restart the program so that bug fixes apply).
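
To list just the processes whose executables have been replaced on disk (a sketch; it assumes lsof's default columns, where the FD column is the fourth and program text is marked txt):

lsof -b 2>/dev/null | awk '$4 == "txt" && /deleted/ {print $1, $2}' | sort -u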

Files in /dev/shm are shared memory objects; /dev/shm is a tmpfs mount, so they live in RAM (and swap) rather than on disk, and they can be safely ignored here. Files named vteXXXXXX are scrollback files from a VTE-based terminal emulator (like GNOME Terminal, Terminator, etc.). These could be large if you have a terminal window open with lots (and I mean lots) of stuff being output.
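
Since /dev/shm is a mount of its own, df can confirm that it's memory-backed (the filesystem shows as tmpfs) and show how full it is (a sketch):

df -h /dev/shm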


To add to the excellent answer by muru:

  • df shows the space used on the disk,
  • and du shows the total size of the files' content (exactly so with du --apparent-size).

Maybe what you don't see with du is the appearance of many, many small files... (look at the IUsed column of df -i and see if the number of inodes (i.e., of files) increases a lot over time too)
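
To find out where all those files are accumulating, recent GNU du can count inodes per directory (a sketch; --inodes needs coreutils 8.22 or later, and -x keeps it on one filesystem):

du --inodes -x / 2>/dev/null | sort -n | tail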

If you happen to have, say, 1'000'000 (1 million) tiny 1-byte files, du will count that as 1'000'000 bytes total (with its --apparent-size option, at least), let's say 1 MB (... purists, please don't cringe)

But on disk, each file is made of 2 things:

  • 1 inode (pointing to the file's data), and on ext4 that inode is typically 256 bytes by itself (the size is fixed when the filesystem is created),
  • And each file's data (= the file's content) is put on disk blocks, and those blocks can't contain several files' data (usually...), so your 1 byte of data will occupy at least 1 whole block (commonly 4 KB)

Thus, a million 1-byte files will occupy 1'000'000 * size_of_a_block of space for the data, plus 1'000'000 * size_of_an_inode for the inodes... That can amount to several GB of disk usage for 1 million "1-byte" files.

If you have 1024-byte blocks, and another 256 bytes of inode size, your 1'000'000 files will be reported as roughly 1 MB by du, but will count as roughly 1.25 GB on disk (as seen by df)! (The inodes are packed together in the filesystem's inode table, so at least they don't each need a dedicated block.)
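
You can reproduce the effect in miniature (a sketch, assuming GNU du and a hypothetical scratch directory /tmp/manyfiles; 1000 files instead of a million keeps it fast, but the ratio is the same):

mkdir /tmp/manyfiles && cd /tmp/manyfiles
for i in $(seq 1 1000); do printf x > "f$i"; done   # 1000 one-byte files
du -sh --apparent-size .   # tiny: roughly the sum of the contents
du -sh .                   # much more: at least one block per file
df -i .                    # and 1000 inodes gone from the filesystem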