deleting files but disk space is still full

Two things might be happening here.

First, your filesystem has reserved some space that only root can write to, so that critical system processes don't fall over when normal users run out of disk space. That's why you see 124G of 130G used, but zero available. Perhaps the files you deleted brought the utilisation down to this point, but not below the threshold for normal users.
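You can see this discrepancy with df: Used plus Avail won't add up to the filesystem size, because the gap is the reserved space (the output shown in the comment is illustrative, not from a real system):

```shell
# Compare used vs. available space on the root filesystem; with 5%
# reserved for root, Used + Avail will be noticeably less than Size,
# and Avail can hit 0 while Used is below Size.
df -h /

# Illustrative output (values made up to match the scenario above):
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/sda3       130G  124G     0 100% /
```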

If this is your situation and you're desperate, you can reduce the amount of space reserved for root (this applies to ext2/3/4 filesystems). To reduce it to 1% (the default is 5%), the command would be

# tune2fs -m 1 /dev/sda3
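Before changing anything, you can check the current reservation with tune2fs -l (the device name /dev/sda3 is just an example; substitute your own):

```shell
# Show the reserved block count on an ext2/3/4 filesystem
# (run as root; /dev/sda3 is an example device)
tune2fs -l /dev/sda3 | grep -i 'reserved block count'
```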

Second, the operating system won't release disk space for deleted files which are still open. If you've deleted (say) one of Apache's log files, you'll need to restart Apache in order to free the space.


If you delete a file that is in use by a process, you can no longer see the file with ls, but the process keeps writing to it until it stops or closes its file descriptor.

To view those deleted-but-still-open files, run lsof | grep deleted
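You can reproduce the effect in a shell: open a file descriptor, delete the file, and note that the entry lives on under /proc until the descriptor is closed. If you can't restart the offending process, truncating the file through /proc is a way to reclaim the space; in real use, the PID and FD numbers come from the lsof output rather than from your own shell:

```shell
# Demonstrate a deleted-but-open file using the current shell ($$)
tmp=$(mktemp)
exec 3>"$tmp"          # open file descriptor 3 on the file
rm "$tmp"              # delete it; the shell still holds fd 3
ls -l /proc/$$/fd/3    # the symlink target is marked "(deleted)"

# Reclaim the space without killing the process: truncate via /proc.
# In general, take PID and FD from `lsof | grep deleted`.
: > /proc/$$/fd/3

exec 3>&-              # close the descriptor when done
```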


Two other ways the disk-is-full issue can arise:

1) hidden under a mount point: Linux will show a full disk with files "hidden" under a mount point. If data is written to a directory and another filesystem is then mounted over it, Linux correctly counts the disk usage even though you can't see the files under the mount point. If you have NFS mounts, try unmounting them and checking whether anything was accidentally written to those directories before the mount.
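One way to peek under a mount point without unmounting a live filesystem is to bind-mount / at a second location: the bind mount exposes the underlying directory contents rather than whatever is mounted on top. The paths below are examples (/data standing in for a hypothetical NFS mount point), and this needs root:

```shell
# Bind-mount the root filesystem so files hidden under mount points
# become visible (run as root; /data is a hypothetical mount point)
mkdir -p /mnt/rootbind
mount --bind / /mnt/rootbind

# Size of what's *underneath* the /data mount, not the NFS share:
du -sh /mnt/rootbind/data

umount /mnt/rootbind
```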

2) corrupted files: I see this occasionally on Windows-to-Linux file transfers via SMB. One file fails to close its file descriptor and you wind up with a 4 GB file of trash.

This can be more tedious to fix because you need to find the subdirectory the file is in, but once found it's easy, because the file itself is readily removable. I use the du command on the root subdirectories to find out where the space is being used.

cd /
du -sh ./* 

The number of top-level directories is usually limited, so I use the human-readable flag -h to see which subdirectory is the space hog.

Then you cd into the problem child and repeat the process for everything in it. To make the large items easy to spot, change the du invocation slightly and pipe it through sort.

cd /<suspiciously large dir>
du -s ./* | sort -n

which produces smallest-to-largest output, in 1K blocks by default, for all files and directories

4          ./bin 
462220     ./Documents
578899     ./Downloads
5788998769 ./Grocery List

Once you spot the oversized file, you can usually just delete it.
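If your sort supports the -h flag (GNU coreutils does), you can keep du's human-readable sizes and still sort correctly, which saves converting block counts in your head. This is just a variation on the steps above; the -x flag additionally keeps du from crossing into other mounted filesystems:

```shell
# Human-readable sizes, sorted smallest to largest;
# -x stays on one filesystem, -s summarises each entry
du -xsh ./* 2>/dev/null | sort -h
```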

Tags:

Linux