No space on device when removing a file under OpenSolaris

Ok, that's a weird one… not enough space to remove a file!

This turns out to be a relatively common issue with ZFS, though it could potentially arise on any filesystem that has snapshots.

The explanation is that the file you're trying to delete still exists in a snapshot. When you delete it, the contents continue to exist (in the snapshot only), and the filesystem must additionally write the information that the snapshot has the file but the current state doesn't. There's no space left for that extra little bit of information.
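
If you want to see the mechanism for yourself, a small file-backed test pool reproduces it. All names and sizes below are made up for the demonstration, and whether the final rm actually fails depends on your ZFS version and how full the pool gets (newer releases keep some space in reserve):

    # Create a tiny file-backed test pool (names are hypothetical;
    # mkfile is the Solaris tool, use dd elsewhere)
    mkfile 128m /tmp/zfstest.img
    zpool create testpool /tmp/zfstest.img

    # Write a file, snapshot it, then fill the rest of the pool
    mkfile 64m /testpool/bigfile
    zfs snapshot testpool@snap1
    dd if=/dev/zero of=/testpool/filler bs=1024k 2>/dev/null

    # The snapshot still references bigfile's blocks, so deleting it
    # frees nothing and still needs new metadata -- which may fail:
    rm /testpool/bigfile

    # Clean up the test pool
    zpool destroy testpool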

A short-term fix is to find a file that was created after the latest snapshot and delete it. Another possibility is to find a file that has been appended to after the latest snapshot and truncate it to the size it had at the time of that snapshot. If your disk filled up because something has been spamming your logs, try trimming the largest log files.
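
For example (the log path is illustrative), you can spot the biggest offenders and truncate one in place. Truncating to zero length is the simplest option: it frees the blocks written after the snapshot, and the pre-snapshot contents remain recoverable from the snapshot itself:

    # Find the largest files under /var/log
    du -ak /var/log | sort -n | tail -10

    # Truncate a runaway log file in place
    cp /dev/null /var/log/huge.log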

A more generally applicable fix is to remove some snapshots. You can list snapshots with zfs list -t snapshot. There doesn't seem to be an easy way to predict exactly how much space will be regained by destroying a particular snapshot, because the data it stores may also be referenced by other snapshots and so will live on if you destroy that snapshot alone. So back up your data to another disk if necessary, identify one or more snapshots that you no longer need, and run zfs destroy name/of/snap@shot.
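
A minimal session might look like this (the pool and snapshot names are invented). Keep in mind that the USED column shows only the space unique to each snapshot; data shared between snapshots shows up in none of them, which is why the total freed by destroying several snapshots can exceed the sum of their USED values:

    # List snapshots with their unique space, smallest first
    zfs list -t snapshot -o name,used,referenced -s used

    # After backing up anything you still need:
    zfs destroy tank/home@2009-06-01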

There is an extended discussion of this issue in this OpenSolaris forums thread.


That's a well-known issue with copy-on-write filesystems: to delete a file, the filesystem first needs to allocate a block and commit the new state before it can release the space occupied by the file being deleted.

(It is not inherently a problem of filesystems with snapshots, since snapshots can be implemented in ways other than copy-on-write.)

Ways out of the squeeze:

  • release a snapshot (in case there is one...)
  • grow the pool (in case there's any spare left you can assign to it)
  • destroy another filesystem in the pool, then grow the tight filesystem
  • truncate the file, then remove it (though I was once in too tight a squeeze to be able to do even that; see the thread at ZFS Discuss, and the sketch after this list)
  • unlink the file (same caveat as above)
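
The truncate-then-remove trick might look like this (the file name is made up, and it only helps if no snapshot still references the file's blocks): zeroing the file releases its data blocks first, which usually leaves enough room for the unlink itself.

    # Release the data blocks, then remove the now-empty file
    cp /dev/null /tank/data/huge.dump
    rm /tank/data/huge.dump

Growing the pool instead would be along the lines of zpool add tank <spare-device>, if you have a spare device to give it; note that a device added this way cannot be removed again on OpenSolaris-era ZFS.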

I ran into the same trap a few years ago, and didn't have any snapshots I could have released to free me. See the thread at ZFS Discuss where this problem was discussed in depth.