Bulk remove a large directory on a ZFS filesystem without traversing it recursively

Tracking freed blocks is unavoidable in any decent file system, and ZFS is no exception. There is, however, a simple way under ZFS to get nearly instantaneous directory deletion by "deferring" the underlying cleanup. It is technically very similar to Gilles' suggestion, but is inherently reliable and requires no extra code.

If you create a snapshot of the file system before removing the directory, the removal will be very fast because nothing under it needs to be explored or freed; it is all still referenced by the snapshot. You can then destroy the snapshot in the background, so the space is recovered gradually.

# Hypothetical dataset path; adjust to your own pool/filesystem layout.
d=yourPoolName/BackupRootDir/hostNameYourPc/somesubdir

# Snapshot first so the blocks stay referenced and rm only has to unlink,
# then destroy the snapshot in the background to reclaim space gradually.
zfs snapshot "${d}@quickdelete" && {
    rm -rf "/${d}/certainFolder"
    zfs destroy "${d}@quickdelete" &
}
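
If you want to watch the space being reclaimed while the background destroy runs, something like the following shows the snapshot-related space accounting (the dataset name is just the hypothetical one from the example above):

zfs list -o space yourPoolName/BackupRootDir/hostNameYourPc/somesubdir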

What you're asking for is impossible. Or, more precisely, there's a cost to pay when deleting a directory and its files; if you don't pay it at the time of the deletion, you'll have to pay it elsewhere.

You aren't just removing a directory; that by itself would be near-instantaneous. You're removing a directory, all the files inside it, and, recursively, all of its subdirectories and their contents. Removing a file means decrementing its link count, and then marking its resources (the blocks used for file contents and file metadata, and the inode if the filesystem uses an inode table) as free if the link count reaches 0 and the file isn't open. This has to be done for every file in the directory tree, so the time it takes is at least proportional to the number of files.
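
As a small illustration of the link-count mechanics (a sketch assuming GNU coreutils, i.e. stat with -c and the %h hard-link-count format):

# Create a file and give it a second hard link.
tmpdir=$(mktemp -d)
echo data > "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/extra"
stat -c 'links: %h' "$tmpdir/original"   # prints "links: 2"

# Removing one name only decrements the link count...
rm "$tmpdir/original"
stat -c 'links: %h' "$tmpdir/extra"      # prints "links: 1"

# ...the blocks and the inode are freed only when the last name is gone
# (and the file is no longer open).
rm "$tmpdir/extra"
rmdir "$tmpdir"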

You could delay the cost of marking the resources as free. For example, there are garbage-collected filesystems, where you can remove a directory without removing the files it contains. A run of the garbage collector detects the files that are no longer reachable via the directory structure and marks them as free. Doing rm -f directory; garbage-collect on a garbage-collected filesystem does the same things as rm -rf on a traditional filesystem, just with different triggers. There are few garbage-collected filesystems because the GC is additional complexity that is rarely needed. The GC could run at any moment, whenever the filesystem needs free blocks and doesn't find any, so the performance of an operation would depend on past history rather than just on the operation itself, which is usually undesirable. You'd also have to run the garbage collector just to learn the actual amount of free space.

If you want to simulate the GC behavior on a normal filesystem, you can do it:

mv directory .DELETING; rm -rf .DELETING &

(I omitted many important details such as error checking, resilience to power loss, etc.) The directory name disappears immediately; the space is reclaimed progressively.
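
A slightly more careful version of the same idea might look like the sketch below; delete_later is a hypothetical helper, mktemp -d keeps staging directories from colliding, and the error handling is still far from exhaustive:

# Stage the tree under a uniquely named directory on the same filesystem
# (mv is only cheap when it doesn't cross a filesystem boundary), then
# reap it in the background.
delete_later() {
    target=$1
    trash=$(mktemp -d "$(dirname "$target")/.DELETING.XXXXXX") || return
    mv "$target" "$trash/" || return
    rm -rf "$trash" &
}

delete_later /some/big/directory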

A different approach to avoiding the cost at removal time, without a GC, would be to pay it during allocation: mark the directory tree as deleted, and walk through deleted directories when allocating blocks. That would be hard to reconcile with hard links, but on a filesystem without hard links it can be done with an O(1) cost increase per allocation. However, it would make a very common operation (creating or enlarging a file) more expensive, with the only benefit being to make a relatively rare operation (removing a large directory tree) cheaper.

You could bulk-remove a directory tree if that tree were stored as its own pool of blocks. (Note: I'm using the word “pool” in a different meaning from ZFS's “storage pool”. I don't know what the proper terminology is.) That could be very fast. But what do you do with the free space? If you reassign it to another pool, that has a cost, though a lot less than deleting files individually. If you leave the space as unused reserve space, you can't reclaim it immediately. Having an individual pool for a directory tree also means added costs to grow or shrink that pool (either on the fly or explicitly), and it increases the cost of moving files into and out of the tree.
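
The closest practical analogue in ZFS itself is to make such a tree its own dataset (a child filesystem, not a separate storage pool), which can then be destroyed in one operation; a sketch with hypothetical names:

# Create the tree as its own ZFS filesystem instead of a plain directory.
zfs create yourPoolName/BackupRootDir/scratchTree

# ... populate it ...

# Destroying the dataset removes the whole tree without an rm -rf walk.
# The trade-off described above still applies: moving files into or out of
# the tree now crosses a filesystem boundary, so mv becomes copy + delete.
zfs destroy yourPoolName/BackupRootDir/scratchTree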