Is there a way to create cow-copies in ZFS?

I think option 3 as you have described above is probably your best bet. The biggest problem with what you want is that ZFS really only handles this copy-on-write at the dataset/snapshot level.

I would strongly suggest avoiding dedup unless you have verified that it works well in your exact environment. I have seen dedup work great right up until one more user or VM store is moved in, at which point it falls off a performance cliff and causes a lot of problems. Just because it looks like it's working well with your first ten users doesn't mean the machine won't fall over when you add the eleventh (or twelfth, or thirteenth). If you want to go this route, make absolutely sure you have a test environment that exactly mimics your production environment, and that dedup performs well there.
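If you do want to evaluate dedup, ZFS lets you simulate it before committing. A rough sketch, assuming a pool named `tank` and a dataset `tank/users` (both placeholder names):

```shell
# Estimate the dedup ratio BEFORE enabling dedup: zdb -S simulates
# deduplication against the pool's existing data and prints a block
# histogram plus an estimated ratio, without changing anything on disk.
zdb -S tank

# If (and only if) the simulated ratio justifies the dedup table's
# memory cost, dedup can be enabled per dataset rather than pool-wide:
zfs set dedup=on tank/users

# Check the live ratio later; a value near 1.00x means dedup is
# costing RAM for almost no space savings:
zpool list -o name,size,alloc,dedup tank
```

Note that `zdb -S` only estimates the space savings; it tells you nothing about how the dedup table will behave under your memory pressure, which is exactly where the performance cliff lives.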

Back to option 3: you'll need to set up a specific dataset to hold each of the file system trees that you want to manage this way. Once it's set up and initially populated, take your snapshots (one per tree that will diverge slightly), clone each snapshot, and then promote the clones. Never touch the original dataset again.
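The workflow above can be sketched with standard ZFS commands. Pool and dataset names (`tank`, `base`, `user1`, `user2`) are placeholders:

```shell
# One base dataset holds the initially populated file tree:
zfs create tank/base
# ... populate tank/base with the master copy of the files ...

# Take a snapshot, then create one clone per tree that will diverge.
# Clones share blocks with the snapshot copy-on-write, so each clone
# only consumes space for the blocks it actually changes:
zfs snapshot tank/base@gold
zfs clone tank/base@gold tank/user1
zfs clone tank/base@gold tank/user2

# Optionally promote a clone, which reverses the parent/child
# relationship so the clone no longer depends on the origin snapshot:
zfs promote tank/user1
```

One caveat with promotion: after `zfs promote`, the origin dataset becomes a clone of the promoted one, so the snapshot dependency doesn't disappear, it just moves.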

Yes, this solution has problems. I'm not saying it doesn't, but given the restrictions of ZFS, it's still probably the best one. I did find this reference to someone using clones effectively: http://thegreyblog.blogspot.com/2009/05/sparing-disk-space-with-zfs-clones.html

I'm not really familiar with btrfs, but if it supports the options you want, have you considered setting up a separate server just to host these datasets, running Linux and btrfs?

Tags:

FreeBSD

ZFS