ZFS Pool with Different Drive Sizes

There is a better way: create a single 3 TB pool composed of two mirrors.

zpool create test mirror disk1 disk2 mirror disk3 disk4

where disk1 and disk2 are the 1 TB disks and disk3 and disk4 are the 2 TB ones.
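
Afterwards, zpool status shows the two-mirror layout and zpool list the resulting ~3 TB capacity (disk1 through disk4 are the same placeholder device names as above):

# verify vdev layout and usable capacity
zpool status test
zpool list test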

Edit:

Should you want to maximize capacity and not care that much about performance or best practices, you can partition all the drives into equal-size partitions (or slices) and create a 4 TB hybrid pool from a four-disk RAIDZ vdev and a two-disk mirror vdev.

zpool create -f test raidz d0p1 d1p1 d2p1 d3p1 mirror d0p2 d1p2

Note the -f option, which is required to force the command to accept the mismatched replication levels between the two vdevs.
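
The partitioning step itself might look like this on a FreeBSD/FreeNAS system; a sketch only, where ada0 and ada1 are assumed names for the 2 TB disks (on Solaris you would use format instead, and the 1 TB disks get a single 1 TB partition each the same way):

# assumed names: ada0 and ada1 are the 2 TB disks; repeat for each
gpart create -s gpt ada0
gpart add -t freebsd-zfs -s 1T ada0   # first 1 TB slice -> ada0p1
gpart add -t freebsd-zfs ada0         # remaining space -> ada0p2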


This depends on how much storage you need. You can create two pools, 1 TB and 2 TB, each using RAID 1. Otherwise, see if you can acquire like-sized disks and try RAID 1+0 or RAIDZ.
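
In ZFS terms each of those RAID 1 pools is a single mirror vdev; a minimal sketch, with hypothetical pool and device names:

zpool create small mirror disk1 disk2   # the two 1 TB disks -> 1 TB pool
zpool create large mirror disk3 disk4   # the two 2 TB disks -> 2 TB pool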


From the ZFS admin guide:

"The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."

So yes, you could create two 1 TB partitions on each of those 2 TB drives, use them in a RAID-Z vdev, and use the remaining space for non-redundant storage.
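
A sketch of that layout, assuming disk1 and disk2 are the whole 1 TB disks and disk3p1/disk4p1 and disk3p2/disk4p2 are the 1 TB slices carved from the 2 TB drives (all names hypothetical):

# RAID-Z across four equal-sized 1 TB devices (whole disks + slices)
zpool create tank raidz disk1 disk2 disk3p1 disk4p1
# leftover slices as a separate, non-redundant pool
zpool create scratch disk3p2 disk4p2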

However, according to the ZFS Best Practices Guide, you may experience degraded performance:

For production systems, use whole disks rather than slices for storage pools for the following reasons:

  • Allows ZFS to enable the disk's write cache for those disks that have write caches. If you are using a RAID array with a non-volatile write cache, then this is less of an issue and slices as vdevs should still gain the benefit of the array's write cache.

  • For JBOD attached storage, having an enabled disk cache allows some synchronous writes to be issued as multiple disk writes followed by a single cache flush, letting the disk controller optimize I/O scheduling. Separately, for systems that lack proper support for SATA NCQ or SCSI TCQ, an enabled write cache allows the host to issue single I/O operations asynchronously from physical I/O.

  • The recovery process of replacing a failed disk is more complex when disks contain both ZFS and UFS file systems on slices.
