How do I resize an ext4 partition beyond the 16TB limit?

With the -O 64bit feature (enabled by default on filesystems created today), ext4 filesystems can span 1024 PiB instead of just 16 TiB. You can upgrade your old filesystem in place to activate this feature.

Before you start

  1. A volume of this size must be backed by RAID; otherwise, ordinary disk errors will eventually cause harm.
  2. Still, RAID is not a backup. You must have your valuable data stored elsewhere as well.
  3. First resize & verify all surrounding layers (partition tables, encryption, LVM).
  4. After changing a hardware RAID configuration, Linux may or may not immediately acknowledge the new maximum size. Check cat /proc/partitions and reboot if necessary.
  5. Make sure (check uname -r) you are running a kernel that can properly handle 64bit ext4 filesystems - you want a 4.4.x kernel or later (the default in Ubuntu 16.04 and above).
  6. Acquire e2fsprogs at version 1.43 (2016-05-17) or greater:
    • ✔️ Ubuntu 20.04 (2020-04-23) ships with e2fsprogs 1.45.x (good!)
    • ✔️ Ubuntu 18.04 (2018-04-26) ships with e2fsprogs 1.44.x (good!)
    • Ubuntu 16.04 (2016-04-21) was released with e2fsprogs 1.42.12 (2014-08-25) - upgrade to a newer release, or enable source package support and build a newer version manually (see the end of this answer).
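The kernel and e2fsprogs checks above can be scripted. A minimal pre-flight sketch - version_ge is a hypothetical helper written for this example, not part of e2fsprogs; sort -V does the dotted-version comparison:

```shell
# version_ge A B: succeed if version A >= version B (illustrative helper)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Kernel must be 4.4 or later for 64bit ext4
kernel="$(uname -r | cut -d- -f1)"
version_ge "$kernel" 4.4 || echo "kernel $kernel is too old (need 4.4+)"

# resize2fs prints its version banner when run without arguments
e2fs="$(resize2fs 2>&1 | head -n1 | awk '{print $2}')"
version_ge "$e2fs" 1.43 || echo "e2fsprogs $e2fs is too old (need 1.43+)"
```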


The following steps assume your device is called /dev/mapper/target-device

Step 1: Properly unmount the filesystem

$ sudo umount /dev/mapper/target-device

Step 2: Check the filesystem for errors

$ sudo e2fsck -fn /dev/mapper/target-device
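e2fsck reports its result as a bitmask exit code (documented in man e2fsck): 1 means errors were corrected, 2 means a reboot is needed, 4 means errors were left uncorrected, 8 means an operational error. If you script these steps, you can gate on that code - fsck_ok below is a made-up helper name for this sketch:

```shell
# fsck_ok: succeed unless e2fsck left errors uncorrected (bit 4) or hit an
# operational error (bit 8). Made-up helper for illustration.
fsck_ok() { test $(($1 & 12)) -eq 0; }

# Typical use:
#   sudo e2fsck -fn /dev/mapper/target-device; fsck_ok $? || echo "stop here"
fsck_ok 0 && echo "clean filesystem"
fsck_ok 4 || echo "uncorrected errors - do not resize"
```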

Step 3: Enable 64bit support in the filesystem

Consult man tune2fs and man resize2fs - you may wish to change some filesystem flags.

$ sudo resize2fs -b /dev/mapper/target-device

On a typical HDD RAID, this takes 4 minutes of high IO & CPU load.
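To confirm the flag actually got set, you can inspect the superblock with dumpe2fs -h (part of e2fsprogs) and look for "64bit" in the feature list. A small sketch - has_64bit is an illustrative helper, not an e2fsprogs tool:

```shell
# has_64bit: report whether a dumpe2fs "Filesystem features" line contains
# the 64bit flag (illustrative helper, not part of e2fsprogs)
has_64bit() { printf '%s\n' "$1" | grep -qw 64bit; }

# In practice, feed it the real superblock header:
#   sudo dumpe2fs -h /dev/mapper/target-device | grep 'Filesystem features'
features="Filesystem features:  has_journal ext_attr 64bit extent huge_file"
has_64bit "$features" && echo "64bit enabled"
```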

Step 4: Resize the filesystem

If you do not pass a size on the command line, resize2fs assumes "grow to all available space" - this is typically exactly what you want. The -p flag enables progress bars, but those only appear after some initial steps.

$ sudo resize2fs -p /dev/mapper/target-device

On a typical HDD RAID, this takes 4 minutes of high IO & CPU load.

Step 5: Check the filesystem again

$ sudo e2fsck -fn /dev/mapper/target-device

Newer versions of e2fsck may suggest fixing timestamps or optimizing extent trees. This is not an indication of any serious issue, and you may choose to fix it now or later.

If errors occur, do not panic and do not attempt to write to the volume; consult someone with extensive knowledge of the filesystem, as further operations would likely destroy data!

If no errors occur, remount the device:

$ sudo mount /dev/mapper/target-device
$ df -h


Extra steps to download and compile a newer version of e2fsprogs on older systems:

$ resize2fs
# if this prints version 1.43 or above, continue to step 1
$ sudo apt update
$ sudo apt install git
$ sudo apt build-dep e2fsprogs
$ cd $(mktemp -d)
$ git clone -b v1.44.2 https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git && cd e2fsprogs
$ ./configure
$ make
$ cd resize
$ ./resize2fs
# confirm that this prints 1.43 or higher
# use `./resize2fs` instead of `resize2fs` for the rest of the steps

You will not need any non-Ubuntu version of e2fsprogs for continued operation of the upgraded filesystem - the kernel has supported 64bit ext4 filesystems for quite some time now. It is only needed to perform the upgrade itself.

For reference, this is the error message mke2fs will print if it is asked to create a huge device with inappropriate options:

$ mke2fs -O ^64bit /dev/huge
mke2fs: Size of device (0x123456789 blocks) is too big to be expressed in 32 bits using a blocksize of 4096.
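That 16 TiB ceiling is just arithmetic: a 32-bit filesystem can address at most 2^32 blocks, and at the usual 4096-byte block size that comes out to exactly 16 TiB. Shell arithmetic confirms it:

```shell
# 2^32 blocks x 4096 bytes/block = 2^44 bytes = 16 TiB
echo $((2**32 * 4096))            # 17592186044416 bytes
echo $((2**32 * 4096 / 1024**4))  # 16 (TiB)
```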

This happened to me recently, on an Ubuntu 18.04 system that had originally been installed as 16.04 and upgraded since ... The storage array (/dev/sdb) had initially been partitioned into two 14 TB partitions, and the problem occurred while trying to enlarge the first partition to 28 TB.

I did not need to download a new version of resize2fs because the installed one was recent enough.

# resize2fs 
resize2fs 1.44.1 (24-Mar-2018)

The only remaining task was to convert partition 1, which had been formatted as 32-bit, to 64-bit ... Instead of inviting the reader to consult the tune2fs documentation (as Anx suggests), I propose a real example!

# tune2fs -O 64bit /dev/sdb1
tune2fs 1.44.1 (24-Mar-2018)
Please run "resize2fs -b /dev/sdb1" to enable 64-bit mode.

# resize2fs -b /dev/sdb1
resize2fs 1.44.1 (24-Mar-2018)
Converting the filesystem to 64 bits.
The filesystem on /dev/sdb1 is now 3662109119 (4k) blocks long.

Finally, we grow the filesystem to fill the enlarged partition!

# resize2fs /dev/sdb1
resize2fs 1.44.1 (24-Mar-2018)
Resizing the filesystem on /dev/sdb1 to 7324303099 (4k) blocks.
The filesystem on /dev/sdb1 is now 7324303099 (4k) blocks long.
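The block counts above show exactly why the conversion was needed: a 32-bit filesystem cannot address more than 2^32 = 4294967296 blocks, and the enlarged partition crossed that line while the original size did not. A quick check with shell arithmetic:

```shell
echo $((2**32))                # 4294967296 - the 32-bit block-count ceiling
echo $((3662109119 < 2**32))   # 1 - the original 14 TB filesystem still fit
echo $((7324303099 > 2**32))   # 1 - the enlarged one exceeds it, hence -b
```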