Should LVM partitions be used in virtual machine images?

"It depends."

If you are in an environment that you control (VMware, KVM, or similar) and can make your own decisions about disk performance QoS, then I'd recommend not using LVM inside your VMs. It doesn't buy you much flexibility that you couldn't get at the hypervisor level.

Remember, the hypervisor is already effectively performing these tasks. If you want to be able to arbitrarily resize file systems (a fine idea), just create a separate virtual disk for each filesystem.

One thing to consider as you go down this road: you don't even need to put partitions on your virtual disks. For example, you can create a virtual disk for /home; it appears as /dev/vdc inside your VM. When creating the filesystem, just run something like mke2fs -j /dev/vdc instead of specifying a partition.
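A minimal sketch of that whole-disk approach, assuming the new virtual disk shows up as /dev/vdc in the guest (device names vary; check lsblk first):

```shell
# All of this needs root. /dev/vdc is an assumption -- confirm with lsblk.

# Create an ext3 filesystem directly on the whole disk, no partition table.
mke2fs -j /dev/vdc

# Mount it and make the mount persistent.
mkdir -p /home
mount /dev/vdc /home
echo '/dev/vdc /home ext3 defaults 0 2' >> /etc/fstab
```

Using a UUID from blkid in fstab instead of the device name is more robust if disks might be reordered.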

This works, but most tools (and the admins who come after you) will expect to see partitions on every disk. I'd recommend putting a single partition on the disk and being done with it. It does mean one more step when resizing the filesystem, though. And don't forget to align your partitions properly; starting the first partition at 1 MiB is a good rule of thumb.
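The single-aligned-partition setup can be sketched with parted (again assuming the disk is /dev/vdc):

```shell
# Create a GPT label and one partition spanning the disk (needs root).
parted -s /dev/vdc mklabel gpt
# Starting at 1MiB keeps the partition aligned for any sane block size.
parted -s /dev/vdc mkpart primary 1MiB 100%

# Then create the filesystem on the partition, not the whole disk.
mke2fs -j /dev/vdc1
```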

All that said, doing all of this at the hypervisor level means you will probably have to reboot the VM to resize partitions. Using LVM would let you hot-add a virtual disk (presuming your hypervisor/OS combination supports it) and expand the filesystem without a reboot. That is definitely a plus.
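The no-reboot LVM growth path looks roughly like this; the volume group name (vg0), LV name (home), and device (/dev/vdd) are all made-up examples:

```shell
# After hot-adding a new virtual disk (say it appears as /dev/vdd):
pvcreate /dev/vdd               # turn the new disk into an LVM physical volume
vgextend vg0 /dev/vdd           # add it to the existing volume group
lvextend -L +50G /dev/vg0/home  # grow the logical volume by 50 GB
resize2fs /dev/vg0/home         # grow the ext filesystem online
```

On most distributions `lvextend -r` will run the filesystem resize for you, collapsing the last two steps into one.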


Meanwhile, if you are using a cloud provider, it's more subtle.

I don't know much about Azure, GCP, or any of the smaller players, so I can't help there.

With AWS you can follow my advice above and you'll often be just fine. You can now increase the size of EBS volumes (virtual disks) on the fly, then resize partitions and filesystems to match.
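A sketch of that on-the-fly EBS resize, end to end; the volume ID and device names are examples, and growpart comes from the cloud-utils package:

```shell
# From anywhere with AWS credentials: grow the volume to 200 GiB.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200

# Inside the instance, once lsblk shows the new size:
growpart /dev/xvdf 1    # expand partition 1 to fill the disk
resize2fs /dev/xvdf1    # grow the ext filesystem to match
```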

However, in the general case, it might make sense to put everything on a single big EBS volume and divide it up with LVM (or, I suppose, plain partitions). Amazon imposes an IOPS limit on each volume, and by default this limit scales with the size of the volume: for gp2 volumes you get 3 IOPS per GiB, with a minimum of 100 IOPS. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
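The gp2 baseline formula is simple enough to check with shell arithmetic (the 16,000 IOPS upper cap is from the EBS documentation, not mentioned above):

```shell
# gp2 baseline IOPS: 3 per GiB, floor of 100, cap of 16000.
gp2_iops() {
    local size_gib=$1
    local iops=$(( size_gib * 3 ))
    (( iops < 100 ))   && iops=100
    (( iops > 16000 )) && iops=16000
    echo "$iops"
}

gp2_iops 20     # prints 100 (small volumes hit the floor)
gp2_iops 100    # prints 300
gp2_iops 300    # prints 900
```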

For most workloads, you will want all of your available IOPS to go to whichever filesystem needs them at the moment. So it makes sense to create one big EBS volume, get all your IOPS in one bucket, and partition/LVM it up.

Example:

Three 100 GB disks with independent filesystems/swap areas. Each disk gets 300 IOPS, so performance is limited to 300 IOPS per disk.

One 300 GB disk, carved into three 100 GB logical volumes. The disk gets 900 IOPS, and any one of the volumes can use all 900 of them.
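The second layout can be sketched like this, assuming the 300 GB EBS volume appears as /dev/xvdf; the volume group and LV names are made up:

```shell
# One big volume; all three LVs share its 900 IOPS.
pvcreate /dev/xvdf
vgcreate data /dev/xvdf

# Use percentages rather than exact sizes, since LVM metadata
# takes a little space and three exact 100G LVs may not fit.
lvcreate -l 33%VG -n app  data
lvcreate -l 33%VG -n logs data
lvcreate -l 33%VG -n db   data

mke2fs -j /dev/data/app   # and likewise for the others
```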


Logical volumes are easy to create on the fly, to resize, and to delete.
The "to LVM or not" question always has the same answer: it depends :)
LVM makes sense if you need flexibility at the disk and partition level.
It doesn't make much sense if you don't need that flexibility or don't plan to take advantage of LVM's other features.


I actually like using LVs because they are not easily accessible from the virtualization host, so the backing storage cannot easily be destroyed or moved by accident.

Other important features of LVs:

  • You can take snapshots
  • You can analyze disk I/O per LV (iostat)
  • They are easy to resize
  • Using snapshots, you can make a consistent clone of a running system
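A minimal snapshot-based clone of a running VM's disk, done on the host; the VG, LV, and target path are example names:

```shell
# The 5G is copy-on-write space for changes made while the snapshot exists;
# the snapshot itself is frozen at creation time.
lvcreate -s -L 5G -n vm1-snap /dev/vg0/vm1-disk

# Copy the consistent, frozen image somewhere safe.
dd if=/dev/vg0/vm1-snap of=/backup/vm1.img bs=4M

# Drop the snapshot when done, before its COW space fills up.
lvremove -f /dev/vg0/vm1-snap
```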

To reduce complexity I use an LV as a whole disk (not as a partition). The drawback is that I can only easily resize the last partition of the "disk", but my standard VM disk layout takes that into account (the last partition holds the important application data).
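On the host side, that setup might look like this; the VG name, VM name, and sizes are examples, and virt-install is just one way to attach an LV as a guest disk:

```shell
# Create the LV that will back the VM's entire disk.
lvcreate -L 20G -n vm1-disk vg0

# Hand the LV to the guest as a whole virtio disk.
virt-install --name vm1 --memory 2048 \
    --disk path=/dev/vg0/vm1-disk,bus=virtio \
    --import
```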