LVM Performance overhead?

Solution 1:

LVM is fairly lightweight for just normal volumes (without snapshots, for example). It's really just a lookup in a fairly small table saying that block X is actually block Y on device Z. I've never done any benchmarking, but I've never noticed any performance difference between LVM and just using the raw device. It adds a little CPU overhead to each disk I/O, so I really wouldn't expect much difference.
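You can see that table for yourself through device-mapper. A minimal sketch (vg0/lv0 is a hypothetical volume group/logical volume; substitute your own names):

# Print the device-mapper table behind an LVM logical volume.
dmsetup table /dev/mapper/vg0-lv0
# A plain (non-snapshot) LV typically maps to a single "linear" target, e.g.:
#   0 209715200 linear 8:16 2048
# i.e. sectors 0..209715199 of the LV sit on device 8:16 (/dev/sdb)
# starting at sector 2048 -- exactly the small table lookup described above.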

My gut reaction is that the reason there are no benchmarks is that there just isn't that much overhead in LVM.

The convenience of LVM, and being able to slice and dice and add more drives, IMHO, far outweighs what little (if any) performance difference there may be.
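For what it's worth, that slice-and-dice flexibility looks like this in practice (a minimal sketch; /dev/sdc and the vg0/lv0 names are hypothetical):

pvcreate /dev/sdc                       # initialize the newly added drive as a physical volume
vgextend vg0 /dev/sdc                   # add it to an existing volume group
lvextend -r -l +100%FREE /dev/vg0/lv0   # grow the LV and (-r) resize the filesystem on it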

Solution 2:

I am installing a 48T Dell MD1200 and was curious about this question. The MD1200 is connected to a hardware RAID card configured as RAID-6, so to Linux it looks like just one (big) drive. I tested an XFS filesystem on an LVM logical volume vs. an XFS filesystem on a straight disk partition. I used a Dell R630 machine with two E5-2699 CPUs. The system was set to its Performance profile; whatever energy-saving features I could find in the BIOS were turned off.

I installed CentOS 6.7 on it. Kernel is 2.6.32-573.el6.x86_64 (sorry for the oldie kernel but that's what I need for production). LVM is version 2.02.118.

I let CentOS create an XFS partition during the build. It is 1T in size. Then I created another 1T partition on the disk and created a logical volume:

vgcreate vol_grp1 /dev/sdb1                 # create a volume group on the new partition
lvcreate -l 100%FREE -n lv_vol1 vol_grp1    # one logical volume using all of the VG
mkfs.xfs /dev/vol_grp1/lv_vol1              # put XFS on the logical volume

My XFS-only filesystem was called /data_xfs. The LVM-backed XFS filesystem was called /data_lvm. I tested using bonnie++ v 1.03e.
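For completeness, mounting the LVM-backed filesystem would have looked something like this (the mount point comes from the text above; the exact commands are my assumption):

mkdir -p /data_lvm                          # create the mount point
mount /dev/vol_grp1/lv_vol1 /data_lvm       # mount the LVM-backed XFS filesystem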

The commands were:

bonnie++ -u 0:0 -d /FILESYSTEM -s 400G -n 0 -m xfsspeedtest -f -b

where /FILESYSTEM was either /data_xfs or /data_lvm. Results are summarized as follows:

Test                        XFS on Partition        XFS on LVM
Sequential Output, Block    1467995 K/s, 94% CPU    1459880 K/s, 95% CPU
Sequential Output, Rewrite   457527 K/s, 33% CPU     443076 K/s, 33% CPU
Sequential Input, Block      899382 K/s, 35% CPU     922884 K/s, 32% CPU
Random Seeks                  415.0 /sec              411.9 /sec

Results seemed comparable in my view. In the Sequential Input test, LVM actually seemed to perform a little better.


Solution 3:

There is a short paper, published in 2015 by Borislav Djordjevic and Valentina Timcenko, that tested a few 7200 RPM 80 GB Western Digital drives with ext3 on Linux kernel 2.6.27, using the PostMark benchmark, which 'simulates loading an internet mail server'. They note that past research relying on bonnie or dd tests alone had produced varied results.

Their tests suggest a performance drop of 15% to 45% with LVM compared to not using it, and an even bigger drop when two physical partitions are combined within one LVM setup. They concluded that the biggest performance impacts came from the use of LVM itself and from the complexity of its configuration.

https://www.researchgate.net/publication/284897601_LVM_in_the_Linux_environment_Performance_examination
http://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=216661


Solution 4:

With a snapshot active, LVM performs ... badly.

Take a look here for an in-depth benchmark.
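The reason is that classic (non-thin) LVM snapshots are copy-on-write: while a snapshot exists, the first write to any block of the origin LV forces the old data to be copied into the snapshot area first, turning one write into a read plus two writes. A minimal sketch (the snapshot name and 10G size are arbitrary; the VG/LV names reuse Solution 2's):

# Create a classic copy-on-write snapshot of an existing LV.
lvcreate -s -L 10G -n lv_vol1_snap /dev/vol_grp1/lv_vol1

# While it exists, first writes to origin blocks pay the COW penalty.
# Drop the snapshot to restore normal write performance:
lvremove /dev/vol_grp1/lv_vol1_snap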


Solution 5:

There is an excellent (albeit old) whitepaper about LVM and its overhead, written by a SUSE guy, here. It shows some (simple) benchmarks and explains the tech behind LVM. Good read.