Shrinking an LVM physical volume on top of a degraded mdadm RAID array, adding a spare and rebuilding it

You don't need to shrink the PV or rebuild the array. You just need to create a new array out of the new drives and add it as a new PV (pvcreate + vgextend), then pvmove all of the existing LVs off the old PV, then remove the old PV (vgreduce) and take that drive out of service.
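That route can be sketched as follows. This is a sketch, not a tested recipe: /dev/md1, /dev/sdc and /dev/sdd are illustrative names for the new array and drives (the VG name `system` comes from the pvs output later in this thread); adjust to your layout before running anything.

```shell
# Build a new RAID1 from the new drives (illustrative device names)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Turn it into a PV and add it to the existing volume group
pvcreate /dev/md1
vgextend system /dev/md1

# Migrate every allocated extent off the old PV onto the new one
pvmove /dev/md0 /dev/md1

# Drop the old PV from the VG and retire the old drive
vgreduce system /dev/md0
pvremove /dev/md0
```

All of this can be done with the LVs mounted and in use; pvmove is the only long-running step.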


It's pvmove, not lvmove.

pvmove --alloc=anywhere /dev/md0:89600-102950 /dev/md0:0-12070

That should move any extents within the 89600-102950 range into the 0-12070 range. According to the data you posted, that should leave your LVs relocated to the beginning of your PV.


ATTENTION: THIS GUIDE IS FAR FROM OPTIMAL. CHECK THE ACCEPTED ANSWER

Okay, I've figured out how to do what I was trying to do. This will serve as a kind of tutorial.

At the time, I hadn't yet realized that these LV manipulations are actually possible with the filesystems mounted, so I booted into a live Linux distro (SystemRescueCD). People here explained to me that this isn't necessary if you're not touching the actual filesystems and are just relocating LVs and shrinking the PV.

So this guide will definitely get you where you want, but not in an efficient way, because it works against the very nature of LVM: the ability to do things live.

  1. Due to the non-contiguous layout of the logical volumes on my physical volume, I somehow had to move them to the beginning of the physical volume. The pvmove command, as suggested by @frostschutz, can move LVs within a PV:

    root@wheezy:/home/a# pvmove --alloc=anywhere /dev/md0:89600-102950 /dev/md0:0-12070
    /dev/md0: Moved: 100.0%
    
    root@wheezy:/home/a# pvs -v --segments /dev/md0
        Using physical volume(s) on command line
      PV         VG     Fmt  Attr PSize   PFree   Start SSize  LV   Start Type   PE Ranges          
      /dev/md0   system lvm2 a--  465.53g 418.38g     0     38 boot     0 linear /dev/md0:0-37      
      /dev/md0   system lvm2 a--  465.53g 418.38g    38    512 root     0 linear /dev/md0:38-549    
      /dev/md0   system lvm2 a--  465.53g 418.38g   550   5120 usr      0 linear /dev/md0:550-5669  
      /dev/md0   system lvm2 a--  465.53g 418.38g  5670   2560 tmp      0 linear /dev/md0:5670-8229 
      /dev/md0   system lvm2 a--  465.53g 418.38g  8230   3840 var      0 linear /dev/md0:8230-12069
      /dev/md0   system lvm2 a--  465.53g 418.38g 12070 107106          0 free  
    
  2. Now the PV is ready for shrinking to the SSD's size (80 GB). 80 gigabytes is actually 80000000000 bytes:

    root@wheezy:/home/a# pvresize --setphysicalvolumesize 80000000000B /dev/md0
      Physical volume "/dev/md0" changed
      1 physical volume(s) resized / 0 physical volume(s) not resized
    
    root@wheezy:/home/a# pvs
      PV         VG     Fmt  Attr PSize  PFree 
      /dev/md0   system lvm2 a--  74.50g 27.36g
    
  3. After this, I can resize the array itself. There are no filesystems at this level, so I end up with just a single mdadm --grow command, which can actually be used to shrink arrays too. The size should be given in kibibytes, so it is 80000000000 / 1024 = 78125000:

    root@wheezy:/home/a# mdadm --grow --size=78125000 /dev/md0
    mdadm: component size of /dev/md0 has been set to 78125000K
    
    root@wheezy:/home/a# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Thu Dec  4 12:20:22 2014
         Raid Level : raid1
         Array Size : 78125000 (74.51 GiB 80.00 GB)
      Used Dev Size : 78125000 (74.51 GiB 80.00 GB)
       Raid Devices : 2
      Total Devices : 1
        Persistence : Superblock is persistent
    
        Update Time : Thu Dec  4 17:56:53 2014
              State : clean, degraded 
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0
    
               Name : wheezy:0  (local to host wheezy)
               UUID : 44ea4079:b3b837d3:b9bb2ca1:1b95272a
             Events : 60
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       0        0        1      removed
    
  4. Now it's time to add the existing SSD to the array and let it rebuild:

    root@wheezy:/home/a# mdadm --add /dev/md0 /dev/sdc
    mdadm: added /dev/sdc
    
    root@wheezy:/home/a# cat /proc/mdstat 
    Personalities : [raid1] 
    md0 : active raid1 sdc[2] sdb[0]
          78125000 blocks super 1.2 [2/1] [U_]
          [>....................]  recovery =  1.3% (1081920/78125000)         finish=11.8min speed=108192K/sec
    
    unused devices: <none>
    

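As a quick sanity check on the size arithmetic in step 3 (mdadm's --size takes kibibytes, so 80 GB has to be divided by 1024):

```shell
# mdadm --grow --size= expects kibibytes; verify 80 GB converts correctly
TARGET_BYTES=80000000000
TARGET_KIB=$(( TARGET_BYTES / 1024 ))
echo "$TARGET_KIB"   # prints 78125000
```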
After rebuilding, I have a healthy array. Its members can be swapped around, and GRUB installation can be routinely performed (after booting into the production system) with grub-install /dev/sdc.
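Before swapping members or reinstalling GRUB, it's worth making sure the recovery has actually finished. A minimal sketch, assuming the same /dev/md0 and /dev/sdc as above:

```shell
# Block until any resync/recovery on the array has completed
mdadm --wait /dev/md0

# Both members should now be active: look for [UU] rather than [U_]
grep --after-context=1 '^md0' /proc/mdstat

# Then, from the booted production system, install GRUB on the new member
grub-install /dev/sdc
```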

Tags: debian, lvm, mdadm