Using PVMOVE

LVM is one of the mainstay storage technologies in almost all Linux installations, and while adding a Logical Volume (LV) is common, migrating an LV is a task that many System Administrators baulk at.

Over time, most Linux systems with growing storage needs tend to suffer from ad-hoc storage allocation, which results in new disks being added and a proliferation of ever-increasing disk sizes. As a result, it’s not uncommon on a Linux server to see 10GB, 50GB, 200GB and multi-terabyte disks existing in the same VM. On the face of it this looks OK, but I encountered several VMs with recurring backup time-out issues caused by the large multi-terabyte disks, and needed to do some serious re-organization to solve it. Here is a sample from an old file server:

[root@rowlf]# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  VolGroup00 lvm2 a--u   15.72g       0
  /dev/sda3  VolGroup00 lvm2 a--u  992.00m       0
  /dev/sdb1  VolGroup01 lvm2 a--u  100.00g       0
  /dev/sdb2  VolGroup01 lvm2 a--u   50.00g       0
  /dev/sdb3  VolGroup01 lvm2 a--u   50.00g       0
  /dev/sdc1  VolGroup01 lvm2 a--u  200.00g  60.01g
  /dev/sdd1  VolGroup00 lvm2 a--u  199.97g 179.97g
  /dev/sde1  VolGroup01 lvm2 a--u 1023.99g 713.99g
  /dev/sde2  VolGroup01 lvm2 a--u    1.00t 936.99g
[root@rowlf]#

On one particular Linux VM, a 2TB disk actually held only 280GB of live data, all of it old, with no access or writes within the last two years. In cases like this, the “pvmove” tool is a robust and safe way to move logical volume (LV) data off one disk and onto another.
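
In its simplest form, pvmove just needs a source PV and will spread the extents across whatever free space exists elsewhere in the same Volume Group; naming a destination PV as well gives you control over where the data lands. A minimal sketch of the two common invocations, using this article’s device names purely for illustration:

# Move everything off /dev/sdf to any free space in its Volume Group
[root]# pvmove /dev/sdf

# Move everything off /dev/sdf onto /dev/sdd specifically
[root]# pvmove /dev/sdf /dev/sdd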

First, the “pvs” tool

On this particular system I wanted to start removing disks, and resizing disks down where possible, for a migration onto a new VxRail stretched cluster that used vSAN storage, so the smaller the VM the better. The “pvs” tool allowed me to see which physical volumes were in use and how much space they had.

[root]# pvs
PV        VG      Fmt  Attr  PSize    PFree
/dev/sda2 ol      lvm2 a--   <24.00g  0
/dev/sdb  vg_data lvm2 a--   <500.00g 190.00g
/dev/sdc  vg_data lvm2 a--   <500.00g 50.00g
/dev/sdd  vg_data lvm2 a--   <500.00g 110.00g
/dev/sdf  vg_data lvm2 a--   <200.00g 0
[root]#
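
Incidentally, if you’d rather see usage directly than subtract PFree from PSize in your head, pvs can report extra fields; a quick sketch (field names come from the lvm report options, and pvs -o help lists what your version supports):

# Append a "Used" column to the default pvs output
[root]# pvs -o +pv_used

# Or select exactly the columns of interest
[root]# pvs -o pv_name,vg_name,pv_size,pv_free,pv_used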

Looking at the pvs output showed that disk /dev/sdd had free space that could take what was on the /dev/sdf disk, so I was happy to fold the smaller disk into the bigger one and then see where the system stood. The command below shows the pvmove in action: -i 5 prints a progress update every 5 seconds, -n lv_dmf restricts the move to extents belonging to that LV, and the data moves from /dev/sdf onto /dev/sdd.

[root]# pvmove -i 5 -n lv_dmf /dev/sdf /dev/sdd
/dev/sdf: Moved: 0.02%
/dev/sdf: Moved: 0.20%
/dev/sdf: Moved: 0.36%
/dev/sdf: Moved: 0.54%
/dev/sdf: Moved: 0.73%
/dev/sdf: Moved: 0.93%
/dev/sdf: Moved: 1.14%
/dev/sdf: Moved: 1.37%
/dev/sdf: Moved: 1.59%
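
If a big move makes you nervous, you can rehearse it first. A minimal sketch, assuming your LVM build has the standard common options described in pvmove(8):

# Dry run: -t/--test goes through the motions without updating metadata
[root]# pvmove -t -i 5 -n lv_dmf /dev/sdf /dev/sdd

# --atomic makes the whole move succeed or revert as one unit
[root]# pvmove --atomic -i 5 -n lv_dmf /dev/sdf /dev/sdd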

What if I interrupt it?

pvmove works in the background: once started, the actual copying is done by the device-mapper layer in the kernel and the command merely reports progress, so the move keeps ticking over even if the command itself is interrupted. Re-running the same command line will yield something like this:

[root]# pvmove -i 5 -n lv_dmf /dev/sdf /dev/sdd
Detected pvmove in progress for /dev/sdf.
WARNING: Ignoring remaining command line arguments.
/dev/sdf: Moved: 18.93%
/dev/sdf: Moved: 19.16%
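
If you want to stop the move rather than resume it, pvmove also has an abort option; a short sketch (in the default non-atomic mode, extents that have already been copied stay on the destination, so check lvs afterwards to see where things landed):

# Halt any in-progress pvmove operations
[root]# pvmove --abort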

After the pvmove is done, a pvs shows the status of the Physical Volumes:

/dev/sdf: Moved: 98.85%
/dev/sdf: Moved: 99.13%
/dev/sdf: Moved: 99.36%
/dev/sdf: Moved: 99.58%
/dev/sdf: Moved: 99.82%
/dev/sdf: Moved: 100.00%
[root]# pvs
  PV        VG      Fmt  Attr PSize    PFree
  /dev/sda2 ol      lvm2 a--   <24.00g        0
  /dev/sdb  vg_data lvm2 a--  <500.00g  190.00g
  /dev/sdc  vg_data lvm2 a--  <500.00g   50.00g
  /dev/sdd  vg_data lvm2 a--  <500.00g  110.00g
  /dev/sdf  vg_data lvm2 a--  <200.00g <200.00g   <- NOW EMPTY!
[root]#

The next step would be to use “vgreduce” and “pvremove” to take the now-empty Physical Volume out of the Volume Group and clear its LVM label, after which the disk can be detached from the VM.
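
A sketch of that clean-up for this example, once pvs confirms the PV really is empty (PFree equal to PSize):

# Drop the now-empty PV from the Volume Group
[root]# vgreduce vg_data /dev/sdf

# Wipe the LVM label so the disk can be safely detached or reused
[root]# pvremove /dev/sdf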

What PVs does an LV use?

To view which disks an LV is hosted on, use the following (note the “pvmove0” device in the output while the pvmove is in progress):

[root]# lvs -o +devices --segments
  LV     VG      Attr       #Str Type   SSize    Devices
  root   ol      -wi-ao----    1 linear  <21.50g /dev/sda2(640)
  swap   ol      -wi-ao----    1 linear    2.50g /dev/sda2(0)
  lv_dmf vg_data -wI-ao----    1 linear  349.98g /dev/sdc(38403)
  lv_dmf vg_data -wI-ao----    1 linear   50.00g /dev/sdc(0)
  lv_dmf vg_data -wI-ao----    1 linear <310.00g /dev/sdb(48640)
  lv_dmf vg_data -wI-ao----    1 linear <200.00g pvmove0(0)
  lv_dmf vg_data -wI-ao----    1 linear   50.00g /dev/sdc(12801)
  lv_dmf vg_data -wI-ao----    1 linear <190.00g /dev/sdd(0)
[root]#

You can also narrow it down to a specific LV using:

[root]# lvs -o +devices --segments /dev/vg_data/lv_dmf
  LV     VG      Attr       #Str Type   SSize    Devices
  lv_dmf vg_data -wI-ao----    1 linear  349.98g /dev/sdc(38403)
  lv_dmf vg_data -wI-ao----    1 linear   50.00g /dev/sdc(0)
  lv_dmf vg_data -wI-ao----    1 linear <310.00g /dev/sdb(48640)
  lv_dmf vg_data -wI-ao----    1 linear <200.00g pvmove0(0)
  lv_dmf vg_data -wI-ao----    1 linear   50.00g /dev/sdc(12801)
  lv_dmf vg_data -wI-ao----    1 linear <190.00g /dev/sdd(0)
[root]#
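
The same mapping can also be read from the other direction, which is handy when deciding which disk to empty next; a quick sketch:

# Show the physical segment map of a PV: which LVs occupy which extents
[root]# pvdisplay -m /dev/sdf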

Enjoy!

-o0o-
