Tip

Troubleshooting Logical Volume Manager boot problems

If Logical Volume Manager (LVM) fails to initialize correctly at boot, your logical volumes are inaccessible and you can't manage server disk space. In this LVM how-to, we offer some simple steps to troubleshoot boot issues with LVM.

Logical volumes give you flexible control over server disk space. When problems occur with logical volumes, however, they are harder to fix than problems with normal partitions, and troubleshooting volume management in Logical Volume Manager (LVM) can be equally difficult. In this article, you'll learn how to fix LVM problems that occur during startup.

When your server boots, it normally scans for LVM volumes automatically. It does so by executing the pvscan command from the startup scripts, regardless of which Linux distribution you use. If something isn't working properly, however, pvscan fails and you'll have to initialize LVM yourself. Once you understand how LVM works, this task isn't too onerous.
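
If you do need to initialize LVM by hand, for example from a rescue shell, the following sequence is a minimal sketch of the manual steps. The commands are standard LVM tools, but note that on most systems it is vgchange -a y, rather than the scan commands alone, that actually activates the volume groups:

    root@mel:~# pvscan          # find the physical volumes
    root@mel:~# vgscan          # find the volume groups
    root@mel:~# vgchange -a y   # activate all volume groups
    root@mel:~# lvscan          # the logical volumes should now be listed as ACTIVE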

LVM's bottom layer consists of physical volumes: storage devices (whole disks, partitions or RAID devices) that are marked as usable by LVM. Not every storage device is a physical volume, though; a device needs to be initialized with the pvcreate command before LVM can use it. When your server boots, it uses the pvscan command to find the physical volumes that exist on your storage devices.
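
As a quick sketch, initializing a device as a physical volume and verifying the result might look like this; /dev/sdb1 is a hypothetical device name, so substitute the device you actually want to hand over to LVM:

    root@mel:~# pvcreate /dev/sdb1   # mark the device as an LVM physical volume
    root@mel:~# pvs                  # short summary of all physical volumes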

LVM's second layer consists of the volume groups. A volume group is a collection of one or more physical volumes from which logical volumes can be created. During the configuration of LVM, one or more volume groups are normally created using the vgcreate command. When booting, your server uses the vgscan command to find the volume groups so that they can be activated.
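
A minimal example of creating a volume group, again using the hypothetical device /dev/sdb1 and the volume group name system that appears in the listings below:

    root@mel:~# vgcreate system /dev/sdb1   # build a volume group named "system"
    root@mel:~# vgs                         # short summary of all volume groups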

From the volume group, logical volumes are created. These are the devices on which file systems are created. You use lvcreate to create them, and at boot time lvscan is used to scan for them. Figure 1 gives an overview of the LVM setup.


Figure 1: Overview of LVM setup
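
To illustrate the top layer, creating a logical volume and putting a file system on it could look like the following sketch. The volume name data and the 10 GB size are only examples:

    root@mel:~# lvcreate -n data -L 10G system   # 10 GB logical volume "data" in volume group "system"
    root@mel:~# mkfs.ext3 /dev/system/data       # create a file system on the new volume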

Before you begin troubleshooting, inspect your volumes with the lvdisplay command. If all is well, lvdisplay will give an overview of your logical volumes. If it isn't, it will tell you, "No volume groups found." If this message is returned, you must check the LVM setup from the physical volumes up to the logical volumes to see what might be wrong. Use these steps to help with your troubleshooting:

  1. If your LVM structure has never worked, start by checking the storage devices themselves. If you've added partitions to the LVM setup, each partition should be marked as partition type 8e. You can check whether this is the case with fdisk -l /dev/sda. If the partition isn't set to type 8e, use fdisk /dev/sda to open fdisk on your server's hard drive, type t, followed by the number of the partition whose type you want to change, and then enter 8e. Save the settings and reboot (see the example after these steps). If the problem was a wrong partition type, your LVM volumes will be accessible after this fix.
  2. If your volumes are still not accessible, use the pvdisplay command to see whether the storage devices are marked as LVM physical volumes. If they are not, but you believe you set them up as LVM devices previously, use pvscan /dev/sda. If this doesn't display your physical volumes, use pvcreate /dev/sda to set up your storage device as an LVM device. Listing 1 shows the result that pvdisplay and pvscan would normally return.

    Listing 1: Use pvscan to initialize existing physical volumes.

    root@mel:~# pvscan /dev/md0
      PV /dev/md0   VG system   lvm2 [912.69 GB / 10.69 GB free]
      Total: 1 [912.69 GB] / in use: 1 [912.69 GB] / in no VG: 0 [0   ]
    root@mel:~# pvdisplay
      --- Physical volume ---
      PV Name               /dev/md0
      VG Name               system
      PV Size               912.69 GB / not usable 1.69 MB
      Allocatable           yes
      PE Size (KByte)       4096
      Total PE              233648
      Free PE               2736
      Allocated PE          230912
      PV UUID               Z0qNiT-ZWH3-Yqfh-8jmi-jdW7-pNR4-IY6JW1
  3. The next step is to repeat the preceding actions for the volume groups on your server. First use the vgdisplay command to see your current volume groups. If that doesn't give you a result, use vgscan to tell the server to scan for volume groups on your storage devices. Listing 2 shows the result of these commands:

    Listing 2: To initialize volume groups, use vgscan and vgdisplay.

    root@mel:~# vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "system" using metadata type lvm2
    root@mel:~# vgdisplay
      --- Volume group ---
      VG Name                     system
      System ID
      Format                      lvm2
      Metadata Areas              1
      Metadata Sequence No        6
      VG Access                   read/write
      VG Status                   resizable
      MAX LV                      0
      Cur LV                      5
      Open LV                     5
      Max PV                      0
      Cur PV                      1
      Act PV                      1
      VG Size                     912.69 GB
      PE Size                     4.00 MB
      Total PE                    233648
      Alloc PE / Size             230912 / 902.00 GB
      Free  PE / Size             2736 / 10.69 GB
      VG UUID                     9VeHJR-nkCX-2Ofg-3BUq-l52H-WqFW-3B2Sw7
  4. Now that both the physical volumes and volume groups are available, you may still have to scan your logical volumes. First, however, use lvdisplay to see whether they have been activated automatically. The command sequence repeats itself: Use lvscan to scan for available volumes and lvdisplay to see whether they are listed. Listing 3 shows you the result of these two commands:

    Listing 3: Use lvscan and lvdisplay to initialize your logical volumes

    root@mel:~# lvscan
      ACTIVE            '/dev/system/root' [100.00 GB] inherit
      ACTIVE            '/dev/system/swap' [2.00 GB] inherit
      ACTIVE            '/dev/system/var' [100.00 GB] inherit
      ACTIVE            '/dev/system/srv' [100.00 GB] inherit
      ACTIVE            '/dev/system/clonezilla' [600.00 GB] inherit
    root@mel:~# lvdisplay
      --- Logical volume ---
      LV Name                /dev/system/root
      VG Name                system
      LV UUID                C2QCPB-vtTJ-E3QN-hoZE-dfZE-cBiZ-zzO6mN
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                100.00 GB
      Current LE             25600
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:0
      --- Logical volume ---
      LV Name                /dev/system/swap
      VG Name                system
      LV UUID                1NY8gw-TZgt-9Xxp-6FnA-2HEa-HUmv-tnqnI5
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                2.00 GB
      Current LE             512
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:1
      --- Logical volume ---
      LV Name                /dev/system/var
      VG Name                system
      LV UUID                0yzvpN-U1uC-3Hra-7iOn-Sljz-pweh-1J8FsO
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                100.00 GB
      Current LE             25600
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:2
      --- Logical volume ---
      LV Name                /dev/system/srv
      VG Name                system
      LV UUID                zUwbXR-7T1T-2yAJ-34Ri-FiFf-Wruc-ql5QtS
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                100.00 GB
      Current LE             25600
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:3
      --- Logical volume ---
      LV Name                /dev/system/clonezilla
      VG Name                system
      LV UUID                zh1jLm-k3ut-UjwD-fBkh-GArt-HxII-i5342d
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                600.00 GB
      Current LE             153600
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:4
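
As mentioned in step 1, you can verify and correct the partition type with fdisk. The commands below are only a sketch; /dev/sda and the partition number are examples that you should replace with the disk and partition that actually hold your LVM setup:

    root@mel:~# fdisk -l /dev/sda    # the Id column for the LVM partition should read 8e (Linux LVM)
    root@mel:~# fdisk /dev/sda       # if it doesn't, type t, the partition number, 8e and then w to save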

At this point, your logical volumes should be accessible. If they are not, you have to start troubleshooting a problem unrelated to the boot procedure, which is an issue I will address in a later tip.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Van Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.
