Overview

Most modern Linux distributions support LVM, an open-source analog to Storage Spaces in the Microsoft world. Many are familiar with the traditional method of partitioning a disk (gparted on Linux or Microsoft diskpart) and formatting that partition with a filesystem (NTFS, ReFS, exFAT, ZFS, XFS, etc.). LVM adds these advantages: on-demand RAID partitioning, flexible volume moving and resizing, snapshots, and abstracted physical volume management.

LVM can be thought of as a software RAID with the baseline of Just a Bunch of Disks (JBOD) in conjunction with federation. Storage space is carved from a set of available physical drives. Redundancy (mirroring) and/or parity-based resiliency are seamless features of this technology.

The application of “thin provisioning” improves resource utilization and capacity planning. Physical hard drives can be removed from the pool, provided that sufficient disk space exists elsewhere in the pool to receive their current contents. Hence, failed or healthy drives can be added or removed with true plug and play if redundancy or parity-based storage has been built into the design.
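As a minimal sketch of both ideas (assuming a volume group named VOLUMEGROUP01 like the one built later in this post; the thin pool, thin volume, and device names are hypothetical):

# Thin provisioning: a 100GB pool backing a 200GB thin volume that only consumes space as data is written
lvcreate --type thin-pool -L 100G -n THINPOOL01 VOLUMEGROUP01
lvcreate -V 200G --thinpool THINPOOL01 -n THINVOL01 VOLUMEGROUP01

# Removing a drive: migrate its extents to the remaining PVs, then drop it from the volume group
pvmove /dev/sdc
vgreduce VOLUMEGROUP01 /dev/sdc
pvremove /dev/sdc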

Theories and semantics aside, let’s dive into a real-world example (use case) as shown in the table below:

Mount               /home/              /backup/
Logical Volumes     SHARE01 (RAID10)    SHARE02 (RAID6)
Volume Groups       VOLUMEGROUP01
Physical Volumes    /dev/sdb   /dev/sdc   /dev/sdd   /dev/sde

The table above represents the three ‘layers’ of volume abstraction and one layer of data access on a Linux system with LVM applied. Items in any of these layers can be added or removed without affecting system uptime. Here is how to deploy this design.

Update: there’s another example of LVM, in practice, detailed in this article: https://blog.kimconnect.com/how-to-add-a-new-disk-as-lvm-volume-to-a-linux-machine-without-rebooting/

Clear a new disk’s partition table by filling the first 512 bytes with zeros. This effectively cleans the Master Boot Record (MBR).

dd if=/dev/zero of=/dev/sdb bs=512 count=1

Explain:

# 'dd'           :low-level utility to copy and convert raw data
# 'if=/dev/zero' :input file /dev/zero, a pseudo-device that produces an endless stream of zero bytes
# 'of=/dev/sdb'  :output file /dev/sdb, the second physical disk
# 'bs=512'       :block size of 512 bytes
# 'count=1'      :copy only 1 block
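Note that zeroing the first 512 bytes only removes an MBR label. As a hedged alternative for disks that may carry a GPT or leftover filesystem signatures, these commands (wipefs from util-linux, sgdisk from the gdisk package) can be used instead:

# Remove all known filesystem, RAID, and partition-table signatures
wipefs -a /dev/sdb

# Or destroy both GPT and MBR data structures on the disk
sgdisk --zap-all /dev/sdb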

Define LVM Physical Volumes

pvcreate /dev/sdb

View existing LVM physical volumes

pvscan -v

Output explain:

# PV :Physical Volume
# VG :Volume Group
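For a shorter or more detailed view of the same physical-volume information, the companion commands below can also be used:

# One-line summary per physical volume
pvs

# Full attributes of a single physical volume
pvdisplay /dev/sdb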

Assign Physical Volume to Volume Group

vgcreate VOLUMEGROUP01 /dev/sdb

Explain:

# Create a volume group named 'VOLUMEGROUP01' on physical volume /dev/sdb

Add the other physical volumes to the same volume group

vgextend VOLUMEGROUP01 /dev/sdc /dev/sdd /dev/sde
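To confirm that all four disks now belong to the volume group, a quick verification could be:

# Expect the '#PV' column to show 4 for VOLUMEGROUP01
vgs VOLUMEGROUP01

# Detailed view, including total and free physical extents
vgdisplay VOLUMEGROUP01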

You can create, resize, and remove RAID10/RAID6 volumes in LVM, where data is striped across an array of disks. For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O, as shown in the sketch below.
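For example, a plain (non-redundant) striped volume spreads writes across all four PVs; this is a minimal sketch, and the volume name STRIPED01 and the 64KiB stripe size are illustrative only:

# Stripe across 4 physical volumes with a 64KiB stripe size (no mirroring or parity)
lvcreate -i 4 -I 64 -L 100G -n STRIPED01 VOLUMEGROUP01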

To create a RAID 10 logical volume, use the following form of the lvcreate command

lvcreate --type raid10 -m 1 -i 2 -L 100G -n SHARE01 VOLUMEGROUP01

Explain:

# create type=raid10 mirrors=1 stripes=2 size=100GB named='SHARE01' from the VOLUMEGROUP01 pool (2 stripes x 2 copies = 4 physical volumes)

Format Logical Volume as XFS (Red Hat 7.5 default)

mkfs.xfs /dev/VOLUMEGROUP01/SHARE01

Mount the LV on the /home directory

mount /dev/VOLUMEGROUP01/SHARE01 /home/
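To make this mount persist across reboots, an entry along these lines can be appended to /etc/fstab (a sketch; double-check the device path before rebooting):

# Record the mount so it is re-applied at boot, then validate without rebooting
echo '/dev/VOLUMEGROUP01/SHARE01  /home  xfs  defaults  0 0' >> /etc/fstab
mount -a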

# Check free space on ‘home’ folder

df -h /home/

Explain:

# 'df -h /home/' :disk free, human readable, for directory /home/
# Check for a result such as "/dev/mapper/VOLUMEGROUP01-SHARE01 ... mounted on /home"
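The design table also calls for SHARE02, a RAID6 volume mounted at /backup/. A hedged sketch of the parallel steps follows; note that LVM’s raid6 segment type requires at least five physical volumes (a minimum of three stripes plus two parity devices), so the four-disk volume group above would need an additional PV before this succeeds:

# create type=raid6 stripes=3 size=100GB named='SHARE02' from the VOLUMEGROUP01 pool (3 stripes + 2 parity = 5 physical volumes)
lvcreate --type raid6 -i 3 -L 100G -n SHARE02 VOLUMEGROUP01
mkfs.xfs /dev/VOLUMEGROUP01/SHARE02
mkdir -p /backup
mount /dev/VOLUMEGROUP01/SHARE02 /backup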

Check available space on VOLUMEGROUP01

vgdisplay VOLUMEGROUP01

Expand the ‘home’ volume (SHARE01) by 2GB and resize its filesystem immediately

lvextend -L +2G -r /dev/VOLUMEGROUP01/SHARE01

Extend LV ‘SHARE01’ by 100% of free space on VG ‘VOLUMEGROUP01’

lvextend -l +100%FREE /dev/VOLUMEGROUP01/SHARE01
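Note that, unlike the previous command, this one omits the -r flag, so the XFS filesystem still needs to be grown separately afterwards:

# Grow the mounted XFS filesystem to fill the newly extended logical volume
xfs_growfs /home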

LVM treats M or m as mebibytes; each step up the unit ladder (K, M, G, T) is a multiple of 1024 rather than 1000. This matches how most operating system tools report capacity, even though drive manufacturers advertise sizes in powers of 1000.

Summary

Although the illustration above hasn’t demonstrated some of the more advanced functionality, these are some takeaway notes from an LVM implementation.

The Pros:

1. This is JBOD to the extreme! A mix of hard drives with varying speeds and capacities can be pooled, with striped reads and writes drawing on their aggregated capabilities.

2. Redundancy is a must for mission critical data! LVM provides it seamlessly once mirrored or parity-based RAID volumes are configured.

3. Storage pool tiering (not yet covered in this post) brings these useful features: write-back cache sizing, tier optimization, and file pinning.

The Cons:

1. True RAID 10 cannot be achieved with a two-way mirrored Storage Pool, even though that concept hasn’t been shown in the example above: a two-way mirror distributes slab copies across the pool rather than enforcing the fixed stripe-of-mirrors layout of RAID 10.

2. Parity Storage Pools will significantly affect performance.

3. Combining hardware RAID with Storage Pools (software RAID) will result in slow I/O performance; layering RAID on top of RAID adds overhead without adding benefit.

Practical Example:

# Scope: this documentation is intended for LVM volumes

# If a new disk is added
sudo su # run as root
scsiPath=/sys/class/scsi_host # assuming scsi host path
for host in $(ls $scsiPath); do
    echo $host
    echo "- - -" > $scsiPath/$host/scan # trigger scans
done
 
# If existing disks have been expanded
sudo su
for deviceRescan in $(ls /sys/class/block/sd*/device/rescan); do
    echo 1 > $deviceRescan
done

# Install prerequisites - example on a Red Hat distro
yum -y install cloud-utils-growpart gdisk

# Rescan a device (e.g. 'sda') that has been expanded in VMware, Hyper-V, or KVM
deviceLetter=a
echo 1 > /sys/class/block/sd$deviceLetter/device/rescan

# Verify that the new 120G disk size has been realized on device sda
# Note that the partition to be expanded holds the root ('/') logical volume and its file system is xfs
[root@testlinux]# lsblk -f
NAME                          FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
├─sda1                        vfat              AAAA-AAAA                              /boot/efi
├─sda2                        xfs               AAAAAAAA-024a-4f2f-ab72-cba46447a1f4   /boot
└─sda3                        LVM2_member       AAAAA-ggQU-viCN-fp99-hmY3-VSMP-rHOdSj
  ├─centos_volumegroup1-root  xfs               AAAAAAAA-344b-4c7f-9c6f-d6cdfc926f83   /
  └─centos_volumegroup1-swap  swap              AAAAAAAA-866d-41fe-8621-47070a4b2ba4   [SWAP]

# Checking volume groups before changes
[root@testlinux]# vgs
  VG                  #PV #LV #SN Attr   VSize  VFree
  centos_volumegroup1   1   2   0 wz--n- 70.80g    0

[root@testlinux]# pvscan
  PV /dev/sda3   VG centos_volumegroup1   lvm2 [70.80 GiB / 0    free]
  Total: 1 [70.80 GiB] / in use: 1 [70.80 GiB] / in no VG: 0 [0   ]

# Expand /dev/sda partition 3, then grow the physical volume and the logical volume of the specified volume group
partitionNumber=3
partitionName=root
vgName=centos_volumegroup1
partitionPath=/dev/$vgName/$partitionName
growpart /dev/sda $partitionNumber
pvresize /dev/sda$partitionNumber
ls $partitionPath && lvextend -l +100%FREE $partitionPath

# In the case of xfs file system
xfs_growfs $partitionPath

# In the case of ext4
resize2fs $partitionPath
# Sample Output
[root@testlinux]# growpart /dev/sda $partitionNumber
CHANGED: partition=3 start=2508800 old: size=148486110 end=150994910 new: size=249149406 end=251658206

[root@testlinux]# pvresize /dev/sda$partitionNumber
  Physical volume "/dev/sda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

[root@testlinux]# ls $partitionPath && lvextend -l +100%FREE $partitionPath
/dev/centos_volumegroup1/root
  Size of logical volume centos_volumegroup1/root changed from 66.80 GiB (17101 extents) to 114.80 GiB (29389 extents).
  Logical volume centos_volumegroup1/root successfully resized.

[root@testlinux]# ls $partitionPath && xfs_growfs $partitionPath
/dev/centos_volumegroup1/root
meta-data=/dev/mapper/centos_volumegroup1 isize=512    agcount=8, agsize=2280704 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=17511424, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=4454, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 17511424 to 30094336

[root@testlinux]# df -h
Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs                               7.8G     0  7.8G   0% /dev
tmpfs                                  7.8G     0  7.8G   0% /dev/shm
tmpfs                                  7.8G   17M  7.8G   1% /run
tmpfs                                  7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_volumegroup1-root   115G   60G   56G  52% /
/dev/sda2                             1014M  183M  832M  19% /boot
/dev/sda1                              200M   12M  189M   6% /boot/efi
tmpfs                                  1.6G     0  1.6G   0% /run/user/1000