Storage Management with LVM and Stratis on CentOS/RHEL
Effective storage management is crucial for enterprise environments. This guide covers Logical Volume Manager (LVM) and Stratis, providing flexible and scalable storage solutions for CentOS/RHEL systems.
LVM Fundamentals
LVM Architecture Overview
Understanding LVM components:
# LVM hierarchy:
# Physical Volumes (PV) -> Volume Groups (VG) -> Logical Volumes (LV)
# Check LVM version
lvm version
# Display LVM configuration
lvmconfig --typeconfig default
# View current LVM setup
pvs # Physical volumes
vgs # Volume groups
lvs # Logical volumes
# Detailed view
pvdisplay
vgdisplay
lvdisplay
Creating LVM Storage
Set up basic LVM configuration:
# Prepare disks for LVM (repeat the partitioning for /dev/sdc and /dev/sdd)
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 1MiB 100%
parted /dev/sdb set 1 lvm on
# Create physical volumes
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
# Create volume group
vgcreate vg_data /dev/sdb1 /dev/sdc1
# Create logical volumes
lvcreate -L 100G -n lv_database vg_data
lvcreate -L 50G -n lv_logs vg_data
lvcreate -l 100%FREE -n lv_backup vg_data
# Create filesystems
mkfs.xfs /dev/vg_data/lv_database
mkfs.ext4 /dev/vg_data/lv_logs
mkfs.xfs /dev/vg_data/lv_backup
# Mount logical volumes
mkdir -p /data/{database,logs,backup}
mount /dev/vg_data/lv_database /data/database
mount /dev/vg_data/lv_logs /data/logs
mount /dev/vg_data/lv_backup /data/backup
# Add to fstab
cat >> /etc/fstab <<EOF
/dev/vg_data/lv_database /data/database xfs defaults 0 0
/dev/vg_data/lv_logs /data/logs ext4 defaults 0 0
/dev/vg_data/lv_backup /data/backup xfs defaults 0 0
EOF
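For data volumes that are not required for boot, adding the nofail option keeps a missing or failed LV from dropping the system into emergency mode; a variant of the entries above:

```shell
# /etc/fstab - non-critical volume: boot continues even if it is absent
/dev/vg_data/lv_backup  /data/backup  xfs  defaults,nofail  0 0
```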
Advanced LVM Features
Volume Expansion and Reduction
Resize logical volumes dynamically:
# Extend volume group with the spare PV created earlier
vgextend vg_data /dev/sdd1
# Extend logical volume
lvextend -L +50G /dev/vg_data/lv_database
# Or use percentage
lvextend -l +50%FREE /dev/vg_data/lv_logs
# Resize filesystem
# XFS (can only grow)
xfs_growfs /data/database
# EXT4 (can grow and shrink)
resize2fs /dev/vg_data/lv_logs
# Reduce logical volume (EXT4 only)
umount /data/logs
e2fsck -f /dev/vg_data/lv_logs
resize2fs /dev/vg_data/lv_logs 40G
lvreduce -L 40G /dev/vg_data/lv_logs
mount /dev/vg_data/lv_logs /data/logs
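The separate resize steps above can be collapsed: lvextend and lvreduce both accept -r (--resizefs), which runs the matching filesystem resize tool (and fsck for a shrink) automatically.

```shell
# Grow LV and filesystem in one step (online for xfs and ext4)
lvextend -r -L +50G /dev/vg_data/lv_database
# Shrink an ext4 LV and filesystem together (unmount first)
umount /data/logs
lvreduce -r -L 40G /dev/vg_data/lv_logs
mount /dev/vg_data/lv_logs /data/logs
```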
LVM Snapshots
Create and manage snapshots:
# Create snapshot
lvcreate -L 10G -s -n lv_database_snap /dev/vg_data/lv_database
# Mount snapshot
mkdir /mnt/database_snap
mount -o ro /dev/vg_data/lv_database_snap /mnt/database_snap
# Monitor snapshot usage
lvs -o +snap_percent
# Merge snapshot (rollback)
umount /data/database
lvconvert --merge /dev/vg_data/lv_database_snap
# Remove snapshot
umount /mnt/database_snap
lvremove /dev/vg_data/lv_database_snap
Thin Provisioning
Implement thin provisioned volumes:
# Create thin pool
lvcreate -L 500G --thinpool tp_data vg_data
# Configure thin pool
lvchange --metadataprofile thin-performance vg_data/tp_data
# Create thin volumes
lvcreate -V 200G --thin -n lv_thin_web vg_data/tp_data
lvcreate -V 300G --thin -n lv_thin_app vg_data/tp_data
lvcreate -V 400G --thin -n lv_thin_db vg_data/tp_data
# Monitor thin pool usage
lvs -o +thin_count,data_percent,metadata_percent
# Enable dmeventd monitoring of the thin pool
# (autoextend thresholds come from lvm.conf, not an lvchange flag)
lvchange --monitor y vg_data/tp_data
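Autoextend behavior for thin pools is configured globally in /etc/lvm/lvm.conf; a minimal fragment (the values are examples):

```shell
# /etc/lvm/lvm.conf - extend a thin pool by 20% when it passes 80% full
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
```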
LVM RAID
Creating RAID Volumes
Configure software RAID with LVM:
# Create RAID 1 (mirror)
lvcreate --type raid1 -m 1 -L 100G -n lv_raid1 vg_data
# Create RAID 5
lvcreate --type raid5 -i 3 -L 300G -n lv_raid5 vg_data
# Create RAID 10
lvcreate --type raid10 -m 1 -i 2 -L 200G -n lv_raid10 vg_data
# Convert linear to RAID1
lvconvert --type raid1 -m 1 vg_data/lv_database
# Monitor RAID status
lvs -a -o +sync_percent,raid_mismatch_count
RAID Management
Maintain RAID volumes:
# Replace failed device
lvconvert --repair vg_data/lv_raid1
# Add RAID images
lvconvert -m +1 vg_data/lv_raid1
# Remove RAID images
lvconvert -m -1 vg_data/lv_raid1
# Scrub RAID array
lvchange --syncaction check vg_data/lv_raid5
lvchange --syncaction repair vg_data/lv_raid5
# RAID reshape
lvconvert --stripes 4 vg_data/lv_raid5
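Scrubbing is usually scheduled rather than run by hand; a sample cron entry (the file path and schedule are illustrative):

```shell
# /etc/cron.d/lvm-scrub - monthly RAID consistency check
0 2 1 * * root lvchange --syncaction check vg_data/lv_raid5
```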
LVM Cache and Performance
SSD Caching
Implement cache volumes:
# Create cache pool on SSD
pvcreate /dev/nvme0n1p1
vgextend vg_data /dev/nvme0n1p1
# Create cache data and metadata LVs
lvcreate -L 100G -n lv_cache_data vg_data /dev/nvme0n1p1
lvcreate -L 1G -n lv_cache_meta vg_data /dev/nvme0n1p1
# Create cache pool
lvconvert --type cache-pool --poolmetadata vg_data/lv_cache_meta \
vg_data/lv_cache_data
# Attach cache to existing LV
lvconvert --type cache --cachepool vg_data/lv_cache_data \
vg_data/lv_database
# Configure cache mode
lvchange --cachemode writethrough vg_data/lv_database
lvchange --cachemode writeback vg_data/lv_database
# Monitor cache performance
dmsetup status vg_data-lv_database
lvs -o +cache_used_blocks,cache_dirty_blocks
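Before removing the SSD or reshaping the origin LV, the cache must be detached; --splitcache keeps the cache pool for later reuse, while --uncache deletes it:

```shell
# Detach the cache but keep the cache pool
lvconvert --splitcache vg_data/lv_database
# Detach the cache and delete the cache pool
lvconvert --uncache vg_data/lv_database
```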
Performance Optimization
Tune LVM performance:
# Set read-ahead
lvchange -r 256 /dev/vg_data/lv_database
# Configure I/O scheduler on the underlying disk
# (modern kernels offer none/mq-deadline/bfq; "noop" is legacy)
echo none > /sys/block/sdb/queue/scheduler
# Stripe volumes for performance
lvcreate -i 4 -I 64 -L 400G -n lv_striped vg_data
# Disable write barriers for performance (risky; RHEL 7 and earlier
# only - the nobarrier option was removed in RHEL 8 kernels)
mount -o nobarrier /dev/vg_data/lv_database /data/database
Stratis Storage Management
Stratis Introduction
Modern storage management with Stratis:
# Install Stratis
yum install stratisd stratis-cli
# Enable and start Stratis
systemctl enable --now stratisd
# Check Stratis version
stratis --version
# View Stratis daemon status
systemctl status stratisd
Creating Stratis Pools
Set up Stratis storage:
# Create pool
stratis pool create pool1 /dev/sdb /dev/sdc
# Add devices to pool
stratis pool add-data pool1 /dev/sdd
# List pools
stratis pool list
# Show the block devices backing a pool
stratis blockdev list pool1
# Create filesystems
stratis filesystem create pool1 fs_web
stratis filesystem create pool1 fs_database
stratis filesystem create pool1 fs_logs
# List filesystems
stratis filesystem list
# Mount Stratis filesystems (device links appear under /dev/stratis/<pool>/)
mkdir -p /stratis/{web,database,logs}
mount /dev/stratis/pool1/fs_web /stratis/web
mount /dev/stratis/pool1/fs_database /stratis/database
mount /dev/stratis/pool1/fs_logs /stratis/logs
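For persistent mounts, Stratis filesystems (which are XFS) should wait for the Stratis daemon; a sample fstab entry using the /dev/stratis symlink (mounting by filesystem UUID also works):

```shell
# /etc/fstab - delay the mount until stratisd is up
/dev/stratis/pool1/fs_web  /stratis/web  xfs  defaults,x-systemd.requires=stratisd.service  0 0
```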
Stratis Snapshots
Manage filesystem snapshots:
# Create snapshot
stratis filesystem snapshot pool1 fs_database fs_database_snap
# List snapshots
stratis filesystem list pool1
# Mount snapshot
mkdir /mnt/database_snap
mount /dev/stratis/pool1/fs_database_snap /mnt/database_snap
# Restore from snapshot
umount /stratis/database
stratis filesystem destroy pool1 fs_database
stratis filesystem snapshot pool1 fs_database_snap fs_database
mount /dev/stratis/pool1/fs_database /stratis/database
# Remove snapshot
stratis filesystem destroy pool1 fs_database_snap
Storage Security
Encryption with LVM
Implement encrypted volumes:
# Install cryptsetup
yum install cryptsetup
# Create encrypted physical volume
cryptsetup luksFormat /dev/sde1
cryptsetup luksOpen /dev/sde1 encrypted_pv
# Add to LVM (extending the existing vg_data group)
pvcreate /dev/mapper/encrypted_pv
vgextend vg_data /dev/mapper/encrypted_pv
# Create encrypted logical volume
lvcreate -L 50G -n lv_sensitive vg_data
cryptsetup luksFormat /dev/vg_data/lv_sensitive
cryptsetup luksOpen /dev/vg_data/lv_sensitive sensitive_data
# Create filesystem
mkfs.xfs /dev/mapper/sensitive_data
mount /dev/mapper/sensitive_data /secure
# Auto-mount encrypted volumes
cat >> /etc/crypttab <<EOF
sensitive_data /dev/vg_data/lv_sensitive none
EOF
# Key file setup
dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
chmod 0400 /root/keyfile
cryptsetup luksAddKey /dev/vg_data/lv_sensitive /root/keyfile
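With the key file registered, the crypttab entry can reference it instead of none, so the volume unlocks at boot without a passphrase prompt:

```shell
# /etc/crypttab - unlock automatically with the key file
sensitive_data /dev/vg_data/lv_sensitive /root/keyfile luks
```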
Stratis Encryption
Encrypted Stratis pools:
# Register the encryption key in the kernel keyring first
stratis key set --capture-key my_key
# Create encrypted pool using that key
stratis pool create --key-desc my_key encrypted_pool /dev/sdf
# Create encrypted filesystem
stratis filesystem create encrypted_pool secure_fs
# Mount encrypted filesystem
mount /dev/stratis/encrypted_pool/secure_fs /secure
Disaster Recovery
LVM Backup and Restore
Backup LVM configuration:
# Backup LVM metadata
vgcfgbackup vg_data
# Backup all VG metadata
vgcfgbackup
# View backup files
ls -la /etc/lvm/backup/
# Restore VG metadata
vgcfgrestore vg_data
# Archive management
ls -la /etc/lvm/archive/
vgcfgrestore --list vg_data
# Full LVM backup script
cat > /usr/local/bin/lvm-backup.sh <<'EOF'
#!/bin/bash
BACKUP_DIR="/backup/lvm"
mkdir -p $BACKUP_DIR
# Backup all VG configurations
for vg in $(vgs --noheadings -o vg_name); do
vgcfgbackup -f $BACKUP_DIR/${vg}_$(date +%Y%m%d).vg $vg
done
# Backup partition tables
for disk in $(lsblk -dpno NAME,TYPE | grep disk | awk '{print $1}'); do
sfdisk -d $disk > $BACKUP_DIR/$(basename $disk)_$(date +%Y%m%d).part
done
EOF
chmod +x /usr/local/bin/lvm-backup.sh
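The script is only useful if it runs regularly; a sample cron entry:

```shell
# /etc/cron.d/lvm-backup - nightly LVM metadata backup
30 1 * * * root /usr/local/bin/lvm-backup.sh
```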
Stratis Backup
Backup Stratis configurations:
# Export pool and filesystem information
stratis pool list > /backup/stratis-pools.txt
stratis filesystem list > /backup/stratis-fs.txt
# Full daemon state as JSON
stratis report > /backup/stratis-report.json
# Create backup script
cat > /usr/local/bin/stratis-backup.sh <<'EOF'
#!/bin/bash
BACKUP_DIR="/backup/stratis"
mkdir -p $BACKUP_DIR
# Snapshot all filesystems
for pool in $(stratis pool list | tail -n +2 | awk '{print $1}'); do
for fs in $(stratis filesystem list $pool | tail -n +2 | awk '{print $1}'); do
stratis filesystem snapshot $pool $fs ${fs}_backup
done
done
EOF
chmod +x /usr/local/bin/stratis-backup.sh
Monitoring and Maintenance
Storage Monitoring
Monitor storage health:
# LVM monitoring script
cat > /usr/local/bin/storage-monitor.sh <<'EOF'
#!/bin/bash
echo "=== Storage Health Check ==="
# Check PV health
echo "Physical Volumes:"
pvs -o +pv_missing,pv_allocatable
# Check VG health
echo -e "\nVolume Groups:"
vgs -o +vg_missing_pv_count
# Check thin pool usage (select by LV attribute: thin pools start with "t")
echo -e "\nThin Pools:"
lvs -S 'lv_attr=~^t' -o +data_percent,metadata_percent
# Check snapshot usage
echo -e "\nSnapshots:"
lvs -S 'lv_attr=~^s' -o +snap_percent
# Check RAID health
echo -e "\nRAID Volumes:"
lvs -a -S 'lv_attr=~^r' -o +sync_percent,lv_health_status
# Check Stratis pools
if command -v stratis &> /dev/null; then
echo -e "\nStratis Pools:"
stratis pool list
fi
# Disk space alerts (strip the % sign before the numeric comparison)
df -h | awk 'NR>1 { gsub(/%/, "", $5); if ($5+0 > 80) print "WARNING: " $6 " is " $5 "% full" }'
EOF
chmod +x /usr/local/bin/storage-monitor.sh
Maintenance Tasks
Regular maintenance procedures:
# LVM maintenance
# Check and repair VG metadata
vgck vg_data
# Activate missing volumes
vgchange -ay --activationmode partial
# Remove missing PVs
vgreduce --removemissing vg_data
# Defragment thin pool
lvchange --discards passdown vg_data/tp_data
fstrim -v /data/thin_volume
# Stratis maintenance
# Refresh Stratis cache
systemctl restart stratisd
# Inspect pool state (stratis has no fsck; review the daemon report and logs)
stratis report
journalctl -u stratisd --since today
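Discards can also be scheduled with the systemd timer shipped in util-linux instead of manual fstrim runs:

```shell
# Enable weekly TRIM of all mounted filesystems that support it
systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer
```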
Advanced Configurations
Multipath Storage
Configure multipath with LVM:
# Install multipath tools
yum install device-mapper-multipath
# Configure multipath
mpathconf --enable --with_multipathd y
# Create multipath LVM
pvcreate /dev/mapper/mpatha
vgcreate vg_multipath /dev/mapper/mpatha
# Multipath configuration
cat >> /etc/multipath.conf <<EOF
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
defaults {
user_friendly_names yes
find_multipaths yes
failback immediate
}
EOF
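After restarting multipathd with the new configuration, verify the paths before layering LVM on top:

```shell
systemctl restart multipathd
# Show each mpath device with its path groups and path states
multipath -ll
```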
Cluster Shared Storage
LVM in cluster environments:
# Install lock daemon support (clvmd/lvm2-cluster is deprecated on RHEL 8+)
yum install lvm2-lockd dlm
# Enable lvmlockd in /etc/lvm/lvm.conf: global { use_lvmlockd = 1 }
# Start the lock daemon (requires a running cluster with dlm)
systemctl enable --now lvmlockd
# Create a shared volume group
vgcreate --shared vg_cluster /dev/sdb1
Performance Tuning
I/O Optimization
Optimize storage performance:
# LVM stripe optimization
lvcreate -i 4 -I 256K -L 1T -n lv_performance vg_data
# Cache policy tuning
lvchange --cachepolicy smq vg_data/lv_cached
lvchange --cachesettings migration_threshold=16384 vg_data/lv_cached
# Read-ahead tuning
for lv in $(lvs --noheadings -o lv_path); do
blockdev --setra 4096 $lv
done
# Disable atime updates
mount -o noatime,nodiratime /dev/vg_data/lv_database /data/database
Best Practices
Storage Design Guidelines
- Plan for growth: Size VGs with expansion in mind
- Use meaningful names: Follow naming conventions
- Regular backups: Backup both data and metadata
- Monitor usage: Set up alerts for capacity
- Document layout: Maintain storage documentation
- Test recovery: Regular DR drills
Health Check Script
#!/bin/bash
# Comprehensive storage health check
echo "=== Storage System Audit ==="
date
# Check for missing PVs
failed_pvs=$(pvs --noheadings -o pv_name -S pv_missing=1 | wc -l)
if [ $failed_pvs -gt 0 ]; then
echo "CRITICAL: $failed_pvs missing physical volumes detected"
fi
# Check VG capacity (compute free percent from byte counts)
vgs --noheadings --units b --nosuffix -o vg_name,vg_size,vg_free | while read vg size free; do
free_pct=$(( free * 100 / size ))
if [ $free_pct -lt 10 ]; then
echo "WARNING: Volume group $vg has only ${free_pct}% free space"
fi
done
# Check thin pool usage
lvs -S 'lv_attr=~^t' -o lv_name,data_percent --noheadings | while read pool usage; do
usage_int=${usage%.*}
if [ $usage_int -gt 80 ]; then
echo "WARNING: Thin pool $pool is $usage% full"
fi
done
# Check Stratis status
if systemctl is-active stratisd &>/dev/null; then
echo "Stratis pools:"
stratis pool list
fi
echo "Storage audit complete"
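The percent comparisons above only work once the % sign is stripped before the numeric test; a self-contained demonstration with canned df output (the sample sizes and mount points are hypothetical):

```shell
#!/bin/sh
# Canned df output stands in for real disks so the logic can be verified
df_sample='Filesystem Size Used Avail Use% Mounted
/dev/vg_data/lv_database 100G 90G 10G 90% /data/database
/dev/vg_data/lv_logs 50G 10G 40G 20% /data/logs'

# Strip % from column 5, compare numerically, report mounts over 80%
echo "$df_sample" | awk 'NR>1 { gsub(/%/, "", $5); if ($5+0 > 80) print "WARNING: " $6 " is " $5 "% full" }'
# -> WARNING: /data/database is 90% full
```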
Conclusion
LVM and Stratis provide flexible storage management solutions for CentOS/RHEL systems. LVM offers mature, feature-rich volume management with extensive configurability, while Stratis provides a modern, simplified approach to storage management. Choose the appropriate solution based on your requirements and leverage their advanced features for optimal storage utilization and performance.