Advanced Storage Management Guide for Ubuntu Server 22.04
Effective storage management is crucial for server performance and data integrity. This comprehensive guide covers everything from basic disk management to advanced configurations like LVM, RAID, and ZFS on Ubuntu Server 22.04.
Prerequisites
- Ubuntu Server 22.04 LTS
- Root or sudo access
- Available storage devices
- Basic understanding of Linux file systems
Disk Management Fundamentals
View Disk Information
# List all block devices
lsblk
lsblk -f # With filesystem info
# Detailed disk information
sudo fdisk -l
sudo parted -l
# Disk usage by partition
df -h
df -i # Inode usage
# Check disk health (smartctl is provided by the smartmontools package)
sudo apt install smartmontools -y
sudo smartctl -a /dev/sda
sudo smartctl -H /dev/sda # Health summary
Partition Management
Using fdisk (MBR)
sudo fdisk /dev/sdb
# Common commands:
# n - new partition
# d - delete partition
# p - print partition table
# w - write changes
# q - quit without saving
Using gdisk (GPT)
sudo gdisk /dev/sdb
# Similar commands to fdisk
# Required for disks larger than 2 TB, which need a GPT partition table
Using parted
sudo parted /dev/sdb
# Interactive mode commands
(parted) mklabel gpt
(parted) mkpart primary ext4 0% 100%
(parted) print
(parted) quit
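parted can also run non-interactively, which is useful in provisioning scripts. A minimal sketch, assuming /dev/sdb is a blank disk you intend to wipe:
# Create a GPT label and a single full-disk partition in one command
sudo parted -s /dev/sdb -- mklabel gpt mkpart primary ext4 0% 100%
# Re-read the partition table and confirm
sudo partprobe /dev/sdb
lsblk /dev/sdb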
File System Creation and Management
Create File Systems
# ext4 (default for most use cases)
sudo mkfs.ext4 /dev/sdb1
sudo mkfs.ext4 -L "DataDisk" /dev/sdb1 # With label
# XFS (good for large files)
sudo mkfs.xfs /dev/sdb1
sudo mkfs.xfs -L "MediaStorage" /dev/sdb1
# Btrfs (advanced features)
sudo mkfs.btrfs /dev/sdb1
sudo mkfs.btrfs -L "Snapshots" /dev/sdb1
# NTFS (for Windows compatibility; mkfs.ntfs comes from the ntfs-3g package)
sudo apt install ntfs-3g -y
sudo mkfs.ntfs -Q /dev/sdb1
Mount File Systems
# Create mount point
sudo mkdir -p /mnt/data
# Temporary mount
sudo mount /dev/sdb1 /mnt/data
# Mount with options (noatime already implies nodiratime on modern kernels)
sudo mount -o rw,noatime /dev/sdb1 /mnt/data
# Unmount
sudo umount /mnt/data
Persistent Mounting (fstab)
# Get UUID
sudo blkid /dev/sdb1
# Edit fstab
sudo nano /etc/fstab
Add entry:
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=123e4567-e89b-12d3-a456-426614174000 /mnt/data ext4 defaults 0 2
UUID=234e5678-f90c-23e4-b567-537625285111 /mnt/backup xfs defaults,noatime 0 2
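Before rebooting, validate the new entries; a bad fstab line can drop the system into emergency mode. A quick check:
# Verify fstab syntax, then try mounting everything listed in it
sudo findmnt --verify
sudo mount -a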
LVM (Logical Volume Manager)
LVM Setup
Install LVM
sudo apt install lvm2 -y
Create Physical Volumes
# Initialize disks for LVM
sudo pvcreate /dev/sdb /dev/sdc
# Display physical volumes
sudo pvdisplay
sudo pvs # Summary view
Create Volume Groups
# Create volume group
sudo vgcreate vg_data /dev/sdb /dev/sdc
# Display volume groups
sudo vgdisplay
sudo vgs # Summary view
# Extend volume group
sudo vgextend vg_data /dev/sdd
Create Logical Volumes
# Create logical volume with specific size
sudo lvcreate -L 100G -n lv_database vg_data
# Create logical volume using percentage
sudo lvcreate -l 50%VG -n lv_backup vg_data
# Create logical volume using all remaining space
sudo lvcreate -l 100%FREE -n lv_media vg_data
# Display logical volumes
sudo lvdisplay
sudo lvs # Summary view
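A new logical volume can then be formatted and mounted like any block device; for example, using the lv_database volume created above (with /mnt/database as an example mount point):
sudo mkfs.ext4 /dev/vg_data/lv_database
sudo mkdir -p /mnt/database
sudo mount /dev/vg_data/lv_database /mnt/database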
LVM Management
Extend Logical Volume
# Extend by specific size
sudo lvextend -L +50G /dev/vg_data/lv_database
# Extend to specific size
sudo lvextend -L 200G /dev/vg_data/lv_database
# Extend using all available space
sudo lvextend -l +100%FREE /dev/vg_data/lv_database
# Resize filesystem (ext4)
sudo resize2fs /dev/vg_data/lv_database
# Resize filesystem (XFS; pass the mount point, not the device)
sudo xfs_growfs /mnt/database
Reduce Logical Volume (ext4 only)
# Unmount first
sudo umount /mnt/database
# Check filesystem
sudo e2fsck -f /dev/vg_data/lv_database
# Resize filesystem
sudo resize2fs /dev/vg_data/lv_database 150G
# Reduce logical volume
sudo lvreduce -L 150G /dev/vg_data/lv_database
# Alternatively, let lvreduce shrink the filesystem and LV together in one safer step:
# sudo lvreduce -r -L 150G /dev/vg_data/lv_database
LVM Snapshots
# Create snapshot
sudo lvcreate -L 10G -s -n snap_database /dev/vg_data/lv_database
# Mount snapshot
sudo mount /dev/vg_data/snap_database /mnt/snapshot
# Remove snapshot
sudo lvremove /dev/vg_data/snap_database
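A snapshot can also be merged back into its origin to roll the volume to the state captured at snapshot time. A sketch; the merge starts once the origin is deactivated, so unmount it first:
sudo umount /mnt/database
sudo lvconvert --merge /dev/vg_data/snap_database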
RAID Configuration
Software RAID with mdadm
Install mdadm
sudo apt install mdadm -y
Create RAID Arrays
RAID 0 (Striping)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
RAID 1 (Mirroring)
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
RAID 5 (Striping with Parity)
sudo mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
RAID 6 (Striping with Double Parity)
sudo mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
RAID 10 (Mirrored Stripes)
sudo mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
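Once created, an array behaves like a normal block device even while the initial sync runs in the background. For example, with /dev/md0 (/mnt/raid is just an example mount point):
# Watch the initial sync progress
watch cat /proc/mdstat
# Create a filesystem and mount the array
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid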
RAID Management
# Check RAID status
sudo mdadm --detail /dev/md0
cat /proc/mdstat
# Add spare disk
sudo mdadm --add /dev/md1 /dev/sde
# Remove a faulty disk (it must be marked failed before it can be removed)
sudo mdadm --fail /dev/md1 /dev/sdb
sudo mdadm --remove /dev/md1 /dev/sdb
# Replace a disk: fail and remove the old one, then add the new one
sudo mdadm --fail /dev/md1 /dev/sdb
sudo mdadm --remove /dev/md1 /dev/sdb
sudo mdadm --add /dev/md1 /dev/sdf
# Save RAID configuration
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
RAID Monitoring Script
sudo nano /usr/local/bin/raid-monitor.sh
#!/bin/bash
# RAID monitoring script
LOG_FILE="/var/log/raid-monitor.log"
EMAIL="admin@example.com"

# Function to log messages
log_message() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Check RAID status
check_raid() {
    local raid_device=$1
    # Capture the full state string (e.g. "clean" or "clean, degraded")
    local status
    status=$(mdadm --detail "$raid_device" | sed -n 's/^ *State : *//p')
    if [ "$status" != "clean" ] && [ "$status" != "active" ]; then
        log_message "WARNING: RAID $raid_device is in '$status' state"
        echo "RAID $raid_device is in '$status' state" | mail -s "RAID Alert on $(hostname)" "$EMAIL"
        return 1
    fi
    # Check for failed disks
    local failed
    failed=$(mdadm --detail "$raid_device" | grep "Failed Devices :" | awk '{print $4}')
    if [ "$failed" -gt 0 ]; then
        log_message "ERROR: RAID $raid_device has $failed failed devices"
        mdadm --detail "$raid_device" | mail -s "RAID Failure on $(hostname)" "$EMAIL"
        return 1
    fi
    return 0
}

# Check every md array present on the system
log_message "Starting RAID monitoring"
for raid in /dev/md*; do
    if [ -b "$raid" ]; then
        check_raid "$raid"
    fi
done
log_message "RAID monitoring completed"
sudo chmod +x /usr/local/bin/raid-monitor.sh
sudo crontab -e
# Add: */10 * * * * /usr/local/bin/raid-monitor.sh
ZFS File System
Install ZFS
sudo apt install zfsutils-linux -y
Create ZFS Pools
Basic Pool
# Single disk pool
sudo zpool create datapool /dev/sdb
# Mirror pool (RAID 1)
sudo zpool create datapool mirror /dev/sdb /dev/sdc
# RAIDZ1 pool (RAID 5)
sudo zpool create datapool raidz1 /dev/sdb /dev/sdc /dev/sdd
# RAIDZ2 pool (RAID 6)
sudo zpool create datapool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
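For pools that should survive device reordering, it is safer to reference disks by their stable /dev/disk/by-id names rather than /dev/sdX, which can change across reboots. The IDs below are placeholders for whatever the listing shows on your system:
ls -l /dev/disk/by-id/
# Sketch with hypothetical IDs:
sudo zpool create datapool mirror \
/dev/disk/by-id/ata-DISK_SERIAL_1 \
/dev/disk/by-id/ata-DISK_SERIAL_2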
Advanced Pool Configuration
# Pool with cache and log devices
sudo zpool create datapool mirror /dev/sdb /dev/sdc \
cache /dev/nvme0n1 \
log mirror /dev/nvme1n1 /dev/nvme2n1
# Pool with multiple vdevs
sudo zpool create datapool \
mirror /dev/sdb /dev/sdc \
mirror /dev/sdd /dev/sde
ZFS Dataset Management
# Create dataset
sudo zfs create datapool/databases
sudo zfs create datapool/backups
# Set properties
sudo zfs set compression=lz4 datapool/databases
sudo zfs set atime=off datapool/databases
sudo zfs set quota=100G datapool/databases
sudo zfs set recordsize=16K datapool/databases # For databases
# List datasets
sudo zfs list
sudo zfs list -o name,used,avail,refer,mountpoint
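Pool health should be checked regularly, and periodic scrubs let ZFS detect and repair silent corruption using its checksums. For example:
# Check pool health and any accumulated errors
sudo zpool status datapool
# Start a scrub (monthly is a common cadence)
sudo zpool scrub datapool
# Example root crontab entry for a monthly scrub:
# 0 3 1 * * /usr/sbin/zpool scrub datapool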
ZFS Snapshots
# Create snapshot
sudo zfs snapshot datapool/databases@$(date +%Y%m%d_%H%M%S)
# List snapshots
sudo zfs list -t snapshot
# Rollback to snapshot
sudo zfs rollback datapool/databases@20240123_120000
# Clone snapshot
sudo zfs clone datapool/databases@20240123_120000 datapool/databases_clone
# Send/Receive snapshots
sudo zfs send datapool/databases@20240123_120000 | sudo zfs recv backuppool/databases
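After the first full send, subsequent transfers can be incremental, shipping only the blocks changed between two snapshots. A sketch using the snapshot name from above plus a hypothetical newer one:
sudo zfs snapshot datapool/databases@20240124_120000
sudo zfs send -i datapool/databases@20240123_120000 datapool/databases@20240124_120000 | sudo zfs recv backuppool/databases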
ZFS Automated Snapshot Script
sudo nano /usr/local/bin/zfs-auto-snapshot.sh
#!/bin/bash
# ZFS automatic snapshot script (run as root)

# Configuration
POOLS=("datapool" "backuppool")
RETENTION_DAILY=7
RETENTION_WEEKLY=4
RETENTION_MONTHLY=12

# Function to create snapshots
create_snapshot() {
    local dataset=$1
    local type=$2
    local timestamp
    timestamp=$(date +%Y%m%d_%H%M%S)
    local snapshot_name="${dataset}@auto_${type}_${timestamp}"
    zfs snapshot "$snapshot_name"
    echo "Created snapshot: $snapshot_name"
}

# Function to clean up old snapshots, keeping the newest $retention of each type
cleanup_snapshots() {
    local dataset=$1
    local type=$2
    local retention=$3
    # Snapshots sorted oldest-first; head -n -N drops the newest N,
    # leaving only the expired ones to destroy
    snapshots=$(zfs list -t snapshot -o name -s creation | grep "^${dataset}@auto_${type}_" | head -n "-${retention}")
    for snapshot in $snapshots; do
        zfs destroy "$snapshot"
        echo "Destroyed old snapshot: $snapshot"
    done
}

# Main script
for pool in "${POOLS[@]}"; do
    # Get all datasets in the pool (zfs list excludes snapshots by default)
    datasets=$(zfs list -H -o name -r "$pool")
    for dataset in $datasets; do
        # Daily snapshots
        create_snapshot "$dataset" "daily"
        cleanup_snapshots "$dataset" "daily" "$RETENTION_DAILY"
        # Weekly snapshots (on Sunday)
        if [ "$(date +%u)" = "7" ]; then
            create_snapshot "$dataset" "weekly"
            cleanup_snapshots "$dataset" "weekly" "$RETENTION_WEEKLY"
        fi
        # Monthly snapshots (on the 1st)
        if [ "$(date +%d)" = "01" ]; then
            create_snapshot "$dataset" "monthly"
            cleanup_snapshots "$dataset" "monthly" "$RETENTION_MONTHLY"
        fi
    done
done
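Make the script executable and schedule it from root's crontab so the zfs commands run with sufficient privileges; the 2 a.m. daily schedule below is just an example:
sudo chmod +x /usr/local/bin/zfs-auto-snapshot.sh
sudo crontab -e
# Add: 0 2 * * * /usr/local/bin/zfs-auto-snapshot.sh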
Disk Encryption
LUKS Encryption
Setup Encrypted Disk
# Install cryptsetup
sudo apt install cryptsetup -y
# Create encrypted partition
sudo cryptsetup luksFormat /dev/sdb1
# Open encrypted partition
sudo cryptsetup luksOpen /dev/sdb1 encrypted_disk
# Create filesystem
sudo mkfs.ext4 /dev/mapper/encrypted_disk
# Mount encrypted filesystem
sudo mkdir /mnt/encrypted
sudo mount /dev/mapper/encrypted_disk /mnt/encrypted
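When finished, unmount and close the mapping. It is also wise to back up the LUKS header to a safe, offline location, since a damaged header makes the data unrecoverable even with the passphrase:
sudo umount /mnt/encrypted
sudo cryptsetup luksClose encrypted_disk
# Back up the LUKS header
sudo cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file /root/sdb1-luks-header.img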
Automated Mounting with Key File
# Generate key file
sudo dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
sudo chmod 600 /root/keyfile
# Add key to LUKS
sudo cryptsetup luksAddKey /dev/sdb1 /root/keyfile
# Create crypttab entry
echo "encrypted_disk /dev/sdb1 /root/keyfile luks" | sudo tee -a /etc/crypttab
# Add to fstab
echo "/dev/mapper/encrypted_disk /mnt/encrypted ext4 defaults 0 2" | sudo tee -a /etc/fstab
Encrypt Existing Data
# Create backup (CRITICAL!)
sudo rsync -av /data/ /backup/
# Unmount partition
sudo umount /data
# Encrypt in place (DANGEROUS - ensure backup!)
sudo cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/sdb1
Storage Performance Optimization
I/O Scheduler Configuration
# Check current scheduler (Ubuntu 22.04 uses the multiqueue schedulers:
# none, mq-deadline, bfq, kyber)
cat /sys/block/sda/queue/scheduler
# Change scheduler (temporary)
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
# Permanent configuration via udev rule (the legacy elevator= kernel
# parameter no longer works on 5.x kernels)
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="mq-deadline"' | sudo tee /etc/udev/rules.d/60-ioscheduler.rules
Mount Options Optimization
# Performance-oriented mount options
sudo nano /etc/fstab
# SSD optimizations (Ubuntu enables the periodic fstrim.timer by default,
# which is generally preferred over continuous discard)
UUID=xxx /data ext4 defaults,noatime,discard 0 2
# Database optimizations (note: the nobarrier option was removed from XFS
# in modern kernels and will prevent mounting on 22.04)
UUID=xxx /var/lib/mysql xfs defaults,noatime 0 2
# General performance
UUID=xxx /backup ext4 defaults,noatime,commit=60 0 2
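After editing fstab, the new options can be applied and verified without a reboot (assuming the /data mount from the example above):
sudo mount -o remount /data
findmnt -no OPTIONS /data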
Storage Benchmarking
# Install benchmarking tools
sudo apt install fio hdparm -y
# Quick read test
sudo hdparm -tT /dev/sda
# Comprehensive random-write test
# WARNING: writing directly to a raw device destroys any data on it;
# point --filename at a scratch disk only
sudo fio --name=random-write --ioengine=libaio --iodepth=4 \
--rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 \
--runtime=60 --group_reporting --filename=/dev/sdb
# Sequential read/write test
sudo fio --name=seq-test --ioengine=libaio --iodepth=16 \
--rw=rw --bs=1M --direct=1 --size=1G --numjobs=1 \
--runtime=60 --group_reporting --filename=/mnt/test
Storage Monitoring
Disk Usage Monitoring Script
sudo nano /usr/local/bin/storage-monitor.sh
#!/bin/bash
# Storage monitoring script (run as root)

# Configuration
THRESHOLD=80
EMAIL="admin@example.com"
LOG_FILE="/var/log/storage-monitor.log"

# Function to log messages
log_message() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Check disk usage
check_disk_usage() {
    df -H | grep -vE '^Filesystem|tmpfs|cdrom|udev' | awk '{ print $5 " " $1 " " $6 }' | while read -r output; do
        usage=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
        partition=$(echo "$output" | awk '{ print $2 }')
        mountpoint=$(echo "$output" | awk '{ print $3 }')
        if [[ "$usage" =~ ^[0-9]+$ ]] && [ "$usage" -ge "$THRESHOLD" ]; then
            message="WARNING: Partition $partition mounted on $mountpoint is ${usage}% full"
            log_message "$message"
            echo "$message" | mail -s "Disk Space Alert on $(hostname)" "$EMAIL"
        fi
    done
}

# Check inode usage
check_inode_usage() {
    df -i | grep -vE '^Filesystem|tmpfs|cdrom|udev' | awk '{ print $5 " " $1 " " $6 }' | while read -r output; do
        usage=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
        if [[ "$usage" =~ ^[0-9]+$ ]] && [ "$usage" -ge "$THRESHOLD" ]; then
            partition=$(echo "$output" | awk '{ print $2 }')
            mountpoint=$(echo "$output" | awk '{ print $3 }')
            message="WARNING: Inode usage on $partition mounted on $mountpoint is ${usage}%"
            log_message "$message"
            echo "$message" | mail -s "Inode Usage Alert on $(hostname)" "$EMAIL"
        fi
    done
}

# Check SMART status
check_smart_status() {
    for disk in /dev/sd[a-z]; do
        if [ -b "$disk" ]; then
            smart_status=$(smartctl -H "$disk" | grep "SMART overall-health" | awk '{print $NF}')
            if [ "$smart_status" != "PASSED" ]; then
                message="ERROR: SMART status for $disk is $smart_status"
                log_message "$message"
                smartctl -a "$disk" | mail -s "SMART Alert on $(hostname)" "$EMAIL"
            fi
        fi
    done
}

# Main monitoring
log_message "Starting storage monitoring"
check_disk_usage
check_inode_usage
check_smart_status
log_message "Storage monitoring completed"
Best Practices
- Regular Backups: Always backup before major storage operations
- Monitor Health: Use SMART monitoring and regular checks
- Plan Capacity: Monitor trends and plan for growth
- Use Appropriate RAID: Choose RAID level based on needs
- Test Recovery: Regularly test backup and recovery procedures
- Document Layout: Keep detailed documentation of storage configuration
- Performance Testing: Benchmark before and after changes
Troubleshooting
Filesystem Corruption
# Check and repair ext4
sudo umount /dev/sdb1
sudo fsck.ext4 -f /dev/sdb1
# Check and repair XFS
sudo umount /dev/sdb1
sudo xfs_repair /dev/sdb1
# Force a check on the next boot (the old /forcefsck flag file is ignored
# on systemd-based Ubuntu; add fsck.mode=force to the kernel command line,
# or for ext filesystems force a check after every mount:)
sudo tune2fs -c 1 /dev/sdb1
Recover Lost Partition
# Install the recovery tools first
sudo apt install testdisk gpart -y
# Scan for lost partitions (interactive)
sudo testdisk /dev/sdb
# Guess a lost partition table from filesystem signatures
sudo gpart /dev/sdb
LVM Recovery
# Backup LVM metadata
sudo vgcfgbackup
# Restore LVM metadata
sudo vgcfgrestore vg_data
# Scan for volume groups
sudo vgscan
sudo vgchange -ay
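To restore metadata from a specific point in time rather than the latest backup, list the archived versions and pass one explicitly; a sketch (the archive filename below is a hypothetical example of what the listing produces):
# List available metadata backups and archives for the VG
sudo vgcfgrestore --list vg_data
# Restore a specific archive file (path taken from the listing)
sudo vgcfgrestore -f /etc/lvm/archive/vg_data_00001-1234567890.vg vg_data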
Conclusion
This comprehensive guide covered advanced storage management on Ubuntu Server 22.04, from basic disk operations to complex configurations like LVM, RAID, and ZFS. Proper storage management ensures data integrity, optimal performance, and reliable disaster recovery. Remember to always test configurations in a non-production environment and maintain regular backups.