Debian Server Backup & Recovery: Complete Strategy Guide
A comprehensive backup and recovery strategy is essential for protecting your Debian servers against data loss, hardware failures, and disasters. This guide covers backup tools, strategies, automation, and tested recovery procedures.
Backup Strategy Fundamentals
Types of Backups
- Full Backup: A complete copy of all protected data
- Incremental Backup: Only changes since the last backup of any type (see the tar sketch below)
- Differential Backup: All changes since the last full backup
- Snapshot Backup: A point-in-time view of the system state
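As a concrete illustration of the first two types, GNU tar's --listed-incremental mode uses a snapshot file to track what has already been backed up; a minimal sketch, assuming /data is the directory being protected:
# Full backup: tar records file state in the snapshot file /backup/data.snar
tar --create --listed-incremental=/backup/data.snar \
    --file=/backup/data-full.tar /data
# Incremental backup: reusing the same snapshot file, tar stores
# only files added or changed since the previous run
tar --create --listed-incremental=/backup/data.snar \
    --file=/backup/data-incr1.tar /data
To restore, extract the full archive first, then replay each incremental in order.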
3-2-1 Backup Rule
- 3 copies of important data
- 2 different storage media types
- 1 offsite backup copy
Essential Backup Tools Installation
# Install backup utilities
sudo apt update
sudo apt install -y \
rsync rsnapshot \
borgbackup duplicity \
bacula-client \
restic rdiff-backup \
tar gzip bzip2 xz-utils \
cifs-utils nfs-common \
s3fs awscli \
rclone
File System Backup with Rsync
Basic Rsync Backup
# Local backup
rsync -avzP --delete /source/ /backup/destination/
# Remote backup over SSH
rsync -avzP -e ssh /local/path/ user@remote:/backup/path/
# Backup with exclusions
rsync -avzP \
--exclude='*.tmp' \
--exclude='/var/log/*' \
--exclude='/tmp/*' \
/source/ /backup/
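Because --delete removes destination files that no longer exist at the source, it is worth previewing a sync before running it for real; the -n (--dry-run) flag lists what would be transferred or deleted without touching anything:
# Preview: -n performs a dry run, printing planned transfers and deletions
rsync -avzPn --delete /source/ /backup/destination/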
Advanced Rsync Script
sudo nano /usr/local/bin/rsync-backup.sh
#!/bin/bash
# Advanced Rsync Backup Script
set -o pipefail  # without this, the tee pipelines below would report tee's exit status and mask rsync failures
# Configuration
SOURCE_DIRS="/etc /home /var/www /opt"
BACKUP_DIR="/backup/system"
REMOTE_HOST="backup.server.com"
REMOTE_USER="backup"
REMOTE_DIR="/backups/$(hostname)"
LOG_FILE="/var/log/rsync-backup.log"
EXCLUDE_FILE="/etc/rsync-excludes.txt"
# Create exclude file if not exists
if [ ! -f "$EXCLUDE_FILE" ]; then
cat > "$EXCLUDE_FILE" << EOF
*.tmp
*.temp
*.swp
*.cache
/proc/*
/sys/*
/dev/*
/run/*
/tmp/*
/var/tmp/*
/var/log/*.log
/var/cache/*
EOF
fi
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Start backup
log_message "Starting backup process"
# Local backup first
for dir in $SOURCE_DIRS; do
if [ -d "$dir" ]; then
log_message "Backing up $dir"
rsync -avz --delete \
--exclude-from="$EXCLUDE_FILE" \
"$dir" "$BACKUP_DIR/" \
2>&1 | tee -a "$LOG_FILE"
fi
done
# Remote backup
log_message "Starting remote sync to $REMOTE_HOST"
rsync -avz --delete \
-e "ssh -i /home/$REMOTE_USER/.ssh/id_rsa" \
"$BACKUP_DIR/" \
"$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/" \
2>&1 | tee -a "$LOG_FILE"
# Check exit status
if [ $? -eq 0 ]; then
log_message "Backup completed successfully"
else
log_message "Backup failed with errors"
exit 1
fi
# Cleanup old logs
find /var/log -name "rsync-backup.log*" -mtime +30 -delete
log_message "Backup process finished"
Make executable and schedule:
sudo chmod +x /usr/local/bin/rsync-backup.sh
sudo crontab -e
# Add: 0 2 * * * /usr/local/bin/rsync-backup.sh
Snapshot Backups with Rsnapshot
Configure Rsnapshot
sudo nano /etc/rsnapshot.conf
Key configuration (note: rsnapshot.conf fields must be separated by tabs, not spaces):
# Snapshot root directory
snapshot_root /backup/snapshots/
# External program dependencies
cmd_cp /bin/cp
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_logger /usr/bin/logger
# Backup intervals
retain hourly 24
retain daily 7
retain weekly 4
retain monthly 12
# Global options
verbose 2
loglevel 3
logfile /var/log/rsnapshot.log
lockfile /var/run/rsnapshot.pid
# Default rsync args
rsync_short_args -a
rsync_long_args --delete --numeric-ids --relative --delete-excluded
# Backup points
backup /etc/ localhost/
backup /home/ localhost/
backup /var/www/ localhost/
backup /var/mail/ localhost/
# Raw copies of live database files may be inconsistent; prefer the logical dumps covered under Database Backup Strategies
backup /var/lib/mysql/ localhost/
# Remote backups
backup root@remote:/etc/ remote-server/
# Exclude patterns
exclude /var/log/
exclude /var/cache/
exclude /tmp/
exclude *.tmp
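rsnapshot ships a built-in syntax checker; run it after every edit, since a single space where a tab is required will break the configuration:
# Validate rsnapshot.conf (catches tab/space and path errors)
sudo rsnapshot configtest
# Dry-run an interval to preview the rsync commands it would execute
sudo rsnapshot -t daily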
Schedule Rsnapshot
sudo crontab -e
Add these entries:
# Rsnapshot backup schedule
0 */4 * * * /usr/bin/rsnapshot hourly
30 3 * * * /usr/bin/rsnapshot daily
0 3 * * 1 /usr/bin/rsnapshot weekly
30 2 1 * * /usr/bin/rsnapshot monthly
Database Backup Strategies
MySQL/MariaDB Backup
sudo nano /usr/local/bin/mysql-backup.sh
#!/bin/bash
# MySQL Backup Script
# Configuration
MYSQL_USER="backup"
MYSQL_PASS="backup_password"
BACKUP_DIR="/backup/mysql"
RETENTION_DAYS=7
DATE=$(date +%Y%m%d_%H%M%S)
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Get list of databases
DATABASES=$(mysql -u$MYSQL_USER -p$MYSQL_PASS -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
# Backup each database
for db in $DATABASES; do
echo "Backing up database: $db"
# Dump database
mysqldump -u$MYSQL_USER -p$MYSQL_PASS \
--single-transaction \
--routines \
--triggers \
--events \
--add-drop-database \
--databases $db | gzip > "$BACKUP_DIR/${db}_${DATE}.sql.gz"
done
# Backup all databases in one file
mysqldump -u$MYSQL_USER -p$MYSQL_PASS \
--all-databases \
--single-transaction \
--routines \
--triggers \
--events | gzip > "$BACKUP_DIR/all_databases_${DATE}.sql.gz"
# Remove old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "MySQL backup completed"
PostgreSQL Backup
sudo nano /usr/local/bin/postgresql-backup.sh
#!/bin/bash
# PostgreSQL Backup Script
# Configuration
BACKUP_DIR="/backup/postgresql"
RETENTION_DAYS=7
DATE=$(date +%Y%m%d_%H%M%S)
PGUSER="postgres"
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Get list of databases
DATABASES=$(sudo -u postgres psql -t -c "SELECT datname FROM pg_database WHERE datistemplate = false;")
# Backup each database
for db in $DATABASES; do
db=$(echo $db | tr -d ' ')
echo "Backing up database: $db"
# Dump database
sudo -u postgres pg_dump \
--format=custom \
--verbose \
--file="$BACKUP_DIR/${db}_${DATE}.dump" \
"$db"
done
# Backup all databases
sudo -u postgres pg_dumpall | gzip > "$BACKUP_DIR/all_databases_${DATE}.sql.gz"
# Remove old backups
find "$BACKUP_DIR" -name "*.dump" -o -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "PostgreSQL backup completed"
Incremental Backups with BorgBackup
Initialize Borg Repository
# Local repository
borg init --encryption=repokey /backup/borg-repo
# Remote repository
borg init --encryption=repokey ssh://user@backup.server:22/~/borg-repo
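With repokey encryption the key is stored inside the repository itself, so a corrupted or lost repository also takes the key with it; export a copy and keep it away from the backup media:
# Export the repository key to a file (store the copy off the backup media)
borg key export /backup/borg-repo /root/borg-repo.key
# Or produce a printable paper backup of the key
borg key export --paper /backup/borg-repo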
Borg Backup Script
sudo nano /usr/local/bin/borg-backup.sh
#!/bin/bash
# BorgBackup Script
# Configuration
export BORG_REPO="/backup/borg-repo"
export BORG_PASSPHRASE="your-secure-passphrase"
# Backup name
BACKUP_NAME="$(hostname)-$(date +%Y%m%d_%H%M%S)"
# Create backup
echo "Starting backup: $BACKUP_NAME"
borg create \
--verbose \
--filter AME \
--list \
--stats \
--show-rc \
--compression lz4 \
--exclude-caches \
--exclude '/home/*/.cache/*' \
--exclude '/var/cache/*' \
--exclude '/var/tmp/*' \
::$BACKUP_NAME \
/etc \
/home \
/root \
/var/www \
/var/mail \
/var/lib
# Prune old backups
echo "Pruning old backups"
borg prune \
--list \
--prefix "$(hostname)-" \
--show-rc \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--keep-yearly 1
# Verify repository metadata (run a full "borg check" without --repository-only periodically)
echo "Verifying repository"
borg check \
--repository-only \
--show-rc
# List backups
echo "Current backups:"
borg list
echo "Backup completed"
Cloud Backup Solutions
AWS S3 Backup
# Configure AWS CLI
aws configure
# Create backup script
sudo nano /usr/local/bin/s3-backup.sh
#!/bin/bash
# S3 Backup Script
# Configuration
S3_BUCKET="s3://my-backup-bucket"
LOCAL_DIR="/backup/local"
DATE=$(date +%Y%m%d)
HOSTNAME=$(hostname)
# Sync to S3
echo "Starting S3 backup"
# Backup system files
tar czf - /etc /home /var/www | \
aws s3 cp - "$S3_BUCKET/$HOSTNAME/system-backup-$DATE.tar.gz"
# Sync specific directories
aws s3 sync "$LOCAL_DIR" "$S3_BUCKET/$HOSTNAME/files/" \
--delete \
--exclude "*.tmp" \
--exclude "*.cache"
# Database backups
for file in /backup/mysql/*.sql.gz; do
if [ -f "$file" ]; then
aws s3 cp "$file" "$S3_BUCKET/$HOSTNAME/databases/"
fi
done
# Set lifecycle policy for old backups
aws s3api put-bucket-lifecycle-configuration \
--bucket my-backup-bucket \
--lifecycle-configuration file:///etc/s3-lifecycle.json
echo "S3 backup completed"
Rclone for Multiple Cloud Providers
# Configure rclone
rclone config
# Create multi-cloud backup script
sudo nano /usr/local/bin/cloud-backup.sh
#!/bin/bash
# Multi-Cloud Backup Script using Rclone
# Configuration
SOURCE_DIR="/backup/local"
CLOUDS=("gdrive:Backups" "dropbox:Backups" "onedrive:Backups")
LOG_FILE="/var/log/cloud-backup.log"
# Function to backup to cloud
backup_to_cloud() {
local remote=$1
echo "[$(date)] Starting backup to $remote" >> "$LOG_FILE"
rclone sync "$SOURCE_DIR" "$remote/$(hostname)" \
--verbose \
--log-file="$LOG_FILE" \
--exclude "*.tmp" \
--exclude "*.cache" \
--delete-after \
--fast-list
if [ $? -eq 0 ]; then
echo "[$(date)] Backup to $remote completed successfully" >> "$LOG_FILE"
else
echo "[$(date)] Backup to $remote failed" >> "$LOG_FILE"
fi
}
# Backup to all configured clouds
for cloud in "${CLOUDS[@]}"; do
backup_to_cloud "$cloud" &
done
# Wait for all backups to complete
wait
echo "[$(date)] All cloud backups completed" >> "$LOG_FILE"
System Image Backups
Create System Image with dd
sudo nano /usr/local/bin/system-image-backup.sh
#!/bin/bash
# System Image Backup Script
# Configuration
SOURCE_DISK="/dev/sda"
BACKUP_DIR="/mnt/backup"
IMAGE_NAME="system-image-$(date +%Y%m%d).img"
COMPRESS=true
# Check if backup directory is mounted
if ! mountpoint -q "$BACKUP_DIR"; then
echo "Backup directory not mounted"
exit 1
fi
# Create image
# NOTE: imaging a mounted, read-write disk can yield an inconsistent image;
# run from rescue media or quiesce the filesystems first (see the LVM snapshot sketch below)
echo "Creating system image..."
if [ "$COMPRESS" = true ]; then
dd if="$SOURCE_DISK" bs=4M status=progress | \
gzip -c > "$BACKUP_DIR/$IMAGE_NAME.gz"
else
dd if="$SOURCE_DISK" of="$BACKUP_DIR/$IMAGE_NAME" \
bs=4M status=progress
fi
# Create checksum
echo "Creating checksum..."
sha256sum "$BACKUP_DIR/$IMAGE_NAME"* > "$BACKUP_DIR/$IMAGE_NAME.sha256"
echo "System image backup completed"
Backup Verification and Testing
Automated Backup Verification
sudo nano /usr/local/bin/verify-backups.sh
#!/bin/bash
# Backup Verification Script
# Configuration
BACKUP_DIRS="/backup/system /backup/mysql /backup/snapshots"
ALERT_EMAIL="admin@yourdomain.com"
LOG_FILE="/var/log/backup-verification.log"
TEMP_RESTORE="/tmp/backup-verify"
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Function to verify backup integrity
verify_backup() {
local backup_file=$1
local file_type=$(file -b "$backup_file")
case "$file_type" in
*"gzip compressed"*)
gzip -t "$backup_file" 2>/dev/null
return $?
;;
*"tar archive"*)
tar -tf "$backup_file" >/dev/null 2>&1
return $?
;;
*"SQL"*)
head -n 10 "$backup_file" | grep -q "MySQL dump\|PostgreSQL"
return $?
;;
*)
return 0
;;
esac
}
# Function to test restore
test_restore() {
local backup_file=$1
mkdir -p "$TEMP_RESTORE"
case "$backup_file" in
*.tar.gz)
tar -xzf "$backup_file" -C "$TEMP_RESTORE" 2>/dev/null
;;
*.sql.gz)
gunzip -c "$backup_file" | head -n 1000 >/dev/null
;;
*)
return 0
;;
esac
local result=$?
rm -rf "$TEMP_RESTORE"
return $result
}
# Start verification
log_message "Starting backup verification"
ERRORS=0
# Check backup directories exist and have recent files
for dir in $BACKUP_DIRS; do
if [ ! -d "$dir" ]; then
log_message "ERROR: Backup directory $dir does not exist"
((ERRORS++))
continue
fi
# Check for recent backups (within 24 hours)
recent_files=$(find "$dir" -type f -mtime -1 | wc -l)
if [ "$recent_files" -eq 0 ]; then
log_message "WARNING: No recent backups in $dir"
((ERRORS++))
fi
# Verify each backup file
find "$dir" -type f -mtime -1 | while read -r file; do
log_message "Verifying: $file"
if ! verify_backup "$file"; then
log_message "ERROR: Backup verification failed for $file"
((ERRORS++))
fi
# Random restore test (10% chance)
if [ $((RANDOM % 10)) -eq 0 ]; then
log_message "Testing restore for: $file"
if ! test_restore "$file"; then
log_message "ERROR: Restore test failed for $file"
((ERRORS++))
fi
fi
done < <(find "$dir" -type f -mtime -1)
done
# Check backup sizes
log_message "Checking backup sizes"
du -sh $BACKUP_DIRS | while read -r size dir; do
log_message "Backup size - $dir: $size"
done
# Send alert if errors found
if [ $ERRORS -gt 0 ]; then
log_message "Verification completed with $ERRORS errors"
echo "Backup verification failed with $ERRORS errors. Check $LOG_FILE for details." | \
mail -s "Backup Verification Failed on $(hostname)" "$ALERT_EMAIL"
else
log_message "Verification completed successfully"
fi
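Make the script executable and schedule it to run after the nightly backup jobs have completed:
sudo chmod +x /usr/local/bin/verify-backups.sh
sudo crontab -e
# Add: 0 6 * * * /usr/local/bin/verify-backups.sh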
Disaster Recovery Planning
Create Recovery Documentation
sudo nano /backup/RECOVERY_PROCEDURES.md
# Disaster Recovery Procedures
## Emergency Contacts
- Primary Admin: John Doe - (555) 123-4567
- Secondary Admin: Jane Smith - (555) 987-6543
- Hosting Provider: (555) 555-5555
## Recovery Priority Order
1. Database servers
2. Web servers
3. Application servers
4. File servers
## System Recovery Steps
### 1. Bare Metal Recovery
```bash
# Boot from recovery media
# Restore system image
gunzip -c /backup/system-image.img.gz | dd of=/dev/sda bs=4M status=progress
# Or using backup tool
borg extract /backup/borg-repo::system-backup-20240207
```

### 2. Database Recovery

#### MySQL Recovery
```bash
# The MySQL server must be running to accept the restore
sudo systemctl start mysql
# Restore from backup
gunzip < /backup/mysql/all_databases_20240207.sql.gz | mysql -u root -p
# For specific database
gunzip < /backup/mysql/mydb_20240207.sql.gz | mysql -u root -p mydb
```

#### PostgreSQL Recovery
```bash
# The PostgreSQL server must be running to accept the restore
sudo systemctl start postgresql
# Restore from backup (the backup script writes gzipped dumps)
gunzip -c /backup/postgresql/all_databases_20240207.sql.gz | sudo -u postgres psql
# For custom format
sudo -u postgres pg_restore -d mydb /backup/postgresql/mydb_20240207.dump
```

### 3. File System Recovery
```bash
# Rsync restore
rsync -avz /backup/system/ /
# Rsnapshot restore (snapshots nest under the backup-point name)
rsync -avz /backup/snapshots/daily.0/localhost/ /
# Borg restore
borg extract /backup/borg-repo::backup-20240207
```

### 4. Configuration Recovery
```bash
# Restore /etc
tar -xzf /backup/etc-backup.tar.gz -C /
# Restore specific configs
cp -r /backup/configs/nginx/* /etc/nginx/
cp -r /backup/configs/apache2/* /etc/apache2/
```

## Post-Recovery Checklist
- [ ] Verify system boots correctly
- [ ] Check all services are running
- [ ] Test database connections
- [ ] Verify web applications
- [ ] Check SSL certificates
- [ ] Test email functionality
- [ ] Verify cron jobs
- [ ] Check monitoring systems
- [ ] Update DNS if needed
- [ ] Notify team of recovery status
Automated Recovery Testing
sudo nano /usr/local/bin/dr-test.sh
#!/bin/bash
# Disaster Recovery Test Script
# Configuration
TEST_VM="dr-test-vm"
BACKUP_SOURCE="/backup/borg-repo"
TEST_LOG="/var/log/dr-test.log"
ALERT_EMAIL="admin@yourdomain.com"
# Function to log
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$TEST_LOG"
}
# Start DR test
log_message "Starting disaster recovery test"
# Create test VM (example using libvirt)
if command -v virsh >/dev/null 2>&1; then
# Clone production VM for testing (the source must be shut off before cloning)
virsh shutdown prod-vm --mode acpi
# Wait for the guest to finish powering off
while virsh domstate prod-vm | grep -q running; do sleep 5; done
virt-clone --original prod-vm --name $TEST_VM --auto-clone
virsh start $TEST_VM
# Wait for VM to boot
sleep 60
# Test connectivity
if ping -c 3 $TEST_VM >/dev/null 2>&1; then
log_message "Test VM is responsive"
# Run recovery procedures
ssh root@$TEST_VM "cd /backup && ./restore-all.sh"
# Verify services
ssh root@$TEST_VM "systemctl status nginx mysql"
# Cleanup
virsh destroy $TEST_VM
virsh undefine $TEST_VM --remove-all-storage
log_message "DR test completed successfully"
else
log_message "ERROR: Test VM not responsive"
fi
fi
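Make the script executable and run it on a recurring schedule; a monthly drill matches the best-practice cadence summarized at the end of this guide:
sudo chmod +x /usr/local/bin/dr-test.sh
sudo crontab -e
# Add: 0 4 1 * * /usr/local/bin/dr-test.sh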
Backup Monitoring Dashboard
sudo nano /var/www/html/backup-status.php
<?php
// Backup Status Dashboard
$backup_dirs = [
'/backup/system' => 'System Backups',
'/backup/mysql' => 'MySQL Backups',
'/backup/postgresql' => 'PostgreSQL Backups',
'/backup/borg-repo' => 'Borg Repository'
];
function get_dir_size($dir) {
$size = 0;
foreach (glob(rtrim($dir, '/').'/*', GLOB_NOSORT) as $each) {
$size += is_file($each) ? filesize($each) : get_dir_size($each);
}
return $size;
}
function format_bytes($bytes) {
$units = ['B', 'KB', 'MB', 'GB', 'TB'];
$bytes = max($bytes, 0);
$pow = floor(($bytes ? log($bytes) : 0) / log(1024));
$pow = min($pow, count($units) - 1);
return round($bytes / pow(1024, $pow), 2) . ' ' . $units[$pow];
}
?>
<!DOCTYPE html>
<html>
<head>
<title>Backup Status Dashboard</title>
<style>
body { font-family: Arial, sans-serif; margin: 20px; }
table { border-collapse: collapse; width: 100%; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #4CAF50; color: white; }
.ok { color: green; }
.warning { color: orange; }
.error { color: red; }
</style>
</head>
<body>
<h1>Backup Status Dashboard</h1>
<p>Last updated: <?php echo date('Y-m-d H:i:s'); ?></p>
<table>
<tr>
<th>Backup Type</th>
<th>Location</th>
<th>Size</th>
<th>Last Backup</th>
<th>Status</th>
</tr>
<?php foreach ($backup_dirs as $dir => $name): ?>
<tr>
<td><?php echo $name; ?></td>
<td><?php echo $dir; ?></td>
<td><?php echo format_bytes(get_dir_size($dir)); ?></td>
<td>
<?php
$latest = 0;
foreach (glob($dir . '/*') as $file) {
$mtime = filemtime($file);
if ($mtime > $latest) $latest = $mtime;
}
echo $latest ? date('Y-m-d H:i:s', $latest) : 'Never';
?>
</td>
<td>
<?php
$age = time() - $latest;
if ($age < 86400) {
echo '<span class="ok">OK</span>';
} elseif ($age < 172800) {
echo '<span class="warning">Warning</span>';
} else {
echo '<span class="error">Error</span>';
}
?>
</td>
</tr>
<?php endforeach; ?>
</table>
<h2>Backup Schedule</h2>
<ul>
<li>System Backup: Daily at 2:00 AM</li>
<li>Database Backup: Every 4 hours</li>
<li>Incremental Backup: Hourly</li>
<li>Offsite Sync: Daily at 4:00 AM</li>
</ul>
</body>
</html>
Best Practices Summary
- Test Restores Regularly: Monthly restore drills
- Automate Everything: Reduce human error
- Monitor Backup Status: Alert on failures
- Document Procedures: Keep recovery docs updated
- Encrypt Sensitive Data: Use strong encryption
- Version Control Configs: Track configuration changes
- Offsite Backups: Follow 3-2-1 rule
- Retention Policies: Balance storage vs. recovery needs
Conclusion
A robust backup and recovery strategy is your insurance against data loss and system failures. This guide provides comprehensive coverage of backup tools and techniques for Debian servers. Remember:
- Regular testing is as important as the backups themselves
- Automation reduces errors and ensures consistency
- Documentation speeds recovery during emergencies
- Multiple backup methods provide redundancy
- Monitoring ensures you know about failures immediately
Invest time in setting up proper backups now to save yourself from disaster later.