Complete Virtualization and Containers Guide for Ubuntu Server 22.04
Virtualization and containerization are fundamental technologies for modern infrastructure. This comprehensive guide covers setting up and managing virtual machines with KVM, containers with Docker and LXD, and orchestration solutions on Ubuntu Server 22.04.
Prerequisites
- Ubuntu Server 22.04 LTS
- CPU with virtualization support (Intel VT-x or AMD-V)
- Minimum 8GB RAM (16GB+ recommended)
- 100GB+ available storage
- Root or sudo access
Virtualization Support Check
Verify Hardware Support
# Check CPU virtualization support (a count of 0 means no VT-x/AMD-V; 1 or more means supported)
grep -Ec '(vmx|svm)' /proc/cpuinfo
# Check if virtualization is enabled
sudo apt install cpu-checker -y
kvm-ok
# Check IOMMU support (for GPU passthrough)
dmesg | grep -e DMAR -e IOMMU
# If kvm-ok reports that virtualization is disabled, enable VT-x/AMD-V in the BIOS/UEFI
KVM/QEMU Virtualization
Install KVM and Tools
# Install KVM and management tools
sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager -y
# Add user to libvirt group
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
# Verify installation
sudo systemctl status libvirtd
virsh list --all
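Group changes take effect on the next login. After re-logging (or picking up the group in the current shell), confirm that libvirt's default NAT network is present and active:
# Pick up the new group in the current shell (or simply log out and back in)
newgrp libvirt
# The "default" NAT network should be listed as active
virsh net-list --all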
Network Configuration for VMs
Create Bridge Network
# Create bridge configuration
sudo nano /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: yes
      parameters:
        stp: false
        forward-delay: 0
# Apply configuration
sudo netplan apply
# Verify bridge
ip addr show br0
Create Custom Network
# Define network
cat > net-default.xml << EOF
<network>
  <name>vmnet</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
EOF
# Create network
virsh net-define net-default.xml
virsh net-start vmnet
virsh net-autostart vmnet
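A guest can then be attached to the new network by name. As a sketch, hot-plugging a NIC into a running VM (the VM name here is illustrative):
# Add a virtio NIC on the vmnet network, both live and in the persistent config
virsh attach-interface ubuntu-vm1 network vmnet --model virtio --live --config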
Create Virtual Machines
Using virt-install
# Download Ubuntu Server ISO
wget https://releases.ubuntu.com/22.04/ubuntu-22.04.3-live-server-amd64.iso
# Create VM with virt-install
sudo virt-install \
  --name ubuntu-vm1 \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/ubuntu-vm1.qcow2,size=20 \
  --os-variant ubuntu22.04 \
  --network bridge=br0 \
  --graphics vnc,listen=0.0.0.0 \
  --console pty,target_type=serial \
  --cdrom /path/to/ubuntu-22.04.3-live-server-amd64.iso
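As an alternative to an interactive ISO install, a pre-built cloud image can be imported directly. A minimal sketch using the official jammy cloud image; it skips cloud-init provisioning, which would normally supply login credentials:
# Download the Ubuntu 22.04 cloud image (pre-installed root filesystem)
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
# Copy it into the libvirt image directory and grow the disk to 20G
sudo qemu-img convert -O qcow2 jammy-server-cloudimg-amd64.img /var/lib/libvirt/images/ubuntu-cloud.qcow2
sudo qemu-img resize /var/lib/libvirt/images/ubuntu-cloud.qcow2 20G
# Import the disk without running an installer
sudo virt-install \
  --name ubuntu-cloud \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/ubuntu-cloud.qcow2 \
  --os-variant ubuntu22.04 \
  --network bridge=br0 \
  --import --noautoconsole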
Create VM with Custom Configuration
# Create disk image
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/custom-vm.qcow2 50G
# Create VM XML configuration
cat > custom-vm.xml << EOF
<domain type='kvm'>
  <name>custom-vm</name>
  <memory unit='KiB'>4194304</memory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/custom-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
  </devices>
</domain>
EOF
# Define and start VM
virsh define custom-vm.xml
virsh start custom-vm
VM Management
Basic Operations
# List VMs
virsh list --all
# Start/Stop/Restart VM
virsh start vm-name
virsh shutdown vm-name
virsh reboot vm-name
virsh destroy vm-name # Force stop
# Connect to VM console
virsh console vm-name
# VNC connection
virsh vncdisplay vm-name
# Edit VM configuration
virsh edit vm-name
# Delete VM
virsh undefine vm-name --remove-all-storage
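Two more commands worth knowing for routine management:
# Start the VM automatically when the host boots
virsh autostart vm-name
# Show state, vCPU count, and memory allocation at a glance
virsh dominfo vm-name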
Snapshots
# Create snapshot
virsh snapshot-create-as vm-name snapshot1 "Before updates"
# List snapshots
virsh snapshot-list vm-name
# Revert to snapshot
virsh snapshot-revert vm-name snapshot1
# Delete snapshot
virsh snapshot-delete vm-name snapshot1
Clone VM
# Clone VM
virt-clone --original vm-name --name vm-clone --file /var/lib/libvirt/images/vm-clone.qcow2
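A clone keeps the original's SSH host keys, machine-id, and DHCP identity. If libguestfs-tools is installed, these can be scrubbed before first boot; a minimal sketch:
# Install the guest tools and reset machine identity on the stopped clone
sudo apt install libguestfs-tools -y
sudo virt-sysprep -d vm-clone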
Performance Tuning
CPU Pinning
# Set CPU affinity
virsh vcpupin vm-name 0 0
virsh vcpupin vm-name 1 1
# Edit VM for permanent pinning
virsh edit vm-name
Add to XML:
<vcpu placement='static' cpuset='0-3'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
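Verify the effective placement at runtime:
# Show per-vCPU state and physical-CPU affinity
virsh vcpuinfo vm-name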
Memory Optimization
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
  <hugepages/>
  <locked/>
</memoryBacking>
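Hugepages must be reserved on the host before the guest can use them. A sketch for the 8 GiB guest above, assuming the default 2 MiB page size (8 GiB = 4096 pages):
# Reserve 4096 x 2 MiB hugepages for the current boot
echo 4096 | sudo tee /proc/sys/vm/nr_hugepages
# Verify the reservation
grep -i hugepages /proc/meminfo
# Persist across reboots (the file name is an illustrative convention)
echo 'vm.nr_hugepages = 4096' | sudo tee /etc/sysctl.d/90-hugepages.conf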
Docker Containerization
Install Docker
# Remove old versions
sudo apt remove docker docker-engine docker.io containerd runc
# Install dependencies
sudo apt update
sudo apt install ca-certificates curl gnupg lsb-release -y
# Add Docker GPG key
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
# Add user to docker group
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker compose version
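Log out and back in so the docker group membership applies, then run a quick end-to-end test:
# Pull and run the hello-world image, removing the container afterwards
docker run --rm hello-world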
Docker Configuration
Configure Docker Daemon
sudo nano /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "dns": ["8.8.8.8", "8.8.4.4"],
  "registry-mirrors": ["https://mirror.gcr.io"],
  "insecure-registries": [],
  "live-restore": true,
  "userland-proxy": false,
  "experimental": false,
  "features": {
    "buildkit": true
  }
}
sudo systemctl restart docker
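Confirm the daemon picked up the new settings; with the configuration above this should print the logging and storage drivers:
# Expected output: json-file / overlay2
docker info --format '{{.LoggingDriver}} / {{.Driver}}'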
Docker Networks
Create Custom Networks
# Create bridge network
docker network create --driver bridge \
  --subnet=172.20.0.0/16 \
  --ip-range=172.20.240.0/20 \
  --gateway=172.20.0.1 \
  app-network
# Create overlay network (for Swarm)
docker network create --driver overlay \
  --subnet=10.0.9.0/24 \
  --attachable \
  multi-host-network
# List networks
docker network ls
# Inspect network
docker network inspect app-network
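Containers attached to a user-defined network can be given fixed addresses from the ip-range, and they resolve each other by name; the container names below are illustrative:
# Run a container on the custom network with a static IP
docker run -d --name web --network app-network --ip 172.20.240.10 nginx:alpine
# Name-based DNS works between containers on the same user-defined network
docker run --rm --network app-network alpine ping -c 2 web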
Docker Compose Example
Create Web Application Stack
mkdir ~/webapp && cd ~/webapp
nano docker-compose.yml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    container_name: webapp-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./html:/usr/share/nginx/html:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - frontend
      - backend

  app:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: webapp-app
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/webapp
      - REDIS_URL=redis://cache:6379
    volumes:
      - ./app:/app
    depends_on:
      - db
      - cache
    networks:
      - backend

  db:
    image: postgres:15-alpine
    container_name: webapp-db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=webapp
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend

  cache:
    image: redis:7-alpine
    container_name: webapp-cache
    command: redis-server --appendonly yes
    volumes:
      - cache-data:/data
    networks:
      - backend

volumes:
  db-data:
  cache-data:

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true
# Start stack
docker compose up -d
# View logs
docker compose logs -f
# Stop stack
docker compose down
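Note that depends_on as written only orders container startup; it does not wait for Postgres to accept connections. A hedged sketch of a healthcheck-based variant for the services above:
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d webapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy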
Docker Security
Run Container as Non-Root
FROM node:16-alpine
RUN addgroup -g 1001 -S appuser && adduser -S appuser -G appuser -u 1001
USER appuser
WORKDIR /app
COPY --chown=appuser:appuser . .
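Privileges can be reduced further at run time; a sketch with an illustrative image name:
# Drop all capabilities, block privilege escalation, and keep the root FS read-only
docker run -d --read-only --cap-drop ALL \
  --security-opt no-new-privileges \
  --tmpfs /tmp \
  myapp:latest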
Security Scanning
# Scan image for vulnerabilities
docker scout cves nginx:latest
# Use Trivy for scanning
sudo apt install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt update
sudo apt install trivy
trivy image nginx:latest
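In CI pipelines, Trivy can gate builds on severity:
# Exit non-zero if HIGH or CRITICAL vulnerabilities are found
trivy image --severity HIGH,CRITICAL --exit-code 1 nginx:latest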
LXD System Containers
Install LXD
# Install LXD
sudo snap install lxd
# Initialize LXD
sudo lxd init
# Add user to lxd group
sudo usermod -aG lxd $USER
LXD Configuration
# Storage pool configuration (skip if lxd init already created a pool named "default")
lxc storage create default zfs size=100GB
# Network configuration (skip if lxd init already created lxdbr0)
lxc network create lxdbr0 ipv4.address=10.0.3.1/24 ipv4.nat=true
# Profile configuration
lxc profile edit default
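Profile keys can also be set non-interactively, which is easier to script:
# Default resource limits for every container using the default profile
lxc profile set default limits.cpu 2
lxc profile set default limits.memory 2GB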
Create and Manage Containers
Launch Containers
# Launch Ubuntu container
lxc launch ubuntu:22.04 mycontainer
# Launch with specific resources
lxc launch ubuntu:22.04 webserver \
  -c limits.cpu=2 \
  -c limits.memory=2GB \
  -c security.nesting=true
# List containers
lxc list
# Container info
lxc info mycontainer
Container Operations
# Start/Stop/Restart
lxc start mycontainer
lxc stop mycontainer
lxc restart mycontainer
# Execute commands
lxc exec mycontainer -- apt update
lxc exec mycontainer -- bash
# File operations
lxc file push local-file mycontainer/path/to/file
lxc file pull mycontainer/path/to/file local-file
# Snapshots
lxc snapshot mycontainer snap1
lxc restore mycontainer snap1
# Clone container
lxc copy mycontainer newcontainer
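To expose a service running inside a container on the host, LXD's proxy device is the usual route; the sketch below assumes a web server listening on port 80 inside the container:
# Forward host port 8080 to port 80 inside the container
lxc config device add mycontainer web proxy \
  listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80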
LXD Clustering
# Initialize cluster on first node
lxd init
# Join cluster from other nodes
lxd init
# Choose to join existing cluster
# View cluster members
lxc cluster list
# Create container on specific node
lxc launch ubuntu:22.04 mycontainer --target node2
Container Orchestration
Docker Swarm
Initialize Swarm
# Initialize swarm on manager
docker swarm init --advertise-addr 192.168.1.100
# Join worker nodes
docker swarm join --token SWMTKN-1-xxx 192.168.1.100:2377
# View nodes
docker node ls
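The literal token above is a placeholder; the real join command can be printed on the manager at any time:
# Print the full join command (including the token) for each role
docker swarm join-token worker
docker swarm join-token manager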
Deploy Stack
# Create stack file
nano stack.yml
version: '3.8'

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet

networks:
  webnet:
    driver: overlay
# Deploy stack
docker stack deploy -c stack.yml myapp
# List stacks
docker stack ls
# List services
docker service ls
# Scale service
docker service scale myapp_web=5
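Rolling updates honor the update_config policy above; for example (the new tag is illustrative):
# Replace the image one task at a time, 10s apart
docker service update --image nginx:1.25-alpine myapp_web
# Revert to the previous image if the rollout misbehaves
docker service rollback myapp_web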
Kubernetes with MicroK8s
Install MicroK8s
# Install MicroK8s
sudo snap install microk8s --classic
# Add user to microk8s group
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
# Enable add-ons
microk8s enable dns dashboard storage ingress
# Check status
microk8s status --wait-ready
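The microk8s prefix can be dropped with an alias (add it to ~/.bashrc to persist):
# Use the bundled kubectl client directly
alias kubectl='microk8s kubectl'
kubectl get nodes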
Deploy Application
# Create deployment
microk8s kubectl create deployment nginx --image=nginx --replicas=3
# Expose service
microk8s kubectl expose deployment nginx --port=80 --type=NodePort
# View resources
microk8s kubectl get all
Monitoring and Management
Container Monitoring Script
nano /usr/local/bin/container-monitor.sh
#!/bin/bash
# Container and VM monitoring script (requires bc and jq)
LOG_FILE="/var/log/container-monitor.log"
THRESHOLD_CPU=80
THRESHOLD_MEM=80

# Function to log messages
log_message() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Monitor Docker containers
monitor_docker() {
    log_message "Checking Docker containers"
    # Get per-container CPU and memory percentages (no header row)
    docker stats --no-stream --format "{{.Name}} {{.CPUPerc}} {{.MemPerc}}" | while read -r container cpu mem; do
        cpu=${cpu%\%}
        mem=${mem%\%}
        # Check CPU usage
        if (( $(echo "$cpu > $THRESHOLD_CPU" | bc -l) )); then
            log_message "WARNING: Container $container CPU usage is ${cpu}%"
        fi
        # Check memory usage
        if (( $(echo "$mem > $THRESHOLD_MEM" | bc -l) )); then
            log_message "WARNING: Container $container memory usage is ${mem}%"
        fi
    done
    # Check for stopped containers
    stopped=$(docker ps -a --filter "status=exited" -q | wc -l)
    if [ "$stopped" -gt 0 ]; then
        log_message "WARNING: $stopped stopped containers found"
    fi
}

# Monitor LXD containers
monitor_lxd() {
    log_message "Checking LXD containers"
    lxc list --format json | jq -r '.[] | .name' | while read -r container; do
        # Field positions in 'lxc info' output can vary between LXD versions
        cpu=$(lxc info "$container" | awk '/CPU usage \(in seconds\)/ {print $NF; exit}')
        mem=$(lxc info "$container" | awk '/Memory \(current\)/ {print $NF; exit}')
        log_message "LXD container $container - CPU: ${cpu}s, Memory: $mem"
    done
}

# Monitor KVM virtual machines
monitor_vms() {
    log_message "Checking KVM virtual machines"
    # --name prints one VM name per line, avoiding fragile column parsing
    virsh list --all --name | while read -r vm; do
        [ -z "$vm" ] && continue
        state=$(virsh domstate "$vm")
        if [ "$state" != "running" ]; then
            log_message "WARNING: VM $vm is in state: $state"
        fi
    done
}

# Main monitoring run
log_message "Starting container and VM monitoring"
monitor_docker
monitor_lxd
monitor_vms
log_message "Monitoring completed"
sudo chmod +x /usr/local/bin/container-monitor.sh
sudo crontab -e
# Add: */5 * * * * /usr/local/bin/container-monitor.sh
Resource Management Dashboard
# Install ctop for container monitoring
wget https://github.com/bcicen/ctop/releases/download/v0.7.7/ctop-0.7.7-linux-amd64 -O ctop
sudo mv ctop /usr/local/bin/
sudo chmod +x /usr/local/bin/ctop
# Install lazydocker
curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
# Install portainer
docker volume create portainer_data
docker run -d -p 9000:9000 --name=portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
Best Practices
- Resource Limits: Always set CPU and memory limits
- Security: Run containers with least privileges
- Networking: Use custom networks instead of default
- Storage: Use volumes for persistent data
- Monitoring: Implement comprehensive monitoring
- Updates: Keep container images and hypervisors updated
- Backups: Regular backup of VMs and container data
Troubleshooting
Docker Issues
# Check Docker daemon
sudo systemctl status docker
sudo journalctl -u docker -f
# Clean up Docker
docker system prune -a
docker volume prune
docker network prune
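Before pruning, check what is actually consuming space:
# Disk usage by images, containers, volumes, and build cache
docker system df -v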
# Reset Docker (WARNING: deletes ALL images, containers, volumes, and networks)
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker
KVM Issues
# Check libvirt
sudo systemctl status libvirtd
sudo journalctl -u libvirtd -f
# Fix permission issues (the kvm group should own /dev/kvm; avoid chmod 666, which opens it to all users)
sudo chown root:kvm /dev/kvm
sudo chmod 660 /dev/kvm
# Network issues
virsh net-list --all
virsh net-start default
LXD Issues
# Check LXD service
sudo systemctl status snap.lxd.daemon
sudo journalctl -u snap.lxd.daemon -f
# Storage issues
lxc storage info default
lxc storage list
# Network issues
lxc network list
lxc network show lxdbr0
Conclusion
This comprehensive guide covered virtualization and containerization on Ubuntu Server 22.04, including KVM for virtual machines, Docker and LXD for containers, and orchestration tools. These technologies provide the foundation for modern, scalable infrastructure. Choose the right tool for your use case: VMs for full isolation, Docker for application containers, and LXD for system containers.