SUSE Networking and Virtualization: Complete Enterprise Guide
This comprehensive guide covers SUSE's networking and virtualization solutions, including advanced networking configurations, KVM and Xen hypervisors, software-defined networking (SDN), network function virtualization (NFV), and virtual infrastructure management.
Introduction to SUSE Networking and Virtualization
SUSE provides enterprise-grade solutions for:
- Advanced Networking: Bonding, VLANs, bridges, and routing
- Virtualization Platforms: KVM, Xen, and container integration
- Software-Defined Networking: Open vSwitch and SDN controllers
- Network Function Virtualization: Virtual network appliances
- Management Tools: libvirt, virt-manager, and orchestration
- Cloud Integration: OpenStack and cloud-init support
Advanced Networking Configuration
Network Interface Bonding
# Configure network bonding for high availability
# Install bonding module
sudo modprobe bonding
# Create bond interface configuration
cat > /etc/sysconfig/network/ifcfg-bond0 << 'EOF'
# Bond interface configuration
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.100'
NETMASK='255.255.255.0'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100 primary=eth0'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
ZONE='internal'
EOF
# Configure slave interfaces
cat > /etc/sysconfig/network/ifcfg-eth0 << 'EOF'
STARTMODE='hotplug'
BOOTPROTO='none'
SLAVE='yes'
MASTER='bond0'
EOF
cat > /etc/sysconfig/network/ifcfg-eth1 << 'EOF'
STARTMODE='hotplug'
BOOTPROTO='none'
SLAVE='yes'
MASTER='bond0'
EOF
# Restart network
sudo systemctl restart network
# Verify bonding status
cat /proc/net/bonding/bond0
ip link show bond0
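Beyond eyeballing the status file, the check can be scripted. The helper below is an illustrative sketch (not part of SUSE's tooling): it reads a `/proc/net/bonding/*` style report from stdin and prints the currently active slave plus any slave whose MII link is down.

```shell
# check_bond_status: summarize a /proc/net/bonding/* report read from stdin.
# Prints the currently active slave and flags any slave whose MII link is
# not "up". Field names follow the kernel bonding driver's output format.
check_bond_status() {
    awk '
        /^Currently Active Slave:/ { print "active: " $4 }
        /^Slave Interface:/        { slave = $3 }
        /^MII Status:/ && slave != "" {
            if ($3 != "up") print "DOWN: " slave
            slave = ""
        }
    '
}

# On a bonded host (hypothetical usage):
#   check_bond_status < /proc/net/bonding/bond0
```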
VLAN Configuration
# Install VLAN package
sudo zypper install vlan
# Load 8021q module
sudo modprobe 8021q
echo "8021q" | sudo tee -a /etc/modules-load.d/vlan.conf
# Create VLAN interface
cat > /etc/sysconfig/network/ifcfg-vlan100 << 'EOF'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.100.0.10'
NETMASK='255.255.255.0'
VLAN='yes'
ETHERDEVICE='eth0'
VLAN_ID='100'
ZONE='dmz'
EOF
# Create multiple VLANs script
cat > /usr/local/bin/setup-vlans.sh << 'EOF'
#!/bin/bash
# Setup multiple VLANs
TRUNK_INTERFACE="eth0"
VLAN_CONFIGS=(
"100:10.100.0.1/24:dmz"
"200:10.200.0.1/24:internal"
"300:10.30.0.1/24:management"
)
for config in "${VLAN_CONFIGS[@]}"; do
IFS=':' read -r vlan_id ip_cidr zone <<< "$config"
cat > /etc/sysconfig/network/ifcfg-vlan${vlan_id} << EOL
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='${ip_cidr%/*}'
PREFIXLEN='${ip_cidr#*/}'
VLAN='yes'
ETHERDEVICE='${TRUNK_INTERFACE}'
VLAN_ID='${vlan_id}'
ZONE='${zone}'
EOL
echo "Created VLAN ${vlan_id} configuration"
done
systemctl restart network
EOF
chmod +x /usr/local/bin/setup-vlans.sh
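Before generating ifcfg files in bulk, it can pay to validate each `VLAN_CONFIGS` entry. The function below is a hypothetical addition (not part of the script above) that checks the `vlanid:ip/prefix:zone` triple: VLAN IDs must fall in 1-4094 and each IPv4 octet in 0-255.

```shell
# validate_vlan_spec: sanity-check one "vlanid:ip/prefix:zone" entry in the
# VLAN_CONFIGS format used by setup-vlans.sh. Illustrative helper only.
is_uint() { case "$1" in ''|*[!0-9]*) return 1 ;; esac; }

validate_vlan_spec() {
    local vid ip zone prefix octet octets
    IFS=':' read -r vid ip zone <<< "$1"
    prefix=${ip#*/}; ip=${ip%/*}
    is_uint "$vid"    && [ "$vid" -ge 1 ] && [ "$vid" -le 4094 ] || return 1
    is_uint "$prefix" && [ "$prefix" -le 32 ]                    || return 1
    IFS='.' read -r -a octets <<< "$ip"
    [ "${#octets[@]}" -eq 4 ] || return 1
    for octet in "${octets[@]}"; do
        is_uint "$octet" && [ "$octet" -le 255 ] || return 1
    done
}
```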
Network Bridging
# Create bridge for virtualization
cat > /etc/sysconfig/network/ifcfg-br0 << 'EOF'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10'
NETMASK='255.255.255.0'
GATEWAY='192.168.1.1'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='on'
BRIDGE_FORWARDDELAY='15'
ZONE='internal'
EOF
# Configure physical interface for bridge
cat > /etc/sysconfig/network/ifcfg-eth0 << 'EOF'
STARTMODE='auto'
BOOTPROTO='none'
EOF
# Advanced bridge configuration
cat > /usr/local/bin/configure-bridge.sh << 'EOF'
#!/bin/bash
# Advanced bridge configuration
BRIDGE="br0"
# Set bridge parameters
brctl stp $BRIDGE on
brctl setfd $BRIDGE 15
brctl sethello $BRIDGE 2
brctl setmaxage $BRIDGE 20
# Enable VLAN filtering (if needed)
echo 1 > /sys/class/net/$BRIDGE/bridge/vlan_filtering
# Add VLAN to bridge
bridge vlan add dev $BRIDGE vid 100 self
bridge vlan add dev eth0 vid 100
# Set bridge priority for STP
brctl setbridgeprio $BRIDGE 32768
echo "Bridge $BRIDGE configured"
EOF
chmod +x /usr/local/bin/configure-bridge.sh
Advanced Routing
# Enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/routing.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.d/routing.conf
sysctl -p /etc/sysctl.d/routing.conf
# Configure static routes
cat > /etc/sysconfig/network/routes << 'EOF'
# Static routing table
# Destination Gateway Netmask Interface
10.0.0.0 192.168.1.254 255.0.0.0 -
172.16.0.0 192.168.1.253 255.240.0.0 -
default 192.168.1.1 - -
EOF
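The routes file takes dotted netmasks while the ifroute-* files below use CIDR prefixes; converting between the two is just a popcount of the mask bits. A minimal sketch (illustrative, not SUSE tooling):

```shell
# mask2prefix: convert a dotted-quad netmask (routes-file style) to the
# CIDR prefix length used by ifroute-* files. Counts set bits per octet;
# assumes the mask is contiguous, as all valid netmasks are.
mask2prefix() {
    local IFS=. octet bits=0
    for octet in $1; do
        while [ "$octet" -gt 0 ]; do
            bits=$((bits + octet % 2))
            octet=$((octet / 2))
        done
    done
    echo "$bits"
}

# e.g. mask2prefix 255.240.0.0 prints the prefix length for 172.16.0.0 above
```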
# Policy-based routing
cat > /etc/sysconfig/network/ifroute-eth1 << 'EOF'
# Routes for eth1
10.10.0.0/16 10.1.1.1 - eth1 table 100
EOF
cat > /etc/sysconfig/network/ifrule-eth1 << 'EOF'
# Routing rules for eth1
from 10.1.1.0/24 table 100 priority 100
EOF
# Dynamic routing with FRR
sudo zypper install frr
# Configure OSPF
cat > /etc/frr/ospfd.conf << 'EOF'
hostname router1
password zebra
enable password zebra
router ospf
ospf router-id 192.168.1.10
network 192.168.1.0/24 area 0.0.0.0
network 10.0.0.0/8 area 0.0.0.1
interface eth0
ip ospf hello-interval 10
ip ospf dead-interval 40
ip ospf priority 100
log syslog informational
EOF
systemctl enable frr
systemctl start frr
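The `network` statements above pull an interface into OSPF when its address falls inside the given prefix. That containment test is a bitwise AND against the prefix mask, which the following snippet reproduces in shell arithmetic (illustrative only; FRR does this internally):

```shell
# ip2int / in_ospf_network: check whether an IPv4 address matches an OSPF
# "network <prefix>/<len>" statement, i.e. whether (addr AND mask) equals
# (network AND mask).
ip2int() {
    local IFS=. a b c d
    read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_ospf_network() {  # usage: in_ospf_network <addr> <prefix>/<len>
    local addr net len mask
    addr=$(ip2int "$1")
    net=$(ip2int "${2%/*}")
    len=${2#*/}
    mask=$(( len == 0 ? 0 : (0xffffffff << (32 - len)) & 0xffffffff ))
    [ $(( addr & mask )) -eq $(( net & mask )) ]
}
```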
KVM Virtualization
KVM Installation and Setup
# Install KVM and management tools
sudo zypper install -t pattern kvm_server kvm_tools
# Verify CPU virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
lscpu | grep Virtualization
# Load KVM modules
sudo modprobe kvm
sudo modprobe kvm_intel # For Intel CPUs
# sudo modprobe kvm_amd # For AMD CPUs
# Install additional tools
sudo zypper install \
libvirt \
libvirt-daemon-qemu \
qemu-kvm \
virt-manager \
virt-install \
virt-viewer \
bridge-utils
# Start and enable libvirtd
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
# Add user to libvirt group
sudo usermod -aG libvirt $USER
# Configure default network
sudo virsh net-define /usr/share/libvirt/networks/default.xml
sudo virsh net-start default
sudo virsh net-autostart default
Creating KVM Virtual Machines
# Create VM using virt-install
sudo virt-install \
--name sles15-vm \
--ram 4096 \
--vcpus 2 \
--disk path=/var/lib/libvirt/images/sles15-vm.qcow2,size=40,format=qcow2 \
--cdrom /path/to/SLE-15-SP4-Full-x86_64-GM-Media1.iso \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole \
--os-variant sles15sp4
# Create VM with advanced options
cat > /usr/local/bin/create-kvm-vm.sh << 'EOF'
#!/bin/bash
# Create KVM VM with advanced configuration
VM_NAME="$1"
VM_RAM="${2:-4096}"
VM_CPUS="${3:-2}"
VM_DISK="${4:-40}"
VM_ISO="$5"
if [ -z "$VM_NAME" ] || [ -z "$VM_ISO" ]; then
echo "Usage: $0 <vm-name> [ram-mb] [cpus] [disk-gb] <iso-path>"
exit 1
fi
# Create VM with performance optimizations
virt-install \
--name "$VM_NAME" \
--ram "$VM_RAM" \
--vcpus "$VM_CPUS" \
--cpu host-passthrough,cache.mode=passthrough \
--disk path=/var/lib/libvirt/images/${VM_NAME}.qcow2,size=${VM_DISK},format=qcow2,bus=virtio,cache=none,io=native \
--cdrom "$VM_ISO" \
--network bridge=br0,model=virtio \
--graphics spice,listen=0.0.0.0 \
--video qxl \
--channel spicevmc,target_type=virtio \
--memballoon model=virtio \
--rng /dev/urandom,model=virtio \
--features kvm_hidden=on \
--clock offset=localtime,rtc_tickpolicy=catchup \
--os-variant sles15sp4 \
--noautoconsole
echo "VM $VM_NAME created successfully"
EOF
chmod +x /usr/local/bin/create-kvm-vm.sh
# Create VM from template
cat > /usr/local/bin/clone-vm.sh << 'EOF'
#!/bin/bash
# Clone VM from template
TEMPLATE="$1"
NEW_VM="$2"
if [ -z "$TEMPLATE" ] || [ -z "$NEW_VM" ]; then
echo "Usage: $0 <template-vm> <new-vm-name>"
exit 1
fi
# Clone VM
virt-clone \
--original "$TEMPLATE" \
--name "$NEW_VM" \
--file /var/lib/libvirt/images/${NEW_VM}.qcow2
# Regenerate machine ID
virt-sysprep -d "$NEW_VM" \
--enable machine-id,ssh-hostkeys,dhcp-client-state,net-hwaddr
echo "VM $NEW_VM cloned from $TEMPLATE"
EOF
chmod +x /usr/local/bin/clone-vm.sh
KVM Performance Optimization
# CPU pinning for VM
cat > /usr/local/bin/optimize-vm-cpu.sh << 'EOF'
#!/bin/bash
# Optimize VM CPU performance
VM_NAME="$1"
# Get VM CPU count
VCPUS=$(virsh dominfo "$VM_NAME" | grep "CPU(s)" | awk '{print $2}')
# Pin vCPUs to physical CPUs
for ((i=0; i<$VCPUS; i++)); do
# Pin vCPU to physical CPU (adjust mapping as needed)
virsh vcpupin "$VM_NAME" $i $((i+4))
done
# Set CPU scheduler
virsh schedinfo "$VM_NAME" \
--set cpu_shares=2048 \
--set vcpu_period=100000 \
--set vcpu_quota=50000
# Enable CPU features (interactive step: virsh edit opens the domain XML in $EDITOR)
virsh edit "$VM_NAME"
# Add under <cpu>:
# <feature policy='require' name='vmx'/>
# <feature policy='require' name='pcid'/>
# <feature policy='require' name='invpcid'/>
echo "CPU optimization applied to $VM_NAME"
EOF
chmod +x /usr/local/bin/optimize-vm-cpu.sh
# Memory optimization
cat > /usr/local/bin/optimize-vm-memory.sh << 'EOF'
#!/bin/bash
# Optimize VM memory
VM_NAME="$1"
# Enable huge pages for VM (interactive step: virsh edit opens the domain XML in $EDITOR)
virsh edit "$VM_NAME"
# Add under <memoryBacking>:
# <hugepages>
# <page size='2048' unit='KiB' nodeset='0'/>
# </hugepages>
# Set memory tuning parameters
virsh memtune "$VM_NAME" \
--hard-limit $((8*1024*1024)) \
--soft-limit $((6*1024*1024)) \
--swap-hard-limit $((10*1024*1024))
# Enable memory ballooning
virsh setmem "$VM_NAME" $((4*1024*1024)) --config
echo "Memory optimization applied to $VM_NAME"
EOF
chmod +x /usr/local/bin/optimize-vm-memory.sh
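Hugepage backing only works if the host has reserved pages first (`vm.nr_hugepages`). With 2 MiB pages, the count is the guest memory plus a little QEMU overhead, divided by two. A hedged arithmetic sketch; the 5% slack default is an assumption to tune per workload, not a SUSE recommendation:

```shell
# hugepages_for_vm: number of 2 MiB huge pages to reserve for a guest of
# the given size in MiB, plus a slack percentage for QEMU overhead.
# The slack default is illustrative only.
hugepages_for_vm() {
    local vm_mib="$1" slack_pct="${2:-5}"
    echo $(( (vm_mib * (100 + slack_pct) / 100 + 1) / 2 ))
}

# e.g. reserve pages for a 4 GiB guest (hypothetical usage):
#   sysctl -w vm.nr_hugepages=$(hugepages_for_vm 4096)
```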
KVM Storage Management
# Create storage pools
# File-based pool
sudo virsh pool-define-as vmpool dir - - - - "/var/lib/libvirt/pools/vmpool"
sudo virsh pool-build vmpool
sudo virsh pool-start vmpool
sudo virsh pool-autostart vmpool
# LVM-based pool
sudo virsh pool-define-as lvmpool logical - - - - /dev/vg_vms
sudo virsh pool-start lvmpool
sudo virsh pool-autostart lvmpool
# Create volumes
sudo virsh vol-create-as vmpool vm1-disk1.qcow2 20G --format qcow2
sudo virsh vol-create-as lvmpool vm2-disk1 30G
# Advanced storage configuration
cat > /usr/local/bin/manage-vm-storage.sh << 'EOF'
#!/bin/bash
# VM storage management
case "$1" in
create-disk)
VM_NAME="$2"
DISK_SIZE="$3"
POOL="${4:-vmpool}"
DISK_NAME="${VM_NAME}-disk-$(date +%s).qcow2"
virsh vol-create-as "$POOL" "$DISK_NAME" "$DISK_SIZE" --format qcow2
# Attach to VM
virsh attach-disk "$VM_NAME" \
--source /var/lib/libvirt/pools/$POOL/$DISK_NAME \
--target vdb \
--persistent \
--subdriver qcow2
;;
snapshot)
VM_NAME="$2"
SNAP_NAME="${3:-snapshot-$(date +%Y%m%d-%H%M%S)}"
virsh snapshot-create-as "$VM_NAME" "$SNAP_NAME" \
--description "Snapshot created on $(date)" \
--disk-only \
--atomic
;;
backup)
VM_NAME="$2"
BACKUP_DIR="${3:-/backup/vms}"
mkdir -p "$BACKUP_DIR"
# Record disk paths BEFORE snapshotting; after the external snapshot,
# domblklist points at the overlay files, not the base images
DISKS=$(virsh domblklist "$VM_NAME" --details | awk '$2 == "disk" {print $4}')
# Create disk-only external snapshot so writes go to an overlay
virsh snapshot-create-as "$VM_NAME" backup-temp \
--disk-only --atomic --quiesce
# Copy the now-quiescent base images
for disk in $DISKS; do
cp "$disk" "$BACKUP_DIR/$(basename "$disk").backup"
done
# Merge each overlay back into its base, then drop the snapshot metadata
for target in $(virsh domblklist "$VM_NAME" --details | awk '$2 == "disk" {print $3}'); do
virsh blockcommit "$VM_NAME" "$target" --active --pivot
done
virsh snapshot-delete "$VM_NAME" backup-temp --metadata
;;
esac
EOF
chmod +x /usr/local/bin/manage-vm-storage.sh
Xen Virtualization
Xen Installation and Configuration
# Install Xen hypervisor
sudo zypper install -t pattern xen_server xen_tools
# Configure Xen boot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# Set Xen as default boot option
sudo grub2-set-default "SLES 15 SP4, with Xen hypervisor"
# Configure Xen networking
cat > /etc/xen/scripts/network-custom << 'EOF'
#!/bin/bash
# Custom Xen network configuration
case "$1" in
start)
# Create bridge
brctl addbr xenbr0
brctl addif xenbr0 eth0
ip link set xenbr0 up
# Transfer IP from eth0 to bridge
IP=$(ip addr show eth0 | grep "inet " | awk '{print $2}')
ip addr del $IP dev eth0
ip addr add $IP dev xenbr0
# Set up routing
GATEWAY=$(ip route | grep default | awk '{print $3}')
ip route add default via $GATEWAY dev xenbr0
;;
stop)
ip link set xenbr0 down
brctl delbr xenbr0
;;
esac
EOF
chmod +x /etc/xen/scripts/network-custom
# Configure Xen domains
cat > /etc/xen/xl.conf << 'EOF'
# Xen configuration
autoballoon="auto"
lock_memory=1
vif.default.script="vif-bridge"
vif.default.bridge="xenbr0"
EOF
# Reboot to activate Xen
# sudo reboot
Creating Xen Domains
# Create Xen PV guest
cat > /etc/xen/sles15-pv.cfg << 'EOF'
# Xen PV guest configuration
name = "sles15-pv"
memory = 2048
vcpus = 2
disk = [ 'file:/var/lib/xen/images/sles15-pv.img,xvda,w' ]
vif = [ 'bridge=xenbr0' ]
bootloader = "/usr/bin/pygrub"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
EOF
# Create disk image
sudo dd if=/dev/zero of=/var/lib/xen/images/sles15-pv.img bs=1G count=20
# Create Xen HVM guest
cat > /etc/xen/win2022-hvm.cfg << 'EOF'
# Xen HVM guest configuration
type = "hvm"
name = "win2022-hvm"
memory = 4096
vcpus = 4
vif = [ 'type=ioemu,bridge=xenbr0,model=e1000' ]
disk = [ 'file:/var/lib/xen/images/win2022.img,hda,w',
'file:/path/to/windows.iso,hdc:cdrom,r' ]
boot = "dc"
vnc = 1
vnclisten = "0.0.0.0"
vncpasswd = "secret"
stdvga = 1
usb = 1
usbdevice = "tablet"
EOF
# Start domains
sudo xl create /etc/xen/sles15-pv.cfg
sudo xl create /etc/xen/win2022-hvm.cfg
# Domain management script
cat > /usr/local/bin/manage-xen-domains.sh << 'EOF'
#!/bin/bash
# Xen domain management
case "$1" in
list)
xl list
;;
start)
xl create /etc/xen/$2.cfg
;;
stop)
xl shutdown $2
;;
save)
xl save $2 /var/lib/xen/save/$2.save
;;
restore)
xl restore /var/lib/xen/save/$2.save
;;
migrate)
xl migrate $2 $3
;;
esac
EOF
chmod +x /usr/local/bin/manage-xen-domains.sh
Software-Defined Networking (SDN)
Open vSwitch Installation
# Install Open vSwitch
sudo zypper install openvswitch
# Start Open vSwitch
sudo systemctl enable openvswitch
sudo systemctl start openvswitch
# Create OVS bridge
sudo ovs-vsctl add-br ovsbr0
# Add physical interface to bridge
sudo ovs-vsctl add-port ovsbr0 eth1
# Configure OVS bridge IP
sudo ip addr add 10.0.0.1/24 dev ovsbr0
sudo ip link set ovsbr0 up
# Create VLAN on OVS
sudo ovs-vsctl add-port ovsbr0 vlan100 tag=100 -- set interface vlan100 type=internal
sudo ip addr add 10.100.0.1/24 dev vlan100
sudo ip link set vlan100 up
# Advanced OVS configuration
cat > /usr/local/bin/configure-ovs.sh << 'EOF'
#!/bin/bash
# Advanced Open vSwitch configuration
# Create bridge with DPDK support (if available)
ovs-vsctl add-br ovs-dpdk -- set bridge ovs-dpdk datapath_type=netdev
# Configure flow rules
# Drop all traffic by default
ovs-ofctl add-flow ovsbr0 "priority=0,actions=drop"
# Allow DHCP
ovs-ofctl add-flow ovsbr0 "priority=100,udp,tp_src=68,tp_dst=67,actions=normal"
ovs-ofctl add-flow ovsbr0 "priority=100,udp,tp_src=67,tp_dst=68,actions=normal"
# Allow specific VLANs
ovs-ofctl add-flow ovsbr0 "priority=50,dl_vlan=100,actions=normal"
ovs-ofctl add-flow ovsbr0 "priority=50,dl_vlan=200,actions=normal"
# Configure QoS
ovs-vsctl set port eth1 qos=@newqos -- \
--id=@newqos create qos type=linux-htb \
other-config:max-rate=1000000000 \
queues:0=@q0,1=@q1 -- \
--id=@q0 create queue other-config:max-rate=500000000 -- \
--id=@q1 create queue other-config:max-rate=200000000
# Enable STP
ovs-vsctl set bridge ovsbr0 stp_enable=true
# Configure OpenFlow controller
ovs-vsctl set-controller ovsbr0 tcp:192.168.1.100:6653
echo "Open vSwitch configured"
EOF
chmod +x /usr/local/bin/configure-ovs.sh
SDN Controller Integration
# Install OpenDaylight dependencies
sudo zypper install java-11-openjdk java-11-openjdk-devel
# Download and install OpenDaylight
wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/opendaylight/0.16.0/opendaylight-0.16.0.tar.gz
tar -xzf opendaylight-0.16.0.tar.gz
sudo mv opendaylight-0.16.0 /opt/opendaylight
# Create systemd service
cat > /etc/systemd/system/opendaylight.service << 'EOF'
[Unit]
Description=OpenDaylight SDN Controller
After=network.target
[Service]
Type=forking
ExecStart=/opt/opendaylight/bin/start
ExecStop=/opt/opendaylight/bin/stop
User=odl
Group=odl
Environment="JAVA_HOME=/usr/lib64/jvm/java-11-openjdk"
[Install]
WantedBy=multi-user.target
EOF
# Create user and set permissions
sudo useradd -r -s /sbin/nologin odl
sudo chown -R odl:odl /opt/opendaylight
# Start OpenDaylight
sudo systemctl enable opendaylight
sudo systemctl start opendaylight
# Configure OVS to use OpenDaylight
sudo ovs-vsctl set-controller ovsbr0 tcp:127.0.0.1:6653
sudo ovs-vsctl set bridge ovsbr0 protocols=OpenFlow13
Network Function Virtualization (NFV)
Virtual Router Setup
# Create virtual router with FRR
cat > /usr/local/bin/create-vrouter.sh << 'EOF'
#!/bin/bash
# Create virtual router
VM_NAME="vrouter-${1:-01}"
RAM="2048"
CPUS="2"
DISK="10"
# Create VM
virt-install \
--name "$VM_NAME" \
--ram "$RAM" \
--vcpus "$CPUS" \
--disk path=/var/lib/libvirt/images/${VM_NAME}.qcow2,size=${DISK},format=qcow2 \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--cdrom /path/to/sles-minimal.iso \
--graphics none \
--console pty,target_type=serial \
--os-variant sles15sp4 \
--noautoconsole
# Configure routing inside VM (post-installation)
cat > setup-routing.sh << 'SCRIPT'
#!/bin/bash
# Install FRR
zypper install frr
# Configure interfaces
cat > /etc/sysconfig/network/ifcfg-eth0 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.0.0.1'
NETMASK='255.255.255.0'
EOL
cat > /etc/sysconfig/network/ifcfg-eth1 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.100.1'
NETMASK='255.255.255.0'
EOL
# Enable routing
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/routing.conf
sysctl -p /etc/sysctl.d/routing.conf
# Configure FRR
cat > /etc/frr/frr.conf << 'EOL'
frr defaults traditional
hostname vrouter
service integrated-vtysh-config
router ospf
ospf router-id 10.0.0.1
network 10.0.0.0/24 area 0
network 192.168.100.0/24 area 0
EOL
systemctl enable frr
systemctl start frr
SCRIPT
echo "Virtual router $VM_NAME created"
EOF
chmod +x /usr/local/bin/create-vrouter.sh
Virtual Firewall
# Create virtual firewall appliance
cat > /usr/local/bin/create-vfirewall.sh << 'EOF'
#!/bin/bash
# Create virtual firewall
VM_NAME="vfirewall-${1:-01}"
RAM="1024"
CPUS="2"
DISK="8"
# Create firewall VM
virt-install \
--name "$VM_NAME" \
--ram "$RAM" \
--vcpus "$CPUS" \
--disk path=/var/lib/libvirt/images/${VM_NAME}.qcow2,size=${DISK},format=qcow2 \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--cdrom /path/to/sles-minimal.iso \
--graphics none \
--console pty,target_type=serial \
--os-variant sles15sp4 \
--noautoconsole
# Firewall configuration script
cat > setup-firewall.sh << 'SCRIPT'
#!/bin/bash
# Configure virtual firewall
# Network interfaces
# eth0 - External
# eth1 - DMZ
# eth2 - Internal
# Configure interfaces
cat > /etc/sysconfig/network/ifcfg-eth0 << 'EOL'
STARTMODE='auto'
BOOTPROTO='dhcp'
ZONE='external'
EOL
cat > /etc/sysconfig/network/ifcfg-eth1 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.50.0.1'
NETMASK='255.255.255.0'
ZONE='dmz'
EOL
cat > /etc/sysconfig/network/ifcfg-eth2 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.1'
NETMASK='255.255.255.0'
ZONE='internal'
EOL
# Enable routing
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/routing.conf
sysctl -p /etc/sysctl.d/routing.conf
# Configure firewalld
systemctl enable firewalld
systemctl start firewalld
# Zone configuration
firewall-cmd --permanent --zone=external --add-masquerade
firewall-cmd --permanent --zone=dmz --add-service=http
firewall-cmd --permanent --zone=dmz --add-service=https
firewall-cmd --permanent --zone=internal --add-service=dns
firewall-cmd --permanent --zone=internal --add-service=dhcp
# Port forwarding
firewall-cmd --permanent --zone=external \
--add-forward-port=port=80:proto=tcp:toaddr=10.50.0.10:toport=80
firewall-cmd --permanent --zone=external \
--add-forward-port=port=443:proto=tcp:toaddr=10.50.0.10:toport=443
# Reload firewall
firewall-cmd --reload
# Install and configure Suricata IDS
zypper install suricata
cat > /etc/suricata/suricata.yaml << 'EOL'
vars:
HOME_NET: "[192.168.1.0/24,10.50.0.0/24]"
EXTERNAL_NET: "!$HOME_NET"
af-packet:
- interface: eth0
threads: 2
- interface: eth1
threads: 2
- interface: eth2
threads: 2
outputs:
- eve-log:
enabled: yes
filetype: regular
filename: eve.json
types:
- alert
- http
- dns
- tls
EOL
systemctl enable suricata
systemctl start suricata
SCRIPT
echo "Virtual firewall $VM_NAME created"
EOF
chmod +x /usr/local/bin/create-vfirewall.sh
Virtual Load Balancer
# Create virtual load balancer
cat > /usr/local/bin/create-vlb.sh << 'EOF'
#!/bin/bash
# Create virtual load balancer
VM_NAME="vlb-${1:-01}"
RAM="2048"
CPUS="2"
DISK="8"
# Create load balancer VM
virt-install \
--name "$VM_NAME" \
--ram "$RAM" \
--vcpus "$CPUS" \
--disk path=/var/lib/libvirt/images/${VM_NAME}.qcow2,size=${DISK},format=qcow2 \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--network bridge=ovsbr0,model=virtio,virtualport_type=openvswitch \
--cdrom /path/to/sles-minimal.iso \
--graphics none \
--console pty,target_type=serial \
--os-variant sles15sp4 \
--noautoconsole
# Load balancer configuration
cat > setup-loadbalancer.sh << 'SCRIPT'
#!/bin/bash
# Configure HAProxy load balancer
# Install HAProxy
zypper install haproxy
# Configure network interfaces
cat > /etc/sysconfig/network/ifcfg-eth0 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.10.10.10'
NETMASK='255.255.255.0'
EOL
cat > /etc/sysconfig/network/ifcfg-eth1 << 'EOL'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.10.1'
NETMASK='255.255.255.0'
EOL
# Configure HAProxy
cat > /etc/haproxy/haproxy.cfg << 'EOL'
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
maxconn 4096
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend web_frontend
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/
redirect scheme https if !{ ssl_fc }
default_backend web_servers
backend web_servers
balance roundrobin
option httpchk GET /health
server web1 192.168.10.11:80 check
server web2 192.168.10.12:80 check
server web3 192.168.10.13:80 check
listen stats
bind *:8080
stats enable
stats uri /stats
stats refresh 30s
stats admin if TRUE
EOL
# Enable IP forwarding
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/routing.conf
sysctl -p /etc/sysctl.d/routing.conf
# Configure keepalived for HA
zypper install keepalived
cat > /etc/keepalived/keepalived.conf << 'EOL'
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass secret
}
virtual_ipaddress {
10.10.10.100/24 dev eth0
}
}
EOL
systemctl enable haproxy keepalived
systemctl start haproxy keepalived
SCRIPT
echo "Virtual load balancer $VM_NAME created"
EOF
chmod +x /usr/local/bin/create-vlb.sh
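HAProxy's `balance roundrobin` hands request N to server N modulo the pool size. A toy model of that rotation (real HAProxy also honors server weights and health-check state, which this ignores):

```shell
# rr_pick: toy model of HAProxy round-robin scheduling. Request number n
# (1-based) maps onto the backend list in strict rotation.
rr_pick() {
    local n="$1"; shift
    local servers=("$@")
    echo "${servers[$(( (n - 1) % ${#servers[@]} ))]}"
}

# With the three web_servers backends above, requests 1..4 land on
# web1, web2, web3, then wrap back to web1.
```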
Virtual Infrastructure Management
libvirt Management
# Advanced libvirt configuration
cat > /etc/libvirt/libvirtd.conf << 'EOF'
# Libvirt daemon configuration
listen_tls = 0
listen_tcp = 1
listen_addr = "0.0.0.0"
tcp_port = "16509"
auth_tcp = "sasl"
max_clients = 5000
max_queued_clients = 1000
max_anonymous_clients = 20
max_client_requests = 5
min_workers = 5
max_workers = 20
log_level = 3
log_outputs = "3:syslog:libvirtd"
keepalive_interval = 5
keepalive_count = 5
EOF
# Configure SASL authentication
cat > /etc/sasl2/libvirt.conf << 'EOF'
mech_list: digest-md5
sasldb_path: /etc/libvirt/passwd.db
EOF
# Create SASL user
saslpasswd2 -a libvirt admin
# Storage pool management
cat > /usr/local/bin/manage-storage-pools.sh << 'EOF'
#!/bin/bash
# Manage libvirt storage pools
case "$1" in
create-nfs)
POOL_NAME="$2"
NFS_SERVER="$3"
NFS_PATH="$4"
virsh pool-define-as "$POOL_NAME" netfs \
--source-host "$NFS_SERVER" \
--source-path "$NFS_PATH" \
--target /var/lib/libvirt/pools/$POOL_NAME
virsh pool-start "$POOL_NAME"
virsh pool-autostart "$POOL_NAME"
;;
create-iscsi)
POOL_NAME="$2"
ISCSI_HOST="$3"
ISCSI_TARGET="$4"
virsh pool-define-as "$POOL_NAME" iscsi \
--source-host "$ISCSI_HOST" \
--source-dev "$ISCSI_TARGET" \
--target /dev/disk/by-path
virsh pool-start "$POOL_NAME"
virsh pool-autostart "$POOL_NAME"
;;
create-ceph)
POOL_NAME="$2"
CEPH_MON="$3"
CEPH_POOL="$4"
virsh pool-define-as "$POOL_NAME" rbd \
--source-host "$CEPH_MON" \
--source-name "$CEPH_POOL" \
--auth-type cephx \
--auth-username libvirt
virsh pool-start "$POOL_NAME"
virsh pool-autostart "$POOL_NAME"
;;
esac
EOF
chmod +x /usr/local/bin/manage-storage-pools.sh
Virtual Network Management
# Complex virtual network setup
cat > /usr/local/bin/create-virtual-networks.sh << 'EOF'
#!/bin/bash
# Create complex virtual networks
# Create isolated network
cat > /tmp/isolated-net.xml << 'XML'
<network>
<name>isolated</name>
<bridge name="virbr-isolated"/>
<domain name="isolated.local"/>
<ip address="10.100.0.1" netmask="255.255.255.0">
<dhcp>
<range start="10.100.0.100" end="10.100.0.200"/>
</dhcp>
</ip>
</network>
XML
virsh net-define /tmp/isolated-net.xml
virsh net-start isolated
virsh net-autostart isolated
# Create routed network
cat > /tmp/routed-net.xml << 'XML'
<network>
<name>routed</name>
<forward mode="route"/>
<bridge name="virbr-routed"/>
<domain name="routed.local"/>
<ip address="10.200.0.1" netmask="255.255.255.0">
<dhcp>
<range start="10.200.0.100" end="10.200.0.200"/>
</dhcp>
</ip>
</network>
XML
virsh net-define /tmp/routed-net.xml
virsh net-start routed
virsh net-autostart routed
# Create Open vSwitch network
cat > /tmp/ovs-net.xml << 'XML'
<network>
<name>ovs-network</name>
<forward mode="bridge"/>
<bridge name="ovsbr0"/>
<virtualport type="openvswitch"/>
<portgroup name="vlan100">
<vlan>
<tag id="100"/>
</vlan>
</portgroup>
<portgroup name="vlan200">
<vlan>
<tag id="200"/>
</vlan>
</portgroup>
</network>
XML
virsh net-define /tmp/ovs-net.xml
virsh net-start ovs-network
virsh net-autostart ovs-network
EOF
chmod +x /usr/local/bin/create-virtual-networks.sh
VM Migration
# Live migration setup
cat > /usr/local/bin/migrate-vm.sh << 'EOF'
#!/bin/bash
# VM migration script
VM_NAME="$1"
TARGET_HOST="$2"
MIGRATION_TYPE="${3:-live}"
if [ -z "$VM_NAME" ] || [ -z "$TARGET_HOST" ]; then
echo "Usage: $0 <vm-name> <target-host> [live|offline]"
exit 1
fi
case "$MIGRATION_TYPE" in
live)
# Pre-migration checks
echo "Checking migration feasibility..."
virsh domjobinfo "$VM_NAME" 2>/dev/null
# Perform live migration
virsh migrate --live --persistent --undefinesource \
--copy-storage-all --compressed \
"$VM_NAME" qemu+ssh://$TARGET_HOST/system
;;
offline)
# Shutdown VM
virsh shutdown "$VM_NAME"
# Wait for shutdown
while virsh list --name | grep -q "^${VM_NAME}$"; do
sleep 1
done
# Copy disk images
for disk in $(virsh domblklist "$VM_NAME" | grep -E "vd|sd" | awk '{print $2}'); do
rsync -avz --progress "$disk" $TARGET_HOST:$disk
done
# Copy VM definition
virsh dumpxml "$VM_NAME" > /tmp/${VM_NAME}.xml
scp /tmp/${VM_NAME}.xml $TARGET_HOST:/tmp/
# Define VM on target
ssh $TARGET_HOST "virsh define /tmp/${VM_NAME}.xml"
# Remove from source
virsh undefine "$VM_NAME"
;;
esac
echo "Migration of $VM_NAME to $TARGET_HOST completed"
EOF
chmod +x /usr/local/bin/migrate-vm.sh
Monitoring and Performance
Virtual Infrastructure Monitoring
# Install monitoring tools
sudo zypper install virt-top collectd collectd-virt
# Configure collectd for virtualization metrics
cat > /etc/collectd.d/virtualization.conf << 'EOF'
LoadPlugin virt
<Plugin virt>
Connection "qemu:///system"
RefreshInterval 60
Domain "regex:.*"
BlockDevice "regex:.*"
InterfaceDevice "regex:.*"
IgnoreSelected false
ExtraStats "cpu_util disk_err domain_state fs_info job_stats_background pcpu vcpupin"
PersistentNotification false
</Plugin>
LoadPlugin network
<Plugin network>
Server "monitoring.example.com" "25826"
</Plugin>
EOF
systemctl enable collectd
systemctl start collectd
# Performance monitoring script
cat > /usr/local/bin/monitor-virt-performance.sh << 'EOF'
#!/bin/bash
# Monitor virtualization performance
LOG_DIR="/var/log/virt-performance"
mkdir -p "$LOG_DIR"
# VM performance stats
for vm in $(virsh list --name); do
echo "=== VM: $vm ===" >> "$LOG_DIR/vm-stats-$(date +%Y%m%d).log"
# CPU stats
virsh cpu-stats "$vm" >> "$LOG_DIR/vm-stats-$(date +%Y%m%d).log"
# Memory stats
virsh dommemstat "$vm" >> "$LOG_DIR/vm-stats-$(date +%Y%m%d).log"
# Disk I/O stats
for disk in $(virsh domblklist "$vm" | grep -E "vd|sd" | awk '{print $1}'); do
virsh domblkstat "$vm" "$disk" >> "$LOG_DIR/vm-stats-$(date +%Y%m%d).log"
done
# Network stats
for iface in $(virsh domiflist "$vm" | grep -E "vnet|eth" | awk '{print $1}'); do
virsh domifstat "$vm" "$iface" >> "$LOG_DIR/vm-stats-$(date +%Y%m%d).log"
done
done
# Host resource usage
echo "=== Host Stats ===" >> "$LOG_DIR/host-stats-$(date +%Y%m%d).log"
virsh nodememstats >> "$LOG_DIR/host-stats-$(date +%Y%m%d).log"
virsh nodecpustats >> "$LOG_DIR/host-stats-$(date +%Y%m%d).log"
EOF
chmod +x /usr/local/bin/monitor-virt-performance.sh
# Add to cron
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/monitor-virt-performance.sh") | crontab -
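The collected logs can be post-processed with standard tools; for example, `virsh domifstat` emits `<iface> <counter> <value>` lines, so total traffic is a one-line awk sum. Illustrative helper (the sample invocation is hypothetical):

```shell
# sum_iface_bytes: total rx_bytes + tx_bytes from `virsh domifstat` style
# output ("<iface> <counter> <value>" lines) read from stdin.
sum_iface_bytes() {
    awk '$2 == "rx_bytes" || $2 == "tx_bytes" { total += $3 } END { print total + 0 }'
}

# On a host (hypothetical usage):
#   virsh domifstat sles15-vm vnet0 | sum_iface_bytes
```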
Best Practices
Virtualization Security
# Security hardening for virtualization
cat > /usr/local/bin/harden-virt-security.sh << 'EOF'
#!/bin/bash
# Virtualization security hardening
# SELinux/AppArmor for VMs
aa-enforce /etc/apparmor.d/libvirt/libvirt-*
# Configure sVirt
cat >> /etc/libvirt/qemu.conf << 'EOL'
security_driver = "apparmor"
security_default_confined = 1
security_require_confined = 1
EOL
# Disable VM migration if not needed
cat >> /etc/libvirt/libvirtd.conf << 'EOL'
listen_tls = 0
listen_tcp = 0
EOL
# Restrict VM resource usage
for vm in $(virsh list --name); do
# CPU quota
virsh schedinfo "$vm" --set vcpu_quota=50000
# Memory limits
virsh memtune "$vm" --hard-limit $((4*1024*1024))
# Disk I/O limits
for disk in $(virsh domblklist "$vm" | grep -E "vd|sd" | awk '{print $1}'); do
virsh blkdeviotune "$vm" "$disk" \
--read-bytes-sec 104857600 \
--write-bytes-sec 104857600
done
done
# Network isolation
# Create separate bridges for different security zones
brctl addbr br-dmz
brctl addbr br-internal
brctl addbr br-management
# Apply firewall rules
iptables -I FORWARD -i br-dmz -o br-internal -j DROP
iptables -I FORWARD -i br-dmz -o br-management -j DROP
echo "Virtualization security hardening completed"
EOF
chmod +x /usr/local/bin/harden-virt-security.sh
Performance Optimization
# Create performance tuning guide
cat > /usr/local/share/virt-performance-guide.md << 'EOF'
# Virtualization Performance Best Practices
## CPU Optimization
- Use CPU pinning for performance-critical VMs
- Enable CPU passthrough for better performance
- Configure NUMA topology awareness
- Use virtio drivers for paravirtualization
- Disable unnecessary CPU features
## Memory Optimization
- Enable huge pages for large VMs
- Use memory ballooning carefully
- Configure proper NUMA memory allocation
- Avoid memory overcommit in production
- Enable KSM for similar workloads
## Storage Optimization
- Use virtio-blk or virtio-scsi drivers
- Enable native AIO for better performance
- Use cache=none for production workloads
- Consider SR-IOV for high I/O requirements
- Use thin provisioning appropriately
## Network Optimization
- Use virtio-net drivers
- Enable multiqueue for network interfaces
- Consider SR-IOV for network-intensive VMs
- Configure proper MTU sizes
- Use Open vSwitch for complex networking
## General Guidelines
- Keep hypervisor and guest tools updated
- Monitor resource usage regularly
- Plan capacity appropriately
- Test performance changes thoroughly
- Document all optimizations
EOF
Conclusion
SUSE provides comprehensive networking and virtualization solutions suitable for enterprise deployments. From advanced networking configurations to full virtualization platforms with KVM and Xen, combined with software-defined networking capabilities, SUSE enables flexible and scalable infrastructure. Regular monitoring, security hardening, and performance optimization ensure reliable operation in production environments.