SUSE Linux Enterprise Server Performance Tuning: Complete Guide

Tyler Maginnis | February 17, 2024

Tags: SUSE, SLES, performance tuning, optimization, monitoring


This comprehensive guide covers performance tuning for SUSE Linux Enterprise Server, including kernel optimization, system tuning, application performance, storage and network optimization, and monitoring strategies.

Introduction to Performance Tuning

SLES performance optimization covers:

- Kernel Tuning: CPU, memory, and I/O parameters
- System Optimization: services, processes, and resources
- Application Tuning: database and web server optimization
- Storage Performance: I/O scheduling and caching
- Network Optimization: TCP/IP stack tuning
- Monitoring Tools: performance analysis and metrics
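
tuned ships with SLES and bundles many of the settings below into coherent profiles; applying one first gives a known-good starting point to measure against (profile names can vary slightly between SLES releases):

# Apply a general-purpose throughput profile with tuned
sudo zypper install tuned
sudo systemctl enable --now tuned.service
tuned-adm list
tuned-adm profile throughput-performance
tuned-adm active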

Performance Analysis Tools

Installing Performance Tools

# Install comprehensive performance analysis tools
# (sar, iostat, mpstat, and pidstat are provided by the sysstat package;
# vmstat comes with procps, which is installed by default)
sudo zypper install \
    sysstat \
    perf \
    htop \
    iotop \
    iftop \
    nethogs \
    dstat \
    collectl \
    atop \
    nmon \
    glances \
    bcc-tools \
    systemtap \
    tuned \
    numactl \
    irqbalance

# Install additional analysis tools
sudo zypper install \
    strace \
    ltrace \
    tcpdump \
    wireshark \
    blktrace \
    bpftrace \
    flamegraph \
    linux-tools

System Baseline

# Create performance baseline script
cat > /usr/local/bin/performance-baseline.sh << 'EOF'
#!/bin/bash
# System performance baseline

BASELINE_DIR="/var/log/performance/baseline-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BASELINE_DIR"

echo "Collecting system performance baseline..."

# System information
echo "=== System Information ===" > "$BASELINE_DIR/system-info.txt"
uname -a >> "$BASELINE_DIR/system-info.txt"
cat /etc/os-release >> "$BASELINE_DIR/system-info.txt"
lscpu >> "$BASELINE_DIR/system-info.txt"
free -h >> "$BASELINE_DIR/system-info.txt"
df -h >> "$BASELINE_DIR/system-info.txt"

# CPU information
echo "=== CPU Details ===" > "$BASELINE_DIR/cpu-info.txt"
cat /proc/cpuinfo >> "$BASELINE_DIR/cpu-info.txt"
lscpu --extended >> "$BASELINE_DIR/cpu-info.txt"

# Memory information
echo "=== Memory Details ===" > "$BASELINE_DIR/memory-info.txt"
cat /proc/meminfo >> "$BASELINE_DIR/memory-info.txt"
vmstat -s >> "$BASELINE_DIR/memory-info.txt"
numactl --hardware >> "$BASELINE_DIR/memory-info.txt"

# Current performance metrics (sample live; a bare "sar -A" only reads
# previously collected sysstat data)
sar -A 1 60 > "$BASELINE_DIR/sar-all.txt" &
iostat -x 1 60 > "$BASELINE_DIR/iostat.txt" &
mpstat -P ALL 1 60 > "$BASELINE_DIR/mpstat.txt" &
pidstat 1 60 > "$BASELINE_DIR/pidstat.txt" &

# Network performance
ss -s > "$BASELINE_DIR/socket-stats.txt"
netstat -i > "$BASELINE_DIR/network-interfaces.txt"
ip -s link > "$BASELINE_DIR/network-statistics.txt"

# Process information
ps auxf > "$BASELINE_DIR/process-tree.txt"
top -b -n 1 > "$BASELINE_DIR/top-snapshot.txt"

# Kernel parameters
sysctl -a > "$BASELINE_DIR/sysctl-all.txt"

# Wait for background jobs
wait

echo "Baseline collected in: $BASELINE_DIR"
EOF

chmod +x /usr/local/bin/performance-baseline.sh

CPU Performance Tuning

CPU Frequency Scaling

# Install CPU power management tools
sudo zypper install cpupower

# Check current CPU frequency and governor
grep "cpu MHz" /proc/cpuinfo
cpupower frequency-info

# Set CPU governor to performance
cpupower frequency-set -g performance

# Make performance governor persistent
cat > /etc/systemd/system/cpupower.service << 'EOF'
[Unit]
Description=CPU power management
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/cpupower frequency-set -g performance
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl enable cpupower.service
systemctl start cpupower.service
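
Confirm the governor took effect on every core:

# Every CPU should report "performance"
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c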

# Configure CPU C-states
# (SLES reads /etc/default/grub directly; it does not process a grub.d
# drop-in directory, so append to the main file)
# Disabling deep C-states lowers latency but raises idle power consumption
cat >> /etc/default/grub << 'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} intel_idle.max_cstate=1 processor.max_cstate=1"
EOF

grub2-mkconfig -o /boot/grub2/grub.cfg
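
After rebooting, confirm the C-state limits are in effect (output varies by idle driver):

# Verify idle-state limits
cpupower idle-info | head -20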

CPU Affinity and Isolation

# Isolate CPUs for specific workloads
# Edit GRUB configuration
cat >> /etc/default/grub << 'EOF'
# Isolate CPUs 4-7 for dedicated workload
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} isolcpus=4-7 nohz_full=4-7 rcu_nocbs=4-7"
EOF

grub2-mkconfig -o /boot/grub2/grub.cfg
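
After a reboot, confirm the kernel accepted the isolation:

# Should list the isolated CPUs (4-7 in this example)
cat /sys/devices/system/cpu/isolated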

# Configure IRQ affinity
cat > /usr/local/bin/set-irq-affinity.sh << 'EOF'
#!/bin/bash
# Set IRQ affinity for network interfaces

# Network interface
INTERFACE="eth0"
# CPUs to handle interrupts (bitmask)
CPU_MASK="f"  # CPUs 0-3

# Get IRQs for interface
IRQS=$(grep "$INTERFACE" /proc/interrupts | awk '{print $1}' | sed 's/://')

for IRQ in $IRQS; do
    echo $CPU_MASK > /proc/irq/$IRQ/smp_affinity
    echo "Set IRQ $IRQ affinity to CPUs: $CPU_MASK"
done

# Stop irqbalance so manual affinity assignments persist
systemctl stop irqbalance
# Route newly registered IRQs to CPU 0 by default
echo 1 > /proc/irq/default_smp_affinity
EOF

chmod +x /usr/local/bin/set-irq-affinity.sh

# Configure process CPU affinity
cat > /usr/local/bin/set-process-affinity.sh << 'EOF'
#!/bin/bash
# Set process CPU affinity

# Example: bind database processes to CPUs 4-7
# (pgrep can return several PIDs, so iterate instead of passing a list to taskset)
for pid in $(pgrep -f postgres); do
    taskset -cp 4-7 $pid
    echo "Bound database PID $pid to CPUs 4-7"
done

# Example: bind web server processes to CPUs 0-3
for pid in $(pgrep -f nginx); do
    taskset -cp 0-3 $pid
    echo "Bound web server PID $pid to CPUs 0-3"
done
EOF

chmod +x /usr/local/bin/set-process-affinity.sh

NUMA Optimization

# Check NUMA topology
numactl --hardware
numastat

# NUMA memory policy configuration
cat > /etc/sysctl.d/numa-tuning.conf << 'EOF'
# NUMA tuning parameters
vm.zone_reclaim_mode = 0
kernel.numa_balancing = 1
EOF

sysctl -p /etc/sysctl.d/numa-tuning.conf

# Run applications with NUMA awareness
cat > /usr/local/bin/numa-aware-start.sh << 'EOF'
#!/bin/bash
# Start applications with NUMA optimization

# Example: database on NUMA node 0
# (the PostgreSQL server binary is "postgres"; its install path varies by version)
numactl --cpunodebind=0 --membind=0 postgres -D /var/lib/pgsql/data

# Example: Application on specific NUMA node
APP_NAME="myapp"
NUMA_NODE="1"
numactl --cpunodebind=$NUMA_NODE --membind=$NUMA_NODE /usr/bin/$APP_NAME
EOF

chmod +x /usr/local/bin/numa-aware-start.sh

# Monitor NUMA statistics
watch -n 1 numastat -m
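
numastat can also break memory down per process, which confirms whether a pinned service is staying on its node (PostgreSQL here is just an example):

# Per-process NUMA memory usage
numastat -p $(pgrep -o -f postgres)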

Memory Performance Tuning

Memory Management

# Memory tuning parameters
cat > /etc/sysctl.d/memory-performance.conf << 'EOF'
# Memory performance tuning

# Virtual memory
vm.swappiness = 10
vm.vfs_cache_pressure = 50
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000

# Shared memory
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096

# Memory overcommit (overcommit_ratio only takes effect with mode 2)
vm.overcommit_memory = 1
vm.overcommit_ratio = 50

# Page allocation
vm.min_free_kbytes = 262144
vm.watermark_scale_factor = 200

# Pre-allocated huge pages
# (transparent huge pages have no sysctl; they are set via
# /sys/kernel/mm/transparent_hugepage/ - see below)
vm.nr_hugepages = 1024

# Memory compaction
vm.compact_unevictable_allowed = 1
vm.extfrag_threshold = 500
EOF

sysctl -p /etc/sysctl.d/memory-performance.conf
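
As noted above, transparent huge pages have no sysctl knob; they are controlled through sysfs:

# Check and set the THP mode (the bracketed value is active)
cat /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled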

# Configure huge pages
cat > /usr/local/bin/setup-hugepages.sh << 'EOF'
#!/bin/bash
# Configure huge pages for applications

# Calculate huge pages needed (example: 16GB)
MEMORY_GB=16
HUGEPAGE_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')
HUGEPAGES_NEEDED=$((MEMORY_GB * 1024 * 1024 / HUGEPAGE_SIZE_KB))

echo $HUGEPAGES_NEEDED > /proc/sys/vm/nr_hugepages

# Mount hugetlbfs (guard against duplicate fstab entries on re-runs)
mkdir -p /mnt/hugepages
mount -t hugetlbfs nodev /mnt/hugepages
grep -q "/mnt/hugepages" /etc/fstab || \
    echo "nodev /mnt/hugepages hugetlbfs defaults 0 0" >> /etc/fstab

# Set permissions
chmod 1777 /mnt/hugepages

echo "Configured $HUGEPAGES_NEEDED huge pages"
EOF

chmod +x /usr/local/bin/setup-hugepages.sh
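
Verify the allocation succeeded; large contiguous reservations can fail on a fragmented system:

# HugePages_Total should match the requested count
grep -i huge /proc/meminfo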

# Disable transparent huge pages for databases
cat > /etc/systemd/system/disable-thp.service << 'EOF'
[Unit]
Description=Disable Transparent Huge Pages
Before=postgresql.service mariadb.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl enable disable-thp.service
systemctl start disable-thp.service

Cache Optimization

# Page cache tuning
cat > /usr/local/bin/tune-page-cache.sh << 'EOF'
#!/bin/bash
# Tune page cache for workload

# Drop caches (use carefully in production)
sync
echo 3 > /proc/sys/vm/drop_caches

# Monitor cache usage
echo "=== Cache Statistics ==="
free -h
echo
echo "=== Slab Cache ==="
slabtop -o -s c | head -20
echo
echo "=== Page Cache Details ==="
cat /proc/meminfo | grep -E "Cached|Buffers|Dirty|Writeback"
EOF

chmod +x /usr/local/bin/tune-page-cache.sh

# Configure cache pressure for specific workloads
cat > /etc/sysctl.d/cache-tuning.conf << 'EOF'
# Database workload - favor cache
vm.vfs_cache_pressure = 10

# File server - balanced approach
#vm.vfs_cache_pressure = 100

# Application server - reduce cache pressure
#vm.vfs_cache_pressure = 200
EOF
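
Apply the drop-in as with the other files; note that at boot it overrides the vm.vfs_cache_pressure value from memory-performance.conf, since sysctl.d files are applied in lexical order:

sysctl -p /etc/sysctl.d/cache-tuning.conf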

Storage Performance Tuning

I/O Scheduler Optimization

# I/O scheduler configuration
cat > /etc/udev/rules.d/60-io-scheduler.rules << 'EOF'
# Set optimal I/O scheduler based on device type

# NVMe SSD - none/noop
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"

# SATA/SAS SSD - mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"

# HDD - bfq or mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"

# Virtual disks - none
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
EOF
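
udev does not re-read rules on its own; after writing this (or the later block-tuning) rules file, reload and retrigger so the schedulers apply without a reboot:

# Reload udev rules and re-apply them to existing block devices
udevadm control --reload-rules
udevadm trigger --subsystem-match=block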

# Tune I/O scheduler parameters
cat > /usr/local/bin/tune-io-scheduler.sh << 'EOF'
#!/bin/bash
# Tune I/O scheduler parameters

for dev in /sys/block/sd*/queue; do
    DEVICE=$(basename $(dirname $dev))
    SCHEDULER=$(cat $dev/scheduler | grep -o '\[.*\]' | tr -d '[]')

    case $SCHEDULER in
        "mq-deadline")
            echo 100 > $dev/iosched/read_expire
            echo 3000 > $dev/iosched/write_expire
            echo 1 > $dev/iosched/writes_starved
            echo 1 > $dev/iosched/fifo_batch
            ;;
        "bfq")
            echo 0 > $dev/iosched/low_latency
            echo 180000 > $dev/iosched/timeout_sync
            echo 1 > $dev/iosched/max_budget
            ;;
        "kyber")
            echo 16 > $dev/iosched/read_lat_nsec
            echo 64 > $dev/iosched/write_lat_nsec
            ;;
    esac

    echo "Tuned $DEVICE with $SCHEDULER scheduler"
done
EOF

chmod +x /usr/local/bin/tune-io-scheduler.sh

# Add to boot
cat > /etc/systemd/system/io-scheduler-tuning.service << 'EOF'
[Unit]
Description=I/O Scheduler Tuning
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/tune-io-scheduler.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl enable io-scheduler-tuning.service
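
The active scheduler can be checked per device at any time (sda is an example):

# The bracketed entry is the scheduler in effect
cat /sys/block/sda/queue/scheduler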

Storage Queue Tuning

# Block device queue tuning
cat > /etc/udev/rules.d/61-block-tuning.rules << 'EOF'
# Block device performance tuning

# Increase queue depth for enterprise storage
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{device/vendor}=="NETAPP*", ATTR{queue/nr_requests}="512"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{device/vendor}=="EMC*", ATTR{queue/nr_requests}="512"

# Set read-ahead for different device types
# SSD - smaller read-ahead
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/read_ahead_kb}="128"
# HDD - larger read-ahead
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/read_ahead_kb}="1024"

# NVMe optimization
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/io_poll}="1"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/io_poll_delay}="0"
EOF

# Manual storage tuning script
cat > /usr/local/bin/tune-storage.sh << 'EOF'
#!/bin/bash
# Storage performance tuning

# Tune all block devices
for device in /sys/block/*/queue; do
    DEV=$(basename $(dirname $device))

    # Skip loop and ram devices
    [[ "$DEV" =~ ^(loop|ram) ]] && continue

    # Set queue parameters
    echo 512 > $device/nr_requests 2>/dev/null
    echo 2 > $device/rq_affinity 2>/dev/null
    echo 0 > $device/add_random 2>/dev/null
    # Do not force queue/rotational: the kernel detects it, and overriding
    # it on spinning disks mis-tunes the I/O path

    # NCQ for SATA devices
    if [ -f /sys/block/$DEV/device/queue_depth ]; then
        echo 31 > /sys/block/$DEV/device/queue_depth
    fi

    echo "Tuned storage device: $DEV"
done

# Disable write barriers for battery-backed storage
# mount -o remount,nobarrier /data
EOF

chmod +x /usr/local/bin/tune-storage.sh

Filesystem Tuning

# XFS tuning
cat > /usr/local/bin/tune-xfs.sh << 'EOF'
#!/bin/bash
# XFS filesystem tuning

# Mount options for performance
# ("nobarrier" was removed from XFS in kernel 4.19; current SLES kernels
# reject it, so it is omitted here)
XFS_MOUNT_OPTS="noatime,nodiratime,inode64,allocsize=16m"

# Tune existing XFS mounts
for mount in $(mount -t xfs | awk '{print $3}'); do
    mount -o remount,$XFS_MOUNT_OPTS $mount
    echo "Remounted $mount with options: $XFS_MOUNT_OPTS"
done

# XFS specific tuning
xfs_io -x -c "extsize 1m" /data  # Set extent size
echo 256 > /proc/sys/fs/xfs/xfssyncd_centisecs  # Reduce sync frequency
echo 3000 > /proc/sys/fs/xfs/age_buffer_centisecs  # Buffer aging
EOF

chmod +x /usr/local/bin/tune-xfs.sh

# ext4 tuning
cat > /usr/local/bin/tune-ext4.sh << 'EOF'
#!/bin/bash
# ext4 filesystem tuning

# Mount options for performance
# ("nobh" is obsolete and ignored; "data=writeback" cannot be changed on a
# remount - set it in /etc/fstab only if the crash-consistency trade-off is
# acceptable, and use "nobarrier" only with battery-backed write caches)
EXT4_MOUNT_OPTS="noatime,nodiratime"

# Tune existing ext4 mounts
for mount in $(mount -t ext4 | awk '{print $3}'); do
    mount -o remount,$EXT4_MOUNT_OPTS $mount
    echo "Remounted $mount with options: $EXT4_MOUNT_OPTS"
done

# Disable ext4 journal for extreme performance (data loss risk!)
# tune2fs -O ^has_journal /dev/sdX
# e2fsck -f /dev/sdX
EOF

chmod +x /usr/local/bin/tune-ext4.sh

# Btrfs tuning
cat > /usr/local/bin/tune-btrfs.sh << 'EOF'
#!/bin/bash
# Btrfs filesystem tuning

# Mount options for performance
# (autodefrag benefits small-file workloads but can hurt databases and VM images)
BTRFS_MOUNT_OPTS="noatime,nodiratime,compress=zstd:1,space_cache=v2,autodefrag"

# Tune existing Btrfs mounts
for mount in $(mount -t btrfs | awk '{print $3}'); do
    mount -o remount,$BTRFS_MOUNT_OPTS $mount
    echo "Remounted $mount with options: $BTRFS_MOUNT_OPTS"
done

# Disable Btrfs quotas for performance
for mount in $(mount -t btrfs | awk '{print $3}'); do
    btrfs quota disable $mount 2>/dev/null
done
EOF

chmod +x /usr/local/bin/tune-btrfs.sh
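
The remounts above last only until the next boot; for persistence, put the options in /etc/fstab (devices and mount points below are illustrative):

# Example /etc/fstab entries with performance mount options
# /dev/sdb1  /data    xfs    noatime,nodiratime,inode64,allocsize=16m  0 0
# /dev/sdc1  /srv     ext4   noatime,nodiratime                        0 0
# /dev/sdd1  /backup  btrfs  noatime,compress=zstd:1,space_cache=v2    0 0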

Network Performance Tuning

TCP/IP Stack Optimization

# Network performance tuning
cat > /etc/sysctl.d/network-performance.conf << 'EOF'
# Network stack performance tuning

# Core network settings
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.optmem_max = 65536
net.core.netdev_max_backlog = 5000
net.core.somaxconn = 65535
net.core.dev_weight = 64

# TCP settings
net.ipv4.tcp_rmem = 4096 262144 134217728
net.ipv4.tcp_wmem = 4096 262144 134217728
net.ipv4.tcp_mem = 786432 1048576 16777216
# fq pacing is the recommended companion qdisc for BBR
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3

# IP settings
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.rp_filter = 1

# IPv6 settings (if using IPv6)
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.route.max_size = 4096
EOF

sysctl -p /etc/sysctl.d/network-performance.conf

# Enable BBR congestion control
modprobe tcp_bbr
echo "tcp_bbr" > /etc/modules-load.d/tcp_bbr.conf

Network Interface Tuning

# NIC tuning script
cat > /usr/local/bin/tune-network-interfaces.sh << 'EOF'
#!/bin/bash
# Network interface performance tuning

for interface in $(ls /sys/class/net/ | grep -v lo); do
    # Check if interface is up
    if [ "$(cat /sys/class/net/$interface/operstate)" != "up" ]; then
        continue
    fi

    # Increase ring buffer sizes
    ethtool -G $interface rx 4096 tx 4096 2>/dev/null

    # Enable offload features (LRO is omitted: it can break forwarding and
    # bridging; enable it only on pure endpoint hosts)
    ethtool -K $interface rx on tx on sg on tso on gso on gro on 2>/dev/null

    # Set interrupt coalescing
    ethtool -C $interface adaptive-rx on adaptive-tx on 2>/dev/null
    ethtool -C $interface rx-usecs 100 tx-usecs 100 2>/dev/null

    # Configure multiqueue
    if [ -d /sys/class/net/$interface/queues ]; then
        NUM_CPUS=$(nproc)
        ethtool -L $interface combined $NUM_CPUS 2>/dev/null
    fi

    # Set MTU for jumbo frames (if supported)
    # ip link set dev $interface mtu 9000

    echo "Tuned network interface: $interface"
done

# Configure RPS (Receive Packet Steering - the software analogue of RSS)
for interface in $(ls /sys/class/net/ | grep -v lo); do
    if [ -f /sys/class/net/$interface/queues/rx-0/rps_cpus ]; then
        echo f > /sys/class/net/$interface/queues/rx-0/rps_cpus
        echo "Enabled RPS on $interface"
    fi
done

# Configure XPS (Transmit Packet Steering)
for interface in $(ls /sys/class/net/ | grep -v lo); do
    if [ -d /sys/class/net/$interface/queues ]; then
        for queue in /sys/class/net/$interface/queues/tx-*; do
            echo f > $queue/xps_cpus 2>/dev/null
        done
        echo "Configured XPS on $interface"
    fi
done
EOF

chmod +x /usr/local/bin/tune-network-interfaces.sh

# Add to systemd
cat > /etc/systemd/system/network-tuning.service << 'EOF'
[Unit]
Description=Network Interface Performance Tuning
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/tune-network-interfaces.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl enable network-tuning.service
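
Ring sizes and offload states can be inspected per interface to confirm the script's effect (eth0 is an example name):

# Inspect applied NIC settings
ethtool -g eth0
ethtool -k eth0 | head -15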

Application Performance Tuning

Database Optimization (PostgreSQL)

# PostgreSQL performance tuning
cat > /usr/local/bin/tune-postgresql.sh << 'EOF'
#!/bin/bash
# PostgreSQL performance tuning

PGDATA="/var/lib/pgsql/data"
TOTAL_MEM=$(free -b | grep Mem | awk '{print $2}')
SHARED_BUFFERS=$((TOTAL_MEM / 4))
EFFECTIVE_CACHE=$((TOTAL_MEM * 3 / 4))
WORK_MEM=$((TOTAL_MEM / 100))
MAINTENANCE_MEM=$((TOTAL_MEM / 20))

# Append once - re-running this script duplicates the block; the byte "B"
# unit in the settings below requires PostgreSQL 12 or newer
cat >> $PGDATA/postgresql.conf << EOL

# Performance Tuning
shared_buffers = ${SHARED_BUFFERS}B
effective_cache_size = ${EFFECTIVE_CACHE}B
work_mem = ${WORK_MEM}B
maintenance_work_mem = ${MAINTENANCE_MEM}B
wal_buffers = 16MB
checkpoint_completion_target = 0.9
max_connections = 200
max_prepared_transactions = 0
random_page_cost = 1.1  # For SSD
effective_io_concurrency = 200
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
wal_compression = on
wal_log_hints = on
shared_preload_libraries = 'pg_stat_statements'

# Logging
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_temp_files = 0
log_autovacuum_min_duration = 0

# Autovacuum tuning
autovacuum_max_workers = 4
autovacuum_naptime = 10s
autovacuum_vacuum_scale_factor = 0.02
autovacuum_analyze_scale_factor = 0.01
EOL

systemctl restart postgresql
EOF

chmod +x /usr/local/bin/tune-postgresql.sh
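
Because shared_preload_libraries loads pg_stat_statements, the extension can be created and queried to find the most expensive statements (total_exec_time assumes PostgreSQL 13+; older releases call it total_time):

# Enable and query pg_stat_statements
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
sudo -u postgres psql -c "SELECT calls, round(total_exec_time::numeric,1) AS total_ms, left(query,60) AS query FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 10;"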

Web Server Optimization (Nginx)

# Nginx performance tuning
# Main-context directives (worker_processes, events) must live in
# /etc/nginx/nginx.conf; files in conf.d/ are included inside the http{}
# block, so this replaces the main config - back up the original first
cp -n /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
cat > /etc/nginx/nginx.conf << 'EOF'
# Nginx performance settings

# Worker processes and connections
worker_processes auto;
worker_cpu_affinity auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;

    # Buffer sizes
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;
    output_buffers 1 32k;
    postpone_output 1460;

    # Timeouts
    client_header_timeout 60s;
    client_body_timeout 60s;
    send_timeout 60s;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript 
               application/json application/javascript application/xml+rss 
               application/rss+xml application/atom+xml image/svg+xml;

    # Cache settings
    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    # SSL optimization
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    # Pull in server blocks (vhosts.d is the SLES convention)
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/vhosts.d/*.conf;
}
EOF

# Test and reload Nginx
nginx -t && systemctl reload nginx

Java Application Tuning

# JVM performance tuning script
cat > /usr/local/bin/tune-jvm.sh << 'EOF'
#!/bin/bash
# JVM performance tuning

APP_NAME="myapp"
TOTAL_MEM=$(free -m | grep Mem | awk '{print $2}')
HEAP_SIZE=$((TOTAL_MEM * 70 / 100))m
METASPACE_SIZE="256m"

# JVM options for performance
JVM_OPTS="-server \
-Xms${HEAP_SIZE} \
-Xmx${HEAP_SIZE} \
-XX:MaxMetaspaceSize=${METASPACE_SIZE} \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:ParallelGCThreads=8 \
-XX:ConcGCThreads=2 \
-XX:InitiatingHeapOccupancyPercent=70 \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=/var/log/${APP_NAME}/heapdump.hprof \
-XX:+UseStringDeduplication \
-XX:+AlwaysPreTouch \
-XX:+DisableExplicitGC \
-XX:+UseNUMA \
-XX:+PerfDisableSharedMem \
-Djava.net.preferIPv4Stack=true \
-Djava.awt.headless=true"

# GC logging
GC_LOG_OPTS="-Xlog:gc*:file=/var/log/${APP_NAME}/gc.log:time,uptime:filecount=10,filesize=10M"

# Start application
java $JVM_OPTS $GC_LOG_OPTS -jar /opt/${APP_NAME}/${APP_NAME}.jar
EOF

chmod +x /usr/local/bin/tune-jvm.sh
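
To see how the garbage collector and heap flags resolve on the installed JVM without starting the application, a standalone check like this works (heap values here are illustrative):

# Print the effective values of key flags
java -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|MaxGCPauseMillis|UseG1GC'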

System Services Optimization

Systemd Tuning

# Systemd optimization
cat > /etc/systemd/system.conf.d/performance.conf << 'EOF'
[Manager]
# Increase default limits
DefaultLimitNOFILE=65535
DefaultLimitNPROC=65535
DefaultLimitMEMLOCK=infinity
DefaultTasksMax=65535
DefaultTimeoutStopSec=10s
DefaultRestartSec=100ms

# CPU affinity for the service manager - note that child services inherit
# this mask unless they set their own CPUAffinity
CPUAffinity=0-3

# Watchdog
RuntimeWatchdogSec=0
ShutdownWatchdogSec=10min
EOF

# Systemd journal tuning
cat > /etc/systemd/journald.conf.d/performance.conf << 'EOF'
[Journal]
Storage=persistent
Compress=yes
SystemMaxUse=1G
SystemKeepFree=15%
SystemMaxFileSize=100M
MaxRetentionSec=30day
ForwardToSyslog=no
MaxLevelStore=info
RateLimitIntervalSec=30s
RateLimitBurst=10000
EOF

systemctl daemon-reload
systemctl restart systemd-journald

Service Startup Optimization

# Analyze boot performance
systemd-analyze
systemd-analyze blame
systemd-analyze critical-chain

# Disable unnecessary services
cat > /usr/local/bin/optimize-services.sh << 'EOF'
#!/bin/bash
# Optimize system services

# Services to disable (adjust based on needs)
DISABLE_SERVICES=(
    "bluetooth.service"
    "cups.service"
    "ModemManager.service"
    "packagekit.service"
    "avahi-daemon.service"
)

for service in "${DISABLE_SERVICES[@]}"; do
    if systemctl is-enabled "$service" &>/dev/null; then
        systemctl disable "$service"
        systemctl stop "$service"
        echo "Disabled: $service"
    fi
done

# Mask readahead services (these only exist on older systemd releases;
# masking a non-existent unit is harmless)
systemctl mask systemd-readahead-collect.service
systemctl mask systemd-readahead-replay.service

# Enable performance-related services
systemctl enable tuned.service
systemctl start tuned.service
tuned-adm profile throughput-performance
EOF

chmod +x /usr/local/bin/optimize-services.sh

Monitoring and Analysis

Performance Monitoring Setup

# Continuous performance monitoring
cat > /usr/local/bin/performance-monitor.sh << 'EOF'
#!/bin/bash
# Real-time performance monitoring

LOG_DIR="/var/log/performance"
mkdir -p "$LOG_DIR"

# Start monitoring processes
sar -A -o "$LOG_DIR/sar-$(date +%Y%m%d).dat" 60 >/dev/null 2>&1 &
iostat -x 60 >> "$LOG_DIR/iostat-$(date +%Y%m%d).log" 2>&1 &
mpstat -P ALL 60 >> "$LOG_DIR/mpstat-$(date +%Y%m%d).log" 2>&1 &
vmstat 60 >> "$LOG_DIR/vmstat-$(date +%Y%m%d).log" 2>&1 &

# Network monitoring (iftop -t -s prints a single report then exits, so loop it)
while true; do iftop -t -s 60 >> "$LOG_DIR/iftop-$(date +%Y%m%d).log" 2>&1; done &

# Process monitoring
pidstat 60 >> "$LOG_DIR/pidstat-$(date +%Y%m%d).log" 2>&1 &

echo "Performance monitoring started. Logs in: $LOG_DIR"
EOF

chmod +x /usr/local/bin/performance-monitor.sh

# Create systemd service for monitoring
cat > /etc/systemd/system/performance-monitor.service << 'EOF'
[Unit]
Description=System Performance Monitoring
After=multi-user.target

[Service]
# oneshot + RemainAfterExit keeps the backgrounded collectors running in the
# unit's cgroup without systemd guessing a main PID to track
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/performance-monitor.sh

[Install]
WantedBy=multi-user.target
EOF

systemctl enable performance-monitor.service
systemctl start performance-monitor.service

Performance Analysis Tools

# Performance analysis script
cat > /usr/local/bin/analyze-performance.sh << 'EOF'
#!/bin/bash
# Analyze system performance

echo "=== System Performance Analysis ==="
echo "Date: $(date)"
echo

# CPU Analysis
echo "=== CPU Performance ==="
echo "CPU Utilization:"
mpstat 1 5 | tail -1
echo
echo "Top CPU Consumers:"
ps aux --sort=-%cpu | head -10
echo

# Memory Analysis
echo "=== Memory Performance ==="
free -h
echo
echo "Memory Stats:"
vmstat -s | head -20
echo
echo "Top Memory Consumers:"
ps aux --sort=-%mem | head -10
echo

# I/O Analysis
echo "=== I/O Performance ==="
iostat -x 1 5 | grep -A50 "Device"
echo

# Network Analysis
echo "=== Network Performance ==="
ss -s
echo
netstat -i
echo

# Process Analysis
echo "=== Process Statistics ==="
ps aux | wc -l | xargs echo "Total Processes:"
ps aux | awk '$8 ~ /^D/' | wc -l | xargs echo "Processes in D state:"
echo

# System Load
echo "=== System Load ==="
uptime
echo
cat /proc/loadavg
echo

# Recommendations
echo "=== Performance Recommendations ==="
if [ $(cat /proc/loadavg | awk '{print $1}' | cut -d. -f1) -gt $(nproc) ]; then
    echo "- High load average detected. Check CPU-bound processes."
fi

if [ $(free | grep Mem | awk '{print ($3/$2)*100}' | cut -d. -f1) -gt 90 ]; then
    echo "- High memory usage. Consider adding swap or upgrading RAM."
fi

# the last column of "iostat -x" output is %util across sysstat versions
if [ $(iostat -x 1 2 | tail -n +7 | awk 'NF {sum+=$NF; n++} END {if (n) print int(sum/n); else print 0}') -gt 80 ]; then
    echo "- High I/O utilization. Check disk-intensive processes."
fi
EOF

chmod +x /usr/local/bin/analyze-performance.sh

Kernel Tuning

Kernel Parameters

# Advanced kernel tuning
cat > /etc/sysctl.d/advanced-kernel-tuning.conf << 'EOF'
# Advanced kernel performance tuning

# Scheduler tuning
# (on kernels >= 5.13 most sched_* knobs moved to debugfs under
# /sys/kernel/debug/sched/ and these sysctls no longer exist)
kernel.sched_migration_cost_ns = 5000000
kernel.sched_autogroup_enabled = 0
kernel.sched_tunable_scaling = 0
kernel.sched_latency_ns = 24000000
kernel.sched_min_granularity_ns = 8000000
kernel.sched_wakeup_granularity_ns = 10000000

# Real-time tuning
kernel.sched_rt_runtime_us = 950000
kernel.sched_rt_period_us = 1000000

# Timer frequency
kernel.timer_migration = 0

# File system
fs.file-max = 2097152
fs.nr_open = 1048576
fs.aio-max-nr = 1048576

# IPC
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 32000 100 128

# Relax security restrictions to aid profiling and tracing
# (avoid these on hardened or multi-tenant systems)
kernel.kptr_restrict = 0
kernel.yama.ptrace_scope = 0
EOF

sysctl -e -p /etc/sysctl.d/advanced-kernel-tuning.conf  # -e skips keys absent on newer kernels

CPU Governor Tuning

# Advanced CPU governor configuration
cat > /usr/local/bin/cpu-governor-tuning.sh << 'EOF'
#!/bin/bash
# CPU governor advanced tuning

GOVERNOR="performance"

# Set governor for all CPUs
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo $GOVERNOR > $cpu/cpufreq/scaling_governor 2>/dev/null
done

# Performance governor specific tuning
if [ "$GOVERNOR" = "performance" ]; then
    # Set to maximum frequency
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        cat $cpu/cpufreq/cpuinfo_max_freq > $cpu/cpufreq/scaling_min_freq 2>/dev/null
    done
fi

# Intel P-state tuning (if applicable)
if [ -d /sys/devices/system/cpu/intel_pstate ]; then
    echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct
    echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
fi

# Disable CPU idle states for low latency
for i in /sys/devices/system/cpu/cpu*/cpuidle/state*/disable; do
    echo 1 > $i 2>/dev/null
done

echo "CPU governor tuning completed"
EOF

chmod +x /usr/local/bin/cpu-governor-tuning.sh

Performance Testing

Benchmark Suite

# Install benchmark tools
sudo zypper install sysbench fio iperf3 stress-ng

# System benchmark script
cat > /usr/local/bin/run-benchmarks.sh << 'EOF'
#!/bin/bash
# Comprehensive system benchmarks

RESULTS_DIR="/var/log/benchmarks/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$RESULTS_DIR"

echo "Running system benchmarks..."

# CPU benchmark
echo "=== CPU Benchmark ===" | tee "$RESULTS_DIR/cpu-benchmark.txt"
sysbench cpu --cpu-max-prime=20000 --threads=$(nproc) run | tee -a "$RESULTS_DIR/cpu-benchmark.txt"

# Memory benchmark
echo "=== Memory Benchmark ===" | tee "$RESULTS_DIR/memory-benchmark.txt"
sysbench memory --memory-total-size=10G --threads=$(nproc) run | tee -a "$RESULTS_DIR/memory-benchmark.txt"

# Disk I/O benchmark
echo "=== Disk I/O Benchmark ===" | tee "$RESULTS_DIR/disk-benchmark.txt"
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting | tee -a "$RESULTS_DIR/disk-benchmark.txt"

# Network benchmark (requires server)
# iperf3 -c <server_ip> -t 60 | tee "$RESULTS_DIR/network-benchmark.txt"

echo "Benchmarks completed. Results in: $RESULTS_DIR"
EOF

chmod +x /usr/local/bin/run-benchmarks.sh

Troubleshooting Performance Issues

Performance Debugging

# Performance troubleshooting script
cat > /usr/local/bin/debug-performance.sh << 'EOF'
#!/bin/bash
# Debug performance issues

echo "=== Performance Debugging ==="

# Check for CPU throttling
echo "CPU Throttling Check:"
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    if [ -f $cpu/thermal_throttle/core_throttle_count ]; then
        count=$(cat $cpu/thermal_throttle/core_throttle_count)
        if [ $count -gt 0 ]; then
            echo "$(basename $cpu): Throttled $count times"
        fi
    fi
done

# Check for memory pressure
echo -e "\nMemory Pressure:"
if [ -f /proc/pressure/memory ]; then
    cat /proc/pressure/memory
fi

# Check for I/O pressure
echo -e "\nI/O Pressure:"
if [ -f /proc/pressure/io ]; then
    cat /proc/pressure/io
fi

# Check for blocked processes
echo -e "\nBlocked Processes:"
ps aux | awk '$8 ~ /D/ { print $0 }'

# Check interrupt distribution
echo -e "\nInterrupt Distribution:"
cat /proc/interrupts | grep -E "CPU0|eth|nvme|xhci"

# Check for swap usage
echo -e "\nSwap Usage:"
free -h | grep Swap
swapon -s

# Check for OOM killer activity
echo -e "\nOOM Killer Activity:"
dmesg | grep -i "killed process" | tail -5
EOF

chmod +x /usr/local/bin/debug-performance.sh

Best Practices

Performance Tuning Checklist

# Create performance tuning checklist
cat > /usr/local/share/performance-tuning-checklist.md << 'EOF'
# SLES Performance Tuning Checklist

## CPU Optimization
- [ ] Set CPU governor to performance
- [ ] Configure CPU affinity for critical processes
- [ ] Disable CPU power saving features (if needed)
- [ ] Enable CPU isolation for dedicated workloads
- [ ] Configure NUMA properly

## Memory Optimization
- [ ] Set appropriate vm.swappiness
- [ ] Configure huge pages if needed
- [ ] Tune page cache settings
- [ ] Disable transparent huge pages for databases
- [ ] Set proper overcommit ratios

## Storage Optimization
- [ ] Choose correct I/O scheduler
- [ ] Tune queue depths
- [ ] Configure read-ahead values
- [ ] Use appropriate filesystem mount options
- [ ] Enable write caching (with UPS)

## Network Optimization
- [ ] Increase buffer sizes
- [ ] Enable offloading features
- [ ] Configure interrupt affinity
- [ ] Tune TCP parameters
- [ ] Enable jumbo frames (if supported)

## Application Tuning
- [ ] Database-specific optimizations
- [ ] Web server configuration
- [ ] JVM heap sizing
- [ ] Application-specific settings
- [ ] Connection pool sizing

## System Services
- [ ] Disable unnecessary services
- [ ] Optimize systemd settings
- [ ] Configure proper logging levels
- [ ] Enable only required kernel modules
- [ ] Review security vs performance trade-offs

## Monitoring Setup
- [ ] Deploy performance monitoring
- [ ] Configure alerting thresholds
- [ ] Set up trending analysis
- [ ] Document baseline metrics
- [ ] Regular performance reviews
EOF

Conclusion

SUSE Linux Enterprise Server performance tuning requires a systematic approach covering CPU, memory, storage, and network optimization. Regular monitoring and analysis ensure sustained performance improvements. Always test changes in non-production environments and document all modifications for future reference.