SUSE Manager: Complete Configuration and Management Guide
SUSE Manager provides comprehensive Linux systems management at enterprise scale. This guide covers installation, configuration, and advanced features for managing thousands of systems efficiently.
Introduction to SUSE Manager
SUSE Manager offers:
- Centralized Management: Single interface for all systems
- Automated Patching: Streamlined update management
- Configuration Management: Salt-based automation
- Compliance Scanning: Security and regulatory compliance
- Content Lifecycle: Staged deployment environments
- Multi-Platform Support: SLES, RHEL, Ubuntu, and more
Planning and Architecture
System Requirements
# SUSE Manager Server Requirements
# CPU: 4+ cores (8+ recommended)
# RAM: 16GB minimum (32GB+ for production)
# Disk: 100GB+ for OS, 50GB+ per channel
# Database: PostgreSQL (embedded or external)
# Supported client systems:
# - SUSE Linux Enterprise 12/15
# - Red Hat Enterprise Linux 6/7/8/9
# - Ubuntu 18.04/20.04/22.04
# - openSUSE Leap
# - CentOS/AlmaLinux/Rocky Linux
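Before running the installer, it is worth confirming the host actually meets these numbers and can resolve its own FQDN, since both are common install blockers. A minimal pre-flight sketch; the thresholds mirror the requirements above, so adjust them to your own sizing:
#!/bin/bash
# Pre-flight check: FQDN resolution, RAM, and channel storage
FQDN=$(hostname -f)
echo "FQDN: ${FQDN:-NOT SET}"
# The server must be able to resolve its own FQDN
getent hosts "$FQDN" > /dev/null || echo "WARNING: $FQDN does not resolve"
# RAM: 16GB minimum (value in kB)
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
[ "$MEM_KB" -lt 16000000 ] && echo "WARNING: less than 16GB RAM"
# Channel storage lives under /var/spacewalk once installed
df -h /var/spacewalk 2>/dev/null || df -h /var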
Network Architecture
# Network planning
# SUSE Manager Server: 192.168.1.10
# Database Server: 192.168.1.11 (if external)
# SUSE Manager Proxy: 192.168.2.10 (DMZ)
# Client Systems: Various subnets
# Required ports
# 80/443: Web UI and client connections
# 4505/4506: Salt master ports
# 5222: Push notifications (jabber)
# 5269: Push to proxy (jabber)
# 69: TFTP (for PXE boot)
# 25151: Cobbler (provisioning)
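If firewalld is running on the server, these ports must be opened before clients can reach it. A sketch assuming the default public zone:
# Open required ports (assumes firewalld with the default zone)
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --permanent --add-port=4505-4506/tcp   # Salt master
firewall-cmd --permanent --add-port=5222/tcp        # push notifications
firewall-cmd --permanent --add-port=5269/tcp        # push to proxy
firewall-cmd --permanent --add-port=69/udp          # TFTP/PXE
firewall-cmd --permanent --add-port=25151/tcp       # Cobbler
firewall-cmd --reload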
Installation and Initial Setup
Installing SUSE Manager Server
# Register system
sudo SUSEConnect -r <REGISTRATION_CODE>
# Add SUSE Manager module
sudo SUSEConnect -p sle-module-suse-manager-server/4.3/x86_64
# Install SUSE Manager pattern
sudo zypper install -t pattern suma_server
# Run setup script
sudo yast2 susemanager_setup
# Alternative command-line setup
sudo /usr/lib/susemanager/bin/mgr-setup -s
# Setup configuration file (mgr-setup reads /root/setup_env.sh when run with -s)
cat > /root/setup_env.sh << EOF
MANAGER_ADMIN_EMAIL="admin@example.com"
MANAGER_ADMIN_PASSWORD="SecurePassword123!"
MANAGER_DB_HOST="localhost"
MANAGER_DB_NAME="susemanager"
MANAGER_DB_USER="susemanager"
MANAGER_DB_PASSWORD="DBPassword123!"
MANAGER_ENABLE_TFTP="y"
CERT_CITY="Louisville"
CERT_STATE="KY"
CERT_COUNTRY="US"
CERT_ORG="Example Corp"
CERT_OU="IT Department"
CERT_EMAIL="admin@example.com"
CERT_PASS="CertPassword123!"
EOF
# Run setup with the configuration file in place
sudo mgr-setup -s
Post-Installation Configuration
# Access SUSE Manager Web UI
# https://suma-server.example.com
# Configure organization
spacecmd org_create --name="Example Corp" \
--email="admin@example.com"
# Create admin users
spacecmd user_create --username="jdoe" \
--password="UserPass123!" \
--email="jdoe@example.com" \
--first="John" --last="Doe" \
--org="Example Corp"
# Grant the organization administrator role
spacecmd user_addrole jdoe org_admin
# Configure email notifications
cat > /etc/postfix/main.cf << EOF
myhostname = suma-server.example.com
mydomain = example.com
myorigin = \$mydomain
inet_interfaces = localhost
relayhost = [smtp.example.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
EOF
# Create SMTP authentication
echo "[smtp.example.com]:587 username:password" > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
systemctl restart postfix
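A quick end-to-end test of the relay (the sendmail binary is provided by postfix; substitute a mailbox you can actually check):
# Send a test message through the relay and watch the delivery attempt
printf 'Subject: SUMA relay test\n\nRelay OK\n' | sendmail admin@example.com
journalctl -u postfix -f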
Channel and Repository Management
Synchronizing Software Channels
# Configure SCC credentials
mgr-sync add credentials
# List available products
mgr-sync list products
# Add SLES 15 SP5 channels
mgr-sync add products sles15-sp5-x86_64
# Sync specific channels
spacewalk-repo-sync --channel sles15-sp5-pool-x86_64
# Schedule regular sync
cat > /etc/cron.d/suma-sync << EOF
0 2 * * * root /usr/bin/mgr-sync refresh > /var/log/suma-sync.log 2>&1
0 3 * * * root /usr/bin/spacewalk-repo-sync --channel sles15-sp5-pool-x86_64 > /var/log/repo-sync.log 2>&1
EOF
# Custom channels
spacecmd softwarechannel_create \
--name="custom-apps" \
--label="custom-apps" \
--summary="Custom Applications"
# Add packages to custom channel
rhnpush --channel=custom-apps --server=localhost \
/path/to/packages/*.rpm
Content Lifecycle Management
# Create lifecycle project
spacecmd contentproject_create \
--name="webserver-lifecycle" \
--label="webserver-lifecycle" \
--description="Web Server Lifecycle"
# Create environments
spacecmd contentproject_addenvironment \
--project="webserver-lifecycle" \
--name="Development" \
--description="Development Environment"
spacecmd contentproject_addenvironment \
--project="webserver-lifecycle" \
--name="Testing" \
--description="Testing Environment"
spacecmd contentproject_addenvironment \
--project="webserver-lifecycle" \
--name="Production" \
--description="Production Environment"
# Attach source channels
spacecmd contentproject_attachsource \
--project="webserver-lifecycle" \
--channel="sles15-sp5-pool-x86_64"
# Build project
spacecmd contentproject_build \
--project="webserver-lifecycle"
# Promote content between environments
spacecmd contentproject_promote \
--project="webserver-lifecycle" \
--from="Development" \
--to="Testing"
Client System Registration
Bootstrapping Clients
# Generate bootstrap script
mgr-bootstrap
# Edit bootstrap script
cat > /srv/www/htdocs/pub/bootstrap/bootstrap.sh << 'EOF'
#!/bin/bash
# SUSE Manager Client Bootstrap Script
SUMA_SERVER="suma-server.example.com"
ACTIVATION_KEY="1-default-key"
# Install CA certificate
curl -Sks https://${SUMA_SERVER}/pub/RHN-ORG-TRUSTED-SSL-CERT \
-o /usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
ln -s /usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT \
/usr/share/pki/trust/anchors/
update-ca-certificates
# Install client tools
zypper -n install salt-minion
# Configure Salt minion
cat > /etc/salt/minion << MINION_EOF
master: ${SUMA_SERVER}
MINION_EOF
# Start Salt minion
systemctl enable salt-minion
systemctl start salt-minion
# Register with activation key (passed as pillar data)
salt-call state.apply channels.repos \
pillar="{\"activation_key\": \"${ACTIVATION_KEY}\"}"
EOF
# Make bootstrap script executable
chmod +x /srv/www/htdocs/pub/bootstrap/bootstrap.sh
# Bootstrap via Salt SSH
salt-ssh -i --roster=scan 192.168.1.0/24 \
state.apply certs,channels.repos \
pillar='{"mgr_server": "suma-server.example.com",
"activation_key": "1-default-key"}'
Creating Activation Keys
# Create activation key
spacecmd activationkey_create \
--key="1-webserver-key" \
--description="Web Server Systems" \
--base-channel="sles15-sp5-pool-x86_64"
# Add child channels
spacecmd activationkey_addchildchannel \
1-webserver-key \
sles15-sp5-updates-x86_64
# Add packages
spacecmd activationkey_addpackages \
1-webserver-key \
apache2 php7 mysql
# Add configuration channels
spacecmd activationkey_addconfigchannel \
1-webserver-key \
webserver-config
# Set system details
spacecmd activationkey_setdetails \
1-webserver-key \
--description="Production Web Servers" \
--usage-limit="50"
Configuration Management with Salt
Salt State Management
# Create custom Salt states
mkdir -p /srv/salt/webserver
# Web server state
cat > /srv/salt/webserver/init.sls << 'EOF'
apache2:
  pkg.installed:
    - pkgs:
      - apache2
      - apache2-mod_php7
  service.running:
    - enable: True
    - watch:
      - file: /etc/apache2/httpd.conf

/etc/apache2/httpd.conf:
  file.managed:
    - source: salt://webserver/files/httpd.conf
    - user: root
    - group: root
    - mode: 644

/var/www/html/index.html:
  file.managed:
    - source: salt://webserver/files/index.html
    - user: wwwrun
    - group: www
    - mode: 644

firewall_http:
  firewalld.present:
    - name: public
    - services:
      - http
      - https
EOF
# Create configuration files
mkdir -p /srv/salt/webserver/files
cp /etc/apache2/httpd.conf /srv/salt/webserver/files/
# Apply state to systems
salt 'web*' state.apply webserver
# Schedule regular state enforcement
salt 'web*' schedule.add highstate \
function='state.apply' hours=1
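Before enforcing a new state fleet-wide, Salt's test mode shows exactly what would change without touching anything:
# Dry run: report pending changes without applying them
salt 'web*' state.apply webserver test=True
# Apply for real once the reported diff looks right
salt 'web*' state.apply webserver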
Salt Formulas
# Install formula packages
zypper install locale-formula
zypper install prometheus-formula
# Configure formula
cat > /srv/pillar/locale.sls << EOF
locale:
  present:
    - "en_US.UTF-8 UTF-8"
  default:
    LANG: "en_US.UTF-8"
EOF
# Assign formula to systems
salt 'web*' state.apply locale
# Custom formula
mkdir -p /srv/formulas/monitoring-formula
cat > /srv/formulas/monitoring-formula/monitoring/init.sls << 'EOF'
monitoring_packages:
  pkg.installed:
    - pkgs:
      - nagios-nrpe
      - monitoring-plugins

/etc/nrpe.cfg:
  file.managed:
    - source: salt://monitoring/files/nrpe.cfg
    - template: jinja
    - context:
        allowed_hosts: {{ pillar['monitoring']['nagios_server'] }}

nrpe:
  service.running:
    - enable: True
    - watch:
      - file: /etc/nrpe.cfg
EOF
Automated Patching
Patch Management
# Clone erratas to custom channel
spacewalk-clone-by-date \
--channel=sles15-sp5-pool-x86_64 \
--to_channel=sles15-sp5-prod \
--to_date="2024-02-01"
# Schedule patch installation
spacecmd system_scheduleapplyerrata \
web-server-01.example.com \
--errata="SUSE-SLE-15-SP5-2024-123"
# Create patch schedule
spacecmd schedule_create \
--name="Monthly Patching" \
--description="Monthly security patches" \
--schedule="0 2 15 * *"
# Automated patch workflow
cat > /usr/local/bin/patch-workflow.sh << 'EOF'
#!/bin/bash
# Automated patch workflow: development -> test -> production
# Variables
DEV_GROUP="development-servers"
TEST_GROUP="test-servers"
PROD_GROUP="production-servers"
# Function to patch a group
patch_group() {
    local group=$1
    local date
    date=$(date +%Y%m%d-%H%M%S)
    echo "Patching $group at $date"
    # Schedule patches (note the quoted date format: it contains a space)
    spacecmd group_applypatchesbytype "$group" \
        --type="Security Advisory" \
        --date="$(date '+%Y-%m-%d %H:%M:%S')"
    # Wait for completion
    sleep 3600
    # Verify patches
    spacecmd group_listinstalledpatches "$group" > \
        "/var/log/patches-$group-$date.log"
}
# Patch development first
patch_group "$DEV_GROUP"
# Wait 24 hours
sleep 86400
# Patch test
patch_group "$TEST_GROUP"
# Wait 48 hours
sleep 172800
# Patch production
patch_group "$PROD_GROUP"
EOF
chmod +x /usr/local/bin/patch-workflow.sh
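Kernel and core-library patches only take effect after a reboot, so a post-patch sweep for hosts still running old binaries is a useful final step. A sketch using zypper's built-in check:
# zypper needs-rebooting exits 0 when no reboot is needed, non-zero (102) when one is
salt 'web*' cmd.run 'zypper needs-rebooting || echo "REBOOT REQUIRED"'
# Schedule a reboot on an affected minion (one-minute delay)
salt 'web-server-01.example.com' system.reboot at_time=1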
Maintenance Windows
# Create maintenance windows
spacecmd system_setmaintenancewindow \
web-server-01.example.com \
--start="2024-02-15 22:00:00" \
--end="2024-02-16 06:00:00" \
--type="patch"
# Bulk maintenance windows
for server in $(spacecmd group_listsystems production-servers); do
spacecmd system_setmaintenancewindow $server \
--start="2024-02-15 22:00:00" \
--end="2024-02-16 06:00:00"
done
# Check maintenance windows
spacecmd system_listmaintenancewindows \
web-server-01.example.com
System Provisioning
Cobbler Integration
# Configure Cobbler
cobbler sync
# Import distribution
cobbler import --name=SLES15-SP5 \
--path=/mnt/sles15sp5 \
--arch=x86_64
# Create profile (pointing at the AutoYaST file created below)
cobbler profile add \
--name=webserver \
--distro=SLES15-SP5-x86_64 \
--kickstart=/var/lib/cobbler/kickstarts/webserver.autoyast
# Create system
cobbler system add \
--name=web-server-new \
--profile=webserver \
--mac=00:11:22:33:44:55 \
--ip-address=192.168.1.50 \
--hostname=web-server-new.example.com
# AutoYaST template
cat > /var/lib/cobbler/kickstarts/webserver.autoyast << 'EOF'
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <mode>
      <confirm config:type="boolean">false</confirm>
    </mode>
  </general>
  <networking>
    <keep_install_network config:type="boolean">true</keep_install_network>
  </networking>
  <software>
    <packages config:type="list">
      <package>salt-minion</package>
      <package>apache2</package>
    </packages>
  </software>
  <scripts>
    <chroot-scripts config:type="list">
      <script>
        <source><![CDATA[
#!/bin/bash
# Configure Salt minion
echo "master: suma-server.example.com" > /etc/salt/minion
systemctl enable salt-minion
]]></source>
      </script>
    </chroot-scripts>
  </scripts>
</profile>
EOF
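AutoYaST fails late and cryptically on malformed profiles, so validate the XML before attaching it to a Cobbler profile:
# Well-formedness check (xmllint is in the libxml2-tools package)
xmllint --noout /var/lib/cobbler/kickstarts/webserver.autoyast \
&& echo "Profile is well-formed XML"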
Image Management
# Build system image
mgr-image-build --name="webserver-image" \
--version="1.0" \
--profile="webserver" \
--activation-key="1-webserver-key"
# List images
spacecmd image_list
# Deploy image
spacecmd system_deployimage \
web-server-01.example.com \
--image="webserver-image-1.0"
# Container images
cat > /srv/salt/container/Dockerfile << EOF
FROM registry.suse.com/suse/sles15:latest
RUN zypper -n install apache2 php7
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
EOF
# Build container
salt 'build-host' docker.build /srv/salt/container \
image=webserver:latest
Compliance and Security
OpenSCAP Integration
# Schedule SCAP scan
spacecmd system_schedulescap \
web-server-01.example.com \
--profile="suse-sles15-stig"
# Create audit schedule
spacecmd schedule_create \
--name="Weekly SCAP Audit" \
--description="Weekly compliance scanning" \
--schedule="0 1 * * 0"
# Custom SCAP content
cat > /usr/share/openscap/custom-profile.xml << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.2"
           id="xccdf_com.example_benchmark_custom">
  <Profile id="custom-security">
    <title>Custom Security Profile</title>
    <select idref="xccdf_com.example_rule_ssh_config" selected="true"/>
    <select idref="xccdf_com.example_rule_firewall" selected="true"/>
  </Profile>
</Benchmark>
EOF
# Apply remediation
spacecmd system_scapremediateprofile \
web-server-01.example.com \
--profile="custom-security"
CVE Audit
# List systems affected by CVE
spacecmd system_listaffectedbycve CVE-2024-12345
# Generate CVE report
spacecmd report_cve --output=/tmp/cve-report.csv
# Automated CVE patching
cat > /usr/local/bin/cve-autopatch.sh << 'EOF'
#!/bin/bash
# Auto-patch systems affected by critical CVEs.
# Maintain this list by hand or feed it from your vulnerability scanner.
CRITICAL_CVES="CVE-2024-12345 CVE-2024-23456"
for CVE in $CRITICAL_CVES; do
    # Map the CVE to the errata that resolve it
    ERRATA=$(spacecmd errata_findbycve "$CVE")
    SYSTEMS=$(spacecmd system_listaffectedbycve "$CVE")
    for SYSTEM in $SYSTEMS; do
        for ERRATUM in $ERRATA; do
            echo "Applying $ERRATUM to $SYSTEM for $CVE"
            spacecmd system_applyerrata "$SYSTEM" "$ERRATUM"
        done
    done
done
EOF
chmod +x /usr/local/bin/cve-autopatch.sh
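Before turning on automatic remediation, a report-only pass over the same data shows the blast radius. A sketch; the CVE identifiers are placeholders:
# Report-only: list affected systems without scheduling anything
for CVE in CVE-2024-12345 CVE-2024-23456; do
    echo "== $CVE =="
    spacecmd system_listaffectedbycve "$CVE"
done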
Monitoring and Reporting
System Monitoring
# Configure Prometheus exporter
salt '*' state.apply prometheus.node_exporter
# Monitoring dashboard
cat > /srv/salt/monitoring/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'suma-nodes'
    static_configs:
      - targets: {{ salt['mine.get']('*', 'network.ip_addrs') | json }}
EOF
# Custom monitoring checks
cat > /srv/salt/_modules/suma_health.py << 'EOF'
def check_updates():
    """Check for pending updates"""
    import subprocess
    result = subprocess.run(['zypper', 'list-updates'],
                            capture_output=True, text=True)
    if "No updates found" in result.stdout:
        return {"status": "ok", "updates": 0}
    else:
        # Subtract the header lines zypper prints before the package table
        updates = len(result.stdout.strip().split('\n')) - 5
        return {"status": "warning", "updates": updates}

def check_services():
    """Check critical services"""
    services = ['salt-minion', 'sshd']
    failed = []
    for service in services:
        result = __salt__['service.status'](service)
        if not result:
            failed.append(service)
    if failed:
        return {"status": "critical", "failed_services": failed}
    else:
        return {"status": "ok", "failed_services": []}
EOF
# Apply monitoring
salt '*' saltutil.sync_modules
salt '*' suma_health.check_updates
Reporting
# Generate system reports
spacecmd report_systems --all > /tmp/systems-report.csv
# Patch compliance report
spacecmd report_patchcompliance \
--group="production-servers" \
--output=/tmp/patch-compliance.csv
# Custom reports
cat > /usr/local/bin/suma-report.py << 'EOF'
#!/usr/bin/env python3
import xmlrpc.client
import csv

# Connect to SUSE Manager
SUMA_URL = "https://suma-server.example.com/rpc/api"
SUMA_LOGIN = "admin"
SUMA_PASSWORD = "password"

client = xmlrpc.client.ServerProxy(SUMA_URL)
session = client.auth.login(SUMA_LOGIN, SUMA_PASSWORD)

# Get all systems
systems = client.system.listSystems(session)

# Generate report
with open('/tmp/suma-inventory.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['ID', 'Name', 'OS', 'Kernel', 'Last Checkin'])
    for system in systems:
        details = client.system.getDetails(session, system['id'])
        kernel = client.system.getRunningKernel(session, system['id'])
        writer.writerow([
            system['id'],
            details['hostname'],
            details['release'],
            kernel,
            system['last_checkin']
        ])

client.auth.logout(session)
EOF
chmod +x /usr/local/bin/suma-report.py
# Schedule reports (append to the existing crontab rather than replacing it)
(crontab -l 2>/dev/null; echo "0 8 * * 1 /usr/local/bin/suma-report.py") | crontab -
High Availability Setup
SUSE Manager HA Configuration
# Install HA components
zypper install suse-manager-ha
# Configure shared storage
# Mount NFS share for /var/spacewalk
echo "nfs-server:/suma-data /var/spacewalk nfs defaults 0 0" >> /etc/fstab
mount /var/spacewalk
# Configure database HA
# Set up PostgreSQL streaming replication
# Primary node
cat >> /var/lib/pgsql/data/postgresql.conf << EOF
wal_level = replica
max_wal_senders = 3
wal_keep_segments = 64   # PostgreSQL 13+: use wal_keep_size instead
synchronous_commit = on
EOF
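On the standby node, the replica is typically seeded with pg_basebackup against the primary. A sketch; the replication role and data path are assumptions to adapt:
# On the standby: seed the data directory from the primary
# (the 'replicator' role and data path are illustrative)
systemctl stop postgresql
rm -rf /var/lib/pgsql/data/*
pg_basebackup -h 192.168.1.11 -U replicator \
-D /var/lib/pgsql/data -X stream -R -P
systemctl start postgresql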
# Configure Pacemaker resources
crm configure primitive suma-vip IPaddr2 \
params ip="192.168.1.100" cidr_netmask="24"
crm configure primitive suma-postgres ocf:heartbeat:pgsql \
params pgctl="/usr/bin/pg_ctl" \
pgdata="/var/lib/pgsql/data" \
rep_mode="sync" \
op monitor interval="30s"
crm configure ms ms-postgres suma-postgres \
meta master-max="1" master-node-max="1" \
clone-max="2" clone-node-max="1"
Integration with External Systems
API Automation
# Python API example
cat > /usr/local/bin/suma-api-example.py << 'EOF'
#!/usr/bin/env python3
import xmlrpc.client
import ssl

# Disable SSL verification for self-signed certificates
ssl._create_default_https_context = ssl._create_unverified_context

class SUMAClient:
    def __init__(self, url, username, password):
        self.client = xmlrpc.client.ServerProxy(url)
        self.session = self.client.auth.login(username, password)

    def list_systems(self):
        return self.client.system.listSystems(self.session)

    def apply_updates(self, system_id):
        errata = self.client.system.getRelevantErrata(self.session, system_id)
        for erratum in errata:
            self.client.system.scheduleApplyErrata(
                self.session, system_id, [erratum['id']]
            )

    def __del__(self):
        self.client.auth.logout(self.session)

# Usage
suma = SUMAClient(
    "https://suma-server.example.com/rpc/api",
    "admin",
    "password"
)
for system in suma.list_systems():
    print(f"Updating {system['name']}")
    suma.apply_updates(system['id'])
EOF
# Direct API call using spacecmd (the API is XML-RPC, not REST)
spacecmd api system.listSystems > systems.json
Ansible Integration
# SUSE Manager Ansible playbook
cat > /srv/ansible/suma-manage.yml << 'EOF'
---
# Assumes an endpoint that accepts JSON bodies; the classic /rpc/api
# endpoint speaks XML-RPC, so adjust suma_url for your server version.
- name: Manage systems via SUSE Manager
  hosts: localhost
  vars:
    suma_url: https://suma-server.example.com/rpc/api
    suma_user: admin
    suma_pass: password
  tasks:
    - name: Log in and obtain a session key
      uri:
        url: "{{ suma_url }}"
        method: POST
        body_format: json
        body:
          method: "auth.login"
          params:
            - "{{ suma_user }}"
            - "{{ suma_pass }}"
      register: login
    - name: Store the session key
      set_fact:
        session_key: "{{ login.json.result }}"
    - name: Get system list
      uri:
        url: "{{ suma_url }}"
        method: POST
        body_format: json
        body:
          method: "system.listSystems"
          params:
            - "{{ session_key }}"
      register: systems
    - name: Apply updates to web servers
      uri:
        url: "{{ suma_url }}"
        method: POST
        body_format: json
        body:
          method: "system.scheduleApplyAllErrata"
          params:
            - "{{ session_key }}"
            - "{{ item.id }}"
      loop: "{{ systems.json.result }}"
      when: "'web' in item.name"
EOF
Troubleshooting
Common Issues
# Check service status (spacewalk-service wraps the individual systemd units)
spacewalk-service status
# Database issues
spacewalk-sql -i
SELECT * FROM rhnserver WHERE name LIKE '%problem%';
# Clear Taskomatic cache
systemctl stop taskomatic
rm -rf /var/cache/rhn/satsync/*
systemctl start taskomatic
# Repair repository metadata
spacewalk-data-fsck
# Check logs
tail -f /var/log/rhn/rhn_web_ui.log
tail -f /var/log/rhn/rhn_taskomatic_daemon.log
journalctl -u salt-master -f
# Salt minion connection issues
salt-key -L
salt-key -a minion-name
salt 'minion-name' test.ping
# Repository sync issues
/usr/bin/spacewalk-repo-sync --channel channel-label --type yum
# Performance tuning
# Increase Tomcat memory
sed -i 's/-Xms2G/-Xms4G/g' /etc/sysconfig/tomcat
sed -i 's/-Xmx2G/-Xmx8G/g' /etc/sysconfig/tomcat
systemctl restart tomcat
# Database optimization
su - postgres -c "vacuumdb -z susemanager"
su - postgres -c "reindexdb susemanager"
Best Practices
Security Hardening
# Enable SSL/TLS only
sed -i 's/SSLProtocol.*/SSLProtocol -all +TLSv1.2 +TLSv1.3/' \
/etc/apache2/ssl-global.conf
# Restrict API access
cat >> /etc/apache2/conf.d/suma-api.conf << EOF
<Location /rpc/api>
Require ip 192.168.1.0/24
</Location>
EOF
# Regular security updates
echo "0 2 * * * root zypper -n patch --category security" >> /etc/crontab
# Audit logging
cat >> /etc/salt/master.d/audit.conf << EOF
keep_jobs: 720
ext_job_cache: postgres
master_job_cache: postgres
EOF
Conclusion
SUSE Manager provides powerful capabilities for enterprise Linux management. Proper planning, regular maintenance, and following best practices ensure efficient operation at scale. Always test changes in a non-production environment before deploying to production systems.