SUSE Container Platform: Enterprise Container Management Guide

Tyler Maginnis | February 15, 2024

Tags: SUSE, containers, Docker, Kubernetes, Rancher, containerization

This comprehensive guide covers SUSE's container ecosystem, including Docker and Podman runtime management, Kubernetes orchestration, Rancher platform deployment, and enterprise container best practices.

Introduction to SUSE Container Ecosystem

SUSE's container platform includes:

- Container Runtimes: Docker, Podman, CRI-O
- SUSE CaaS Platform: Enterprise Kubernetes
- Rancher: Multi-cluster Kubernetes management
- SUSE Linux Enterprise Base Container Images (BCI): Secure base images
- Container Security: Scanning and compliance
- Registry Management: Harbor integration

Container Runtime Setup

Docker Installation and Configuration

# Install Docker on SLES
sudo zypper install docker docker-compose

# Enable and start Docker
sudo systemctl enable docker
sudo systemctl start docker

# Add user to docker group
sudo usermod -aG docker $USER
newgrp docker

# Configure Docker daemon (writing to /etc/docker requires root)
sudo tee /etc/docker/daemon.json > /dev/null << 'EOF'
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "10"
  },
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true,
  "selinux-enabled": false,
  "insecure-registries": [],
  "registry-mirrors": ["https://registry.example.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
EOF

# Restart Docker
sudo systemctl restart docker

# Verify installation
docker version
docker info

Podman Setup (Rootless Containers)

# Install Podman
sudo zypper install podman podman-docker buildah skopeo

# Configure Podman for rootless operation
mkdir -p ~/.config/containers
cat > ~/.config/containers/containers.conf << 'EOF'
[containers]
default_capabilities = [
  "CHOWN",
  "DAC_OVERRIDE",
  "FOWNER",
  "FSETID",
  "KILL",
  "NET_BIND_SERVICE",
  "SETFCAP",
  "SETGID",
  "SETPCAP",
  "SETUID",
]
default_sysctls = [
  "net.ipv4.ping_group_range=0 0",
]
ipcns = "private"
netns = "private"
userns = "private"
utsns = "private"

[network]
default_network = "podman"

[engine]
cgroup_manager = "systemd"
events_logger = "journald"
runtime = "crun"
EOF

# Configure registries
cat > ~/.config/containers/registries.conf << 'EOF'
unqualified-search-registries = ["docker.io", "registry.suse.com"]

[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "registry.example.com"

[[registry]]
prefix = "registry.suse.com"
location = "registry.suse.com"
insecure = false
EOF

# Enable podman socket for Docker compatibility
systemctl --user enable podman.socket
systemctl --user start podman.socket

# Test Podman
podman run --rm registry.suse.com/suse/sle15:latest cat /etc/os-release
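
With the socket active, Docker API clients such as docker-compose can be pointed at Podman instead of the Docker daemon; a quick check that the compatibility API answers (paths assume the rootless socket created by podman.socket):

# Point Docker API clients at the rootless Podman socket
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"

# Confirm the Docker-compatible API responds
curl --unix-socket "${XDG_RUNTIME_DIR}/podman/podman.sock" http://localhost/_ping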

SUSE Base Container Images

Working with SUSE Container Images

# Login to SUSE registry
docker login registry.suse.com

# Pull SUSE base images
docker pull registry.suse.com/suse/sle15:latest
docker pull registry.suse.com/suse/sles12sp5:latest
docker pull registry.suse.com/bci/bci-base:latest

# Create custom SUSE-based image
cat > Dockerfile << 'EOF'
FROM registry.suse.com/bci/bci-base:latest

# Update system
RUN zypper ref && \
    zypper -n update && \
    zypper clean -a

# Install required packages
RUN zypper -n install \
    python3 \
    python3-pip \
    curl \
    vim \
    less && \
    zypper clean -a

# Create application user
RUN useradd -m -u 1000 -s /bin/bash appuser

# Set up application directory
WORKDIR /app
RUN chown appuser:appuser /app

# Switch to non-root user
USER appuser

# Copy application
COPY --chown=appuser:appuser . /app/

# Install Python dependencies
RUN pip3 install --user -r requirements.txt

# Expose port
EXPOSE 8080

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

# Run application
CMD ["python3", "app.py"]
EOF

# Build image with proper labels
docker build \
  --label "org.opencontainers.image.title=myapp" \
  --label "org.opencontainers.image.description=My Application" \
  --label "org.opencontainers.image.version=1.0.0" \
  --label "org.opencontainers.image.vendor=Example Corp" \
  --label "org.opencontainers.image.created=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t myapp:1.0.0 .
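
To confirm the OCI labels made it into the image, read them back with docker inspect (tag assumes the build above):

# Verify image labels and size
docker inspect --format '{{json .Config.Labels}}' myapp:1.0.0 | jq
docker images myapp:1.0.0 --format "{{.Repository}}:{{.Tag}} {{.Size}}"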

Multi-stage Builds

# Efficient multi-stage build
cat > Dockerfile.multistage << 'EOF'
# Build stage
FROM registry.suse.com/bci/golang:1.20 AS builder

WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# Runtime stage
FROM registry.suse.com/bci/bci-micro:latest

# bci-micro ships without a package manager, so copy the CA bundle
# from the build stage instead of installing ca-certificates
COPY --from=builder /etc/ssl/ca-bundle.pem /etc/ssl/ca-bundle.pem

# Create non-root user
RUN echo "appuser:x:1000:1000::/app:/bin/false" >> /etc/passwd && \
    echo "appuser:x:1000:" >> /etc/group

WORKDIR /app
COPY --from=builder /build/app .
RUN chown -R appuser:appuser /app

USER appuser
EXPOSE 8080
ENTRYPOINT ["./app"]
EOF

# Build optimized image
docker build -f Dockerfile.multistage -t myapp:optimized .
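
Comparing the two builds makes the size benefit of the multi-stage approach visible; a quick check against the tags used above:

# Compare image sizes of the single-stage and multi-stage builds
docker images myapp --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"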

Container Security

Image Scanning and Compliance

# Install Trivy for vulnerability scanning
sudo zypper install trivy

# Scan image for vulnerabilities
trivy image myapp:1.0.0

# Create security scanning script
sudo tee /usr/local/bin/container-security-scan.sh > /dev/null << 'EOF'
#!/bin/bash
# Container security scanning

IMAGE="$1"
REPORT_DIR="/var/log/container-security"
mkdir -p "$REPORT_DIR"

echo "Scanning container image: $IMAGE"

# Trivy scan
trivy image --severity HIGH,CRITICAL \
  --format json \
  --output "$REPORT_DIR/trivy-scan-$(date +%Y%m%d-%H%M%S).json" \
  "$IMAGE"

# Check for root user
if docker run --rm --entrypoint="" "$IMAGE" id -u | grep -q "^0$"; then
  echo "WARNING: Container runs as root user"
fi

# Check for sensitive files
SENSITIVE_FILES=(
  "/etc/passwd"
  "/etc/shadow"
  "/.ssh/id_rsa"
  "/.aws/credentials"
)

for file in "${SENSITIVE_FILES[@]}"; do
  if docker run --rm --entrypoint="" "$IMAGE" test -f "$file"; then
    echo "WARNING: Sensitive file found: $file"
  fi
done

# Generate compliance report
cat > "$REPORT_DIR/compliance-$(date +%Y%m%d).txt" << REPORT
Container Security Compliance Report
Image: $IMAGE
Date: $(date)

Vulnerabilities: $(trivy image -q --severity HIGH,CRITICAL "$IMAGE" | grep "Total:" | awk '{sum+=$2} END {print sum+0}')
Running as root: $(docker run --rm --entrypoint="" "$IMAGE" id -u | grep -q "^0$" && echo "Yes" || echo "No")
Base image: $(docker inspect "$IMAGE" | jq -r '.[0].Config.Image // "Unknown"')
Size: $(docker images "$IMAGE" --format "{{.Size}}")
REPORT

echo "Security scan completed. Reports saved to $REPORT_DIR"
EOF

sudo chmod +x /usr/local/bin/container-security-scan.sh
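
A typical run against the image built earlier, followed by a look at the generated reports:

# Scan a local image and review the reports
/usr/local/bin/container-security-scan.sh myapp:1.0.0
ls -l /var/log/container-security/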

# Create container signature policy (read by Podman, Buildah, and Skopeo)
sudo tee /etc/containers/policy.json > /dev/null << 'EOF'
{
  "policies": {
    "default": [
      {
        "type": "insecureAcceptAnything"
      }
    ],
    "transports": {
      "docker": {
        "registry.example.com": [
          {
            "type": "signedBy",
            "keyType": "GPGKeys",
            "keyPath": "/etc/docker/certs.d/registry.example.com/ca.crt"
          }
        ]
      }
    }
  }
}
EOF

Runtime Security

# AppArmor profile for containers
sudo tee /etc/apparmor.d/docker-custom > /dev/null << 'EOF'
#include <tunables/global>

profile docker-custom flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network,
  capability,

  # Deny specific capabilities
  deny capability sys_admin,
  deny capability sys_module,
  deny capability sys_rawio,

  # File access
  /usr/bin/docker ix,
  /proc/sys/net/** r,
  /sys/kernel/mm/transparent_hugepage/hpage_pmd_size r,

  # Deny access to sensitive files
  deny /etc/shadow r,
  deny /etc/gshadow r,
  deny /boot/** rwx,
  deny /root/** rwx,

  # Allow container directories
  /var/lib/docker/** rw,
  /var/run/docker.sock rw,
}
EOF

# Load AppArmor profile
sudo apparmor_parser -r /etc/apparmor.d/docker-custom

# SELinux context for containers (if using SELinux)
# sudo semanage fcontext -a -t container_file_t "/data/containers(/.*)?"
# sudo restorecon -Rv /data/containers

# Seccomp profile
sudo mkdir -p /etc/docker/seccomp
sudo tee /etc/docker/seccomp/custom.json > /dev/null << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
      "names": [
        "accept", "accept4", "access", "alarm", "bind", "brk",
        "capget", "capset", "chdir", "chmod", "chown", "close",
        "connect", "copy_file_range", "creat", "dup", "dup2",
        "epoll_create", "epoll_ctl", "epoll_wait", "execve",
        "exit", "exit_group", "fchdir", "fchmod", "fchown",
        "fcntl", "fdatasync", "fgetxattr", "flock", "fork",
        "fstat", "fsync", "ftruncate", "futex", "getcwd",
        "getdents", "getegid", "geteuid", "getgid", "getgroups",
        "getpeername", "getpgrp", "getpid", "getppid", "getpriority",
        "getrandom", "getresgid", "getresuid", "getrlimit",
        "getsockname", "getsockopt", "gettid", "getuid", "ioctl",
        "kill", "lchown", "link", "listen", "lseek", "lstat",
        "madvise", "memfd_create", "mkdir", "mmap", "mprotect",
        "mremap", "munmap", "nanosleep", "newfstatat", "open",
        "openat", "pause", "pipe", "poll", "pread64", "pselect6",
        "pwrite64", "read", "readlink", "recvfrom", "recvmsg",
        "rename", "rmdir", "rt_sigaction", "rt_sigpending",
        "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn",
        "rt_sigsuspend", "rt_sigtimedwait", "select", "semctl",
        "semget", "semop", "sendfile", "sendmsg", "sendto",
        "set_robust_list", "setgid", "setgroups", "setitimer",
        "setpgid", "setpriority", "setresgid", "setresuid",
        "setsid", "setsockopt", "setuid", "shutdown", "sigaltstack",
        "socket", "socketpair", "stat", "statfs", "symlink",
        "sync", "sysinfo", "tgkill", "time", "times", "truncate",
        "umask", "uname", "unlink", "utime", "utimensat", "wait4",
        "waitid", "write", "writev"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF
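
The AppArmor and seccomp profiles above only take effect when a container is started with them; a sketch of applying both at run time (the BCI image is just an example):

# Run a container confined by the custom AppArmor and seccomp profiles
docker run --rm \
  --security-opt apparmor=docker-custom \
  --security-opt seccomp=/etc/docker/seccomp/custom.json \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  registry.suse.com/bci/bci-base:latest id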

Kubernetes on SUSE

SUSE CaaS Platform Deployment

# Install skuba (SUSE Kubernetes deployment tool)
sudo zypper install skuba

# Initialize cluster configuration
skuba cluster init --control-plane load-balancer.example.com my-cluster
cd my-cluster

# Deploy first master node
skuba node bootstrap --user sles --sudo --target master01.example.com master01

# Join additional master nodes
skuba node join --role master --user sles --sudo --target master02.example.com master02
skuba node join --role master --user sles --sudo --target master03.example.com master03

# Join worker nodes
skuba node join --role worker --user sles --sudo --target worker01.example.com worker01
skuba node join --role worker --user sles --sudo --target worker02.example.com worker02
skuba node join --role worker --user sles --sudo --target worker03.example.com worker03

# Verify cluster status
kubectl get nodes
kubectl get pods --all-namespaces

Kubernetes Configuration

# Configure kubectl
mkdir -p ~/.kube
cp admin.conf ~/.kube/config
chmod 600 ~/.kube/config

# Install Helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add SUSE Helm repository
helm repo add suse https://kubernetes-charts.suse.com
helm repo update

# Deploy metrics server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Configure storage class
cat > storage-class.yaml << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
EOF

kubectl apply -f storage-class.yaml
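
Because the class uses kubernetes.io/no-provisioner, PersistentVolumes are not created automatically; each local disk must be declared by hand. A minimal sketch (node name and device path are placeholders):

cat > local-pv.yaml << 'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-worker01
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-ssd
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker01.example.com
EOF

kubectl apply -f local-pv.yaml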

# Deploy Nginx Ingress Controller
helm install nginx-ingress suse/nginx-ingress \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=LoadBalancer \
  --set controller.metrics.enabled=true \
  --set controller.podSecurityPolicy.enabled=true

Rancher Deployment

Installing Rancher on SLES

# Install Rancher prerequisites
sudo zypper install docker curl

# Install RKE (Rancher Kubernetes Engine)
curl -LO https://github.com/rancher/rke/releases/latest/download/rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke

# Create RKE configuration
cat > rancher-cluster.yml << 'EOF'
nodes:
  - address: rancher01.example.com
    port: "22"
    internal_address: 10.0.1.10
    user: sles
    role: [controlplane, worker, etcd]
  - address: rancher02.example.com
    port: "22"
    internal_address: 10.0.1.11
    user: sles
    role: [controlplane, worker, etcd]
  - address: rancher03.example.com
    port: "22"
    internal_address: 10.0.1.12
    user: sles
    role: [controlplane, worker, etcd]

services:
  etcd:
    snapshot: true
    creation: "6h"
    retention: "24h"

network:
  plugin: canal

ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
EOF

# Deploy RKE cluster
rke up --config ./rancher-cluster.yml

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml

# Wait for cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager

# Install Rancher
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

kubectl create namespace cattle-system

# Install Rancher with Let's Encrypt
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=admin@example.com \
  --set replicas=3

# Monitor deployment
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get pods

Rancher Configuration

# Get initial admin password
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}'

# Create Rancher backup
cat > rancher-backup.yaml << 'EOF'
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-backup-daily
spec:
  storageLocation:
    s3:
      bucketName: rancher-backups
      folder: backups
      region: us-east-1
      endpoint: s3.amazonaws.com
  resourceSetName: rancher-resource-set
  schedule: "0 2 * * *"
  retentionCount: 30
EOF
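
The Backup resource is only understood once the rancher-backup operator is installed. A sketch of installing it from the Rancher charts repository and then applying the manifest (chart and namespace names follow the operator's documented defaults; adjust for your environment):

# Install the rancher-backup operator, then create the Backup resource
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd \
  --namespace cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup \
  --namespace cattle-resources-system

kubectl apply -f rancher-backup.yaml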

# Configure multi-cluster management
cat > cluster-registration.sh << 'EOF'
#!/bin/bash
# Register downstream cluster with Rancher

RANCHER_URL="https://rancher.example.com"
RANCHER_TOKEN="token-xxxxx:yyyyyyyy"
CLUSTER_NAME="production-cluster"

# Create registration command
curl -k -X POST \
  -H "Authorization: Bearer ${RANCHER_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"name\":\"${CLUSTER_NAME}\"}" \
  "${RANCHER_URL}/v3/clusters" | jq -r '.registrationToken.command'
EOF

chmod +x cluster-registration.sh

Container Orchestration Patterns

Microservices Deployment

# Microservices application deployment
cat > microservices-app.yaml << 'EOF'
---
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
---
# Frontend Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/frontend:1.0.0
        ports:
        - containerPort: 80
        env:
        - name: API_URL
          value: "http://api-service:8080"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: microservices
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
---
# API Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.0.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: microservices
spec:
  selector:
    app: api
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
  namespace: microservices
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
EOF

kubectl apply -f microservices-app.yaml
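
After applying the manifest, a quick check that all tiers are up and the ingress is wired:

# Verify the microservices stack
kubectl -n microservices get deployments,pods,svc,ingress
kubectl -n microservices rollout status deployment/frontend
kubectl -n microservices rollout status deployment/api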

StatefulSet for Databases
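
The manifest below references a databases namespace and a postgres-secret Secret that are not created elsewhere in this guide; one way to create them first (passwords are placeholders):

# Create the namespace and credentials referenced by the StatefulSet
kubectl create namespace databases
kubectl -n databases create secret generic postgres-secret \
  --from-literal=password='ChangeMe123!' \
  --from-literal=replication-password='ChangeMeToo123!'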

# PostgreSQL StatefulSet
cat > postgres-statefulset.yaml << 'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: databases
data:
  postgresql.conf: |
    listen_addresses = '*'
    max_connections = 100
    shared_buffers = 256MB
    effective_cache_size = 1GB
    maintenance_work_mem = 64MB
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 200
    work_mem = 4MB
    min_wal_size = 1GB
    max_wal_size = 4GB
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
  namespace: databases
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: databases
spec:
  serviceName: postgres-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      initContainers:
      - name: postgres-init
        image: registry.suse.com/suse/sle15:latest
        # postgres:14-alpine runs as UID/GID 70
        command: ['sh', '-c', 'chown -R 70:70 /var/lib/postgresql/data']
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      containers:
      - name: postgres
        image: postgres:14-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: myapp
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
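        # NOTE: the POSTGRES_REPLICATION_* variables below are honored by Bitnami-style
        # PostgreSQL entrypoints, not by the official postgres image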
        - name: POSTGRES_REPLICATION_MODE
          value: master
        - name: POSTGRES_REPLICATION_USER
          value: replicator
        - name: POSTGRES_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: replication-password
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        - name: postgres-config
          mountPath: /etc/postgresql
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "fast-ssd"
      resources:
        requests:
          storage: 10Gi
EOF

kubectl apply -f postgres-statefulset.yaml

Container Registry Management

Harbor Installation

# Install Harbor using Helm
helm repo add harbor https://helm.goharbor.io
helm repo update

# Create values file for Harbor
cat > harbor-values.yaml << 'EOF'
expose:
  type: ingress
  tls:
    enabled: true
    certSource: secret
    secret:
      secretName: "harbor-tls"
  ingress:
    hosts:
      core: registry.example.com
      notary: notary.example.com
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod

externalURL: https://registry.example.com

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "fast-ssd"
      size: 100Gi
    chartmuseum:
      storageClass: "fast-ssd"
      size: 5Gi
    database:
      storageClass: "fast-ssd"
      size: 10Gi
    redis:
      storageClass: "fast-ssd"
      size: 5Gi
    trivy:
      storageClass: "fast-ssd"
      size: 5Gi

harborAdminPassword: "Harbor12345!"

portal:
  replicas: 2

core:
  replicas: 2

registry:
  replicas: 2

trivy:
  enabled: true
  vulnType: "os,library"

notary:
  enabled: true

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
EOF

# Install Harbor
kubectl create namespace harbor
helm install harbor harbor/harbor \
  --namespace harbor \
  --values harbor-values.yaml \
  --wait

# Configure Docker to use Harbor as a registry mirror
# (merge these keys into the existing /etc/docker/daemon.json rather than replacing it)
sudo tee /etc/docker/daemon.json > /dev/null << 'EOF'
{
  "insecure-registries": [],
  "registry-mirrors": ["https://registry.example.com"]
}
EOF

# Login to Harbor
docker login registry.example.com

# Push image to Harbor
docker tag myapp:1.0.0 registry.example.com/library/myapp:1.0.0
docker push registry.example.com/library/myapp:1.0.0

Registry Replication

# Configure Harbor replication
cat > harbor-replication.sh << 'EOF'
#!/bin/bash
# Harbor registry replication setup

HARBOR_URL="https://registry.example.com"
HARBOR_USER="admin"
HARBOR_PASS="Harbor12345!"

# Create replication endpoint
curl -X POST "$HARBOR_URL/api/v2.0/registries" \
  -H "Content-Type: application/json" \
  -u "$HARBOR_USER:$HARBOR_PASS" \
  -d '{
    "name": "backup-registry",
    "type": "harbor",
    "url": "https://backup-registry.example.com",
    "credential": {
      "access_key": "admin",
      "access_secret": "BackupHarbor123!"
    },
    "insecure": false
  }'

# Create replication policy
curl -X POST "$HARBOR_URL/api/v2.0/replication/policies" \
  -H "Content-Type: application/json" \
  -u "$HARBOR_USER:$HARBOR_PASS" \
  -d '{
    "name": "backup-policy",
    "src_registry": {
      "id": 0
    },
    "dest_registry": {
      "id": 1
    },
    "dest_namespace": "backup",
    "trigger": {
      "type": "scheduled",
      "trigger_settings": {
        "cron": "0 2 * * *"
      }
    },
    "enabled": true,
    "deletion": false,
    "override": true,
    "filters": [
      {
        "type": "name",
        "value": "library/**"
      }
    ]
  }'
EOF

chmod +x harbor-replication.sh

CI/CD Integration

GitLab Runner on SUSE

# Install GitLab Runner
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh | sudo bash
sudo zypper install gitlab-runner

# Register runner
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "registry.suse.com/bci/bci-base:latest" \
  --description "SUSE Docker Runner" \
  --tag-list "docker,suse" \
  --run-untagged="true" \
  --locked="false" \
  --access-level="not_protected"

# Configure runner for Kubernetes
sudo tee /etc/gitlab-runner/config.toml > /dev/null << 'EOF'
concurrent = 10
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Kubernetes Runner"
  url = "https://gitlab.example.com/"
  token = "TOKEN"
  executor = "kubernetes"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "registry.suse.com/bci/bci-base:latest"
    namespace = "gitlab-runner"
    namespace_overwrite_allowed = ""
    privileged = true
    service_account = "gitlab-runner"
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    [runners.kubernetes.node_selector]
      "kubernetes.io/os" = "linux"
    [runners.kubernetes.volumes]
      [[runners.kubernetes.volumes.host_path]]
        name = "docker"
        mount_path = "/var/run/docker.sock"
        host_path = "/var/run/docker.sock"
EOF
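
After editing config.toml, restart the runner service and confirm it can still reach GitLab:

# Reload the runner configuration and verify registration
sudo gitlab-runner restart
sudo gitlab-runner verify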

Container Build Pipeline

# .gitlab-ci.yml
stages:
  - build
  - test
  - scan
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker tag $IMAGE_NAME:$IMAGE_TAG $IMAGE_NAME:latest
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest
  only:
    - main
    - develop

test:
  stage: test
  image: $IMAGE_NAME:$IMAGE_TAG
  script:
    - python -m pytest tests/
    - python -m pylint src/
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      junit: test-results.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

security_scan:
  stage: scan
  image: aquasec/trivy:latest
  script:
    - trivy image --exit-code 0 --no-progress --format json --output scanning-report.json $IMAGE_NAME:$IMAGE_TAG
    - trivy image --exit-code 1 --severity HIGH,CRITICAL --no-progress $IMAGE_NAME:$IMAGE_TAG
  artifacts:
    reports:
      container_scanning: scanning-report.json
  allow_failure: true

deploy_staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/app app=$IMAGE_NAME:$IMAGE_TAG -n staging
    - kubectl rollout status deployment/app -n staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy_production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/app app=$IMAGE_NAME:$IMAGE_TAG -n production
    - kubectl rollout status deployment/app -n production
  environment:
    name: production
    url: https://app.example.com
  when: manual
  only:
    - main

Container Monitoring

Prometheus and Grafana Setup

# Install Prometheus Operator
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName=fast-ssd \
  --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi \
  --set grafana.persistence.enabled=true \
  --set grafana.persistence.storageClassName=fast-ssd \
  --set grafana.persistence.size=10Gi

# Container metrics configuration
cat > container-metrics.yaml << 'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: container-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: containerized-app
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: container-alerts
  namespace: monitoring
spec:
  groups:
  - name: container.rules
    interval: 30s
    rules:
    - alert: ContainerCPUUsage
      expr: |
        (sum(rate(container_cpu_usage_seconds_total[5m])) BY (pod, container) * 100) > 80
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Container CPU usage is above 80%"
        description: "Container {{ $labels.container }} in pod {{ $labels.pod }} CPU usage is {{ $value }}%"

    - alert: ContainerMemoryUsage
      expr: |
        (sum(container_memory_working_set_bytes) BY (pod, container) / sum(container_spec_memory_limit_bytes) BY (pod, container) * 100) > 80
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Container memory usage is above 80%"
        description: "Container {{ $labels.container }} in pod {{ $labels.pod }} memory usage is {{ $value }}%"

    - alert: ContainerRestart
      expr: |
        rate(kube_pod_container_status_restarts_total[15m]) > 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Container is restarting"
        description: "Container {{ $labels.container }} in pod {{ $labels.pod }} has restarted {{ $value }} times"
EOF

kubectl apply -f container-metrics.yaml

Log Aggregation with EFK

# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Install Elasticsearch
helm install elasticsearch elastic/elasticsearch \
  --namespace logging \
  --create-namespace \
  --set replicas=3 \
  --set volumeClaimTemplate.resources.requests.storage=100Gi \
  --set volumeClaimTemplate.storageClassName=fast-ssd

# Install Kibana
helm install kibana elastic/kibana \
  --namespace logging \
  --set elasticsearchHosts="http://elasticsearch-master:9200"

# Install Fluentd
cat > fluentd-values.yaml << 'EOF'
image:
  repository: fluent/fluentd-kubernetes-daemonset
  tag: v1.14-debian-elasticsearch7-1

elasticsearch:
  host: elasticsearch-master.logging.svc.cluster.local
  port: 9200
  scheme: http

configMaps:
  useDefaultConfig: false
  config: |
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
    <source>
      @type tail
      @id in_tail_container_logs
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      kubernetes_url "https://#{ENV['KUBERNETES_SERVICE_HOST']}:#{ENV['KUBERNETES_SERVICE_PORT']}/api"
      verify_ssl true
      ca_file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      skip_labels false
      skip_container_metadata false
      skip_master_url false
      skip_namespace_metadata false
    </filter>

    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      reload_connections false
      reconnect_on_error true
      reload_on_failure true
      log_es_400_reason false
      logstash_prefix logstash
      logstash_dateformat %Y.%m.%d
      type_name _doc
      include_timestamp true
      template_name logstash
      template_file /fluentd/etc/logstash-template.json
      template_overwrite false
      <buffer>
        flush_thread_count 8
        flush_interval 5s
        chunk_limit_size 2M
        queue_limit_length 32
        retry_max_interval 30
        retry_forever true
      </buffer>
    </match>
EOF

# The legacy "stable" charts repository is archived; add it explicitly before installing
helm repo add stable https://charts.helm.sh/stable
helm install fluentd stable/fluentd \
  --namespace logging \
  --values fluentd-values.yaml

Best Practices

Container Security Checklist

# Create security checklist
sudo tee /usr/local/share/container-security-checklist.md > /dev/null << 'EOF'
# Container Security Checklist

## Image Security
- [ ] Use official base images
- [ ] Scan images for vulnerabilities
- [ ] Sign and verify images
- [ ] Use minimal base images (distroless/scratch)
- [ ] Remove unnecessary packages
- [ ] Don't store secrets in images
- [ ] Use specific image tags (not latest)
- [ ] Implement multi-stage builds

## Runtime Security
- [ ] Run containers as non-root
- [ ] Use read-only root filesystem
- [ ] Drop unnecessary capabilities
- [ ] Use security profiles (AppArmor/SELinux)
- [ ] Implement network policies
- [ ] Use Pod Security Standards
- [ ] Enable audit logging
- [ ] Implement resource limits

## Registry Security
- [ ] Use private registries
- [ ] Enable vulnerability scanning
- [ ] Implement access controls
- [ ] Use image signing
- [ ] Regular cleanup of old images
- [ ] Backup registry data
- [ ] Monitor registry access
- [ ] Implement replication

## Kubernetes Security
- [ ] Enable RBAC
- [ ] Use namespaces for isolation
- [ ] Implement network policies
- [ ] Use Pod Security Policies/Standards
- [ ] Enable audit logging
- [ ] Secure etcd
- [ ] Use secrets management
- [ ] Regular security updates

## Operational Security
- [ ] Regular security scanning
- [ ] Incident response plan
- [ ] Container forensics capability
- [ ] Log aggregation and monitoring
- [ ] Backup and disaster recovery
- [ ] Compliance reporting
- [ ] Security training
- [ ] Regular security audits
EOF
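
Several checklist items, such as namespace isolation and network policies, can be enforced directly in the cluster. A minimal default-deny NetworkPolicy sketch for the microservices namespace used earlier:

cat > default-deny.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: microservices
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF

kubectl apply -f default-deny.yaml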

Production Deployment Guide

# Production readiness script
sudo tee /usr/local/bin/container-prod-ready.sh > /dev/null << 'EOF'
#!/bin/bash
# Container production readiness check

IMAGE="$1"
NAMESPACE="${2:-default}"

echo "Production Readiness Check for: $IMAGE"
echo "================================"

# Check if running as non-root
echo -n "Non-root user: "
USER_ID=$(docker run --rm --entrypoint="" $IMAGE id -u 2>/dev/null)
if [ "$USER_ID" != "0" ]; then
  echo "✓ (UID: $USER_ID)"
else
  echo "✗ (Running as root)"
fi

# Check for security updates
echo -n "Security updates: "
VULNS=$(trivy image -q --severity HIGH,CRITICAL "$IMAGE" | grep "Total:" | awk '{sum+=$2} END {print sum+0}')
if [ "$VULNS" -eq 0 ]; then
  echo "✓ (No high/critical vulnerabilities)"
else
  echo "✗ ($VULNS vulnerabilities found)"
fi

# Check image size
echo -n "Image size: "
SIZE=$(docker images $IMAGE --format "{{.Size}}")
echo "$SIZE"

# Check health endpoint
echo -n "Health check: "
if docker inspect $IMAGE | jq -r '.[0].Config.Healthcheck' | grep -q "test"; then
  echo "✓"
else
  echo "✗"
fi

# Check resource limits
echo -n "Resource limits: "
cat > /tmp/test-deployment.yaml << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: $IMAGE
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
YAML

if kubectl apply --dry-run=client -f /tmp/test-deployment.yaml >/dev/null 2>&1; then
  echo "✓"
else
  echo "✗"
fi

# Generate report
echo ""
echo "Recommendations:"
echo "- Review and fix any ✗ items before production deployment"
echo "- Implement continuous security scanning"
echo "- Set up monitoring and alerting"
echo "- Document incident response procedures"
EOF

sudo chmod +x /usr/local/bin/container-prod-ready.sh
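
Example run against the image built earlier in this guide:

# Check an image before promoting it to production
/usr/local/bin/container-prod-ready.sh myapp:1.0.0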

Conclusion

SUSE's container ecosystem provides enterprise-grade tools for container deployment and management. From basic Docker operations to complex Kubernetes orchestration with Rancher, SUSE offers comprehensive solutions for containerized workloads. Regular security scanning, proper image management, and following best practices ensure reliable and secure container operations in production environments.