# Azure Container Instances: Fast and Simple Container Deployment
Azure Container Instances (ACI) offers the fastest and simplest way to run containers in Azure. Perfect for scenarios like batch jobs, simple applications, and task automation, ACI eliminates the need to manage virtual machines or orchestration platforms.
## Why Azure Container Instances?
ACI excels in scenarios requiring:

- Quick container deployment without infrastructure management
- Burst computing for scaling applications
- CI/CD pipeline agents
- Batch processing jobs
- Development and testing environments
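ACI bills container groups by the second for the vCPU and memory you request, which is what makes it attractive for burst and batch work. A back-of-envelope estimate can be sketched as follows — the rates here are illustrative placeholders, not current Azure prices:

```python
# Rough ACI cost estimate: ACI is billed per second for requested
# vCPU and memory. The rates below are illustrative placeholders,
# NOT current Azure prices; check the ACI pricing page.
VCPU_PER_SECOND = 0.0000135   # assumed $/vCPU-second
GB_PER_SECOND = 0.0000015     # assumed $/GB-second

def estimate_cost(vcpus: float, memory_gb: float, seconds: float) -> float:
    """Estimated cost of one container group run."""
    return seconds * (vcpus * VCPU_PER_SECOND + memory_gb * GB_PER_SECOND)

# A 2-vCPU / 4 GB batch job running for 10 minutes
cost = estimate_cost(vcpus=2, memory_gb=4, seconds=600)
print(f"~${cost:.4f} per run")
```

Because billing stops the moment the group terminates, short-lived jobs with a `Never` restart policy cost only what they actually run.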
## Getting Started with ACI

### Basic Container Deployment
```bash
# Deploy a simple container
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image nginx:latest \
  --dns-name-label myapp \
  --ports 80
```

```bash
# Deploy with environment variables
az container create \
  --resource-group myResourceGroup \
  --name webapp \
  --image myapp:latest \
  --environment-variables \
    DATABASE_URL=server.database.windows.net \
    API_KEY=secretkey123 \
  --secure-environment-variables \
    DB_PASSWORD=SuperSecret123!
```
### Container Groups
Deploy multiple containers that share lifecycle, resources, and network:
```yaml
# container-group.yaml
apiVersion: 2019-12-01
location: eastus
name: myContainerGroup
properties:
  containers:
  - name: webapp
    properties:
      image: myregistry.azurecr.io/webapp:v1
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 80
      environmentVariables:
      - name: BACKEND_URL
        value: http://localhost:5000
  - name: backend
    properties:
      image: myregistry.azurecr.io/backend:v1
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 5000
  osType: Linux
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: 80
    dnsNameLabel: myapp
```
Deploy the container group:
```bash
az container create \
  --resource-group myResourceGroup \
  --file container-group.yaml
```
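A malformed group spec only fails at deploy time, so a quick pre-flight check can catch common mistakes such as a missing `osType` or a public port that no container exposes. A minimal sketch, with the spec expressed as a Python dict mirroring the YAML above; the `check_group` helper is illustrative, not part of any Azure SDK:

```python
# Minimal pre-flight check for an ACI container-group spec (as a dict).
# check_group is an illustrative helper, not an Azure SDK function.
def check_group(spec: dict) -> list[str]:
    problems = []
    props = spec.get("properties", {})
    if "osType" not in props:
        problems.append("properties.osType is required")
    containers = props.get("containers", [])
    if not containers:
        problems.append("at least one container is required")
    # Every ipAddress port should be exposed by some container
    container_ports = {
        p["port"]
        for c in containers
        for p in c.get("properties", {}).get("ports", [])
    }
    for p in props.get("ipAddress", {}).get("ports", []):
        if p["port"] not in container_ports:
            problems.append(f"ipAddress port {p['port']} not exposed by any container")
    return problems

spec = {
    "name": "myContainerGroup",
    "properties": {
        "osType": "Linux",
        "containers": [
            {"name": "webapp", "properties": {"ports": [{"port": 80}]}},
        ],
        "ipAddress": {"type": "Public", "ports": [{"port": 80}]},
    },
}
print(check_group(spec))  # → []
```

Run this against the dict form of your YAML (e.g. loaded with PyYAML) before calling `az container create`.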
## Working with Private Registries

### Azure Container Registry Integration
```bash
# Create a service principal with pull access to ACR
ACR_NAME=mycontainerregistry
SERVICE_PRINCIPAL=$(az ad sp create-for-rbac \
  --name acr-service-principal \
  --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \
  --role acrpull \
  --query "{appId:appId, password:password}" \
  --output json)

APP_ID=$(echo $SERVICE_PRINCIPAL | jq -r .appId)
PASSWORD=$(echo $SERVICE_PRINCIPAL | jq -r .password)

# Deploy container from ACR
az container create \
  --resource-group myResourceGroup \
  --name private-app \
  --image $ACR_NAME.azurecr.io/myapp:latest \
  --registry-login-server $ACR_NAME.azurecr.io \
  --registry-username $APP_ID \
  --registry-password $PASSWORD
```
### Docker Hub Private Repository
```bash
az container create \
  --resource-group myResourceGroup \
  --name dockerhub-app \
  --image myusername/private-app:latest \
  --registry-login-server index.docker.io \
  --registry-username myusername \
  --registry-password mypassword
```
## Advanced Networking

### Virtual Network Integration
```bash
# Create VNet and subnet
az network vnet create \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefixes 10.0.0.0/24

# Deploy the container into the VNet; ACI delegates the subnet
# and creates the supporting network resources on first use
az container create \
  --resource-group myResourceGroup \
  --name vnet-container \
  --image nginx \
  --vnet myVNet \
  --subnet mySubnet
```
### Application Gateway Integration
```yaml
# app-gateway-containers.yaml
apiVersion: 2019-12-01
location: eastus
name: backend-containers
properties:
  containers:
  - name: backend1
    properties:
      image: myapp/backend:v1
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 8080
  - name: backend2
    properties:
      image: myapp/backend:v1
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 8081
  osType: Linux
  networkProfile:
    id: /subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkProfiles/myNetworkProfile
  ipAddress:
    type: Private
    ports:
    - protocol: tcp
      port: 8080
    - protocol: tcp
      port: 8081
```
## Persistent Storage

### Azure Files Volume Mount
```bash
# Create storage account and file share
STORAGE_ACCOUNT=mystorageaccount$RANDOM

az storage account create \
  --resource-group myResourceGroup \
  --name $STORAGE_ACCOUNT \
  --sku Standard_LRS

az storage share create \
  --name myshare \
  --account-name $STORAGE_ACCOUNT

STORAGE_KEY=$(az storage account keys list \
  --resource-group myResourceGroup \
  --account-name $STORAGE_ACCOUNT \
  --query "[0].value" \
  --output tsv)

# Deploy container with the share mounted as a volume
az container create \
  --resource-group myResourceGroup \
  --name file-share-container \
  --image nginx \
  --azure-file-volume-account-name $STORAGE_ACCOUNT \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /mnt/data
```
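Inside the container, the mounted share is just a directory: anything written under the mount path lands in the Azure Files share and survives container restarts. A minimal sketch of a workload persisting its output there — `save_result` is a hypothetical helper, and the demo uses a temporary directory standing in for `/mnt/data`:

```python
# save_result is a hypothetical helper; the default base_dir matches
# the --azure-file-volume-mount-path used above.
import json
import tempfile
from pathlib import Path

def save_result(result: dict, base_dir: str = "/mnt/data") -> Path:
    """Write a result file under the mounted share directory."""
    data_dir = Path(base_dir)
    data_dir.mkdir(parents=True, exist_ok=True)
    out_file = data_dir / "result.json"
    out_file.write_text(json.dumps(result))
    return out_file

# Local demonstration with a temp directory standing in for /mnt/data
demo_dir = tempfile.mkdtemp()
written = save_result({"status": "done", "records": 42}, demo_dir)
print(written.read_text())
```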
### Git Repository Volume
```yaml
# git-volume-container.yaml
apiVersion: 2019-12-01
location: eastus
name: git-container
properties:
  containers:
  - name: webapp
    properties:
      image: nginx
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      volumeMounts:
      - name: gitvolume
        mountPath: /usr/share/nginx/html
  volumes:
  - name: gitvolume
    gitRepo:
      repository: https://github.com/myuser/my-site.git
      directory: .
      revision: main
  osType: Linux
```
## Container Monitoring

### Enable Container Insights
```bash
# Create Log Analytics workspace
WORKSPACE_ID=$(az monitor log-analytics workspace create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --query customerId \
  --output tsv)

WORKSPACE_KEY=$(az monitor log-analytics workspace get-shared-keys \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --query primarySharedKey \
  --output tsv)

# Deploy container with monitoring
az container create \
  --resource-group myResourceGroup \
  --name monitored-container \
  --image myapp:latest \
  --log-analytics-workspace $WORKSPACE_ID \
  --log-analytics-workspace-key $WORKSPACE_KEY
```
### Custom Metrics and Logging
```python
# app.py - Flask application with custom metrics
import os
import time

from flask import Flask
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics

# Configure Azure Monitor export
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

app = Flask(__name__)

# Set up metrics
meter = metrics.get_meter(__name__)
request_counter = meter.create_counter(
    "requests_total",
    description="Total number of requests"
)
response_time = meter.create_histogram(
    "response_time_ms",
    description="Response time in milliseconds"
)

# Application logic
@app.route('/')
def index():
    start_time = time.time()

    # Process the request (process_request is application-specific)
    result = process_request()

    # Record metrics
    request_counter.add(1, {"endpoint": "index"})
    response_time.record(
        (time.time() - start_time) * 1000,
        {"endpoint": "index"}
    )
    return result
```
## CI/CD Integration

### GitHub Actions Deployment
```yaml
# .github/workflows/deploy-aci.yml
name: Deploy to Azure Container Instances

on:
  push:
    branches: [main]

env:
  REGISTRY: myregistry.azurecr.io
  IMAGE_NAME: myapp

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Login to ACR
        uses: azure/docker-login@v1
        with:
          login-server: ${{ env.REGISTRY }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      - name: Build and push image
        run: |
          docker build . -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

      - name: Deploy to ACI
        run: |
          az container create \
            --resource-group production \
            --name ${{ env.IMAGE_NAME }} \
            --image ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
            --registry-login-server ${{ env.REGISTRY }} \
            --registry-username ${{ secrets.ACR_USERNAME }} \
            --registry-password ${{ secrets.ACR_PASSWORD }} \
            --dns-name-label ${{ env.IMAGE_NAME }} \
            --ports 80 \
            --environment-variables \
              ENVIRONMENT=production \
              VERSION=${{ github.sha }}
```
### Azure DevOps Pipeline
```yaml
# azure-pipelines.yml
trigger:
  - main

variables:
  azureSubscription: 'MyAzureSubscription'
  resourceGroup: 'production'
  containerName: 'myapp'
  acrName: 'myregistry'

stages:
  - stage: Build
    jobs:
      - job: BuildContainer
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: '$(acrName)'
              repository: '$(containerName)'
              command: 'buildAndPush'
              Dockerfile: '**/Dockerfile'
              tags: |
                $(Build.BuildId)
                latest

  - stage: Deploy
    jobs:
      - deployment: DeployToACI
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      az container create \
                        --resource-group $(resourceGroup) \
                        --name $(containerName) \
                        --image $(acrName).azurecr.io/$(containerName):$(Build.BuildId) \
                        --registry-login-server $(acrName).azurecr.io \
                        --cpu 1 \
                        --memory 1.5 \
                        --restart-policy Always
```
## Batch Processing with ACI

### Event-Driven Container Execution
```python
# function_app.py - Azure Function triggering ACI
import os

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container,
    ContainerGroup,
    ContainerGroupRestartPolicy,
    EnvironmentVariable,
    OperatingSystemTypes,
    ResourceRequests,
    ResourceRequirements,
)

def main(msg: func.QueueMessage) -> None:
    # Parse the queue message
    job_data = msg.get_json()

    # Create a container instance management client
    credential = DefaultAzureCredential()
    client = ContainerInstanceManagementClient(
        credential,
        os.environ["AZURE_SUBSCRIPTION_ID"]
    )

    # Define the container group
    container = Container(
        name=f"job-{job_data['id']}",
        image="myregistry.azurecr.io/batch-processor:latest",
        resources=ResourceRequirements(
            requests=ResourceRequests(cpu=2, memory_in_gb=4)
        ),
        environment_variables=[
            EnvironmentVariable(name="JOB_ID", value=job_data["id"]),
            EnvironmentVariable(name="INPUT_FILE", value=job_data["file"]),
        ],
    )
    container_group = ContainerGroup(
        location="eastus",
        containers=[container],
        os_type=OperatingSystemTypes.LINUX,
        restart_policy=ContainerGroupRestartPolicy.NEVER,
    )

    # Create the container instance
    client.container_groups.begin_create_or_update(
        resource_group_name="batch-processing",
        container_group_name=f"job-{job_data['id']}",
        container_group=container_group,
    )
```
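One practical wrinkle: container group names must be valid DNS-style labels (lowercase letters, digits, and hyphens, at most 63 characters), so arbitrary job IDs should be normalized before being used as names. A sketch of such a helper — `container_name` is illustrative, not an SDK function:

```python
# container_name is an illustrative helper, not part of the Azure SDK.
import re

def container_name(job_id: str, prefix: str = "job") -> str:
    """Normalize an arbitrary job ID into a valid container group name
    (DNS-style label: lowercase letters, digits, hyphens; max 63 chars)."""
    slug = re.sub(r"[^a-z0-9-]+", "-", job_id.lower()).strip("-")
    return f"{prefix}-{slug}"[:63].rstrip("-")

print(container_name("Report_2024/Q1"))  # → job-report-2024-q1
```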
### Parallel Processing Pattern
```bash
#!/bin/bash
# parallel-processor.sh
# Assumes STORAGE_ACCOUNT and STORAGE_KEY are set for the Azure Files share

# Function to create a processor container
create_container() {
  local index=$1
  local file=$2

  az container create \
    --resource-group batch-processing \
    --name "processor-$index" \
    --image myregistry.azurecr.io/processor:latest \
    --environment-variables \
      INPUT_FILE="$file" \
      OUTPUT_PATH="/mnt/output/result-$index.json" \
    --azure-file-volume-account-name "$STORAGE_ACCOUNT" \
    --azure-file-volume-account-key "$STORAGE_KEY" \
    --azure-file-volume-share-name batchdata \
    --azure-file-volume-mount-path /mnt/output \
    --restart-policy Never \
    --no-wait
}

# Process files in parallel
index=0
for file in /mnt/input/*.csv; do
  create_container $index "$file"
  ((index++))

  # Limit concurrent containers
  if [ $((index % 10)) -eq 0 ]; then
    sleep 30
  fi
done

# Poll until every processor container has finished
for container in $(az container list \
    --resource-group batch-processing \
    --query "[?starts_with(name, 'processor-')].name" \
    --output tsv); do
  while [ "$(az container show \
      --resource-group batch-processing \
      --name "$container" \
      --query "instanceView.state" --output tsv)" = "Running" ]; do
    sleep 15
  done
done
```
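The throttle in the script above (pause after every tenth launch) is plain fixed-size batching; if you drive deployments from Python instead of bash, the same pattern might look like this, with the `launch` callable standing in for an SDK or CLI call:

```python
# Fixed-size batching with a pause between batches; the launch
# callable is a stand-in for an SDK or az CLI invocation.
import time
from typing import Callable, Iterable

def launch_in_batches(items: Iterable[str],
                      launch: Callable[[int, str], None],
                      batch_size: int = 10,
                      pause_seconds: float = 30.0) -> int:
    """Invoke launch(index, item) for every item, pausing between
    batches to cap the rate of concurrent container creations."""
    count = 0
    for index, item in enumerate(items):
        launch(index, item)
        count += 1
        if count % batch_size == 0:
            time.sleep(pause_seconds)
    return count

# Demonstration with a stand-in launcher
launched = []
total = launch_in_batches(
    [f"file-{i}.csv" for i in range(5)],
    launch=lambda i, f: launched.append((i, f)),
    pause_seconds=0,  # no pause needed for the demo
)
print(total)  # → 5
```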
## Security Best Practices

### Managed Identity Integration
```yaml
# managed-identity-container.yaml
apiVersion: 2019-12-01
location: eastus
name: secure-container
identity:
  type: SystemAssigned
properties:
  containers:
  - name: webapp
    properties:
      image: myapp:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
  osType: Linux
```
Access Azure services with managed identity:
```python
# app.py
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# Authenticate with the container group's managed identity
credential = ManagedIdentityCredential()
blob_service = BlobServiceClient(
    account_url="https://mystorage.blob.core.windows.net",
    credential=credential
)

# Access storage without account keys
container_client = blob_service.get_container_client("mycontainer")
for blob in container_client.list_blobs():
    print(blob.name)
```
### Network Isolation
```bash
# Create a container with a private IP only
az container create \
  --resource-group myResourceGroup \
  --name internal-only \
  --image myapp:latest \
  --vnet myVNet \
  --subnet mySubnet \
  --ip-address Private
```
## Cost Optimization

### Container Lifecycle Management
```python
# cleanup_script.py
import os
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient

credential = DefaultAzureCredential()
client = ContainerInstanceManagementClient(
    credential, os.environ["AZURE_SUBSCRIPTION_ID"]
)

# List all container groups in the resource group
container_groups = client.container_groups.list_by_resource_group("myResourceGroup")

cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
for group in container_groups:
    view = group.instance_view
    # Delete container groups that stopped more than an hour ago
    if (view is not None and view.state == "Stopped"
            and view.events and view.events[-1].last_timestamp < cutoff):
        client.container_groups.begin_delete(
            resource_group_name="myResourceGroup",
            container_group_name=group.name,
        )
        print(f"Deleted container group: {group.name}")
```
### Spot Containers (Preview)
```bash
# Deploy a spot-priority container for cost savings
az container create \
  --resource-group myResourceGroup \
  --name spot-container \
  --image myapp:latest \
  --priority Spot \
  --cpu 4 \
  --memory 16
```

Note that spot container groups can be evicted at any time, so they suit interruptible batch workloads rather than serving traffic.
## Troubleshooting

### Container Logs
```bash
# View container logs
az container logs \
  --resource-group myResourceGroup \
  --name mycontainer

# Attach to the container and stream its logs and events
az container attach \
  --resource-group myResourceGroup \
  --name mycontainer

# Export the container group configuration to YAML
az container export \
  --resource-group myResourceGroup \
  --name mycontainer \
  --file container-group.yaml
```
### Debugging Failed Containers
```bash
# Get container group events (the CLI flattens the properties envelope,
# so the path starts at instanceView)
az container show \
  --resource-group myResourceGroup \
  --name mycontainer \
  --query "instanceView.events[].{time:lastTimestamp, message:message}" \
  --output table

# Execute a command in a running container
az container exec \
  --resource-group myResourceGroup \
  --name mycontainer \
  --exec-command "/bin/bash"
```
## Conclusion
Azure Container Instances provides the simplest way to run containers in Azure. Whether you need quick development environments, batch processing, or burst compute capacity, ACI delivers without the complexity of orchestration platforms.
## Next Steps
- Explore Azure Kubernetes Service for complex orchestration
- Implement Azure Container Apps for microservices
- Set up container monitoring with Azure Monitor
- Integrate with Azure Logic Apps for workflows
Remember: ACI is perfect for stateless, short-lived workloads. For persistent applications requiring orchestration, consider AKS or Container Apps.