
Deployment Guide

Deploy the Mend Media Processing Engine to production environments.

Docker Deployment (Recommended)

Prerequisites:
  • Docker 20.10+
  • Docker Compose 2.0+
  • S3-compatible storage (AWS S3, MinIO, or Cloudflare R2)
  1. Clone and Configure

    Terminal window
    cd /path/to/mend
    cp config.example.yaml config.yaml
    cp .env.example .env
  2. Edit Configuration

    config.yaml
    server:
      port: 8080
      mode: release
      api_keys:
        - "${API_KEY_1}"
    redis:
      addr: redis:6379
      password: ""
      db: 0
    s3:
      region: us-east-1
      access_key_id: "${AWS_ACCESS_KEY_ID}"
      secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
    worker:
      concurrency: 10
    processing:
      temp_dir: /tmp/mend
      ffmpeg_path: /usr/bin/ffmpeg
  3. Start Services

    Terminal window
    docker-compose up -d

    This starts:

    • Redis (job queue)
    • API server (port 8080)
    • Worker processes (2 replicas)
    • MinIO (optional, port 9000)
    • Prometheus (optional, port 9091)
  4. Verify Deployment

    Terminal window
    # Check health
    curl http://localhost:8080/health
    # View logs
    docker-compose logs -f api
    docker-compose logs -f worker
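If you script the rollout, a small poll loop against the same /health endpoint avoids racing the containers while they start. This is a sketch; it assumes /health returns a 2xx status once the API is ready:

Terminal window
# Poll until the API reports healthy
until curl -sf http://localhost:8080/health > /dev/null; do
  echo "waiting for API..."
  sleep 2
done
echo "API is healthy"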

Scaling Workers

Adjust worker replicas in docker-compose.yaml:

docker-compose.yaml
worker:
  deploy:
    replicas: 5 # Increase for more throughput

Then scale:

Terminal window
docker-compose up -d --scale worker=5

Kubernetes Deployment

Prerequisites:

  • Kubernetes 1.20+
  • kubectl configured
  • Helm 3+ (optional)
  1. Create Namespace

    Terminal window
    kubectl create namespace mend
  2. Create Secrets

    Terminal window
    kubectl create secret generic mend-secrets \
    --from-literal=aws-access-key-id=YOUR_KEY \
    --from-literal=aws-secret-access-key=YOUR_SECRET \
    --from-literal=api-key=$(openssl rand -hex 32) \
    -n mend
  3. Create ConfigMap

    Terminal window
    kubectl create configmap mend-config \
    --from-file=config.yaml \
    -n mend
  4. Deploy Redis

    Terminal window
    kubectl apply -f deployments/k8s/redis.yaml -n mend
  5. Deploy API and Worker

    Terminal window
    kubectl apply -f deployments/k8s/api.yaml -n mend
    kubectl apply -f deployments/k8s/worker.yaml -n mend
  6. Expose API

    Terminal window
    kubectl apply -f deployments/k8s/service.yaml -n mend
    kubectl apply -f deployments/k8s/ingress.yaml -n mend
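deployments/k8s/ingress.yaml is applied above but not reproduced in this guide. An Ingress for the API might look roughly like the sketch below; the hostname, the NGINX ingress class, and the Service name mend-api are assumptions to adjust for your cluster:

deployments/k8s/ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mend-api
  annotations:
    # assumes the NGINX ingress controller; raise the body-size limit for media uploads
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: mend.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mend-api   # assumes service.yaml names the Service mend-api
                port:
                  number: 8080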
The API Deployment applied in step 5 looks like this:

deployments/k8s/api.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mend-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mend-api
  template:
    metadata:
      labels:
        app: mend-api
    spec:
      containers:
        - name: api
          image: mend:latest
          command: ["/app/api"]
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 9090
              name: metrics
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: mend-secrets
                  key: aws-access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: mend-secrets
                  key: aws-secret-access-key
            - name: API_KEY_1
              valueFrom:
                secretKeyRef:
                  name: mend-secrets
                  key: api-key
          volumeMounts:
            - name: config
              mountPath: /app/config.yaml
              subPath: config.yaml
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      volumes:
        - name: config
          configMap:
            name: mend-config
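deployments/k8s/worker.yaml, applied in step 5, is not reproduced in the guide. A worker Deployment would mirror the API one, roughly like this sketch; the /app/worker entrypoint, the resource figures, and the emptyDir scratch volume for /tmp/mend are assumptions:

deployments/k8s/worker.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mend-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mend-worker
  template:
    metadata:
      labels:
        app: mend-worker
    spec:
      containers:
        - name: worker
          image: mend:latest
          command: ["/app/worker"]   # assumed entrypoint, by analogy with /app/api
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: mend-secrets
                  key: aws-access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: mend-secrets
                  key: aws-secret-access-key
          volumeMounts:
            - name: config
              mountPath: /app/config.yaml
              subPath: config.yaml
            - name: scratch
              mountPath: /tmp/mend   # temp_dir from config.yaml; emptyDir is an assumption
          resources:
            requests:
              memory: "512Mi"   # assumed; media processing is heavier than the API
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1"
      volumes:
        - name: config
          configMap:
            name: mend-config
        - name: scratch
          emptyDir: {}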
Worker autoscaling is handled by a HorizontalPodAutoscaler:

deployments/k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mend-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mend-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
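Apply the HPA and confirm it is tracking the worker Deployment; these are standard kubectl commands:

Terminal window
kubectl apply -f deployments/k8s/hpa.yaml -n mend
# Watch current vs. target utilization and replica count
kubectl get hpa mend-worker-hpa -n mend --watch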

Bare-Metal Deployment (systemd)

For running directly on a Linux server:

  1. Build Binaries

    Terminal window
    make build
  2. Install Binaries

    Terminal window
    sudo cp bin/api /usr/local/bin/mend-api
    sudo cp bin/worker /usr/local/bin/mend-worker
    sudo chmod +x /usr/local/bin/mend-*
  3. Create Service Files (the API unit is shown here; a sketch of a matching worker unit follows after step 4)

    /etc/systemd/system/mend-api.service
    [Unit]
    Description=Mend API Server
    After=network.target redis.service
    [Service]
    Type=simple
    User=mend
    WorkingDirectory=/opt/mend
    ExecStart=/usr/local/bin/mend-api
    Restart=always
    RestartSec=5
    Environment="AWS_ACCESS_KEY_ID=your_key"
    Environment="AWS_SECRET_ACCESS_KEY=your_secret"
    [Install]
    WantedBy=multi-user.target
  4. Enable and Start

    Terminal window
    sudo systemctl daemon-reload
    sudo systemctl enable mend-api mend-worker
    sudo systemctl start mend-api mend-worker
    # Check status
    sudo systemctl status mend-api
    sudo systemctl status mend-worker
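Step 3 shows only the API unit; the mend-worker.service enabled in step 4 might look like the following sketch, mirroring the API unit. The user, paths, and lack of extra flags are assumptions:

/etc/systemd/system/mend-worker.service (sketch)
[Unit]
Description=Mend Worker
After=network.target redis.service
[Service]
Type=simple
User=mend
WorkingDirectory=/opt/mend
ExecStart=/usr/local/bin/mend-worker
Restart=always
RestartSec=5
Environment="AWS_ACCESS_KEY_ID=your_key"
Environment="AWS_SECRET_ACCESS_KEY=your_secret"
[Install]
WantedBy=multi-user.target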

Production Checklist

Security

  • Configure API keys
  • Enable TLS/HTTPS
  • Use IAM roles for S3 access
  • Configure firewall rules
  • Implement rate limiting
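As one concrete take on the firewall item for a bare-metal install (ufw shown only as an example; note that ports published by Docker bypass ufw and need their own handling):

Terminal window
# Expose only HTTPS; keep the raw API port internal
sudo ufw allow 443/tcp
sudo ufw deny 8080/tcp
sudo ufw enable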

Reliability

  • Set up Redis persistence
  • Configure health checks
  • Enable auto-restart
  • Set resource limits
  • Configure backups
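For the Redis persistence item, enabling append-only persistence in redis.conf is the usual approach; the values shown are common defaults, not Mend-specific settings:

redis.conf
# Persist the job queue across restarts
appendonly yes
appendfsync everysec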

Monitoring

  • Set up Prometheus
  • Configure Grafana dashboards
  • Enable log aggregation
  • Set up alerting
  • Track queue metrics
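For the alerting item, a simple Prometheus rule on the queue-depth metric (listed under Key Metrics below) could look like this sketch; the threshold and durations are arbitrary examples:

alerts.yml
groups:
  - name: mend
    rules:
      - alert: MendQueueBacklog
        expr: mend_queue_depth > 100   # example threshold
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Mend job queue is backing up"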

Scaling

  • Configure auto-scaling
  • Optimize worker count
  • Use load balancer
  • Enable caching
  • Monitor performance

Prometheus Metrics

Prometheus metrics are available at http://localhost:9090/metrics

Key Metrics:

  • mend_jobs_total - Total jobs processed
  • mend_queue_depth - Current queue depth
  • mend_job_duration_seconds - Job processing time
  • mend_worker_utilization - Worker utilization
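A minimal Prometheus scrape job for the metrics endpoint above might look like this; the job name and target are placeholders to adjust for wherever the metrics port is reachable:

prometheus.yml
scrape_configs:
  - job_name: mend
    static_configs:
      - targets: ["localhost:9090"]   # metrics endpoint shown above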

Grafana Dashboards

Import the provided dashboard for comprehensive monitoring:

Queue Metrics

  • Queue depths by type
  • Jobs per second
  • Processing throughput

Performance

  • Job duration histograms
  • Success/failure rates
  • Worker utilization

System Health

  • CPU and memory usage
  • Disk space
  • Redis connection status

Log Aggregation

The API and workers write structured JSON logs to stdout. Aggregate them with any of:

  • ELK Stack (Elasticsearch, Logstash, Kibana)
  • Loki + Grafana
  • CloudWatch Logs (AWS)
  • Datadog or New Relic
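If you go the Loki + Grafana route with the Docker deployment, Grafana's Docker logging driver can ship the JSON logs directly after installing the plugin (docker plugin install grafana/loki-docker-driver:latest --alias loki). A per-service sketch, with a placeholder Loki endpoint:

docker-compose.yaml (logging sketch)
services:
  api:
    logging:
      driver: loki   # requires the grafana/loki-docker-driver plugin
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"   # placeholder Loki endpoint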

Tuning Worker Concurrency

Adjust worker concurrency in config.yaml:

config.yaml
worker:
  concurrency: 20 # Increase for more throughput

Guidelines:

  • CPU-bound: 1-2x CPU cores
  • I/O-bound: 2-4x CPU cores
  • Monitor and adjust based on metrics
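For example, on an 8-core worker host doing mostly I/O-bound S3 transfers, the 2-4x guideline suggests a value in the 16-32 range; start conservatively and watch mend_worker_utilization:

config.yaml
worker:
  concurrency: 24 # example: 8 cores, I/O-bound workload (2-4x rule)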

Troubleshooting

API won't start

Terminal window
# Check logs
docker-compose logs api
# Common issues:
# - Redis not accessible
# - Invalid config.yaml
# - Port already in use
Jobs not processing

Terminal window
# Check worker logs
docker-compose logs worker
# Verify Redis connection
redis-cli -h localhost ping
# Check queue status
redis-cli LLEN asynq:queues:image
Processing failures

Terminal window
# Verify FFmpeg is installed
docker-compose exec worker ffmpeg -version
# Check file permissions
docker-compose exec worker ls -la /tmp/mend
High memory usage

  • Reduce worker concurrency
  • Increase worker replicas (distribute load)
  • Monitor temp directory cleanup
  • Check for memory leaks in logs
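For the Docker deployment, docker stats gives a quick live view of per-container memory before digging into the logs:

Terminal window
# Live CPU / memory usage per container
docker stats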