Docker Compose

Full Docker Compose deployment guide with service descriptions, port mappings, volume configuration, and production hardening.

Docker Compose is the primary deployment method for the MCP Hub Platform. It orchestrates all services, both application components and infrastructure, in a single configuration file.

Architecture Overview

The Docker Compose deployment runs the following services:

```text
                 +-------------+
                 |  hub-web    |  :8080
                 | (dashboard) |
                 +------+------+
                        |
          +-------------+-------------+
          |                           |
+---------+----------+  +-------------+---------+
| hub-ingestion-     |  | hub-results-          |
| worker             |  | worker                |
| (downloads code,   |  | (scoring, cert,       |
|  uploads to S3)    |  |  publish to registry) |
+---------+----------+  +-------------+---------+
          |                           |
          +--------+    +-------------+
                   |    |
            +------+----+------+
            |     lavinmq      |  :5672 / :15672
            |  (AMQP broker)   |
            +------+----+------+
                   |    |
          +--------+    +-------------+
          |                           |
+---------+----------+  +-------------+
| scan-worker        |  |   minio     |  :9000 / :9001
| (security analysis)|  | (S3 storage)|
+--------------------+  +-------------+

+-----------+  +----------+  +----------+
| postgres  |  |  redis   |  | registry |
| :15432    |  |  :6390   |  | :8081    |
+-----------+  +----------+  +----------+
```

Services

Application Services

| Service | Image | Port | Description |
|---------|-------|------|-------------|
| hub-web | mcp-hub | 8080 | Web dashboard and REST API. Serves the UI and handles user requests. |
| hub-ingestion-worker | mcp-hub | | Processes ingestion jobs: downloads code from Git/uploads, packages tarballs, uploads to S3, publishes ANALYZE jobs to AMQP. |
| hub-results-worker | mcp-hub | | Processes analysis results: downloads scan results from S3, runs controls mapping, computes scores, publishes certified artifacts to the registry. |
| scan-worker | mcp-scan | 8083 | Security analysis worker: downloads source tarballs from S3, runs mcp-scan, uploads results to S3, publishes ANALYZE_COMPLETE to AMQP. |
| registry | mcp-registry | 8081 | Artifact distribution service: stores and serves certified MCP bundles and manifests. |
| hub-migrate | mcp-hub | | Ephemeral container that runs database migrations on startup. Exits after completion. |

Infrastructure Services

| Service | Image | Port(s) | Description |
|---------|-------|---------|-------------|
| postgres | postgres:16 | 15432 | PostgreSQL database. Hosts both the mcphub and mcp_registry databases. |
| redis | redis:7-alpine | 6390 | Cache and rate limiting. Note: port 6390, not the default 6379. |
| minio | minio/minio | 9000, 9001 | S3-compatible object storage. Port 9000 is the API, port 9001 is the web console. |
| minio-init | minio/mc | | Ephemeral container that creates the required S3 buckets on startup. |
| lavinmq | cloudamqp/lavinmq | 5672, 15672 | AMQP message broker. Port 5672 is AMQP, port 15672 is the management UI. |
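
For orientation, a service entry in the compose file ties an image, its ports, and its environment together. A hypothetical excerpt for hub-web (the image tag, credentials, and internal ports shown here are assumptions; verify against the real docker-compose.local.yml):

```yaml
# Hypothetical excerpt: wiring for hub-web. Image tag, credentials,
# and internal ports must match the real docker-compose.local.yml.
services:
  hub-web:
    image: mcp-hub
    ports:
      - "8080:8080"                    # dashboard + REST API
    environment:
      DATABASE_URL: postgres://mcphub:mcphub@postgres:5432/mcphub
      REDIS_URL: redis://redis:6390    # non-default Redis port
      AMQP_URL: amqp://guest:guest@lavinmq:5672
```

Inside the compose network, services address each other by service name (postgres, redis, lavinmq) on container ports; the host-facing ports listed in the tables (for example 15432 for postgres) apply only from outside Docker.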

Quick Start

1. Clone and Configure

```shell
git clone https://github.com/your-org/mcp-hub-platform.git
cd mcp-hub-platform
```

2. Start All Services

```shell
docker compose -f docker-compose.local.yml up -d
```

3. Verify Health

```shell
docker compose -f docker-compose.local.yml ps
```

All services should report a running (healthy) status; the one-shot init containers (minio-init and hub-migrate) should show Exited (0) after completing.

4. Access the Platform

| Service | URL | Credentials |
|---------|-----|-------------|
| Hub Dashboard | http://localhost:8080 | Auth0 login |
| Registry API | http://localhost:8081 | JWT token |
| MinIO Console | http://localhost:9001 | minioadmin / minioadmin |
| LavinMQ Admin | http://localhost:15672 | guest / guest |

Volume Configuration

The Docker Compose file defines persistent volumes for all stateful services:

| Volume | Service | Purpose |
|--------|---------|---------|
| postgres-data | postgres | Database files |
| redis-data | redis | Cache persistence |
| minio-data | minio | S3 object storage |
| registry-data | registry | Registry artifact storage |
| worker-workspace | hub-ingestion-worker | Temporary workspace for code processing |
| scan-workspace | scan-worker | Temporary workspace for security analysis |
| lavinmq-data | lavinmq | Message queue persistence |
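
In compose syntax, each named volume is declared at the top level and mounted into its service. A sketch for postgres (the mount path is the standard data directory for the postgres:16 image; the other entries mirror the table above):

```yaml
# Sketch: top-level volume declarations plus one example mount.
services:
  postgres:
    image: postgres:16
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
  minio-data:
  lavinmq-data:
  # ...one entry per volume in the table above
```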

Backing Up Volumes

```shell
# Backup PostgreSQL data (-T disables TTY allocation so the
# redirected dump is not corrupted by terminal control characters)
docker compose exec -T postgres pg_dumpall -U mcphub > backup.sql

# Backup MinIO data (assumes a /backup volume is mounted into the
# minio container; otherwise mirror to a bucket or host path)
docker compose exec minio mc mirror /data /backup
```
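
For unattended backups, the pg_dumpall command can be wrapped in a small helper script. A sketch, where backup_name() is a hypothetical naming convention; adjust it to your retention scheme:

```shell
#!/bin/sh
# Sketch: PostgreSQL backup helper for cron. backup_name() is a
# hypothetical convention producing e.g. mcphub-20250101-030000.sql.
backup_name() {
  echo "mcphub-$(date +%Y%m%d-%H%M%S).sql"
}

run_backup() {
  # -T keeps the redirected dump free of TTY control characters
  docker compose -f docker-compose.local.yml exec -T postgres \
    pg_dumpall -U mcphub > "$(backup_name)"
}
```

Source the file and call run_backup from your scheduler; the functions alone have no side effects.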

Resetting Volumes

```shell
docker compose -f docker-compose.local.yml down
docker volume rm $(docker volume ls -q --filter "name=mcp-hub-platform")
docker compose -f docker-compose.local.yml up -d
```

Alternatively, `docker compose -f docker-compose.local.yml down -v` stops the stack and removes its named volumes in one step.

Environment Variables

Hub Web

| Variable | Default | Description |
|----------|---------|-------------|
| DATABASE_URL | | PostgreSQL connection string |
| REDIS_URL | | Redis connection string |
| AMQP_URL | | AMQP broker connection string |
| AMQP_EXCHANGE | mcp.jobs | AMQP exchange name |
| S3_ENDPOINT | | S3/MinIO endpoint URL |
| S3_ACCESS_KEY_ID | | S3 access key |
| S3_SECRET_ACCESS_KEY | | S3 secret key |
| S3_BUCKET | mcp-hub-sources | S3 bucket for source uploads |
| S3_REGION | us-east-1 | S3 region |
| S3_USE_PATH_STYLE | true | Use path-style URLs (required for MinIO) |
| REGISTRY_URL | | Registry internal URL |
| REGISTRY_SERVICE_TOKEN | | Token for hub-to-registry communication |
| AUTH0_DOMAIN | | Auth0 tenant domain |
| AUTH0_CLIENT_ID | | Auth0 client ID |
| AUTH0_CLIENT_SECRET | | Auth0 client secret |
| AUTH0_CALLBACK_URL | | Auth0 callback URL |
| SESSION_SECRET | | Session encryption secret (minimum 32 characters) |
| ADMIN_USERS | | Comma-separated admin email addresses |
| LOG_LEVEL | info | Log verbosity: debug, info, warn, error |
| SERVER_PORT | 8080 | HTTP listen port |
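
Variables without defaults must be supplied, typically via a .env file next to the compose file, which Docker Compose loads automatically. A hypothetical local-development example (hostnames assume the compose network; every secret shown is a placeholder, never commit real values):

```env
# .env (sketch): placeholders only
DATABASE_URL=postgres://mcphub:mcphub@postgres:5432/mcphub
REDIS_URL=redis://redis:6390
AMQP_URL=amqp://guest:guest@lavinmq:5672
S3_ENDPOINT=http://minio:9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
REGISTRY_URL=http://registry:8081
REGISTRY_SERVICE_TOKEN=dev-token-change-me
AUTH0_DOMAIN=your-tenant.auth0.com
AUTH0_CALLBACK_URL=http://localhost:8080/callback
SESSION_SECRET=change-me-to-a-random-string-of-at-least-32-chars
ADMIN_USERS=admin@example.com
```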

Scan Worker

| Variable | Default | Description |
|----------|---------|-------------|
| AMQP_URL | | AMQP broker connection string |
| S3_ENDPOINT | | S3/MinIO endpoint URL |
| S3_ACCESS_KEY_ID | | S3 access key |
| S3_SECRET_ACCESS_KEY | | S3 secret key |
| S3_BUCKET_SOURCES | mcp-hub-sources | Bucket for source tarballs |
| S3_BUCKET_ANALYSIS | mcp-hub-analysis | Bucket for analysis results |
| S3_REGION | us-east-1 | S3 region |
| SCAN_MODE | deep | Analysis mode: fast or deep |
| SCAN_TIMEOUT | 30m | Maximum time per analysis job |
| MAX_CONCURRENT | 5 | Maximum concurrent analysis jobs |
| MCP_SCAN_WORKER_HEALTH_PORT | 8083 | Health check endpoint port |
| LOG_LEVEL | info | Log verbosity |
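
These variables are the usual tuning knobs. A hypothetical docker-compose.override.yml that trades scan depth for throughput:

```yaml
# docker-compose.override.yml (sketch): faster, more parallel scans
services:
  scan-worker:
    environment:
      SCAN_MODE: fast       # default: deep
      SCAN_TIMEOUT: 10m     # default: 30m
      MAX_CONCURRENT: "10"  # default: 5
```

Because the base file is docker-compose.local.yml rather than the default docker-compose.yml, the override is not picked up automatically; pass both files: `docker compose -f docker-compose.local.yml -f docker-compose.override.yml up -d`.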

Registry

| Variable | Default | Description |
|----------|---------|-------------|
| MCP_REGISTRY_SERVER_LISTEN | :8081 | Listen address |
| MCP_REGISTRY_DB_DSN | | PostgreSQL connection string |
| MCP_REGISTRY_STORAGE_TYPE | s3 | Storage backend type |
| MCP_REGISTRY_STORAGE_S3_BUCKET | mcp-registry | S3 bucket name |
| MCP_REGISTRY_STORAGE_S3_ENDPOINT | | S3 endpoint |
| MCP_REGISTRY_STORAGE_S3_ACCESS_KEY | | S3 access key |
| MCP_REGISTRY_STORAGE_S3_SECRET_KEY | | S3 secret key |
| MCP_REGISTRY_AUTH_MODE | oss | Auth mode: oss or enterprise |
| MCP_REGISTRY_PUBLIC_READ_CATALOG | true | Allow unauthenticated catalog reads |
| MCP_REGISTRY_PUBLIC_DOWNLOAD_ARTIFACTS | true | Allow unauthenticated downloads |

Health Checks

All services include Docker health checks:

| Service | Health Check | Interval |
|---------|--------------|----------|
| postgres | pg_isready -U mcphub | 5s |
| redis | redis-cli -p 6390 ping | 5s |
| minio | curl -f http://localhost:9000/minio/health/live | 5s |
| lavinmq | lavinmqctl status | 10s |
| hub-web | /app/bin/mcp-hub version | 10s |
| hub-ingestion-worker | /app/bin/mcp-hub version | 30s |
| hub-results-worker | /app/bin/mcp-hub version | 30s |
| scan-worker | wget --spider -q http://localhost:8083/healthz | 30s |
| registry | wget --spider -q http://localhost:8081/healthz | 5s |
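
Each row corresponds to a healthcheck stanza in the compose file. A sketch for postgres (the interval comes from the table; timeout and retries are assumed values):

```yaml
# Sketch: compose healthcheck matching the postgres row above.
# timeout and retries are assumptions, not taken from the table.
services:
  postgres:
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "mcphub"]
      interval: 5s
      timeout: 5s
      retries: 10
```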

Startup Dependencies

Services start in dependency order:

  1. postgres, redis, minio, lavinmq (infrastructure)
  2. minio-init (waits for minio healthy, creates buckets)
  3. hub-migrate (waits for postgres healthy, runs migrations)
  4. registry (waits for postgres, minio, redis healthy)
  5. hub-web, hub-ingestion-worker, hub-results-worker (wait for all infrastructure + registry + migrations)
  6. scan-worker (waits for minio, lavinmq, minio-init)
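
Compose enforces this ordering with depends_on conditions. A sketch for hub-web (service names follow the list above; the exact dependency set may differ in your file):

```yaml
# Sketch: dependency conditions for hub-web.
services:
  hub-web:
    depends_on:
      postgres:
        condition: service_healthy
      lavinmq:
        condition: service_healthy
      hub-migrate:
        condition: service_completed_successfully  # one-shot migration job
```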

Production Hardening

Change Default Credentials

Replace all default credentials before deploying to production:

```shell
# PostgreSQL
POSTGRES_PASSWORD=<generate-strong-password>

# MinIO
MINIO_ROOT_USER=<generate-username>
MINIO_ROOT_PASSWORD=<generate-strong-password>

# Session secret (minimum 32 characters)
SESSION_SECRET=<generate-random-string-64-chars>

# Registry service token
REGISTRY_SERVICE_TOKEN=<generate-random-token>
```
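
One way to generate these values is a small helper reading from /dev/urandom (a sketch; assumes a POSIX shell on Linux or macOS):

```shell
# Sketch: generate an alphanumeric secret of a given length.
# Alphanumeric output is safe to paste into .env files unquoted.
gen_secret() {
  LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c "$1"
}

SESSION_SECRET=$(gen_secret 64)
REGISTRY_SERVICE_TOKEN=$(gen_secret 48)
```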

Enable TLS

For production, place a reverse proxy (Nginx, Caddy, Traefik) in front of the hub-web and registry services to terminate TLS:

```yaml
# Example: add Caddy as reverse proxy
caddy:
  image: caddy:2
  ports:
    - "443:443"
    - "80:80"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile
    - caddy_data:/data   # declare caddy_data under the top-level volumes: key
```
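
The matching Caddyfile could look like the following (hostnames are placeholders; Caddy obtains TLS certificates automatically for publicly resolvable domains):

```caddyfile
# Caddyfile (sketch): replace the example hostnames with your own
hub.example.com {
    reverse_proxy hub-web:8080
}

registry.example.com {
    reverse_proxy registry:8081
}
```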

Resource Limits

Add resource limits to prevent any single service from consuming all host resources:

```yaml
services:
  hub-web:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 256M

  scan-worker:
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 1G
```

Persistent Storage

For production, use named volumes with a backup strategy or mount host directories:

```yaml
volumes:
  postgres-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/postgres   # the host directory must exist before `up`
```

Network Isolation

Use Docker networks to restrict communication between services:

```yaml
networks:
  mcp-network:
    driver: bridge
    internal: false  # Set to true to block external access

  db-network:
    driver: bridge
    internal: true  # Database only accessible within this network
```
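
Services then join the networks explicitly. A sketch in which postgres is reachable only from its clients over the internal db-network:

```yaml
# Sketch: attach services to the networks defined above.
services:
  postgres:
    networks: [db-network]
  hub-web:
    networks: [mcp-network, db-network]
  registry:
    networks: [mcp-network, db-network]
```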

Log Management

Configure centralized logging:

```yaml
services:
  hub-web:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
```

Common Operations

Viewing Logs

```shell
# All services
docker compose -f docker-compose.local.yml logs -f

# Specific service
docker compose -f docker-compose.local.yml logs -f hub-web

# Last 100 lines
docker compose -f docker-compose.local.yml logs --tail 100 scan-worker
```

Scaling Workers

```shell
# Scale scan workers for more analysis throughput
docker compose -f docker-compose.local.yml up -d --scale scan-worker=3
```

If the compose file publishes the scan-worker health port (8083) to a fixed host port, scaling beyond one replica will fail with a port conflict; remove the mapping or let Docker assign ephemeral host ports before scaling.

Restarting a Service

```shell
docker compose -f docker-compose.local.yml restart hub-web
```

Updating Images

```shell
docker compose -f docker-compose.local.yml pull
docker compose -f docker-compose.local.yml up -d
```