Self-Hosted Deployment
This guide covers deploying MCP Hub Platform on your own infrastructure for production use. It walks through Docker Compose configuration, environment hardening, TLS setup, backup strategy, and monitoring.
Before You Begin
You need:
- A Linux server with Docker and Docker Compose installed (see Installation)
- A domain name with DNS configured to point to your server
- TLS certificates (or a plan to use Let’s Encrypt)
- At least 4 GB RAM and 2 CPU cores (8 GB / 4 cores recommended)
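You can confirm the server meets these minimums before proceeding. A quick Linux-only sketch (reads core and memory counts from `nproc` and /proc/meminfo; the thresholds mirror the minimums above):

```shell
# Compare available cores and memory against the 2-core / 4 GB minimum
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo)
echo "CPU cores: $cores, RAM: ${mem_gb} GB"
[ "$cores" -ge 2 ] || echo "WARNING: fewer than 2 CPU cores"
[ "$mem_gb" -ge 4 ] || echo "WARNING: less than 4 GB RAM"
```

Note that the integer division truncates, so a 7.8 GB machine reports 7 GB.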
Architecture Overview
A self-hosted deployment runs all components on a single server or a small cluster:
Internet
   |
   v
[Reverse Proxy / TLS Termination]
   |
   +--- Hub Web (port 8080) --- Public dashboard and API
   +--- Registry (port 8081) --- Artifact distribution
   |
[Internal Network]
   |
   +--- Hub Worker  --- AMQP consumer/producer
   +--- Scan Worker --- Security analysis
   +--- PostgreSQL  --- Primary database
   +--- Redis       --- Cache and rate limiting
   +--- MinIO       --- Object storage
   +--- LavinMQ     --- Message broker
Step 1: Prepare the Server
Install Dependencies
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com | sudo sh
# Add your user to the docker group
sudo usermod -aG docker $USER
# Install Docker Compose (if not bundled with Docker)
sudo apt install docker-compose-plugin -y
# Verify
docker --version
docker compose version
Create the Deployment Directory
sudo mkdir -p /opt/mcp-hub-platform
sudo chown $USER:$USER /opt/mcp-hub-platform
cd /opt/mcp-hub-platform
Clone the Repository
git clone https://github.com/your-org/mcp-hub-platform.git .
Step 2: Configure Environment Variables
Create a production environment file:
cp mcp-hub/.env.example .env.production
Edit .env.production with production values:
# ======================
# Application Settings
# ======================
NODE_ENV=production
HUB_PORT=8080
REGISTRY_PORT=8081
# ======================
# Database
# ======================
POSTGRES_USER=mcphub
POSTGRES_PASSWORD=<generate-a-strong-password>
POSTGRES_DB=mcphub
DATABASE_URL=postgres://mcphub:<password>@postgres:5432/mcphub?sslmode=disable
# Registry database
REGISTRY_DATABASE_URL=postgres://mcphub:<password>@postgres:5432/mcp_registry?sslmode=disable
# ======================
# Redis
# ======================
REDIS_PASSWORD=<generate-a-strong-password>
REDIS_URL=redis://:<password>@redis:6379/0
# Note: In Docker, use the internal port 6379, not the host-mapped 6390
# ======================
# S3 Storage (MinIO)
# ======================
MINIO_ROOT_USER=<generate-a-strong-access-key>
MINIO_ROOT_PASSWORD=<generate-a-strong-secret-key>
MINIO_ENDPOINT=minio:9000
MINIO_ACCESS_KEY=${MINIO_ROOT_USER}
MINIO_SECRET_KEY=${MINIO_ROOT_PASSWORD}
MINIO_USE_SSL=false
# ======================
# AMQP (LavinMQ)
# ======================
AMQP_URL=amqp://mcphub:<password>@lavinmq:5672/
# ======================
# Authentication (Auth0)
# ======================
AUTH0_DOMAIN=your-tenant.auth0.com
AUTH0_CLIENT_ID=<your-client-id>
AUTH0_CLIENT_SECRET=<your-client-secret>
AUTH0_CALLBACK_URL=https://your-domain.com/auth/callback
# ======================
# GitHub Integration
# ======================
GITHUB_TOKEN=ghp_<your-token>
GITHUB_WEBHOOK_SECRET=<generate-a-webhook-secret>
# ======================
# Stripe Billing (Optional)
# ======================
STRIPE_API_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
# ======================
# External URLs
# ======================
HUB_EXTERNAL_URL=https://your-domain.com
REGISTRY_EXTERNAL_URL=https://registry.your-domain.com
Use a cryptographically secure random generator for all passwords and secrets. Never reuse passwords between services. Example: openssl rand -base64 32
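For example, you can generate one independent secret per variable in a single pass and paste the output into .env.production (the variable names in the loop are illustrative; match them to the file above):

```shell
# Emit one independent secret per variable, ready to paste into .env.production
for name in POSTGRES_PASSWORD MINIO_ROOT_PASSWORD GITHUB_WEBHOOK_SECRET; do
  echo "$name=$(openssl rand -base64 32)"
done
```

Because each secret is generated separately, a leak of one credential never compromises another service.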
Step 3: Create the Production Docker Compose File
Create a docker-compose.production.yml that extends the base configuration with production settings:
# docker-compose.production.yml
version: "3.8"

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres-init:/docker-entrypoint-initdb.d
    ports:
      - "127.0.0.1:15432:5432" # Only bind to localhost
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: always
    command: redis-server --requirepass ${REDIS_PASSWORD:-}
    volumes:
      - redis_data:/data
    ports:
      - "127.0.0.1:6390:6379" # Only bind to localhost
    healthcheck:
      # Authenticate in the check so it still passes once requirepass is set
      test: ["CMD-SHELL", "redis-cli --no-auth-warning -a '${REDIS_PASSWORD:-}' ping | grep -q PONG"]
      interval: 10s
      timeout: 5s
      retries: 5

  minio:
    image: minio/minio:latest
    restart: always
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    ports:
      - "127.0.0.1:9000:9000" # Only bind to localhost
      - "127.0.0.1:9001:9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 10s
      timeout: 5s
      retries: 5

  minio-init:
    image: minio/mc:latest
    depends_on:
      minio:
        condition: service_healthy
    # ${...} is substituted by Compose from the env file (this container sets
    # no environment variables of its own). Commands are chained with && so a
    # failure propagates to the exit code and service_completed_successfully
    # actually gates on success.
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD} &&
      mc mb myminio/mcp-sources --ignore-existing &&
      mc mb myminio/mcp-results --ignore-existing &&
      mc mb myminio/mcp-artifacts --ignore-existing
      "

  lavinmq:
    image: cloudamqp/lavinmq:latest
    restart: always
    volumes:
      - lavinmq_data:/var/lib/lavinmq
    ports:
      - "127.0.0.1:5672:5672" # Only bind to localhost
      - "127.0.0.1:15672:15672"
    healthcheck:
      test: ["CMD-SHELL", "lavinmqctl status"]
      interval: 10s
      timeout: 5s
      retries: 5

  hub-web:
    build:
      context: ./mcp-hub
      dockerfile: Dockerfile
    restart: always
    env_file: .env.production
    ports:
      - "127.0.0.1:8080:8080" # Only bind to localhost
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      minio-init:
        condition: service_completed_successfully

  hub-worker:
    build:
      context: ./mcp-hub
      dockerfile: Dockerfile.worker
    restart: always
    env_file: .env.production
    depends_on:
      postgres:
        condition: service_healthy
      lavinmq:
        condition: service_healthy
      minio-init:
        condition: service_completed_successfully

  scan-worker:
    build:
      context: ./mcp-scan
      dockerfile: Dockerfile
    restart: always
    env_file: .env.production
    depends_on:
      lavinmq:
        condition: service_healthy
      minio-init:
        condition: service_completed_successfully

  registry:
    build:
      context: ./mcp-registry
      dockerfile: Dockerfile
    restart: always
    env_file: .env.production
    ports:
      - "127.0.0.1:8081:8081" # Only bind to localhost
    depends_on:
      postgres:
        condition: service_healthy
      minio-init:
        condition: service_completed_successfully

volumes:
  postgres_data:
  redis_data:
  minio_data:
  lavinmq_data:
Key differences from the development configuration:
- All ports bind to 127.0.0.1 only (not exposed to the public internet)
- All services have restart: always
- Persistent volumes for all data stores
- Health checks with dependencies for proper startup ordering
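A further production tweak worth considering is capping resource usage so a misbehaving worker cannot starve the database. Docker Compose v2 honors the deploy.resources limits even outside Swarm; the values below are illustrative, not tuned for your workload:

```yaml
# Example: add to scan-worker in docker-compose.production.yml
services:
  scan-worker:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
```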
Step 4: Set Up TLS with a Reverse Proxy
Use Nginx as a reverse proxy with TLS termination.
Install Nginx and Certbot
sudo apt install nginx certbot python3-certbot-nginx -y
Configure Nginx
Create /etc/nginx/sites-available/mcp-hub:
# Hub dashboard and API
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Registry API
server {
    listen 80;
    server_name registry.your-domain.com;

    client_max_body_size 500M; # Allow large artifact uploads

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Enable the site and obtain TLS certificates:
sudo ln -s /etc/nginx/sites-available/mcp-hub /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
# Obtain TLS certificates
sudo certbot --nginx -d your-domain.com -d registry.your-domain.com
Certbot automatically configures Nginx for HTTPS and sets up certificate auto-renewal.
Verify TLS
curl -s https://your-domain.com/health | jq .
curl -s https://registry.your-domain.com/health | jq .
Step 5: Start the Platform
cd /opt/mcp-hub-platform
# Start all services
docker compose -f docker-compose.production.yml --env-file .env.production up -d
# Verify all services are healthy
docker compose -f docker-compose.production.yml ps
# Check logs
docker compose -f docker-compose.production.yml logs -f
Wait for all services to report as healthy. Verify the minio-init container has exited with code 0.
Step 6: Set Up Backups
Database Backups
Create a backup script at /opt/mcp-hub-platform/scripts/backup-db.sh:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/opt/backups/mcp-hub/postgres"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# Backup both databases
docker compose -f /opt/mcp-hub-platform/docker-compose.production.yml \
exec -T postgres pg_dump -U mcphub mcphub | gzip > "$BACKUP_DIR/mcphub_${TIMESTAMP}.sql.gz"
docker compose -f /opt/mcp-hub-platform/docker-compose.production.yml \
exec -T postgres pg_dump -U mcphub mcp_registry | gzip > "$BACKUP_DIR/mcp_registry_${TIMESTAMP}.sql.gz"
# Clean up old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +${RETENTION_DAYS} -delete
echo "Backup completed: $TIMESTAMP"
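The retention rule in the script is easy to sanity-check against a scratch directory before trusting it with real backups. A quick illustration (assumes GNU touch for the -d flag):

```shell
# Simulate one fresh and one stale backup, then apply the retention rule
tmp=$(mktemp -d)
touch "$tmp/fresh.sql.gz"
touch -d "40 days ago" "$tmp/stale.sql.gz"   # older than RETENTION_DAYS=30
find "$tmp" -name "*.sql.gz" -mtime +30 -delete
ls "$tmp"   # only fresh.sql.gz should remain
rm -rf "$tmp"
```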
S3 / MinIO Backups
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/opt/backups/mcp-hub/minio"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Load MinIO credentials so the mc alias can be configured below
set -a
. /opt/mcp-hub-platform/.env.production
set +a
# Mirror MinIO data. The alias must be set inside this fresh mc container:
# aliases configured elsewhere (e.g. in minio-init) do not persist here.
docker run --rm --network=host \
  -v "$BACKUP_DIR:/backup" \
  --entrypoint /bin/sh \
  minio/mc -c "mc alias set myminio http://127.0.0.1:9000 '$MINIO_ROOT_USER' '$MINIO_ROOT_PASSWORD' && \
    mc mirror --overwrite myminio/ /backup/$TIMESTAMP/"
echo "MinIO backup completed: $TIMESTAMP"
Schedule Backups with Cron
# Edit crontab
crontab -e
# Add daily database backup at 2:00 AM
0 2 * * * /opt/mcp-hub-platform/scripts/backup-db.sh >> /var/log/mcp-hub-backup.log 2>&1
# Add weekly MinIO backup at 3:00 AM on Sundays
0 3 * * 0 /opt/mcp-hub-platform/scripts/backup-minio.sh >> /var/log/mcp-hub-backup.log 2>&1
Restoring from Backup
# Restore the hub database (repeat with an mcp_registry dump to restore the registry)
gunzip < /opt/backups/mcp-hub/postgres/mcphub_20260210_020000.sql.gz | \
docker compose -f docker-compose.production.yml exec -T postgres \
psql -U mcphub mcphub
Step 7: Set Up Monitoring
Health Check Endpoint Monitoring
Create a simple health check script:
#!/bin/bash
# /opt/mcp-hub-platform/scripts/healthcheck.sh
ENDPOINTS=(
  "https://your-domain.com/health"
  "https://registry.your-domain.com/health"
)

for endpoint in "${ENDPOINTS[@]}"; do
  status=$(curl -s -o /dev/null -w "%{http_code}" "$endpoint" --max-time 10)
  if [ "$status" != "200" ]; then
    echo "ALERT: $endpoint returned status $status" | \
      mail -s "MCP Hub Health Check Failed" [email protected]
  fi
done
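Make the script executable and run it from cron; the five-minute interval below is a reasonable starting point, not a requirement:

```
# Make the script executable
chmod +x /opt/mcp-hub-platform/scripts/healthcheck.sh

# Add to crontab: run every 5 minutes
*/5 * * * * /opt/mcp-hub-platform/scripts/healthcheck.sh >> /var/log/mcp-hub-health.log 2>&1
```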
Docker Container Monitoring
Monitor container health and resource usage:
# View container status
docker compose -f docker-compose.production.yml ps
# View resource usage
docker stats --no-stream
# Check container logs for errors
docker compose -f docker-compose.production.yml logs --since 1h | grep -i error
Log Aggregation
For production deployments, configure Docker log rotation so container logs cannot fill the disk, and optionally forward them to a log aggregation service:
# Add to each service in docker-compose.production.yml
services:
  hub-web:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
        tag: "mcp-hub-web"
For more advanced setups, integrate with services like Grafana Loki, Elasticsearch, or Datadog.
Key Metrics to Monitor
| Metric | Source | Alert Threshold |
|---|---|---|
| Hub web response time | Nginx access logs | > 2 seconds |
| Registry response time | Nginx access logs | > 5 seconds |
| AMQP queue depth | LavinMQ management API | > 100 pending jobs |
| PostgreSQL connections | pg_stat_activity | > 80% of max |
| Disk usage | Host OS | > 85% |
| Container restarts | Docker events | Any restart |
| TLS certificate expiry | Certbot | < 14 days |
Step 8: Security Hardening
Firewall Configuration
Only expose ports 80 (HTTP) and 443 (HTTPS) to the public internet:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
All other ports (PostgreSQL, Redis, MinIO, LavinMQ) are bound to 127.0.0.1 and not accessible from outside.
Regular Updates
# Update system packages
sudo apt update && sudo apt upgrade -y
# Update Docker images
docker compose -f docker-compose.production.yml pull
docker compose -f docker-compose.production.yml up -d
Secrets Management
- Store .env.production with restricted permissions: chmod 600 .env.production
- Consider using Docker secrets or HashiCorp Vault for production deployments
- Rotate credentials periodically (database passwords, API keys, JWT secrets)
Scaling Considerations
For higher throughput, you can scale the worker components:
# Scale scan workers for faster analysis
docker compose -f docker-compose.production.yml up -d --scale scan-worker=3
# Scale hub workers for faster job processing
docker compose -f docker-compose.production.yml up -d --scale hub-worker=2
For larger deployments, consider:
- External PostgreSQL: Managed database service (RDS, Cloud SQL)
- External Redis: Managed Redis (ElastiCache, Memorystore)
- External S3: AWS S3, Google Cloud Storage, or Azure Blob instead of MinIO
- Kubernetes: Migrate to Helm charts for orchestrated scaling
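If you move PostgreSQL out of the Compose stack, only the connection strings in .env.production change; note that sslmode should switch from disable to require once database traffic leaves the internal Docker network. A sketch (the hostname is a placeholder):

```
# .env.production with a managed PostgreSQL instance
DATABASE_URL=postgres://mcphub:<password>@db.example.com:5432/mcphub?sslmode=require
REGISTRY_DATABASE_URL=postgres://mcphub:<password>@db.example.com:5432/mcp_registry?sslmode=require
```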
Next Steps
- Architecture – Understand the component topology
- Enforce Security Policies – Configure governance for your deployment
- Set Up an Organization – Create your first organization on the new instance