Complete deployment guide for the In Midst My Life platform covering Docker Compose (local/staging) and Kubernetes Helm (production) setups.
All Deployments:
Kubernetes Deployments:
Resource Requirements:
| Component | CPU | Memory | Storage |
|---|---|---|---|
| API | 0.5-2 cores | 512MB-2GB | - |
| Orchestrator | 1-4 cores | 1GB-4GB | - |
| Web | 0.5-1 cores | 512MB-1GB | - |
| PostgreSQL | 1-2 cores | 2GB-8GB | 20GB-100GB |
| Redis | 0.5-1 cores | 512MB-2GB | 5GB-20GB |
Docker Compose is ideal for local development, testing, and small-scale staging environments.
git clone https://github.com/anthropics/in-midst-my-life.git
cd in-midst-my-life
Create .env file from example:
cp .env.example .env
Required environment variables:
# Database
POSTGRES_USER=midstsvc
POSTGRES_PASSWORD=<secure-password>
POSTGRES_DB=midst
# Redis
REDIS_PASSWORD=<secure-redis-password>
REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379
REDIS_PORT=6379
# API Configuration
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
POSTGRES_URL=${DATABASE_URL}
PROFILE_REPO=postgres
# Orchestrator Configuration
ORCH_TASK_STORE=postgres
ORCH_PORT=3002
ORCH_WORKER_ENABLED=true
ORCH_SCHEDULER_ENABLED=true
TASK_QUEUE=redis
ORCH_REDIS_URL=redis://redis:6379
# LLM Configuration (Local by default)
ORCH_AGENT_EXECUTOR=local
LOCAL_LLM_API=ollama
LOCAL_LLM_URL=http://host.docker.internal:11434
LOCAL_LLM_ALLOWED_HOSTS=host.docker.internal
LOCAL_LLM_MODEL=llama3.1:8b
ORCH_LLM_RESPONSE_FORMAT=text
# Web Configuration
NEXT_PUBLIC_API_BASE_URL=http://localhost:3001
NEXT_PUBLIC_ORCH_BASE_URL=http://localhost:3002
WEB_PORT=3000
# Security
NODE_ENV=production
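Before starting the stack, it can save a failed boot to verify that the required variables above are actually set. A minimal sketch (the variable names come from this guide; extend the list for your deployment):

```shell
# require_env: fail fast when any of the named environment variables is unset or empty.
require_env() {
  missing=0
  for name in "$@"; do
    eval "val=\${$name:-}"        # indirect lookup, POSIX-sh compatible
    if [ -z "$val" ]; then
      echo "Missing required env var: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: require_env POSTGRES_USER POSTGRES_PASSWORD POSTGRES_DB DATABASE_URL
```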
Using 1Password for Secrets:
# Load secrets from 1Password
source scripts/secrets.env.op.sh
# Or use 1Password CLI directly
export POSTGRES_PASSWORD=$(op read "op://Dev/postgres/password")
docker-compose up
Services will be available at:
- Web: http://localhost:3000
- API: http://localhost:3001
- Orchestrator: http://localhost:3002
docker-compose -f docker-compose.prod.yml up -d
Run database migrations and seed data:
# Option 1: Using migrations profile
docker-compose --profile init up migrations
# Option 2: Manually
docker-compose exec api pnpm --filter @in-midst-my-life/api migrate
docker-compose exec api pnpm --filter @in-midst-my-life/api seed
docker-compose exec orchestrator pnpm --filter @in-midst-my-life/orchestrator migrate
docker-compose exec orchestrator pnpm --filter @in-midst-my-life/orchestrator seed
# Check service health
curl http://localhost:3001/health
curl http://localhost:3001/ready
curl http://localhost:3002/health
# View logs
docker-compose logs -f api
docker-compose logs -f orchestrator
docker-compose logs -f web
# Check database
docker-compose exec postgres psql -U midstsvc -d midst -c "\dt"
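The health checks above assume the services are already listening; in startup scripts it helps to retry until they are. A small generic helper (assuming curl is available for the HTTP case):

```shell
# wait_for: retry a command once per second until it succeeds or the attempt budget runs out.
wait_for() {
  local retries=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$retries" ]; then
      echo "Gave up after $retries attempts: $*" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
}

# Example: block until the API reports healthy (up to 30 attempts)
# wait_for 30 curl -fsS http://localhost:3001/health
```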
Open http://localhost:3000 in your browser to access the web app.
For production or staging without Kubernetes, use the dedicated production compose file:
# Required env vars
export POSTGRES_PASSWORD=<secure-password>
export JWT_SECRET=<min-32-char-secret>
# Optional external services
export STRIPE_SECRET_KEY=sk_live_...
export OPENAI_API_KEY=sk-...
export SENTRY_DSN=https://...
# Start all services
docker compose -f docker-compose.prod.yml up -d
This uses individual Dockerfiles for each service (API, Web, Orchestrator).
# Check all containers are healthy
docker compose -f docker-compose.prod.yml ps
# API health check
curl http://localhost:3001/health
# Service status (requires admin JWT)
curl -H "Authorization: Bearer <token>" http://localhost:3001/v1/admin/service-status
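JWT_SECRET must be at least 32 characters (see the environment variable reference below). A quick pre-flight check, sketched as a shell function:

```shell
# check_jwt_secret: enforce the documented 32-character minimum before starting services.
check_jwt_secret() {
  if [ "${#1}" -lt 32 ]; then
    echo "JWT_SECRET must be at least 32 characters (got ${#1})" >&2
    return 1
  fi
}

# Example: check_jwt_secret "$JWT_SECRET" || exit 1
```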
Kubernetes deployment using Helm charts for production environments.
graph TB
subgraph "Ingress Layer"
ING[Ingress Controller]
end
subgraph "Application Layer"
WEB[Web Deployment<br/>Next.js]
API[API Deployment<br/>Fastify]
ORCH[Orchestrator Deployment<br/>Workers]
end
subgraph "Data Layer"
PG[(PostgreSQL<br/>StatefulSet)]
REDIS[(Redis<br/>StatefulSet)]
NEO4J[(Neo4j<br/>Optional)]
end
subgraph "Jobs"
MIG[Migrations Job]
SEED[Seed Job]
end
ING --> WEB
ING --> API
WEB --> API
WEB --> ORCH
API --> PG
API --> REDIS
ORCH --> PG
ORCH --> REDIS
MIG -.->|init| PG
SEED -.->|init| PG
For GKE (Google Kubernetes Engine):
gcloud container clusters create inmidst-prod \
--zone us-central1-a \
--num-nodes 3 \
--machine-type n1-standard-2 \
--enable-autoscaling \
--min-nodes 3 \
--max-nodes 10
For EKS (Amazon Elastic Kubernetes Service):
eksctl create cluster \
--name inmidst-prod \
--region us-east-1 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 3 \
--nodes-max 10
For local testing (minikube):
minikube start --cpus=4 --memory=8192
Navigate to the Helm chart directory:
cd infra/helm
Review/customize values.yaml:
# infra/helm/values.yaml
global:
domain: inmidstmylife.com
environment: production
api:
replicaCount: 2
image:
repository: gcr.io/inmidst/api
tag: "0.2.0"
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 2000m
memory: 2Gi
orchestrator:
replicaCount: 3
image:
repository: gcr.io/inmidst/orchestrator
tag: "0.2.0"
resources:
requests:
cpu: 1000m
memory: 1Gi
limits:
cpu: 4000m
memory: 4Gi
web:
replicaCount: 2
image:
repository: gcr.io/inmidst/web
tag: "0.2.0"
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 1000m
memory: 1Gi
postgres:
persistence:
size: 50Gi
storageClass: standard-rwo
resources:
requests:
cpu: 1000m
memory: 2Gi
redis:
persistence:
size: 10Gi
resources:
requests:
cpu: 500m
memory: 512Mi
# Create namespace
kubectl create namespace inmidst
# Create secret for database credentials
kubectl create secret generic postgres-creds \
--from-literal=username=midstsvc \
--from-literal=password=<secure-password> `# allow-secret - placeholder` \
--from-literal=database=midst \
-n inmidst
# Create secret for Redis
kubectl create secret generic redis-creds \
--from-literal=password=<secure-password> `# allow-secret - placeholder` \
-n inmidst
# Create secret for JWT tokens
kubectl create secret generic jwt-secret \
--from-literal=secret=<jwt-secret-key> `# allow-secret - placeholder` \
-n inmidst
Using 1Password for secrets:
# Export secrets from 1Password
export POSTGRES_PASSWORD=$(op read "op://Production/postgres/password")
export REDIS_PASSWORD=$(op read "op://Production/redis/password")
export JWT_SECRET=$(op read "op://Production/jwt/secret")
# Create secrets
kubectl create secret generic postgres-creds \
--from-literal=username=midstsvc \
--from-literal=password=$POSTGRES_PASSWORD \
--from-literal=database=midst \
-n inmidst
# Install (first time)
helm install inmidst . \
--namespace inmidst \
--values values.yaml \
--values values.production.yaml
# Upgrade (subsequent deployments)
helm upgrade inmidst . \
--namespace inmidst \
--values values.yaml \
--values values.production.yaml
# Rollback if needed
helm rollback inmidst -n inmidst
Migrations run automatically via Kubernetes Job on install/upgrade:
# Check migration job status
kubectl get jobs -n inmidst
kubectl logs job/inmidst-migrations -n inmidst
# Manually trigger migrations if needed
kubectl create job --from=job/inmidst-migrations inmidst-migrations-manual -n inmidst
NGINX Ingress Controller:
# Install NGINX ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
# Apply ingress rules
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: inmidst-ingress
namespace: inmidst
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- inmidstmylife.com
- api.inmidstmylife.com
secretName: inmidst-tls
rules:
- host: inmidstmylife.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: inmidst-web
port:
number: 3000
- host: api.inmidstmylife.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: inmidst-api
port:
number: 3001
EOF
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Create Let's Encrypt issuer
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: padavano.anthony@gmail.com
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
EOF
# Check pod status
kubectl get pods -n inmidst
# Check services
kubectl get svc -n inmidst
# Check ingress
kubectl get ingress -n inmidst
# View logs
kubectl logs -l app=inmidst-api -n inmidst
kubectl logs -l app=inmidst-orchestrator -n inmidst
# Test health endpoints
kubectl port-forward svc/inmidst-api 3001:3001 -n inmidst
curl http://localhost:3001/health
The .github/workflows/deploy.yml workflow automates container builds and Kubernetes deployment:
The deploy step runs helm upgrade --install with --set flags to inject the Git SHA as the image tag:

helm upgrade --install in-midst-my-life ./infra/helm \
--set "api.image.tag=${IMAGE_TAG}" \
--set "web.image.tag=${IMAGE_TAG}" \
--set "orchestrator.image.tag=${IMAGE_TAG}"
Required secrets:
KUBECONFIG: base64-encoded kubeconfig for the target cluster

The workflow runs on pushes to master and can be triggered manually via workflow_dispatch.
| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_URL | Yes | - | PostgreSQL connection string |
| POSTGRES_URL | Yes | - | Same as DATABASE_URL (alias) |
| REDIS_URL | No | - | Redis connection string (include password: redis://:pass@host:6379) |
| REDIS_PASSWORD | Prod | - | Redis requirepass value; wired into REDIS_URL in docker-compose.prod |
| PROFILE_REPO | Yes | postgres | Profile repository type |
| NODE_ENV | Yes | development | Environment (development/production) |
| PORT | No | 3001 | API server port |
| LOG_LEVEL | No | info | Logging level |
| JWT_SECRET | Yes | - | JWT signing secret (min 32 chars) |
| STRIPE_SECRET_KEY | No | sk_test_mock | Stripe API key (mock if omitted) |
| STRIPE_WEBHOOK_SECRET | No | whsec_test_mock | Stripe webhook signing secret |
| OPENAI_API_KEY | No | sk-test-mock | OpenAI key for embeddings (mock if omitted) |
| SENTRY_DSN | No | - | Sentry error reporting DSN |
| ALLOWED_ORIGINS | No | localhost | Comma-separated CORS origins |
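The REDIS_URL row above notes that the password must be embedded in the URL. A helper that builds the URL in that shape (defaults match the compose service name and port; purely illustrative):

```shell
# build_redis_url: compose REDIS_URL in the redis://:pass@host:port form documented above.
build_redis_url() {
  local pass=$1 host=${2:-redis} port=${3:-6379}
  if [ -n "$pass" ]; then
    printf 'redis://:%s@%s:%s\n' "$pass" "$host" "$port"
  else
    printf 'redis://%s:%s\n' "$host" "$port"
  fi
}

# Example: export REDIS_URL=$(build_redis_url "$REDIS_PASSWORD")
```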
| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_URL | Yes | - | PostgreSQL connection string |
| ORCH_TASK_STORE | Yes | postgres | Task store backend |
| REDIS_URL | Yes | - | Redis connection for queue |
| ORCH_PORT | No | 3002 | Orchestrator port |
| ORCH_WORKER_ENABLED | Yes | true | Enable background workers |
| ORCH_SCHEDULER_ENABLED | Yes | true | Enable task scheduler |
| ORCH_AGENT_EXECUTOR | Yes | local | Agent executor (local/stub) |
| LOCAL_LLM_URL | No | - | Local LLM API endpoint |
| LOCAL_LLM_MODEL | No | llama3.1:8b | Model name |
| ORCH_LLM_RESPONSE_FORMAT | No | text | Response format (text/structured-json) |
| ORCH_TOOL_ALLOWLIST | No | - | Comma-separated tool names |
| Variable | Required | Default | Description |
|---|---|---|---|
| NEXT_PUBLIC_API_BASE_URL | Yes | - | API base URL |
| NEXT_PUBLIC_ORCH_BASE_URL | No | - | Orchestrator base URL |
| NEXT_PUBLIC_GRAPH_LAYOUT | No | radial | Graph layout (radial/force) |
Docker Compose:
# API migrations
docker-compose exec api pnpm --filter @in-midst-my-life/api migrate
# Orchestrator migrations
docker-compose exec orchestrator pnpm --filter @in-midst-my-life/orchestrator migrate
Kubernetes:
# Migrations run automatically on deployment via Job
# Check status:
kubectl get jobs -n inmidst
kubectl logs job/inmidst-migrations -n inmidst
# Manual migration:
kubectl create job --from=job/inmidst-migrations manual-migration -n inmidst
Local Development:
# Start services
./scripts/dev-up.sh
# Run migrations
pnpm --filter @in-midst-my-life/api migrate
pnpm --filter @in-midst-my-life/orchestrator migrate
Development/Staging:
# Docker Compose
docker-compose exec api pnpm --filter @in-midst-my-life/api seed
docker-compose exec orchestrator pnpm --filter @in-midst-my-life/orchestrator seed
# Local
pnpm --filter @in-midst-my-life/api seed
pnpm --filter @in-midst-my-life/orchestrator seed
Production:
⚠️ Seeds use ON CONFLICT DO NOTHING and are safe to re-run, but review seed files before running in production.
All services expose health endpoints:
# API health
curl http://localhost:3001/health
# API readiness (checks DB/Redis)
curl http://localhost:3001/ready
# Orchestrator health
curl http://localhost:3002/health
Prometheus-compatible metrics:
# API metrics
curl http://localhost:3001/metrics
# Orchestrator metrics
curl http://localhost:3002/metrics
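For ad-hoc checks it can be handy to pull a single value out of the Prometheus text output without a full scraper. A rough sketch (only matches unlabeled series; the metric name in the example is an assumption):

```shell
# metric_value: print the value of one metric from Prometheus text exposition on stdin.
# Labeled series like http_requests_total{code="200"} will not match a bare name.
metric_value() {
  awk -v m="$1" '$1 == m { print $2 }'
}

# Example: curl -s http://localhost:3001/metrics | metric_value process_cpu_seconds_total
```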
⚠️ Security: Restrict the /metrics endpoint in production (e.g., via ingress rules, network policies, or authentication).
Configured in Helm templates:
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 3001
initialDelaySeconds: 10
periodSeconds: 5
Docker Compose:
# View logs
docker-compose logs -f api
docker-compose logs -f orchestrator
docker-compose logs -f web
# Filter by severity
docker-compose logs api | grep ERROR
Kubernetes:
# View pod logs
kubectl logs -l app=inmidst-api -n inmidst
kubectl logs -l app=inmidst-orchestrator -n inmidst --tail=100 -f
# Stream logs
kubectl logs -f deployment/inmidst-api -n inmidst
API Response Times (p95 targets):
Monitor using the /metrics endpoint.

Automated Backups (Kubernetes):
# Create CronJob for daily backups
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
name: postgres-backup
namespace: inmidst
spec:
schedule: "0 2 * * *" # 2 AM daily
jobTemplate:
spec:
template:
spec:
containers:
- name: backup
image: postgres:15
env:
- name: PGHOST
value: postgres
- name: PGUSER
valueFrom:
secretKeyRef:
name: postgres-creds
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: postgres-creds
key: password
command:
- /bin/sh
- -c
- |
pg_dump -d midst | gzip > /backup/midst-\$(date +%Y%m%d).sql.gz
volumeMounts:
- name: backup
mountPath: /backup
volumes:
- name: backup
persistentVolumeClaim:
claimName: backup-pvc
restartPolicy: OnFailure
EOF
Manual Backup:
# Docker Compose
docker-compose exec postgres pg_dump -U midstsvc midst > backup.sql
# Kubernetes
kubectl exec -n inmidst postgres-0 -- pg_dump -U midstsvc midst > backup.sql
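The daily CronJob above will fill the backup volume over time. A retention helper that keeps only the newest N dumps (assumes GNU ls/xargs and the midst-YYYYMMDD.sql.gz naming used in the CronJob; filenames must not contain spaces):

```shell
# prune_backups: delete all but the newest N *.sql.gz files in a backup directory.
prune_backups() {
  local dir=$1 keep=$2
  ls -1t "$dir"/*.sql.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Example: prune_backups /backup 14   # keep two weeks of daily dumps
```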
Redis uses AOF (Append Only File) persistence by default in our configuration. For backups, trigger an RDB snapshot:
# Trigger manual save
docker-compose exec redis redis-cli BGSAVE
# Copy RDB file
docker cp inmidst-redis:/data/dump.rdb ./backup/
PostgreSQL Restore:
# Docker Compose
docker-compose exec -T postgres psql -U midstsvc midst < backup.sql
# Kubernetes
kubectl exec -i -n inmidst postgres-0 -- psql -U midstsvc midst < backup.sql
Redis Restore:
# Copy RDB file to container
docker cp backup/dump.rdb inmidst-redis:/data/
# Restart Redis
docker-compose restart redis
Docker Compose:
# Use internal networks
networks:
backend:
driver: bridge
internal: true
frontend:
driver: bridge
Kubernetes:
# Network policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: inmidst-network-policy
spec:
podSelector:
matchLabels:
app: inmidst
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: inmidst
egress:
- to:
- podSelector:
matchLabels:
app: postgres
- podSelector:
matchLabels:
app: redis
Best Practices:
1Password Integration:
# Load secrets from 1Password
source scripts/secrets.env.op.sh
# Or use op CLI directly
export DATABASE_URL=$(op read "op://Production/postgres/url")
Kubernetes Secrets:
# Use sealed-secrets for GitOps
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
Enable HTTPS for all public endpoints:
# Ingress with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- inmidstmylife.com
secretName: inmidst-tls
# Enable PostgreSQL SSL
# In postgresql.conf:
ssl = on
ssl_cert_file = '/var/lib/postgresql/server.crt'
ssl_key_file = '/var/lib/postgresql/server.key'
Symptom: ECONNREFUSED or connection timeout
Solutions:
# Check PostgreSQL is running
docker-compose ps postgres
kubectl get pods -l app=postgres -n inmidst
# Check connection string
echo $DATABASE_URL
# Test connection
docker-compose exec postgres psql -U midstsvc midst -c "SELECT 1"
# Check network connectivity
docker-compose exec api ping postgres
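Note that echo $DATABASE_URL prints the password in the clear; when pasting connection strings into logs or issues, redact it first. A rough sketch that handles simple user:pass@host URLs only:

```shell
# redact_db_url: mask the password component of a connection URL before sharing it.
redact_db_url() {
  printf '%s\n' "$1" | sed -E 's#(//[^:/@]+):[^@]*@#\1:***@#'
}

# Example: redact_db_url "$DATABASE_URL"
```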
Symptom: Redis connection failed
Solutions:
# Check Redis is running
docker-compose ps redis
# Test connection
docker-compose exec redis redis-cli ping
# Check URL format
echo $REDIS_URL # Should be redis://redis:6379 (or redis://:<password>@redis:6379 when a password is set)
Symptom: Migration failed errors
Solutions:
# Check database exists
docker-compose exec postgres psql -U midstsvc -l
# Check migrations table
docker-compose exec postgres psql -U midstsvc midst -c "\dt migrations"
# Re-run migrations (idempotent)
docker-compose exec api pnpm --filter @in-midst-my-life/api migrate
# Check migration logs
docker-compose logs api | grep migration
Symptom: Port already in use
Solutions:
# Check what's using the port
lsof -i :3001
lsof -i :5432
# Change ports in .env
WEB_PORT=3010
REDIS_PORT=6380
# Restart services
docker-compose down && docker-compose up
Symptom: Pods/containers crashing
Solutions:
# Check resource usage
docker stats
kubectl top pods -n inmidst
# Increase memory limits
# In docker-compose.yml or Helm values.yaml:
resources:
limits:
memory: 4Gi
Symptom: Jobs stuck in queue
Solutions:
# Check worker is enabled
echo $ORCH_WORKER_ENABLED # Should be "true"
# Check Redis connection
docker-compose exec orchestrator node -e "const c=require('redis').createClient({url:process.env.REDIS_URL});c.connect().then(()=>c.ping()).then(r=>{console.log(r);return c.quit()})"
# Check job queue
docker-compose exec redis redis-cli LLEN "bull:task-queue:waiting"
# View orchestrator logs
docker-compose logs -f orchestrator | grep worker
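The LLEN check above can be turned into a simple threshold alert for cron or CI, sketched here as a function that reads the count from stdin (the threshold value is illustrative):

```shell
# queue_alert: fail when the waiting-job count piped in exceeds a threshold.
queue_alert() {
  local threshold=$1 count
  read -r count
  if [ "${count:-0}" -gt "$threshold" ]; then
    echo "ALERT: $count jobs waiting (threshold $threshold)"
    return 1
  fi
  echo "OK: $count jobs waiting"
}

# Example: docker-compose exec redis redis-cli LLEN "bull:task-queue:waiting" | queue_alert 100
```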
System Health:
# Check all services
docker-compose ps
kubectl get all -n inmidst
# Check service health
curl http://localhost:3001/health
curl http://localhost:3001/ready
# Database connectivity
docker-compose exec api node -e "const {Pool}=require('pg');new Pool({connectionString:process.env.DATABASE_URL}).query('SELECT NOW()').then(r=>console.log(r.rows[0]))"
Logs & Events:
# Docker Compose logs
docker-compose logs --tail=100 -f
# Kubernetes logs
kubectl logs -f deployment/inmidst-api -n inmidst
kubectl logs -f deployment/inmidst-orchestrator -n inmidst
# Kubernetes events
kubectl get events -n inmidst --sort-by='.lastTimestamp'
Database Inspection:
# Connect to database
./scripts/dev-shell.sh postgres
# Check tables
\dt
# Check row counts
SELECT COUNT(*) FROM profiles;
SELECT COUNT(*) FROM experiences;