How to Deploy OpenClaw with Docker: Complete Guide
Docker is the most reliable way to deploy OpenClaw agents to production. It eliminates "works on my machine" problems, makes scaling from one agent to ten straightforward, and gives you consistent deployments across any VPS, cloud provider, or local server. This guide walks through every step from writing a Dockerfile to running a multi-agent fleet on a production VPS.
Why Docker for OpenClaw
OpenClaw agents depend on Node.js, specific npm packages, API keys, and a running gateway process. On a bare machine, any OS update, Node version change, or missing dependency can break the setup silently. Docker solves this by packaging everything into an isolated container that runs identically everywhere.
Isolation
Each agent runs in its own container with its own filesystem, network namespace, and process tree. A crashing agent does not take down other agents or the host system. You can run different Node.js versions per agent if needed.
Reproducibility
The Dockerfile is your deployment documentation. Anyone on your team can run docker compose up and get an identical environment. No more "did you install the right version of X" conversations.
Easy Scaling
Adding a new agent is a new service block in docker-compose.yml. Need five agents? Copy the service definition, change the SOUL.md mount and port, and run docker compose up -d. No server provisioning required.
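As a sketch, a new agent could be nothing more than a block like this (the agent name scout, its port 18793, and the volume name are placeholders; the full multi-agent compose file appears later in this guide):

```yaml
  # Hypothetical new agent: change the name, port, and SOUL.md path
  scout:
    build: .
    container_name: openclaw-scout
    restart: unless-stopped
    ports:
      - "18793:18793"
    env_file:
      - .env
    environment:
      - OPENCLAW_AGENT_NAME=scout
      - OPENCLAW_GATEWAY_PORT=18793
    volumes:
      - ./agents/scout/SOUL.md:/app/agents/scout/SOUL.md:ro
      - scout-memory:/app/memory
    networks:
      - openclaw-network
```

Add a matching scout-memory: entry under the top-level volumes key and run docker compose up -d; Compose starts only the new service.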
Prerequisites
Before starting, make sure you have Docker and Docker Compose installed on your machine. You also need an OpenClaw project directory with at least one agent configured.
# Check Docker version (need 20.10+)
docker --version
# Check Docker Compose (need v2+)
docker compose version
# If not installed on Ubuntu/Debian:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Verify OpenClaw is initialized
ls agents/
# Should show your agent directories with SOUL.md files

You need at least one LLM provider API key (OpenAI, Anthropic, or a running Ollama instance). Docker does not change how OpenClaw connects to model providers; it just runs inside a container instead of directly on your host.
Dockerfile Setup
The Dockerfile defines how your OpenClaw container is built. This is a production-ready configuration that uses a multi-stage build to keep the final image small, runs as a non-root user, and includes only what the agent needs at runtime.
# Stage 1: Install dependencies
FROM node:20-alpine AS builder
WORKDIR /app
# Install OpenClaw globally
RUN npm install -g openclaw@latest
# Copy agent configuration files
COPY agents/ ./agents/
COPY config.yaml ./config.yaml
# Stage 2: Production image
FROM node:20-alpine AS production
# Security: run as non-root user
RUN addgroup -g 1001 openclaw && \
adduser -u 1001 -G openclaw -s /bin/sh -D openclaw
WORKDIR /app
# Copy OpenClaw from builder
COPY --from=builder /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=builder /usr/local/bin/openclaw /usr/local/bin/openclaw
COPY --from=builder /app/agents ./agents
COPY --from=builder /app/config.yaml ./config.yaml
# Create directories for persistent data
RUN mkdir -p /app/memory /app/logs /app/sessions && \
chown -R openclaw:openclaw /app
USER openclaw
# Gateway port
EXPOSE 18789
# Health check
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
CMD wget -qO- http://localhost:18789/status || exit 1
# Start the gateway
CMD ["openclaw", "gateway", "start", "--port", "18789"]

Key decisions in this Dockerfile: Alpine-based images keep the size under 200MB. The multi-stage build means build tools are not included in the final image. Running as a non-root user prevents the agent from modifying the container filesystem in unexpected ways. The health check flags unhealthy containers so a watchdog or orchestrator can restart them (plain Docker marks them unhealthy but does not restart them on its own).
Tip: Pin the OpenClaw version in production. Replace openclaw@latest with a specific version like openclaw@2026.3.2 to avoid surprise breaking changes during rebuilds.
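One low-friction way to do that pin is a build argument, so bumping the version is a one-line change at build time (a sketch; the OPENCLAW_VERSION name is arbitrary):

```dockerfile
# In the builder stage: pin via a build argument instead of @latest
ARG OPENCLAW_VERSION=2026.3.2
RUN npm install -g openclaw@${OPENCLAW_VERSION}
```

Upgrades then become explicit: docker compose build --build-arg OPENCLAW_VERSION=2026.3.3.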
docker-compose.yml Configuration
Docker Compose ties everything together. It defines your services, volumes, networks, environment variables, and restart policies in a single file. This configuration runs one OpenClaw agent with persistent storage and automatic restarts.
version: "3.8"

services:
  agent:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: openclaw-agent
    restart: unless-stopped
    ports:
      - "18789:18789"
    env_file:
      - .env
    volumes:
      # Mount SOUL.md for live editing without rebuild
      - ./agents/my-agent/SOUL.md:/app/agents/my-agent/SOUL.md:ro
      # Persistent memory across restarts
      - agent-memory:/app/memory
      # Persistent session data
      - agent-sessions:/app/sessions
      # Logs
      - agent-logs:/app/logs
    networks:
      - openclaw-network
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

volumes:
  agent-memory:
  agent-sessions:
  agent-logs:

networks:
  openclaw-network:
    driver: bridge

The restart: unless-stopped policy means Docker automatically restarts the container if it crashes or if the host reboots. The SOUL.md mount is read-only (:ro) so you can edit it on the host and restart the container to apply changes, but the agent cannot modify it from inside. Log rotation prevents disk space from filling up over time.
Environment Variables
API keys and sensitive configuration belong in a .env file that is never committed to version control. Docker Compose reads this file automatically and injects the variables into your container.
# LLM Provider Keys
OPENAI_API_KEY=sk-proj-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
# OpenClaw Configuration
OPENCLAW_MODEL=claude-sonnet-4-20250514
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_LOG_LEVEL=info
# Channel Tokens (if using Telegram, Discord, etc.)
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
DISCORD_BOT_TOKEN=your-discord-bot-token
# Ollama (if using local models)
OLLAMA_HOST=http://host.docker.internal:11434

# Copy this to .env and fill in your values
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
OPENCLAW_MODEL=claude-sonnet-4-20250514
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_LOG_LEVEL=info
TELEGRAM_BOT_TOKEN=
DISCORD_BOT_TOKEN=
OLLAMA_HOST=http://host.docker.internal:11434

Important: Add .env to your .gitignore immediately. Leaked API keys in Git history are the most common security incident in AI agent deployments. If a key has been committed even once, rotate it.
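A blank key in .env fails silently until the agent's first LLM call. A small pre-flight check can catch it before docker compose up (a sketch; the required-key list is an assumption, trim it to the providers you actually use):

```shell
#!/bin/sh
# check_env FILE: warn about required variables that are empty or missing.
# The key list below is an example; adjust it to your providers.
check_env() {
  file="$1"
  for key in OPENAI_API_KEY ANTHROPIC_API_KEY OPENCLAW_MODEL; do
    value=$(grep "^${key}=" "$file" 2>/dev/null | cut -d= -f2-)
    if [ -z "$value" ]; then
      echo "WARNING: $key is empty or missing in $file"
    fi
  done
}

# Run against .env before starting the stack
[ -f .env ] && check_env .env || true
```

You can also run docker compose config to inspect the fully interpolated configuration Compose will actually use.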
Multi-Agent Docker Setup
Running multiple agents is where Docker Compose really shines. Each agent is a separate service with its own SOUL.md, memory volume, and gateway port. They share the same network so agents can communicate with each other, and they share the same .env file for API keys.
version: "3.8"

services:
  # Agent 1: Project Manager
  orion:
    build: .
    container_name: openclaw-orion
    restart: unless-stopped
    ports:
      - "18789:18789"
    env_file:
      - .env
    environment:
      - OPENCLAW_AGENT_NAME=orion
      - OPENCLAW_GATEWAY_PORT=18789
    volumes:
      - ./agents/orion/SOUL.md:/app/agents/orion/SOUL.md:ro
      - orion-memory:/app/memory
      - orion-sessions:/app/sessions
    networks:
      - openclaw-network

  # Agent 2: Content Writer
  echo:
    build: .
    container_name: openclaw-echo
    restart: unless-stopped
    ports:
      - "18790:18790"
    env_file:
      - .env
    environment:
      - OPENCLAW_AGENT_NAME=echo
      - OPENCLAW_GATEWAY_PORT=18790
    volumes:
      - ./agents/echo/SOUL.md:/app/agents/echo/SOUL.md:ro
      - echo-memory:/app/memory
      - echo-sessions:/app/sessions
    networks:
      - openclaw-network

  # Agent 3: SEO Analyst
  radar:
    build: .
    container_name: openclaw-radar
    restart: unless-stopped
    ports:
      - "18791:18791"
    env_file:
      - .env
    environment:
      - OPENCLAW_AGENT_NAME=radar
      - OPENCLAW_GATEWAY_PORT=18791
    volumes:
      - ./agents/radar/SOUL.md:/app/agents/radar/SOUL.md:ro
      - radar-memory:/app/memory
      - radar-sessions:/app/sessions
    networks:
      - openclaw-network

volumes:
  orion-memory:
  orion-sessions:
  echo-memory:
  echo-sessions:
  radar-memory:
  radar-sessions:

networks:
  openclaw-network:
    driver: bridge

Start all agents with a single command:
# Start all agents in detached mode
docker compose up -d
# Check status of all agents
docker compose ps
# View logs for a specific agent
docker compose logs -f orion
# Restart a single agent after SOUL.md changes
docker compose restart echo
# Stop everything
docker compose down
# Stop and remove volumes (warning: deletes memory/sessions)
docker compose down -v

Because all agents share the openclaw-network, they can reach each other by service name. Orion can delegate tasks to Echo at http://echo:18790 within the Docker network. No port mapping or external networking required.
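Because those in-network URLs are fixed by the compose file, they can live in one small helper instead of being hard-coded in every integration (a sketch using the three agents above; the /status path in the usage comment is an assumption about the gateway API):

```shell
#!/bin/sh
# agent_url NAME: print the in-network base URL for an agent.
# Service names and ports match docker-compose.yml above.
agent_url() {
  case "$1" in
    orion) echo "http://orion:18789" ;;
    echo)  echo "http://echo:18790" ;;
    radar) echo "http://radar:18791" ;;
    *)     echo "unknown agent: $1" >&2; return 1 ;;
  esac
}

# From inside any container on openclaw-network, e.g.:
# wget -qO- "$(agent_url echo)/status"
agent_url echo
```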
Persistent Storage and Volumes
Docker containers are ephemeral by default. When a container is removed, everything inside it is lost. For OpenClaw agents, you need to persist three things: agent memory (MEMORY.md and learned context), session data (conversation history), and logs.
| Data Type | Volume Mount | Purpose |
|---|---|---|
| Memory | /app/memory | MEMORY.md, learned facts, agent state |
| Sessions | /app/sessions | Conversation history per channel |
| Logs | /app/logs | Gateway and agent activity logs |
| SOUL.md | bind mount (:ro) | Agent configuration (read-only from host) |
# Backup all agent memory volumes
docker run --rm \
    -v orion-memory:/data \
    -v $(pwd)/backups:/backup \
    alpine tar czf /backup/orion-memory-$(date +%Y%m%d).tar.gz -C /data .

# Restore from backup
docker run --rm \
    -v orion-memory:/data \
    -v $(pwd)/backups:/backup \
    alpine tar xzf /backup/orion-memory-20260310.tar.gz -C /data
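A backup you have never test-read is not really a backup. A quick integrity check lists the archive without extracting it (a sketch; tar exits non-zero on a truncated or corrupt file):

```shell
#!/bin/sh
# verify_backup FILE: list the archive's contents without extracting;
# a truncated or corrupt tarball makes tar exit non-zero.
verify_backup() {
  if tar tzf "$1" > /dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "CORRUPT OR MISSING: $1"
  fi
}

# Example: verify_backup backups/orion-memory-20260310.tar.gz
```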
# Automated daily backup with cron
# Add to crontab: 0 3 * * * /path/to/backup-volumes.sh
#!/bin/bash
# backup-volumes.sh
BACKUP_DIR=~/openclaw-backups/$(date +%Y%m%d)
mkdir -p $BACKUP_DIR
for vol in orion-memory echo-memory radar-memory; do
    docker run --rm -v $vol:/data -v $BACKUP_DIR:/backup \
        alpine tar czf /backup/$vol.tar.gz -C /data .
done
# Keep only last 7 days
find ~/openclaw-backups -maxdepth 1 -mtime +7 -exec rm -rf {} +

Health Checks
A container that is running is not necessarily a container that is working. Docker health checks let you define what "healthy" means for your agent. If the health check fails, Docker marks the container as unhealthy; plain Docker does not restart unhealthy containers on its own, but the status is visible in docker ps, and a watchdog script or orchestrator can act on it.
# Basic: check if the gateway process is running
HEALTHCHECK --interval=30s --timeout=10s --retries=3 --start-period=15s \
CMD pgrep -f "openclaw" > /dev/null || exit 1
# Better: check if the gateway port is responding
HEALTHCHECK --interval=30s --timeout=10s --retries=3 --start-period=15s \
CMD wget -qO- http://localhost:18789/status || exit 1
# Best: custom health check script
COPY healthcheck.sh /app/healthcheck.sh
HEALTHCHECK --interval=30s --timeout=10s --retries=3 --start-period=15s \
CMD /app/healthcheck.sh

#!/bin/sh
# healthcheck.sh
# Check if gateway process is running
if ! pgrep -f "openclaw" > /dev/null; then
    echo "Gateway process not found"
    exit 1
fi
# Check if gateway port is accepting connections
if ! wget -qO- --timeout=5 http://localhost:18789/status > /dev/null 2>&1; then
    echo "Gateway not responding on port 18789"
    exit 1
fi
# Check if memory directory is writable
if ! touch /app/memory/.healthcheck 2>/dev/null; then
    echo "Memory directory not writable"
    exit 1
fi
rm -f /app/memory/.healthcheck
echo "Healthy"
exit 0

Check the health status of your containers at any time:
# View health status
docker ps --format "table {{.Names}}\t{{.Status}}"
# Output:
# NAMES STATUS
# openclaw-orion Up 2 hours (healthy)
# openclaw-echo Up 2 hours (healthy)
# openclaw-radar    Up 5 minutes (health: starting)

Deploying to Production (VPS)
Once your Docker setup works locally, deploying to a VPS is straightforward. You push your project to the server, build the images, and start the containers. Here is a step-by-step deployment to a fresh Ubuntu VPS.
#!/bin/bash
# deploy.sh - Deploy OpenClaw agents to production VPS
set -e
VPS_HOST="your-vps-ip"
VPS_USER="deploy"
PROJECT_DIR="/opt/openclaw"
echo "Syncing project files to VPS..."
rsync -avz --exclude node_modules --exclude .git --exclude .env \
./ $VPS_USER@$VPS_HOST:$PROJECT_DIR/
echo "Building and starting containers on VPS..."
ssh $VPS_USER@$VPS_HOST << 'DEPLOY'
cd /opt/openclaw
# Pull latest base images
docker compose pull 2>/dev/null || true
# Rebuild images with the latest project files
docker compose build
# Start/restart containers
docker compose up -d
# Wait for health checks
sleep 10
docker compose ps
# Show recent logs
docker compose logs --tail=20
DEPLOY
echo "Deployment complete."

For the initial VPS setup, install Docker and create the deploy user:
# SSH into your VPS as root
ssh root@your-vps-ip
# Install Docker
curl -fsSL https://get.docker.com | sh
# Create a deploy user (do not run containers as root)
adduser deploy
usermod -aG docker deploy
# Create project directory
mkdir -p /opt/openclaw
chown deploy:deploy /opt/openclaw
# Set up firewall (only expose what you need)
ufw allow 22/tcp # SSH
ufw allow 80/tcp # HTTP (if using reverse proxy)
ufw allow 443/tcp # HTTPS (if using reverse proxy)
ufw enable
# Do NOT expose gateway ports (18789-18791) publicly
# Use a reverse proxy or SSH tunnel instead
# Copy your .env file securely
scp .env deploy@your-vps-ip:/opt/openclaw/.env

Security: Never expose OpenClaw gateway ports directly to the internet. Use an Nginx reverse proxy with SSL, or access the gateway through an SSH tunnel. The gateway does not have built-in authentication.
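A minimal sketch of such a reverse proxy for Orion, assuming certificates already exist; agent.example.com and the certificate paths are placeholders. Since the gateway has no authentication of its own, basic auth at the proxy is the bare minimum:

```nginx
# /etc/nginx/sites-available/openclaw (sketch; hostname and cert paths are placeholders)
server {
    listen 443 ssl;
    server_name agent.example.com;

    ssl_certificate     /etc/letsencrypt/live/agent.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/agent.example.com/privkey.pem;

    # The gateway has no built-in auth, so gate it at the proxy
    auth_basic           "OpenClaw";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:18789;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Alternatively, an SSH tunnel exposes nothing at all: ssh -L 18789:localhost:18789 deploy@your-vps-ip, then talk to http://localhost:18789 from your machine.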
Monitoring and Logs
Running containers in production without monitoring is flying blind. Docker provides built-in logging, and you can layer on alerting with simple scripts or tools like Prometheus and Grafana if you want dashboards.
# Follow logs for all agents
docker compose logs -f
# Follow logs for one agent
docker compose logs -f orion
# View last 100 lines
docker compose logs --tail=100
# Check resource usage per container
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"
# Output:
# NAME CPU % MEM USAGE / LIMIT NET I/O
# openclaw-orion 0.5% 128MiB / 512MiB 1.2MB / 800kB
# openclaw-echo 0.3% 96MiB / 512MiB 980kB / 650kB
# openclaw-radar 0.4% 110MiB / 512MiB 1.1MB / 720kB
# Set memory limits per container (in docker-compose.yml)
# deploy:
# resources:
# limits:
# memory: 512M
# reservations:
#       memory: 128M

For alerting, add a simple cron job that checks container health and sends a notification if something is down:
#!/bin/bash
# Check if any openclaw container is unhealthy or stopped
UNHEALTHY=$(docker ps --filter "name=openclaw" --filter "health=unhealthy" --format "{{.Names}}" 2>/dev/null)
STOPPED=$(docker ps -a --filter "name=openclaw" --filter "status=exited" --format "{{.Names}}" 2>/dev/null)
if [ -n "$UNHEALTHY" ] || [ -n "$STOPPED" ]; then
    MESSAGE="OpenClaw Alert: $UNHEALTHY $STOPPED needs attention"
    # Send to Telegram
    curl -s "https://api.telegram.org/botYOUR_TOKEN/sendMessage" \
        -d "chat_id=YOUR_CHAT_ID" \
        -d "text=$MESSAGE"
    # Attempt auto-restart (cron does not start in the project directory)
    cd /opt/openclaw && docker compose up -d
fi

Skip Docker with CrewClaw
Docker is powerful, but it is also a lot of infrastructure to maintain. You need to manage Dockerfiles, compose configs, volumes, health checks, VPS provisioning, SSL certificates, monitoring scripts, and backup jobs. For a solo founder or small team, this overhead can eat into the time you should be spending on what the agent actually does.
CrewClaw lets you skip all of this. You configure your agent in a visual builder, pick a role template, customize the SOUL.md, and deploy with a single click. No Docker, no VPS, no terminal. The agent runs on managed infrastructure with built-in monitoring, automatic restarts, and channel integrations for Telegram, Discord, Slack, and WhatsApp.
If you want full control over the container, network, and deployment pipeline, Docker is the right choice. If you want a working agent in 60 seconds without managing infrastructure, CrewClaw is built for that.
Related Guides
Deploy AI Agents to Production
Compare VPS, Raspberry Pi, Mac Mini, and Docker for production deployment
Self-Hosted AI Agent Setup
Docker, Prometheus monitoring, and watchdog alerts on your own hardware
Best VPS for OpenClaw
Compare Hostinger, Hetzner, and DigitalOcean for running agents 24/7
OpenClaw Gateway as Background Service
nohup, tmux, pm2, launchd, systemd, and Docker options
Frequently Asked Questions
Can I run OpenClaw in Docker without a GPU?
Yes. OpenClaw itself does not require a GPU. It sends requests to external LLM providers like OpenAI, Anthropic, or a remote Ollama instance. The Docker container only needs CPU and around 256MB of RAM for a single agent. If you run Ollama locally inside the same Docker network for local inference, then the Ollama container needs GPU access, but the OpenClaw container does not.
How do I update OpenClaw inside a Docker container?
Rebuild the image with the latest base. Run 'docker compose build --no-cache' to pull the newest OpenClaw version, then 'docker compose up -d' to restart with the new image. Your volumes preserve agent data, SOUL.md files, and memory across rebuilds. If you pin a specific version in your Dockerfile, update the version number before rebuilding.
Can I run multiple OpenClaw agents in one Docker container?
You can, but it is not recommended. Running multiple agents in a single container makes it harder to isolate failures, scale individually, and monitor resource usage. The recommended approach is one container per agent using docker-compose services. Each service gets its own SOUL.md, memory directory, and gateway port. Docker Compose handles networking between them automatically.
What is the minimum VPS spec for running OpenClaw in Docker?
A single OpenClaw agent in Docker runs comfortably on 1 vCPU and 512MB RAM. For a two-agent setup with the gateway, 1 vCPU and 1GB RAM is sufficient. If you add Ollama for local inference, you need at least 4GB RAM and ideally a GPU. Hetzner CX22 (2 vCPU, 4GB) at around 4 EUR/month handles three agents without issues.
How do I pass API keys to OpenClaw in Docker securely?
Use a .env file that is not committed to version control. Reference it in your docker-compose.yml with 'env_file: .env'. Docker loads the variables at container start. For production, use Docker secrets or your cloud provider's secret manager. Never bake API keys into the Docker image itself.
Does OpenClaw support Docker health checks?
OpenClaw does not expose a dedicated health endpoint by default, but the gateway listens on a configurable port. You can add a HEALTHCHECK instruction in your Dockerfile that curls the gateway port or checks if the openclaw process is running. The guide above includes a production-ready health check configuration using wget against the gateway status endpoint.
Skip the Docker Setup Entirely
Get a production-ready OpenClaw agent deployed in 60 seconds. No Dockerfile, no VPS, no terminal. Pick a template, customize your SOUL.md, and your AI employee starts working immediately.
Deploy a Ready-Made AI Agent
Skip the setup. Pick a template and deploy in 60 seconds.