OpenClaw Documentation: Complete Reference Guide (2026)
Everything you need to know about OpenClaw in one place. This guide covers core concepts, every configuration file, all CLI commands, gateway settings, model providers, third-party integrations, deployment options, and common troubleshooting steps.
Table of Contents
1. OpenClaw Documentation Overview
2. Core Concepts
3. Configuration Files Reference
4. CLI Commands Reference
5. Gateway Configuration
6. Model Providers
7. Integration Setup
8. Deployment Options
9. Troubleshooting Common Issues
10. Skip the Docs -- Use CrewClaw
Frequently Asked Questions
1. OpenClaw Documentation Overview
OpenClaw is an open-source, configuration-first AI agent framework. Instead of writing code, you define agents using markdown files. A single SOUL.md file controls an agent's identity, personality, rules, skills, and communication behavior. The OpenClaw gateway runs locally on your machine, routes messages between agents and channels, and connects to the LLM provider of your choice.
This reference guide is structured to take you from zero to a production-ready multi-agent deployment. Whether you are setting up your first agent or configuring a team of ten, every setting, command, and configuration option is documented here.
OpenClaw's documentation ecosystem includes the official docs site, the GitHub repository wiki, community-contributed guides, and this reference. This guide consolidates the most critical information into a single, searchable page that covers the full surface area of the framework.
# Install OpenClaw
curl -fsSL https://get.openclaw.com | bash
# Create an agent workspace with SOUL.md
mkdir -p agents/my-agent && cat > agents/my-agent/SOUL.md << 'EOF'
# My First Agent
## Identity
- Name: Assistant
- Role: General-purpose AI assistant
## Personality
- Helpful and concise
- Responds in clear, actionable language
## Rules
- Always provide sources when citing facts
- Keep responses under 500 words unless asked for more
EOF
# Register agent and start gateway
openclaw agents add my-agent --workspace ./agents/my-agent
openclaw gateway start
2. Core Concepts
Before diving into configuration files and commands, it is important to understand the five core concepts that make up the OpenClaw architecture.
Agents
An agent is an AI entity defined by a SOUL.md file. Each agent has its own identity, personality, rules, and skills. Agents are registered with the CLI and run through the gateway. An agent can operate independently or as part of a multi-agent team. Each agent lives in its own workspace directory containing its configuration files.
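Because a malformed SOUL.md is easy to miss, a quick sanity check can catch a missing section before you register an agent. This is a plain-shell sketch, not an official CLI feature; the `agents/demo` path and the section list are illustrative:

```shell
# Illustrative pre-flight check: verify a SOUL.md contains the
# sections described in this guide before registering the agent.
soul="agents/demo/SOUL.md"
mkdir -p "$(dirname "$soul")"
cat > "$soul" << 'EOF'
# Demo Agent
## Identity
- Name: Demo
## Personality
- Concise
## Rules
- Cite sources
EOF
# Report which of the expected sections are present.
for section in Identity Personality Rules; do
  if grep -q "^## $section" "$soul"; then
    echo "ok: $section"
  else
    echo "missing: $section"
  fi
done
```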
Gateway
The gateway is the local server that powers all OpenClaw agents. It handles message routing between agents, manages channel connections (Telegram, Slack, Discord, Email), processes incoming requests, and communicates with the configured LLM provider. The gateway runs on your machine and listens on a configurable port (default 18789). All agent interactions flow through the gateway.
Channels
Channels are the communication interfaces through which users interact with agents. OpenClaw supports Telegram, Slack, Discord, and Email as built-in channels. Each channel is configured with a single line in the SOUL.md file and requires the appropriate bot token or API credentials. An agent can be connected to multiple channels simultaneously.
Skills
Skills are plug-and-play capabilities that extend what an agent can do. Built-in skills include browser (web search and browsing), scraper (web page content extraction), file manager (read/write local files), and code runner (execute code snippets). Skills are enabled in the SOUL.md file and require no additional installation. The skills system is extensible, and the community contributes new skills regularly.
Workspace
A workspace is the directory that contains all configuration files for an agent or team. At minimum, a workspace contains a SOUL.md file. Multi-agent workspaces also include AGENTS.md for team coordination, and optionally TOOLS.md, HEARTBEAT.md, MEMORY.md, and USER.md. The workspace path is specified when registering an agent with the CLI.
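The layout above can be sketched on disk like this; the directory name `team-workspace` is illustrative, and only SOUL.md is strictly required:

```shell
# Create a team workspace containing every configuration file
# described in this guide (SOUL.md is the only required one).
mkdir -p team-workspace
touch team-workspace/SOUL.md team-workspace/AGENTS.md \
      team-workspace/TOOLS.md team-workspace/HEARTBEAT.md \
      team-workspace/MEMORY.md team-workspace/USER.md
# List the workspace contents.
ls -1 team-workspace
```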
3. Configuration Files Reference
OpenClaw uses markdown files for all agent configuration. Each file serves a specific purpose. Here is a complete reference for every configuration file in the OpenClaw ecosystem.
SOUL.md -- Agent Identity and Behavior
SOUL.md is the primary configuration file for every OpenClaw agent. It is the only required file. It defines who the agent is, how it behaves, what it can do, and what rules it follows.
# Agent Display Name
## Identity
- Name: Agent Name
- Role: What this agent does
- Model: claude-sonnet-4-20250514
- Provider: anthropic
## Personality
- Describe the agent's communication style
- Tone, formality, and approach to tasks
- How the agent interacts with users
## Rules
- Specific instructions the agent must follow
- Constraints on behavior and output
- Formatting requirements
- Domain-specific guidelines
## Skills
- browser: Search the web and browse pages
- scraper: Extract content from URLs
- file_manager: Read and write local files
- code_runner: Execute code snippets
## Channels
- telegram: @YourBotUsername
- slack: #channel-name
- discord: server-id/channel-id
- email: agent@yourdomain.com
## Knowledge
- Reference documents the agent should know about
- Links to internal wikis or documentation
- Domain-specific context files
The Identity section sets the agent's name, role, and model configuration. The Personality section defines communication style in natural language. Rules are hard constraints the agent must follow. Skills enable specific capabilities. Channels connect the agent to messaging platforms. Knowledge provides reference context.
AGENTS.md -- Multi-Agent Teams and Handoffs
AGENTS.md defines how multiple agents collaborate within a workspace. It lists all team members, describes their specializations, and specifies handoff rules that determine how work flows between agents.
# Content Production Team
## Agents
- @researcher: Finds data, statistics, and source material
- @writer: Creates blog posts and articles from research
- @editor: Reviews drafts for quality and accuracy
- @publisher: Formats and publishes final content
## Workflow
1. @researcher gathers information on the assigned topic
2. @researcher hands findings to @writer
3. @writer drafts the article using the research
4. @writer sends the draft to @editor
5. @editor reviews, provides feedback, and approves
6. @editor passes approved content to @publisher
7. @publisher formats and delivers the final version
## Handoff Rules
- @researcher always passes to @writer, never directly to @editor
- @editor can send content back to @writer for revisions
- @publisher is the final step and does not hand off to other agents
- All agents respond in English only
TOOLS.md -- Available Tools and Commands
TOOLS.md documents the tools and external commands available to agents in the workspace. This file serves as a reference that agents consult when they need to perform actions beyond their built-in skills.
# Available Tools
## Analytics
- `node scripts/analytics/ga4-traffic.cjs` — Fetch GA4 traffic data
- `node scripts/analytics/mixpanel-funnel.cjs` — Get funnel metrics
- `node scripts/analytics/stripe-report.cjs` — Stripe revenue report
## Content
- `node scripts/content/publish.cjs` — Publish content to CMS
- `node scripts/content/screenshot.cjs [url]` — Capture page screenshot
## Notifications
- `node scripts/notify/telegram.cjs [message]` — Send Telegram alert
- `node scripts/notify/slack.cjs [channel] [message]` — Post to Slack
HEARTBEAT.md -- Scheduled Tasks
HEARTBEAT.md configures scheduled, recurring tasks for agents. It works like a cron system built into OpenClaw. You define when tasks run and what the agent should do at each interval.
# Heartbeat Schedule
## Every Morning at 9:00 AM
- Check analytics dashboard for anomalies
- Generate daily traffic report
- Send summary to Telegram
## Every Monday at 10:00 AM
- Compile weekly performance report
- Compare metrics to previous week
- Flag any metrics that dropped more than 20%
## Every 6 Hours
- Monitor uptime for all tracked services
- Alert immediately if any service is down
## First Day of Month
- Generate monthly revenue report
- Calculate MoM growth rates
- Send full report to email channel
MEMORY.md -- Persistent Context
MEMORY.md provides persistent context that the agent retains across sessions. Unlike conversation history which resets, MEMORY.md content is always available to the agent. Use it for project details, key decisions, user preferences, and accumulated knowledge that the agent should never forget.
# Agent Memory
## Project Context
- Product: SaaS analytics dashboard
- Stack: Next.js, TypeScript, PostgreSQL
- Launch date: March 2026
- Current phase: Beta testing
## Key Decisions
- Chose Stripe over Paddle for payments (Jan 15)
- Migrated from REST to GraphQL API (Jan 22)
- Switched from Vercel to self-hosted (Feb 1)
## User Preferences
- Prefers concise responses with bullet points
- Wants code examples in TypeScript
- Reports should include week-over-week comparison
- Never include emojis in reports
USER.md -- User Preferences
USER.md stores preferences and context about the user or team that interacts with the agent. While MEMORY.md is about project and agent context, USER.md is specifically about the human users. This separation keeps user-specific information organized and easy to update.
# User Profile
## Communication
- Language: English
- Timezone: UTC+3
- Availability: Weekdays 9 AM - 6 PM
- Preferred channel: Telegram
## Expertise Level
- Technical background: Senior developer
- Familiar with: Python, TypeScript, Docker, AWS
- New to: Machine learning, Kubernetes
## Notification Preferences
- Critical alerts: Immediate via Telegram
- Daily reports: 9 AM via Email
- Weekly summaries: Monday 10 AM via Slack
4. CLI Commands Reference
The OpenClaw CLI is the primary interface for managing agents, the gateway, configuration, and channels. Here is the complete command reference.
Agent Management
# Add a new agent from a workspace directory
openclaw agents add <name> --workspace <path>
# List all registered agents
openclaw agents list
# Remove a registered agent
openclaw agents remove <name>
# Send a message to a specific agent
openclaw agent --agent <name> --message "Your message here"
# Show agent details and configuration
openclaw agents info <name>
Gateway Management
# Start the gateway (background daemon)
openclaw gateway start
# Stop the running gateway
openclaw gateway stop
# Restart the gateway (stop + start)
openclaw gateway restart
# Run gateway in foreground (useful for debugging)
openclaw gateway run
# Check gateway status
openclaw gateway status
Configuration
# Get a configuration value
openclaw config get <key>
# Set a configuration value
openclaw config set <key> <value>
# List all configuration values
openclaw config list
# Examples
openclaw config set gateway.port 18789
openclaw config set provider anthropic
openclaw config set model claude-sonnet-4-20250514
openclaw config get gateway.port
Channel Management
# List all configured channels
openclaw channels list
# Show channel connection status
openclaw channels status
# Connect a channel to an agent
openclaw channels connect <channel-type> --agent <name>
# Disconnect a channel
openclaw channels disconnect <channel-type> --agent <name>
Utility Commands
# Check OpenClaw version
openclaw version
# Update to latest version
openclaw update
# View logs
openclaw logs
# Clear agent session data
openclaw sessions clear --agent <name>
# Health check
openclaw doctor
5. Gateway Configuration
The gateway is the backbone of OpenClaw. It runs as a local server that processes all agent interactions. Here are the key configuration options for tuning the gateway to your environment.
| Setting | Default | Description |
|---|---|---|
| gateway.port | 18789 | Port the gateway listens on |
| gateway.bind | 127.0.0.1 | Bind address. Use 0.0.0.0 for network access |
| gateway.auth | disabled | Enable authentication for gateway API endpoints |
| gateway.auth_token | none | Bearer token for authenticated requests |
| gateway.tailscale | disabled | Expose gateway over Tailscale network |
| gateway.max_concurrent | 5 | Maximum concurrent agent requests |
| gateway.timeout | 120000 | Request timeout in milliseconds |
# Change the gateway port
openclaw config set gateway.port 8080
# Allow network access (not just localhost)
openclaw config set gateway.bind 0.0.0.0
# Enable authentication
openclaw config set gateway.auth enabled
openclaw config set gateway.auth_token your-secret-token-here
# Enable Tailscale integration for remote access
openclaw config set gateway.tailscale enabled
# Increase concurrent request limit
openclaw config set gateway.max_concurrent 10
# Apply changes
openclaw gateway restart
For remote access without exposing your machine to the public internet, the Tailscale integration is the recommended approach. It creates a secure, encrypted tunnel through your Tailscale network. This is particularly useful when running agents on a home server or Raspberry Pi that you want to access from anywhere.
6. Model Providers
OpenClaw supports four model providers out of the box. Each provider requires an API key (except Ollama) and is configured either globally or per-agent in the SOUL.md file.
Anthropic (Claude)
Anthropic's Claude models are the default provider for OpenClaw. Claude excels at following complex instructions, understanding nuance, and producing well-structured output, making it ideal for agent configurations that rely on detailed SOUL.md rules.
# Global configuration
openclaw config set provider anthropic
openclaw config set anthropic.api_key sk-ant-your-key-here
# Available models
# claude-sonnet-4-20250514 (recommended for most agents)
# claude-opus-4-20250514 (highest capability)
# claude-3-5-haiku-20241022 (fastest, lowest cost)
# Per-agent in SOUL.md
## Identity
- Model: claude-sonnet-4-20250514
- Provider: anthropic
OpenAI (GPT-4)
OpenAI's GPT models are fully supported. GPT-4o is recommended for agents that require strong reasoning and tool use. GPT-4o-mini is a cost-effective option for simpler agents.
# Global configuration
openclaw config set provider openai
openclaw config set openai.api_key sk-your-key-here
# Available models
# gpt-4o (recommended)
# gpt-4o-mini (lower cost)
# o1 (advanced reasoning)
# Per-agent in SOUL.md
## Identity
- Model: gpt-4o
- Provider: openai
Google (Gemini)
Google's Gemini models provide strong multi-modal capabilities and competitive pricing. Gemini Pro is the recommended model for most agent use cases.
# Global configuration
openclaw config set provider google
openclaw config set google.api_key your-google-api-key
# Available models
# gemini-2.5-pro (recommended)
# gemini-2.5-flash (faster, lower cost)
# Per-agent in SOUL.md
## Identity
- Model: gemini-2.5-pro
- Provider: google
Ollama (Local Models)
Ollama enables fully offline, local model inference. No API key required. No data leaves your machine. This is ideal for privacy-sensitive workflows, air-gapped environments, or reducing operational costs to zero. Performance depends on your hardware.
# Install Ollama first (https://ollama.com)
# Pull a model
ollama pull llama3.1
ollama pull mistral
ollama pull codellama
# Global configuration
openclaw config set provider ollama
openclaw config set ollama.host http://localhost:11434
# Available models (depends on what you have pulled)
# llama3.1 (general purpose, recommended)
# mistral (fast, good for simple agents)
# codellama (code-focused tasks)
# deepseek-coder (coding agents)
# Per-agent in SOUL.md
## Identity
- Model: llama3.1
- Provider: ollama
You can mix providers within a multi-agent team. For example, use Claude for your main decision-making agent, GPT-4o for your research agent, and Ollama for a local code runner agent. Each agent specifies its own provider and model in its SOUL.md file, and the gateway routes requests to the correct provider automatically. See our Ollama local agents guide for a detailed setup walkthrough.
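For instance, two agents in the same team might pin different providers in their Identity sections; the agent names and paths here are illustrative:

```markdown
<!-- agents/planner/SOUL.md -->
## Identity
- Name: Planner
- Model: claude-sonnet-4-20250514
- Provider: anthropic

<!-- agents/coder/SOUL.md -->
## Identity
- Name: Coder
- Model: llama3.1
- Provider: ollama
```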
7. Integration Setup
OpenClaw agents can connect to external services through skills, tools, and channel configurations. Here is how to set up the most common integrations.
Google Analytics 4 (GA4)
# 1. Create a service account in Google Cloud Console
# 2. Download the JSON key file
# 3. Add the service account email as a Viewer in GA4 Admin
# Configure in OpenClaw
openclaw config set integrations.ga4.property_id "123456789"
openclaw config set integrations.ga4.credentials_path "./credentials/ga4-key.json"
# Reference in TOOLS.md for agent access
# `node scripts/analytics/ga4-traffic.cjs` — Fetch GA4 traffic data
Mixpanel
# 1. Get your project token and API secret from Mixpanel Settings
# 2. Configure in OpenClaw
openclaw config set integrations.mixpanel.token "your-project-token"
openclaw config set integrations.mixpanel.api_secret "your-api-secret"
# Agents can query Mixpanel for funnel data, event counts,
# and user analytics through TOOLS.md scripts
Stripe
# 1. Get your Stripe API key from the Stripe Dashboard
# 2. Use a restricted key with read-only access for safety
openclaw config set integrations.stripe.api_key "sk_live_your-key"
# Agents can fetch revenue reports, subscription counts,
# cancellation data, and payment events
GitHub
# 1. Create a personal access token (PAT) with repo scope
# 2. Configure in OpenClaw
openclaw config set integrations.github.token "ghp_your-token"
openclaw config set integrations.github.default_repo "username/repo"
# Agents can create issues, read PRs, check CI status,
# and interact with your repositories
Notion
# 1. Create an integration at https://www.notion.so/my-integrations
# 2. Share your Notion pages/databases with the integration
# 3. Configure in OpenClaw
openclaw config set integrations.notion.token "ntn_your-token"
openclaw config set integrations.notion.workspace_id "your-workspace-id"
# Agents can read pages, update databases, create entries,
# and search across your Notion workspace
PostgreSQL
# Configure database connection
openclaw config set integrations.postgres.host "localhost"
openclaw config set integrations.postgres.port 5432
openclaw config set integrations.postgres.database "your_database"
openclaw config set integrations.postgres.user "your_user"
openclaw config set integrations.postgres.password "your_password"
# Use read-only credentials for safety
# Agents can query data, generate reports, and monitor metrics
# Configure allowed tables in TOOLS.md to prevent unintended access
8. Deployment Options
OpenClaw is designed to run anywhere. Here are the supported deployment targets, from your laptop to a cloud VPS.
macOS
macOS is the primary development and deployment platform for OpenClaw. The gateway runs as a background daemon managed by launchd, which automatically starts it on boot and restarts it if it crashes.
# Install OpenClaw
curl -fsSL https://get.openclaw.com | bash
# Configure your agents
openclaw agents add my-agent --workspace ./agents/my-agent
# Start gateway (registers with launchd automatically)
openclaw gateway start
# To manage with launchd directly
launchctl load ~/Library/LaunchAgents/ai.openclaw.gateway.plist
launchctl unload ~/Library/LaunchAgents/ai.openclaw.gateway.plist
# Prevent sleep when on AC power (for always-on agents)
sudo pmset -c sleep 0
Linux (Ubuntu/Debian)
Linux deployment uses systemd for process management. The gateway runs as a user-level systemd service.
# Install OpenClaw
curl -fsSL https://get.openclaw.com | bash
# Configure agents
openclaw agents add my-agent --workspace ./agents/my-agent
# Create systemd service
cat > ~/.config/systemd/user/openclaw-gateway.service << 'EOF'
[Unit]
Description=OpenClaw Gateway
After=network.target
[Service]
ExecStart=/usr/local/bin/openclaw gateway run
Restart=always
RestartSec=10
[Install]
WantedBy=default.target
EOF
# Enable and start
systemctl --user daemon-reload
systemctl --user enable openclaw-gateway
systemctl --user start openclaw-gateway
# Check status
systemctl --user status openclaw-gateway
Raspberry Pi
Raspberry Pi is an excellent low-cost, always-on deployment target for OpenClaw agents. Pair it with Ollama for a fully self-contained, offline agent server. A Raspberry Pi 5 with 8GB RAM can handle most models comfortably.
# Install on Raspberry Pi (ARM64)
curl -fsSL https://get.openclaw.com | bash
# Install Ollama for local model inference
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b # Use smaller models for Pi
# Configure for local-only operation
openclaw config set provider ollama
openclaw config set ollama.host http://localhost:11434
# Use systemd for auto-start (same as Linux)
# Enable Tailscale for remote access from anywhere
openclaw config set gateway.tailscale enabled
Docker
Docker deployment packages OpenClaw and all its dependencies into a portable container. This is the recommended approach for teams that want consistent environments across development, staging, and production.
# Using the official Docker image
docker pull openclaw/gateway:latest
# Run with mounted agent workspace
docker run -d \
--name openclaw-gateway \
-p 18789:18789 \
-v $(pwd)/agents:/app/agents \
-e ANTHROPIC_API_KEY=sk-ant-your-key \
openclaw/gateway:latest
# Docker Compose for multi-service setup
# docker-compose.yml
version: "3.8"
services:
gateway:
image: openclaw/gateway:latest
ports:
- "18789:18789"
volumes:
- ./agents:/app/agents
environment:
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
restart: unless-stopped
VPS (Cloud Server)
Any VPS provider (Hetzner, DigitalOcean, Linode, AWS EC2) works with OpenClaw. A $5-10/month instance is sufficient for most agent workloads. Combine with Tailscale or a reverse proxy for secure remote access.
# On your VPS (Ubuntu)
curl -fsSL https://get.openclaw.com | bash
# Configure agents
openclaw agents add my-agent --workspace ./agents/my-agent
openclaw config set gateway.bind 127.0.0.1
# Set up nginx as reverse proxy
sudo apt install nginx certbot python3-certbot-nginx
# /etc/nginx/sites-available/openclaw
server {
server_name agents.yourdomain.com;
location / {
proxy_pass http://127.0.0.1:18789;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
# Enable HTTPS
sudo certbot --nginx -d agents.yourdomain.com
# Start everything
openclaw gateway start
sudo systemctl restart nginx
9. Troubleshooting Common Issues
Here are the most common issues users encounter with OpenClaw and how to resolve them.
Gateway fails to start
Check if another process is using the configured port: 'lsof -i :18789'. If the port is occupied, either stop the other process or change the gateway port with 'openclaw config set gateway.port 8080'. Also verify your API key is set correctly with 'openclaw config get provider'.
Agent not responding to messages
First check gateway status with 'openclaw gateway status'. If the gateway is running, verify the agent is registered with 'openclaw agents list'. Check logs with 'openclaw logs' for error messages. A common cause is an expired or invalid API key for the configured model provider.
Telegram bot not receiving messages
Verify the bot token is correct and the bot is active on Telegram (message @BotFather to check). Ensure the gateway is running and accessible. If you are behind a firewall, Telegram webhooks need port 443, 80, 88, or 8443. Alternatively, configure polling mode instead of webhooks.
Session data causing stale responses
OpenClaw stores session data in JSON files. If an agent seems stuck or is referencing outdated context, clear its session: 'openclaw sessions clear --agent <name>'. For a full reset, delete the session file directly: 'rm -f ~/.openclaw/agents/<name>/sessions/sessions.json'.
Ollama models responding slowly
Local model performance depends on your hardware. Ensure you have enough RAM for the model size (7B models need about 8GB RAM, 13B models need 16GB). On Raspberry Pi, use quantized models (e.g., llama3.1:8b-q4). Check that no other heavy processes are competing for resources with 'top' or 'htop'.
Multi-agent handoffs not working
Verify that AGENTS.md is in the workspace root directory and lists all agents with the correct @mention names. The agent names in AGENTS.md must match the registered agent names exactly. Check that handoff rules are clear and unambiguous. Run 'openclaw agents list' to confirm all agents are registered.
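As a quick sanity check, you can pull the @mention names out of AGENTS.md and compare them by eye with the output of 'openclaw agents list'. This is plain shell, not a built-in command, and the sample AGENTS.md content is illustrative:

```shell
# Create a sample AGENTS.md (illustrative content).
cat > AGENTS.md << 'EOF'
## Agents
- @researcher: Finds data and sources
- @writer: Drafts articles
EOF
# Extract every unique @mention so it can be checked against
# the registered agent names.
grep -oE '@[A-Za-z0-9_-]+' AGENTS.md | sort -u
```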
High API costs from agent interactions
Review your model choices. Switch high-volume, simple agents to cheaper models (Claude Haiku, GPT-4o-mini, or Ollama). Add rules in SOUL.md to keep responses concise. Disable unnecessary skills that trigger additional API calls. Monitor usage through your provider's dashboard.
Gateway crashes after macOS update
macOS updates can invalidate launchd plists. Unload and reload the gateway service: 'launchctl unload ~/Library/LaunchAgents/ai.openclaw.gateway.plist && launchctl load ~/Library/LaunchAgents/ai.openclaw.gateway.plist'. If that fails, run 'openclaw gateway restart' to regenerate the service configuration.
10. Skip the Docs -- Use CrewClaw
This guide covers six configuration files, dozens of CLI commands, five deployment targets, and six integration setups. That is a lot of documentation to read, understand, and implement correctly.
CrewClaw generates all of this for you. Pick a role, select your integrations, and CrewClaw produces a complete, production-ready agent configuration package. Every SOUL.md section is filled out. The right skills are enabled. Channel configurations are pre-wired. Deployment instructions are included.
Instead of reading documentation and writing configuration files by hand, you answer a few questions and download a ready-to-deploy agent package. What takes 30-60 minutes of manual configuration takes less than 5 minutes with CrewClaw.
SOUL.md generated automatically
Identity, personality, rules, skills, and channels are all configured based on the role you select. No need to write markdown by hand or reference documentation for the correct section format.
AGENTS.md for multi-agent teams
When you build a team, CrewClaw generates the AGENTS.md with proper handoff rules, workflow sequences, and @mention configurations. Multi-agent coordination without manual setup.
Model provider pre-configured
Select your preferred provider (Anthropic, OpenAI, Google, or Ollama) and CrewClaw configures it in the SOUL.md. No need to look up model IDs or provider-specific settings.
Deployment-ready package
The downloaded package includes everything: configuration files, deployment commands, and setup instructions for your target platform. Unzip, register, and start.
Frequently Asked Questions
Where is the official OpenClaw documentation?
OpenClaw documentation is available at docs.openclaw.com and within the GitHub repository. This guide serves as a comprehensive companion reference that covers all configuration files, CLI commands, gateway settings, model providers, integrations, and deployment options in one place.
What is SOUL.md and why is it important?
SOUL.md is the core configuration file that defines everything about an OpenClaw agent. It is a plain markdown file that contains the agent's identity, personality, rules, skills, and behavioral instructions. Every OpenClaw agent requires a SOUL.md file. It replaces the need for programming by letting you define agents in natural language markdown sections.
How many agents can I run with OpenClaw?
There is no hard limit on the number of agents you can run with OpenClaw. The practical limit depends on your hardware resources and the LLM API costs. Each agent consumes memory when active through the gateway, and each interaction incurs API costs from your chosen model provider. Teams of 5-10 agents are common for production workloads.
Can I use OpenClaw without an internet connection?
Yes, if you configure OpenClaw to use Ollama as the model provider. Ollama runs LLMs locally on your hardware, so the entire agent stack operates offline. You need internet access only for the initial model download. After that, the gateway, agent, and model all run locally without any external API calls.
What is the difference between SOUL.md and AGENTS.md?
SOUL.md defines a single agent: its identity, personality, rules, and skills. AGENTS.md defines how multiple agents work together as a team. It lists all agents in the workspace, describes their roles, and specifies handoff rules for how they pass work between each other. You need SOUL.md for every agent, but AGENTS.md is only needed when running multi-agent teams.
How do I update OpenClaw to the latest version?
Run 'openclaw update' from your terminal. This downloads and installs the latest version while preserving your existing agent configurations and workspace settings. After updating, restart the gateway with 'openclaw gateway restart' to apply any new features or fixes.
Does OpenClaw support Windows?
OpenClaw runs on macOS and Linux natively. Windows support is available through WSL (Windows Subsystem for Linux). Install WSL 2, set up a Linux distribution like Ubuntu, and then install OpenClaw inside the WSL environment. All features including the gateway, channels, and Ollama integration work within WSL.
Skip the documentation. Build your agent now.
CrewClaw generates all configuration files for you. Pick a role, connect tools, download your production-ready agent package. No manual config writing required.