OpenClaw Provider Setup: Anthropic, Ollama, OpenAI, and Gemini
OpenClaw supports every major AI provider out of the box. This guide covers how to configure Anthropic Claude, OpenAI, Ollama local models, and Google Gemini step by step, including API key setup, SOUL.md model config, switching providers without changing your agent, and troubleshooting auth errors.
Supported Providers Overview
OpenClaw treats every AI model as a configurable provider. You define one or more providers in your config file, assign them to agents, and swap them at any time without touching your SOUL.md identity or rules. Here is what each provider is best suited for.
| Provider | Best Models | Cost | Best For |
|---|---|---|---|
| Anthropic | Claude Sonnet 4, Claude Haiku 3.5 | $0.80–$15 / 1M tokens | Coding agents, reasoning, long context |
| OpenAI | GPT-4o, GPT-4o-mini | $0.15–$5 / 1M tokens | Creative agents, function calling, tools |
| Ollama | Llama 3.1, Mistral, Gemma 2 | Free | High-volume agents, private data, offline |
| Google Gemini | Gemini 1.5 Pro, Gemini Flash | $0.075–$3.5 / 1M tokens | Multimodal tasks, large context, cost efficiency |
Provider configuration lives in ~/.openclaw/config.json. Your agent identity and rules live in SOUL.md. The two are completely separate, which means you can change models without rewriting your agent.
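As a minimal sketch, a config.json defining a single provider and pointing default_model at it might look like the following. The alias claude-sonnet and the key value are placeholders; the field names follow the fuller examples in this guide.

```json
{
  "models": {
    "claude-sonnet": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "api_key": "sk-ant-..."
    }
  },
  "default_model": "claude-sonnet"
}
```

Swapping providers later means editing this file (or running openclaw config set); SOUL.md stays untouched.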
Anthropic Setup: API Key and Claude Model Config
Anthropic is the most popular provider for OpenClaw agents because Claude handles long SOUL.md instructions extremely well, follows complex rules reliably, and has a 200K token context window. Here is how to get set up from scratch.
Step 1: Get your Anthropic API key
Go to console.anthropic.com, sign in, and navigate to API Keys. Create a new key and copy it. Keys start with sk-ant-.
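Before saving the key, a quick shell check catches the most common paste mistakes (a truncated key, or a key copied from the wrong provider's console). This is a sketch; the KEY value below is a placeholder, not a real credential.

```shell
# Placeholder key for illustration -- substitute the key you copied.
KEY="sk-ant-example-key"

# Anthropic keys start with "sk-ant-"; anything else was probably
# truncated or copied from a different provider console.
case "$KEY" in
  sk-ant-*) echo "prefix ok" ;;
  *)        echo "unexpected prefix -- re-copy the key" ;;
esac
```

The same pattern works for other providers by changing the expected prefix (for example plain sk- for OpenAI).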
Step 2: Configure OpenClaw with your key
# Option A: Via CLI (recommended)
openclaw config set models.anthropic.api_key sk-ant-your-key-here
openclaw config set models.anthropic.provider anthropic
openclaw config set models.anthropic.model claude-sonnet-4-20250514
# Option B: Via environment variable
export ANTHROPIC_API_KEY=sk-ant-your-key-here
# Add to ~/.zshrc or ~/.bashrc for persistence
# Verify the key works
openclaw models test anthropic
Step 3: Full config.json for Anthropic
{
"models": {
"claude-sonnet": {
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"api_key": "sk-ant-...",
"max_tokens": 4096,
"temperature": 0.7
},
"claude-haiku": {
"provider": "anthropic",
"model": "claude-haiku-3-5-20241022",
"api_key": "sk-ant-...",
"max_tokens": 2048,
"temperature": 0.5
}
},
"default_model": "claude-sonnet"
}
Setting the model in SOUL.md
The SOUL.md file can optionally specify a preferred model in its metadata header. OpenClaw will use this as a hint unless overridden by the config.
---
model: claude-sonnet-4-20250514
provider: anthropic
---
# Orion — Project Manager
## Identity
You are Orion, a senior project manager who keeps teams on track.
You are direct, data-driven, and always focused on shipping.
## Rules
- Open every response with the most important status update
- Flag blockers before they become problems
- Keep responses under 200 words unless asked for detail
- Always suggest a next action at the end of each message
Model selection tip: Use Claude Haiku for fast, lightweight agents (status checks, simple Q&A, notifications). Use Claude Sonnet for agents that need to reason through complex problems, write long-form content, or orchestrate other agents.
OpenAI Setup: GPT-4o and GPT-4o-mini
OpenAI models are a good choice for agents that need strong function-calling capabilities, creative writing, or compatibility with tools built around the OpenAI API. GPT-4o-mini is also one of the cheapest cloud options for high-volume agent tasks.
# Set your OpenAI API key
openclaw config set models.openai.api_key sk-your-openai-key
openclaw config set models.openai.provider openai
openclaw config set models.openai.model gpt-4o
# Or via environment variable
export OPENAI_API_KEY=sk-your-openai-key
# Test connectivity
openclaw models test openai
{
"models": {
"gpt4o": {
"provider": "openai",
"model": "gpt-4o",
"api_key": "sk-...",
"max_tokens": 4096,
"temperature": 0.8
},
"gpt4o-mini": {
"provider": "openai",
"model": "gpt-4o-mini",
"api_key": "sk-...",
"max_tokens": 2048,
"temperature": 0.5
}
}
}
Get your API key at platform.openai.com/api-keys. OpenAI keys start with sk- (without the Anthropic ant prefix).
Ollama Setup: Local Models at Zero Cost
Ollama lets you run open-source models entirely on your own machine. No API key required. No per-token cost. OpenClaw connects to Ollama through a local HTTP endpoint, identical to how it connects to cloud providers.
Step 1: Install Ollama and pull a model
# macOS / Linux: one-line install
curl -fsSL https://ollama.com/install.sh | sh
# Windows: download from ollama.com/download
# Verify Ollama is running
ollama --version
# Pull models for your agents
ollama pull llama3.1 # Best general-purpose (4.7 GB)
ollama pull mistral # Best for writing agents (4.1 GB)
ollama pull codegemma # Best for coding agents (4.8 GB)
ollama pull phi3 # Fastest / lightest (2.3 GB)
# Verify the local API is accessible
curl http://localhost:11434/api/tags
Step 2: Configure OpenClaw to use Ollama
{
"models": {
"ollama-llama": {
"provider": "ollama",
"endpoint": "http://localhost:11434",
"model": "llama3.1",
"temperature": 0.7,
"context_length": 8192,
"timeout": 120
},
"ollama-mistral": {
"provider": "ollama",
"endpoint": "http://localhost:11434",
"model": "mistral",
"temperature": 0.8
}
},
"default_model": "ollama-llama"
}
Step 3: Test and assign to agents
# Verify OpenClaw can reach Ollama
openclaw models test ollama-llama
# Assign Ollama to a specific agent
openclaw agents update support-bot --model ollama-llama
# Run your agent
openclaw agent --agent support-bot --message "Hello, are you running locally?"
# Check which model each agent uses
openclaw agents list
Remote Ollama: To use Ollama on a different machine (like a home server or Raspberry Pi), change the endpoint to http://192.168.1.x:11434 and start Ollama with OLLAMA_HOST=0.0.0.0 ollama serve on the remote machine.
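Put into config form, a provider entry for a remote Ollama host might look like this. The alias ollama-remote and the IP 192.168.1.50 are made-up examples; use your own server's address.

```json
{
  "models": {
    "ollama-remote": {
      "provider": "ollama",
      "endpoint": "http://192.168.1.50:11434",
      "model": "llama3.1",
      "timeout": 120
    }
  }
}
```

A longer timeout is worth keeping for remote hosts, since network latency adds to the model's own generation time.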
Google Gemini Setup
Gemini 1.5 Pro offers one of the largest context windows available (up to 2M tokens) at competitive pricing. Gemini Flash is the fastest and cheapest Gemini option for high-volume agents. Both work through the Google AI API.
# Get your API key from aistudio.google.com/app/apikey
# Then configure OpenClaw
openclaw config set models.gemini.api_key AIza-your-key-here
openclaw config set models.gemini.provider google
openclaw config set models.gemini.model gemini-1.5-pro
# Or via environment variable
export GOOGLE_AI_API_KEY=AIza-your-key-here
# Test the connection
openclaw models test gemini
{
"models": {
"gemini-pro": {
"provider": "google",
"model": "gemini-1.5-pro",
"api_key": "AIza-...",
"max_tokens": 8192,
"temperature": 0.7
},
"gemini-flash": {
"provider": "google",
"model": "gemini-1.5-flash",
"api_key": "AIza-...",
"max_tokens": 4096,
"temperature": 0.5
}
}
}
Choosing the Right Model for Your Agent Role
The best model for an agent depends on what that agent actually does. Matching the model to the task is the single biggest lever you have over both quality and cost.
Claude handles complex instructions and long codebases better than any other model. It follows SOUL.md rules precisely and rarely hallucinates API signatures.
GPT-4o produces natural, varied prose with a strong creative voice. Better for marketing copy, storytelling, and content that needs personality over logic.
Ollama's local models handle hundreds of FAQ-style messages per day at zero cost, work offline, and keep data on your hardware. Good enough for structured, rule-based responses.
GPT-4o-mini and Gemini Flash both respond in under 1 second and cost less than $0.25 per million input tokens. Ideal for agents that check statuses, send alerts, and handle simple lookups.
Gemini 1.5 Pro's 2M-token context window handles entire codebases, long documents, and extended conversation history without truncation. Best for analysis-heavy agents.
For a low-cost mix, use Ollama for routine agents (free) and Claude Haiku for agents that need current knowledge ($0.80/1M input tokens). A full team runs under $5/month for most workloads.
Cost Comparison Table
Prices as of early 2026. All costs are per 1 million tokens. Typical agent messages average 500–2,000 tokens combined input and output.
| Model | Input / 1M tokens | Output / 1M tokens | Est. monthly cost (~2,500 messages/mo) |
|---|---|---|---|
| Ollama (local) | $0 | $0 | $0 / month |
| Claude Haiku 3.5 | $0.80 | $4.00 | ~$7 / month |
| GPT-4o-mini | $0.15 | $0.60 | ~$2 / month |
| Gemini Flash | $0.075 | $0.30 | ~$1 / month |
| Claude Sonnet 4 | $3.00 | $15.00 | ~$27 / month |
| GPT-4o | $2.50 | $10.00 | ~$19 / month |
| Gemini 1.5 Pro | $3.50 | $10.50 | ~$21 / month |
Estimates assume 1,000 input tokens and 500 output tokens per message. Actual costs vary by agent complexity and response length.
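To sanity-check these numbers against your own workload, the arithmetic is just tokens times price. A small sketch (the function name is ours, not an OpenClaw command), using the same assumption of 1,000 input and 500 output tokens per message:

```shell
# Monthly cost = messages * (input_tokens * input_price + output_tokens * output_price) / 1e6
estimate_monthly_cost() {
  # $1 = input price per 1M tokens, $2 = output price per 1M, $3 = messages per month
  awk -v ip="$1" -v op="$2" -v n="$3" \
    'BEGIN { printf "%.2f\n", n * (1000 * ip + 500 * op) / 1000000 }'
}

estimate_monthly_cost 3.00 15.00 2500   # Claude Sonnet 4: prints 26.25
```

Adjust the 1000/500 token counts to match what your agents actually send and receive; verbose SOUL.md files push input tokens up quickly.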
Environment Variables and Secrets Management
Hardcoding API keys in config.json is convenient but risky if you share configs or check them into version control. OpenClaw supports environment variable references so your keys never appear in plain text files.
# Add to your shell profile for persistence
export ANTHROPIC_API_KEY="sk-ant-your-key"
export OPENAI_API_KEY="sk-your-openai-key"
export GOOGLE_AI_API_KEY="AIza-your-gemini-key"
# Reload your shell
source ~/.zshrc
# OpenClaw reads these automatically if config.json
# references them or if no key is set in config
{
"models": {
"claude-sonnet": {
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"api_key": "${ANTHROPIC_API_KEY}",
"max_tokens": 4096
},
"gpt4o": {
"provider": "openai",
"model": "gpt-4o",
"api_key": "${OPENAI_API_KEY}",
"max_tokens": 4096
}
}
}
Security tip: Add ~/.openclaw/config.json to your .gitignore if your agent workspace is version controlled. Never commit API keys to a repository, even a private one.
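One way to enforce that tip is an idempotent ignore entry. This sketch assumes you run it from the root of the version-controlled workspace and that you keep a workspace-local copy of the config at .openclaw/config.json:

```shell
# Append the ignore entry only if it is not already present,
# so re-running the script never duplicates the line.
ENTRY=".openclaw/config.json"
touch .gitignore
grep -qxF "$ENTRY" .gitignore || echo "$ENTRY" >> .gitignore
```

grep -qxF matches the exact line literally (no regex), which keeps the check safe for paths containing dots.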
Switching Providers Without Changing Your Agent Config
One of OpenClaw's core design decisions is that your agent's identity (SOUL.md) is completely separate from its model provider. You can move an agent from Anthropic to Ollama to OpenAI without editing a single line of SOUL.md. Only the config.json changes.
# See your agent's current model
openclaw agents list
# Switch from Anthropic to Ollama (no SOUL.md changes needed)
openclaw agents update orion --model ollama-llama
# Switch back to Claude Sonnet
openclaw agents update orion --model claude-sonnet
# Switch all agents to a different provider at once
openclaw config set default_model claude-haiku
# Test after switching to confirm behavior
openclaw agent --agent orion --message "What is your current task queue?"
This separation lets you test prompts cheaply with Ollama, then deploy to Claude for production without duplicating any agent configuration. It also means you can run the same agent against multiple providers to benchmark quality differences for your specific use case.
Troubleshooting: Auth Errors, Rate Limits, Model Not Found
Provider issues fall into a few predictable categories. Here is how to diagnose and fix the most common ones.
Authentication error (401)
The most common cause is a key with trailing whitespace from copy-paste. Check with:
# Check for whitespace issues
openclaw config get models.anthropic.api_key | cat -A
# Should end with $ not ^M$ or space$
# Re-set the key cleanly
openclaw config set models.anthropic.api_key "$(echo sk-ant-your-key | tr -d '[:space:]')"
# Test after setting
openclaw models test anthropic
Rate limit error (429)
You have hit the provider's request limit. Options:
- Add retry logic: OpenClaw supports max_retries: 3 in the model config
- Upgrade your API tier with the provider for higher rate limits
- Route high-volume agents to Ollama to reduce cloud API pressure
- Add rate_limit_delay: 1000 (milliseconds) to slow down agent requests
Model not found error
The model name in config.json does not match the provider's API. Common mistakes:
# Correct Anthropic model names (2026)
claude-sonnet-4-20250514 # Claude Sonnet 4
claude-haiku-3-5-20241022 # Claude Haiku 3.5
claude-opus-4-5-20250514 # Claude Opus 4.5
# Correct OpenAI model names
gpt-4o # GPT-4o latest
gpt-4o-mini # GPT-4o mini
gpt-4-turbo # GPT-4 Turbo
# For Ollama: model name must match what you pulled
ollama list # See exactly what's available
Ollama connection refused
Ollama is not running or is not accessible at the configured endpoint.
# Start Ollama if it's not running
ollama serve
# Verify it's listening
curl http://localhost:11434/api/tags
# If using a remote machine, check it's bound to all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve
# Check OpenClaw's endpoint setting
openclaw config get models.ollama-llama.endpoint
Frequently Asked Questions
Where do I set my ANTHROPIC_API_KEY for OpenClaw?
You can set it in three places. The recommended way is via the CLI: openclaw config set models.anthropic.api_key sk-ant-your-key. Alternatively, export ANTHROPIC_API_KEY=sk-ant-your-key in your shell profile (~/.zshrc or ~/.bashrc) and OpenClaw will pick it up automatically. The third option is to add it directly to ~/.openclaw/config.json under models.anthropic.api_key. The CLI method is preferred because it scopes the key to OpenClaw without polluting your shell environment.
Can I use different AI providers for different agents in the same OpenClaw setup?
Yes. Each agent in OpenClaw can be assigned its own model provider independently. You configure multiple providers in config.json, then assign a model to each agent using openclaw agents update agent-name --model provider-alias. A typical setup uses Ollama for high-volume support agents (zero cost), Claude Haiku for fast interactive agents, and Claude Sonnet or GPT-4o for complex reasoning agents. The SOUL.md file is completely independent of the provider, so switching an agent's model does not require changing its rules or identity.
How do I configure OpenClaw to use Ollama running on a different machine on my network?
Change the endpoint in your Ollama provider config from http://localhost:11434 to your remote machine's IP: http://192.168.1.x:11434. Make sure Ollama is running with OLLAMA_HOST=0.0.0.0 on the remote machine so it accepts connections from outside localhost. You can also expose Ollama over a VPN or SSH tunnel for secure remote access. Once the endpoint is updated in OpenClaw's config.json, all agents assigned to that provider will route through the remote Ollama instance.
What is the cheapest provider setup for running OpenClaw agents at scale?
Ollama with local models is free for any volume. If local hardware is not an option, Claude Haiku and GPT-4o-mini are the cheapest cloud providers for agent workloads. Haiku costs roughly $0.80 per million input tokens, making it practical for high-volume agents that handle hundreds of messages per day. For a mixed team: use Ollama for routine agents, Haiku for interactive agents that need recent knowledge, and Sonnet or GPT-4o only for your most complex reasoning agents.
My OpenClaw agent gives an authentication error even though my API key looks correct. What should I check?
Start by confirming the key has no extra whitespace or newline characters, which can happen when pasting. Run openclaw models test anthropic to validate the key is accepted. Check that the key has the right permissions for the model you are targeting — some keys are restricted to specific models. If you set the key via environment variable, verify it is exported correctly with echo $ANTHROPIC_API_KEY. If you have multiple profiles or shell sessions, make sure the key is set in the right context. Rate limit errors (429) can look like auth errors if the response body is not checked carefully.
Skip provider setup entirely
CrewClaw handles model configuration for you. Pick your provider, paste your API key, deploy. Your SOUL.md, config.json, and Docker setup are generated automatically.