Most AI agents forget everything between conversations. You give them context, they use it, and it is gone. The next session starts from zero. OpenClaw has had persistent memory for a while, but v2026.4.5 introduces something fundamentally different: memory dreaming.
The idea is borrowed from neuroscience. When humans sleep, the brain replays the day's experiences, strengthens important connections, and prunes irrelevant ones. OpenClaw now does the same thing for AI agents — during idle periods, the agent processes its accumulated memories in three distinct phases and wakes up smarter.
Persistent memory stores facts. An agent remembers that you prefer Python over JavaScript, that your deploy target is AWS, and that your last project was a REST API. But it does not understand the relationships between those facts.
It does not realize that your preference for Python plus your AWS target plus your API experience means you would probably benefit from a Lambda + API Gateway suggestion. It has the data but has not connected the dots.
Memory dreaming solves this. It takes isolated facts and builds a connected knowledge graph — automatically, in the background, without you doing anything.
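To make the idea concrete, here is a toy sketch of fact-linking — isolated facts become nodes, and shared attributes become edges. Everything here (the tag-overlap heuristic, the field names) is illustrative, not OpenClaw's actual internals.

```python
from itertools import combinations

# Hypothetical memory entries, echoing the example above.
facts = [
    {"id": "lang", "text": "prefers Python", "tags": {"python"}},
    {"id": "infra", "text": "deploys to AWS", "tags": {"aws", "deploy"}},
    {"id": "project", "text": "last project was a REST API", "tags": {"api", "python"}},
]

def build_graph(facts):
    """Link any two facts that share at least one tag."""
    edges = []
    for a, b in combinations(facts, 2):
        shared = a["tags"] & b["tags"]
        if shared:
            edges.append((a["id"], b["id"], shared))
    return edges

print(build_graph(facts))  # [('lang', 'project', {'python'})]
```

A real implementation would use embedding similarity rather than literal tag overlap, but the output is the same shape: a graph of connections the agent can reason over.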
Memory dreaming runs in three cooperative phases, each with its own schedule and purpose. They are designed to work together, not compete.
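The "own schedule" idea can be sketched as a simple cadence check. The interval values below are made up for illustration; they are not OpenClaw defaults.

```python
# Hypothetical run intervals in days for the three phases.
PHASES = {"light": 1, "deep": 7, "rem": 30}

def due_phases(days_since_last_run):
    """Return the phases whose interval has elapsed."""
    return [name for name, interval in PHASES.items()
            if days_since_last_run[name] >= interval]

# Light sleep runs often; REM is overdue; deep sleep is not due yet.
print(due_phases({"light": 2, "deep": 3, "rem": 45}))  # ['light', 'rem']
```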
Light sleep is housekeeping. It runs most frequently and handles the basics: indexing recent memories and keeping the store tidy.
Think of light sleep as organizing your desk at the end of the day. Nothing creative happens, but everything is in the right place for tomorrow.
Deep sleep is where the real consolidation happens. It runs less frequently and does the heavy lifting: cross-referencing recent memories with older ones and promoting the patterns that recur.
Deep sleep is replay-safe. If it runs twice on the same data, it reconciles instead of duplicating entries in MEMORY.md. This means you can restart the gateway without worrying about corrupted memory state.
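One way to picture "replay-safe": consolidation keyed by a stable memory id, so re-running merges rather than appends. This is a sketch of the property, not OpenClaw's actual MEMORY.md logic.

```python
def reconcile(store, incoming):
    """Upsert consolidated memories by id; replaying the same batch is a no-op."""
    for entry in incoming:
        store[entry["id"]] = entry  # same id -> overwrite, never duplicate
    return store

batch = [{"id": "pref-lang", "text": "prefers Python"}]
store = reconcile({}, batch)
store = reconcile(store, batch)  # replay after a restart: still one entry
print(len(store))  # 1
```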
REM is the experimental, creative phase. This is where surprising insights emerge: the agent draws connections between memories that were never explicitly linked.
REM is the most powerful phase and the most risky. It can occasionally create false associations — connecting things that should not be connected. This is why OpenClaw includes tooling to review and correct REM output.
Everything dreaming produces is written to a dreams.md file at the project root. This is your agent's dream diary — a human-readable log of what the dreaming process discovered, promoted, and connected.
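An entry might look something like the following — note that this format is invented for illustration; check your own dreams.md for the actual layout.

```markdown
## 2026-04-05 — deep sleep
- Promoted: "prefers Python" (recurred across recent sessions)
- Connected: "deploys to AWS" <-> "builds REST APIs"
```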
The Dreams UI in the OpenClaw control panel shows the diary content with the lobster animation (a nice touch). You can review what your agent "dreamed about" and delete any false associations before they influence future conversations.
There is also a CLI tool for power users: openclaw memory rem-harness lets you preview what REM would produce without actually committing it to memory, and promote-explain shows you why a specific memory was promoted — what evidence and patterns led to that decision.
Memory dreaming is disabled by default. To enable it, add this to your OpenClaw config:
```yaml
# settings.yaml
memory:
  dreaming:
    enabled: true
    frequency: "daily"  # or "hourly", "weekly"
    # Optional: tune memory aging
    recencyHalfLifeDays: 14  # memories lose half their weight every 14 days
    maxAgeDays: 180          # memories older than 180 days are candidates for pruning
```

That is the simple version. The three phases are treated as an implementation detail — you enable dreaming and set the frequency, and OpenClaw handles the rest. For advanced users, verbose logging is available to inspect promotion decisions.
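The half-life setting describes plain exponential decay. Assuming that reading of `recencyHalfLifeDays`, the weight of a memory can be computed as:

```python
def memory_weight(age_days, half_life_days=14):
    """A memory's weight halves every half_life_days (exponential decay)."""
    return 0.5 ** (age_days / half_life_days)

print(memory_weight(0))   # 1.0  -> brand-new memory, full weight
print(memory_weight(14))  # 0.5  -> one half-life old
print(memory_weight(28))  # 0.25 -> two half-lives old
```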
You can also trigger dreaming manually:
```bash
# In chat
/dreaming

# Via CLI
openclaw memory rem-harness      # preview REM output
openclaw memory promote-explain  # explain why a memory was promoted
```
Not every agent benefits equally from dreaming. It shines for long-running agents that have accumulated enough memories across sessions for real patterns to emerge.
Dreaming adds computational overhead during idle periods. Skip it if that overhead matters more to you than consolidation, or if your agent's memory store is still too small to benefit.
Memory dreaming relies on embeddings to find connections between memories. v2026.4.5 expanded embedding support significantly with Amazon Bedrock embeddings for Titan, Cohere, Nova, and TwelveLabs models, plus AWS credential-chain auto-detection.
If you are already using a provider that supports embeddings (OpenAI, Anthropic, Google, or now Bedrock), dreaming works out of the box. For local setups, Ollama embeddings work too.
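Embedding-based connection-finding generally boils down to comparing vector directions: memories whose embeddings point the same way are candidate links. The vectors and threshold below are toy values, not real model output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

mem_a = [0.9, 0.1, 0.0]  # e.g. "prefers Python"
mem_b = [0.8, 0.2, 0.1]  # e.g. "last project was a REST API"
mem_c = [0.0, 0.1, 0.9]  # an unrelated memory

print(cosine(mem_a, mem_b) > 0.8)  # True  -> propose a connection
print(cosine(mem_a, mem_c) > 0.8)  # False -> leave unconnected
```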
Memory dreaming is experimental, and the OpenClaw team is clear about that. But it points to where AI agents are heading: systems that do not just store information but actively process it, find patterns, and build understanding over time.
Today's agents are reactive — they respond to what you ask. Dreaming agents are proactive — they think about what they have learned and come back with insights you did not ask for but probably need.
That is a fundamental shift. And it is happening in open source, where anyone can inspect, modify, and improve how their agent thinks.
Full technical details are in the v2026.4.5 release notes.
**What is memory dreaming?**

Memory dreaming is an experimental feature in OpenClaw v2026.4.5 that consolidates agent memory during idle periods. It runs in three phases — light, deep, and REM — inspired by human sleep cycles. The agent indexes recent memories, cross-references them with older ones, and makes creative connections, all without explicit retraining.
**Does dreaming slow down my agent?**

No. Dreaming runs during idle periods when the agent is not handling conversations. It is a background process that does not interfere with real-time agent performance. You can also trigger it manually with the /dreaming command.
**Can dreaming create false memories?**

Yes. The REM phase can occasionally create false associations between unrelated memories. This is why the feature is marked experimental. OpenClaw includes a promote-explain tool and a Dream Diary surface so you can review what the dreaming process produced and correct any mistakes.
**How do I turn it on?**

Memory dreaming is disabled by default. Enable it in your OpenClaw config by setting dreaming.enabled to true. You can optionally configure the frequency. The three phases run automatically once enabled — you do not need to configure them individually.
**Which providers does it work with?**

Dreaming works with any provider that supports embeddings for memory search. OpenClaw v2026.4.5 added Amazon Bedrock embeddings (Titan, Cohere, Nova, TwelveLabs) alongside existing providers. If your provider supports embeddings, dreaming will work.